Luma Labs just released Ray2, the startup’s next-generation AI video model, which promises unprecedented motion quality and physics realism via a new multimodal architecture trained with 10x the compute of its predecessor. The details:
- The model generates high-quality video clips up to 10 seconds long from text prompts, with advanced motion and physics capabilities.
- Ray2 demonstrates a sophisticated understanding of object interactions, from natural scenes like water physics to complex human movements.
- Ray2 currently handles text-to-video, image-to-video, and video-to-video generation, and Luma will soon add editing capabilities to the model.
- The system is launching first in Luma’s Dream Machine platform for paid subscribers, with API access coming soon.
Veo 2’s launch around the holidays set a new bar for realism and quality in AI video, and now Luma punches back with some heat of its own. It’s becoming impossible to discern AI video from reality, and the question now is which lab will crack longer, coherent outputs and unlock a new realm of creative power.