Ray3 brings 16-bit HDR generative video – now in Adobe Firefly

Luma AI’s Ray3 video model is now available inside Adobe Firefly, opening the door for filmmakers, VFX artists, and creative studios to experiment with production-grade generative video directly in Adobe’s ecosystem.

Luma AI’s Ray3 reasoning video model can produce video in true 10-, 12-, and 16-bit High Dynamic Range (HDR) ACES2065-1 EXR format, making it suitable for high-end film and media pipelines. Unlike earlier models that offered only 8-bit SDR output (useful for experiments or social content), Ray3 is built for production. This means AI-generated video can meet the technical bar required for VFX workflows.

Why HDR and Bit Depth Matter

In professional film and advertising, the bottleneck has never been about whether AI can generate “something.” It’s whether it can generate something usable. Studio pipelines demand imagery with wide dynamic range, deep color fidelity, and temporal coherence. Without 10-, 12-, or 16-bit precision, grading, compositing, and VFX integration simply break down. Banding, crushed shadows, or clipped highlights can render entire sequences useless, no matter how fast the AI can produce them.
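
As a rough back-of-the-envelope illustration (not from Luma or Adobe), the sketch below shows how many code values each integer bit depth offers and how the quantization step grows once a grade pushes exposure; the numbers are illustrative, and note that EXR files actually store floating-point values (16-bit half or 32-bit float), which spread precision across a far wider range than the simple integer encodings shown here.

```python
# Rough illustration of why bit depth matters for grading (illustrative numbers,
# not Ray3 output): more code values mean smaller quantization steps, which is
# what keeps gradients from banding once a colorist starts pushing the image.

def code_values(bits: int) -> int:
    """Number of distinct levels an integer encoding of `bits` depth can store."""
    return 2 ** bits

for bits in (8, 10, 12, 16):
    levels = code_values(bits)
    # Smallest representable step on a normalized 0..1 signal.
    step = 1.0 / (levels - 1)
    print(f"{bits:>2}-bit: {levels:>6} levels, step = {step:.6f}")

# Pushing exposure up by 2 stops (x4 gain) stretches each 8-bit quantization
# step by the same factor, which is where visible banding tends to come from.
gain = 4.0
print(f"8-bit step after a +2 stop grade = {gain / (code_values(8) - 1):.6f}")
```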

Ray3’s leap to native HDR EXR means generated plates and sequences can finally sit side by side with live-action footage, at least on a technical level. Just as importantly, the model can convert SDR footage into HDR, broadening its utility across capture and generative workflows.
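
For readers who want to verify what a generated plate actually contains before it enters a pipeline, a minimal sketch like the one below reads an EXR header and reports its resolution and per-channel pixel types. It assumes the standard OpenEXR Python bindings are installed, and the file name is hypothetical; none of this is part of Ray3 or Firefly.

```python
# Minimal sketch: inspect an EXR plate's channels and pixel types before
# dropping it into a compositing pipeline. Assumes the classic OpenEXR
# Python bindings; "ray3_plate.0001.exr" is a hypothetical file name.
import OpenEXR

path = "ray3_plate.0001.exr"
exr = OpenEXR.InputFile(path)
header = exr.header()

# dataWindow gives the pixel bounds of the stored image.
dw = header["dataWindow"]
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1
print(f"{path}: {width}x{height}")

# Each channel records its own pixel type (HALF = 16-bit float, FLOAT = 32-bit).
for name, channel in header["channels"].items():
    print(f"  channel {name}: {channel.type}")
```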

Quality over speed

While consumer chatter often focuses on speed benchmarks, with Google’s Gemini 2.5 Flash (Nano Banana) being a notable comparison, Luma and Adobe emphasize that speed without fidelity is irrelevant to professional users. For VFX studios, the primary question is not “How many seconds per shot?” but “Can this footage survive a colorist’s grading session, a compositor’s keying, or a DI pipeline?”

Ray3 prioritizes image integrity over raw speed. The model’s multimodal reasoning system helps it understand intent, maintain character consistency, and refine outputs in-flight. This gives it a leg up where seconds saved matter far less than hours lost fixing unworkable imagery.

For all these significant improvements over earlier systems, it would be wrong to suggest that video generation AI is fully ready for production. Sample outputs tend to focus on people because that’s where the training data is richest. At fxguide, we’ve been using sailing yachts as our benchmark test for some time, precisely because they stress-test models in unique ways.

They highlight three weaknesses:

  1. The risk of confusing a sailing yacht with a motor yacht,
  2. The difficulty of maintaining temporal stability in thin rigging lines like backstays and halyards,
  3. And the challenge of rendering water spray and fluid dynamics.

Our early trials with Luma’s previous models were laughably wrong, while Ray3 now produces results that are dramatically better, though still not reliable enough for production. It is possible to get usable material, but not guaranteed. That tension is the story: remarkable leaps forward, yet persistent flaws. The more interesting question isn’t whether today’s results are perfect (they aren’t always), but how soon, at the current rate of advancement, we’ll see generative imagery that consistently satisfies temporal stability, HDR, and high-resolution fidelity.

One year ago (2024 Luma)

Similar prompt in 2025 (Luma in Firefly)

Similar prompt, but with an original photographic starting frame (Luma in Firefly)

Draft Mode: exploration at scale

Alongside HDR, Ray3 introduces a new Draft Mode, allowing artists to generate iterations up to 10x faster. While these drafts aren’t final quality, they preserve identity, motion, and composition when promoted to full-fidelity renders. This means creatives can explore dozens of narrative or visual directions quickly, then select the most promising paths for 4K HDR mastering.

In practice, Draft Mode makes AI-assisted ideation feel closer to traditional pre-viz or animatics. It’s about giving storytellers freedom to explore without penalty, then locking down ideas with production-grade results.

Integration with Adobe Firefly

Adobe is the first to integrate Ray3 into its Firefly app, making the model immediately available to a global user base. Firefly Boards now allow filmmakers and storytellers to storyboard, plan shots, and prototype environments directly with Ray3. From there, output can be synced into Premiere Pro or other Creative Cloud tools for finishing.

Adobe’s partnership positions Ray3 not as a novelty tool but as a pipeline component—with Content Credentials attached for provenance and Creative Cloud integration for finishing.

The bigger picture

Ray3’s arrival marks another critical moment in generative video’s journey from toy to tool. By delivering 16-bit HDR EXR outputs, higher native resolution with 4K upscaling, and professional controls (keyframes, looping, extendable shots), Ray3 moves generative AI closer to the needs of VFX.
