Leading visualization studio The Third Floor (TTF) is announcing its move into producing Hollywood-caliber, stylized animated content using game engine technology. The company has developed a demo short with a new pipeline that uses traditional tools such as Maya and Houdini, but with the aim of primarily compositing and rendering in the Unreal Engine.

We spoke to Josh Wassung, The Third Floor’s co-founder and chief creative officer, about the new move and technology. Although the aim is to finish in UE4/UE5, the pipeline is also set up to just as easily export layers and AOVs for a Foundry Nuke finish; it just depends on the show. The pipeline was built so the same assets and materials could produce a finished animated film, a VR experience, or an interactive promotion.

The company has for many years developed cinematic tools that pre-visualize scenes before they are filmed or rendered. TTF has now created a platform that transforms these early visual experiments and explorations into cinema-quality finished animated films and TV shows. Storytellers and directors can now use TTF tools to develop storyboard-like pre-visualizations, then refine their drafts through collaborative iteration into finished 4K Dolby Vision sequences.

The demo Skytribe short is indistinguishable from what was formerly drawn painstakingly by hand or rendered frame by frame using offline computers, yet is fully adjustable by animators, directors, and their collaborators, enabling significant savings in production costs and time.

There are many advantages to using a real-time game engine: iteration speed, interactivity and, increasingly, render quality. As Josh Wassung outlined, “over half our regular 30 pipeline projects now use Unreal Engine.” But TTF has needed to extend and build on the structures around Unreal, as the needs of a high-end game workflow differ in some important areas from those of a film or narrative pipeline.

When a shot is prepared for a film, the shot is dressed to the camera. The reality of cinema is that there are many per-shot cheats. A chair may look better lifted up in the foreground of a long lens shot. In the real world, a crew member may place an apple box under it; in 3D it might just float 6″ off the ground. Of course, in the matching wide shot, this trick is removed, and as long as there appears to be visual continuity, the audience is none the wiser. By contrast, in a game pipeline, one tends to build a world and then players move around it. Props don’t magically change position to improve one point of view. Thus TTF has had to develop shot-specific tools and metadata that capture a snapshot of the digital scene not only per shot but also per take. Directors may do multiple takes as variations and seek to come back to earlier versions at almost any time. When a take is reloaded it has to match exactly what was there before, even if assets were changed in later takes.
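The per-shot, per-take snapshot idea can be sketched in a few lines. This is a minimal illustration, not TTF's actual tooling: the `TakeStore` class and the dict-based scene state are assumptions made for the example.

```python
# Hypothetical sketch of per-shot, per-take scene snapshots.
# TakeStore and the dict-based scene state are illustrative
# assumptions, not TTF's actual metadata system.
import copy

class TakeStore:
    """Stores an exact snapshot of scene state for every take of a shot."""

    def __init__(self):
        self._takes = {}  # (shot, take) -> deep-copied scene state

    def save_take(self, shot, take, scene_state):
        # Deep copy so later edits to the live scene never leak into
        # a previously saved take.
        self._takes[(shot, take)] = copy.deepcopy(scene_state)

    def load_take(self, shot, take):
        # Return a fresh copy, so reloading a take restores exactly
        # what was there before, even if assets changed in later takes.
        return copy.deepcopy(self._takes[(shot, take)])

# A per-shot cheat: the chair floats 6" off the ground for this lens only.
scene = {"chair": {"pos": (0.0, 6.0, 0.0)}}
store = TakeStore()
store.save_take("sh010", 1, scene)

# A later take puts the chair back on the floor for the wide shot.
scene["chair"]["pos"] = (0.0, 0.0, 0.0)
store.save_take("sh010", 2, scene)

# Reloading take 1 still matches the original cheat exactly.
assert store.load_take("sh010", 1)["chair"]["pos"] == (0.0, 6.0, 0.0)
```

The key design point is the deep copy on both save and load: the stored take is immutable history, never a live reference into the current scene.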

There are many other examples. TTF also needs to offer a pipeline that is completely repeatable. Josh Wassung has experienced projects in the past where game engines subtly altered aspects of a scene to achieve real-time performance. “I had a project that was great but that boggled my mind, coming from normal 3D animation,” he recalls. “The animator put the character’s feet on a particular object and then later, during playback, sometimes the feet were there and sometimes not. That was because the engine was trying to optimize a real heavy scene… that was a couple of years ago and it was a really heavy scene, but if I put a character on a spot, it needs to not just be ‘close-ish’ on playback.”

Other aspects, such as non-real-time high-quality renders, are already widely supported in the engine for producing cut-scenes and marketing materials. TTF also built their new animation tools to render out multiple passes for later Nuke compositing (with complex passes and AOVs), or to be repurposed for adaptation into AR, VR or any of a wide range of branching media options. For the narrative animation, all the rendering is done in Unreal, even if, in some cases, some work is comped externally in Nuke.
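As a rough idea of what "multiple passes for later Nuke compositing" means in practice, each AOV (beauty, depth, cryptomatte, etc.) is written to its own frame-numbered file sequence that Nuke can read back. The pass names and path template below are assumptions for illustration, not TTF's actual naming convention.

```python
# Illustrative sketch of per-pass (AOV) output paths for a Nuke comp.
# Pass names and the path template are assumed, not TTF's convention.
def aov_paths(shot, frame, passes,
              template="renders/{shot}/{p}/{shot}_{p}.{frame:04d}.exr"):
    """Return one output path per AOV pass for a given frame."""
    return {p: template.format(shot=shot, p=p, frame=frame) for p in passes}

paths = aov_paths("sh010", 101, ["beauty", "depth", "crypto"])
assert paths["depth"] == "renders/sh010/depth/sh010_depth.0101.exr"
```

Keeping each pass in its own predictable sequence is what lets the same render feed either an in-engine finish or an external Nuke comp.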

While the new animation tools inherit much of this core TTF film-making game-tech, the company has also developed complex animation-specific tools for lookdev and especially painterly effects. A good example is the shading on the characters’ faces. It changes based on the camera and head angle relative to the light sources while incorporating painterly aspects of traditional media. “One of the things that was important to us was how to get an artistic style that was interchangeable on a face,” Josh explains. “How to get ‘painted on lighting’ to blend with dynamic lighting.” Those are some of the more artistic tools that TTF has developed using the engine and added to their already impressive visualization arsenal.
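The core of blending "painted on lighting" with dynamic lighting can be sketched as a weighted mix between an artist-painted luminance value and a computed lighting term. This is a minimal sketch assuming simple Lambert shading; the function, parameters, and blend weight are illustrative, not TTF's shader.

```python
# Minimal sketch: mix artist-painted luminance with a dynamic Lambert
# term. Names and the single blend weight are assumptions for
# illustration, not TTF's actual shading model.
def shade(painted, normal, light_dir, blend):
    """Blend painted-on lighting with dynamic lighting.

    painted   -- luminance painted into a face texture (0..1)
    normal    -- surface normal (unit 3-vector)
    light_dir -- direction toward the light (unit 3-vector)
    blend     -- 0.0 = fully painted, 1.0 = fully dynamic
    """
    # Lambert term: clamped dot product of normal and light direction.
    lambert = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return (1.0 - blend) * painted + blend * lambert

# Surface facing the light: fully dynamic gives full Lambert...
assert shade(0.3, (0, 0, 1), (0, 0, 1), 1.0) == 1.0
# ...while fully painted keeps the artist's value untouched.
assert shade(0.3, (0, 0, 1), (0, 0, 1), 0.0) == 0.3
```

In a real engine this blend would live in a material graph and the weight itself could vary with head angle, but the underlying idea is the same interpolation.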

The TTF team is quick to credit films such as Into the Spider-Verse as pioneers in producing stylistic animation that blends live-action elements with a very hand-drawn world. Technically, to do this the team had to make sure they could build tools such as their live link, which allows their Maya work to move seamlessly back and forth into UE4. The team is flexible, but they tend to produce animation in Maya and import the animation cache into UE4. They have also developed strong round-tripping tools that let camera blocking or editing done in the engine feed back out automatically. “We have something called a scene description,” he explains. “We wrote our own code that allows the system to sync up and to publish assets that are in the engine. Every time you render, it creates a perfectly synced version in Maya.” Similar tools also accommodate Houdini effects, and TTF equally supports hand-drawn elements being imported into the engine for very visceral and tactile effects.
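A "scene description" of the kind Wassung describes can be thought of as a serialized manifest of what is currently in the engine, published so a matching Maya scene can be rebuilt on every render. The field names and structure below are assumptions based on the article's description, not TTF's actual format.

```python
# Hypothetical sketch of a "scene description" manifest used to keep
# the engine and Maya in sync. Field names are assumptions, not TTF's
# actual schema.
import json

def publish_scene_description(assets, camera, take):
    """Serialize the current engine state so a perfectly synced Maya
    scene can be rebuilt at render time."""
    return json.dumps({
        "take": take,
        "camera": camera,           # blocking/edits done in-engine
        "assets": sorted(assets),   # published asset identifiers
    }, indent=2)

desc = publish_scene_description(
    assets=["env_sky_v003", "chr_hero_v012"],
    camera={"focal": 35.0, "pos": [0.0, 1.6, -5.0]},
    take=2,
)
assert json.loads(desc)["take"] == 2
```

Publishing a plain-text manifest like this on every render is one simple way the two applications could agree on exactly which assets, camera, and take a frame came from.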

This new final-quality output pipeline both informs TTF’s work in virtual production and builds on technology the company developed for The Mandalorian. The company is already very active with Virtual Art Departments (VAD), allowing assets to be varied and changed throughout the process. After all, the company was founded on the notion that it was cheaper and easier to do 200 versions in pre-viz than 2 versions in post-production. The culture of TTF is to encourage change and experimentation. However, the reality on many animated feature films is the rule of three revisions. “When we talked to clients and animation directors, their number one limitation – and common complaint – was the three rounds of notes only,” says Josh. “We really lean into ‘visualization as change’ and we really want to take that philosophy to animation. We don’t want to limit our clients to three revisions. We wanna say, ‘Hey, here’s a period of time. We can change anything. We want to change as much as you want.’ And hopefully, get to a high-quality stage as early as possible.”
