3Lateral

One of the amazing technology demos at GDC was Gene Splicer, presented by Vladimir Mastilovic of 3Lateral.

This technology demo was initially pitched as middleware for creating large volumes of human characters in game scenes, but the technology has longer-term implications that will be even more significant.

The demo showed a human face, based on game assets, that could be mixed in real time. The actual rendered face quality or lighting is not the important issue here; it is the notion of separating the core FACS poses and rig from a file of facial characteristics.

For any face, a special DNA file of approximately 60,000 floating point constants is created; this file defines each face. From this DNA file the base rig can become anyone. If you capture a real person's face talking and it becomes part of the system, you can start mixing in anyone else's digital DNA in real time.
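To make that idea concrete, here is a minimal sketch in Python. The names, file layout and sizes are assumptions for illustration only; 3Lateral's actual DNA format is not public. It simply treats each face as a flat vector of floating point constants and "splices" two identities by interpolating between them:

```python
import numpy as np

# Assumed size: the article cites roughly 60,000 floating point constants
# per face; the real layout of a 3Lateral DNA file is not public.
DNA_SIZE = 60_000

def load_dna(path: str) -> np.ndarray:
    """Load a face 'DNA' file as a flat vector of float32 values (illustrative only)."""
    dna = np.fromfile(path, dtype=np.float32)
    assert dna.size == DNA_SIZE, "unexpected DNA length"
    return dna

def splice(dna_a: np.ndarray, dna_b: np.ndarray, weight: float) -> np.ndarray:
    """Mix two identities by interpolating their parameter vectors.

    weight=0.0 gives face A, weight=1.0 gives face B; anything in between is
    a blended identity that the shared base rig can drive directly.
    """
    return (1.0 - weight) * dna_a + weight * dna_b

# Hypothetical usage: a 30/70 mix of two captured faces on the common rig.
# face_a = load_dna("actor_a.dna")
# face_b = load_dna("actor_b.dna")
# hybrid = splice(face_a, face_b, weight=0.7)
```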

For the last ten years, the company has focused on high end digital humans. 3Lateral are premier face riggers; their custom work is of the absolute highest quality, but it is bespoke and takes time. Now this same artistry of rigs and animation can be applied to secondary characters without the budget required for their premium character rig work.

Immediately, it is a great way to have parametric animated faces: any crowd in medium closeup could have a wide variation of high quality animation, mixing and matching from a core group of high quality scans. However, it does not stop there. The longer term opportunities are fascinating. With all the faces in an animation system built off a complex but common database, one can start using deep learning to predictively animate faces. If, say, you only get the neutral pose of an actor on set, the system could evaluate their face into the 3Lateral DNA and then accurately and plausibly show how that 'sort of face' would smile or frown. Up until now, face rigs and animation were very specific to one project and one person; companies relied on a large set of complex FACS poses to get good animation for any one actor. With this system, animators would get:

  1. A 'blendshape'-like rig to modify the face, blending in, say, more Asian ears or an Anglo chin, as a modeling tool
  2. A way to use new large deep learning programs to provide trained lip sync dialogue and expressions for characters.

The system will no doubt first be used on secondary characters. This is much the same way that Massive crowds were only seen in wide shots when that program first appeared. But just as Massive agents can now be very close to camera, in time the digital gene-spliced characters will come closer to lead roles. 3Lateral already provide specific solutions that are the gold standard for face rigs on main characters. Their work has been used in many AAA games, including the HellBlade demo at last year's GDC.

Internally, the rigs shown at GDC were based on faces from one of their game partners, Cloud Imperium Games, who provided NPC faces from their project Star Citizen. As a result, the GDC demo rig worked on face models that use joints; the next version will use blendshapes. The faces have all been densely tracked with pore-level precision and then parameterized. The final look of the faces is only a UI preview.

The first tech demo at GDC showed only 16 faces in the database of their "shopping for face parts" approach. All 16 characters, male and female, have their skulls aligned. The available database will explode as 3Lateral move the project from a proof-of-concept tool to a real product. The user interface will also change and become much more complex.

The whole system is built on 3Lateral's Rig Logic program, which for the last few years has been developed as the core technology for their high end rigs. Rig Logic has been refactored and split into the Logic, a set of universal contextual rules, and the DNA, the set of floating point constants which uniquely identify a face. Each person's DNA file is only 400KB, which Mastilovic describes as an "elegant solution". As 3Lateral work regularly with game companies, they know how to keep data small and clever. "Working for open world games, we know how to optimize," commented Mastilovic. This split alone would allow a new face to be sent to, say, a mobile device using a tiny amount of data, yet still be just as powerful as a vastly bigger normal face rig and FACS set.
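As a rough sketch of why this split matters, here is a Python illustration. Everything in it is an assumption except the 400KB figure from the article: the identity-independent "logic" is shipped once, while each new face is just a small block of per-person constants.

```python
import numpy as np

class SharedRigLogic:
    """Stand-in for the universal, identity-independent rule set.

    Here the 'logic' is just a fixed linear map from animation controls to
    rig outputs; the real Rig Logic rules are far richer and are not public.
    """

    def __init__(self, num_controls: int, num_outputs: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.rules = rng.standard_normal((num_outputs, num_controls)).astype(np.float32)

    def evaluate(self, controls: np.ndarray, dna: np.ndarray) -> np.ndarray:
        # The tiny per-face DNA block scales and offsets the shared rules,
        # turning generic rig behaviour into a specific person's face.
        n = self.rules.shape[0]
        scale, offset = dna[:n], dna[n:2 * n]
        return scale * (self.rules @ controls) + offset

# A 400KB DNA file at 4 bytes per float32 is only about 100,000 constants,
# trivial to send to a mobile device compared with a full bespoke rig.
print(400 * 1024 // 4)  # 102400
```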

The system is not constrained to humans. While not shown at GDC, the company has been testing with an animal facial rig database and has even gene-spliced a facial rig between a person and a dog for a werewolf appearance.

The whole system runs in UE4. Because the system just produces new skin weights and accesses existing data, the skin weights are updated in milliseconds, with the data then streamed to the new asset immediately, along with dynamic per-instance reference poses. The demo also highlighted the audio-driven animation curves in UE4, which enable audio sync over long clips, something Mastilovic has been working with extensively and finds "brilliant".
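For a sense of why the update is so fast, here is an illustrative Python sketch. The structures and sizes below are assumptions, not UE4 or 3Lateral data types; the point is that applying a new identity is essentially a copy of precomputed skin weights and a per-instance reference pose rather than a re-solve.

```python
from dataclasses import dataclass

import numpy as np

# Illustrative sizes only; not the actual mesh or joint counts.
NUM_VERTS = 25_000          # shared face mesh topology
INFLUENCES_PER_VERT = 8     # joints influencing each vertex
NUM_JOINTS = 250

@dataclass
class FaceInstance:
    skin_weights: np.ndarray    # (NUM_VERTS, INFLUENCES_PER_VERT)
    reference_pose: np.ndarray  # (NUM_JOINTS, 4, 4) per-instance bind pose

def apply_identity(instance: FaceInstance,
                   db_weights: np.ndarray,
                   db_pose: np.ndarray) -> None:
    """Swap in precomputed weights and a reference pose for this instance.

    Because this is a copy of existing data rather than a rig solve, it can
    complete in milliseconds and be streamed straight to the asset.
    """
    np.copyto(instance.skin_weights, db_weights)
    np.copyto(instance.reference_pose, db_pose)
```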

 

ILM ADG

John Knoll, CCO at ILM and Rogue One visual effects supervisor, joined Epic on stage with Industrial Light & Magic's Advanced Development Group (ADG) Principal Engineer and Architect Naty Hoffman. "Obviously we want these renders to be as fast as they can be, but if they don't look good enough for John, they don't go in the movie," Hoffman said. "Quality was the hard stop."

A shot rendered in UE4 and used in the final film.

Knoll spoke at length about how Epic's UE4 engine allowed the ILM team to render the new sarcastic droid K-2SO in real time in Rogue One. The group has its own render pipeline built on UE4, the ADG GPU real-time renderer. Knoll showed a mixed reel of K-2SO shots, some rendered in RenderMan and others rendered in the ADG renderer. In the final film there were three K-2SO shots that were rendered using the ADG renderer.

ADG develops tools and techniques for high-quality computer graphics and simulation. Their goal for the last 3 to 4 years has been to expand and enhance the creative process of storytelling in the Star Wars universe through real-time rendering of visuals that approach motion picture quality. The team focuses on physically based rendering solutions. Working from the Letterman Digital Arts Center, they service both the ILM xLAB team and ILM's productions.

John Knoll, Naty Hoffman and Kim Libreri at GDC

The team achieved final renders on screen while bypassing the traditional ILM pipeline process. This achievement was made possible by UE4 and the freedom ILM had to modify its source code.

There is an important distinction to be made between pre-visualization and this execution: Naty's team met John's incredibly high standards of quality and delivered real-time rendered imagery, created in UE4, that was deemed worthy of inclusion in the final film.

