DI4D recreate in their own Image

At GDC this year, DI4D showed a demo of a digital version of Colin Urquhart, founder and CEO of Dimensional Imaging (DI4D).

The facial performance was captured with the DI4D HMC and applied to a Snappers “digital double” rig. Colin’s head was created from scans by Protagon and rendered in real time in Unreal Engine 4 (UE4).

Above is the video of the real-time rendering of Colin, and below is a tech demo breakdown video showing each of the stages. It is worth noting that Colin’s head is very static, as his face is being driven by the HMC cameras, which are attached to a head rig sitting tightly on his head. Any head or neck movement would require an additional MoCap suit; this particular demo, however, is just about facial performance reproduction.

The Colin pipeline above used DI4D’s own new HMC, which fxguide covered in a previous story. The new demo highlights not only the HMC capture unit but also the DI4D processing pipeline. The DI4D HMC provides high-fidelity facial performance data and is used in conjunction with the DI4D post-processing pipeline. It uses a stereo pair of 4MP cameras and synchronised on-board strobed LED illumination to ensure camera focus and even illumination across the face, while minimising the apparent brightness to the subject. The process is markerless, requiring no special makeup or structured light.
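To illustrate the general principle behind a stereo head-mounted rig (this is not DI4D’s code), a calibrated camera pair lets corresponding pixels in the left and right images be triangulated into 3D points for every frame. A minimal sketch using OpenCV, assuming the projection matrices come from the rig’s calibration and that pixel correspondences have already been found:

```python
# Minimal sketch of stereo triangulation -- an illustration only, not DI4D's
# actual reconstruction pipeline.
import numpy as np
import cv2

def triangulate_frame(P_left, P_right, pts_left, pts_right):
    """P_left/P_right: 3x4 camera projection matrices from calibration.
    pts_left/pts_right: (N, 2) arrays of corresponding pixel coordinates."""
    # OpenCV expects 2xN point arrays; returns 4xN homogeneous coordinates.
    homog = cv2.triangulatePoints(P_left, P_right,
                                  pts_left.T.astype(np.float64),
                                  pts_right.T.astype(np.float64))
    # Convert homogeneous (4xN) to Euclidean (N, 3) 3D points.
    return (homog[:3] / homog[3]).T
```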

The DI4D team was also involved in the Digital Doug project at Digital Domain, which will be shown at SIGGRAPH 2019 next month in LA. DI4D also provided facial performance MoCap for four episodes of Netflix’s Love, Death & Robots, working with Axis Animation and Blur Studio on those episodes.

Above is the breakdown of the DI4D GDC 2019 tech demo showing, from left to right: original DI4D HMC footage, reconstructed per-frame 3D “scan” surface, tracked mesh point-cache, point-cache solved to Snappers rig controls, and final render in Unreal UE4.

DI4D’s process of scanning and reconstructing faces is different from, say, a Light Stage scan. DI4D does not capture separate specular and diffuse components, because it is not a polarised-lighting process. The Light Stage approach can also scan to higher resolutions and sample micro geometry. DI4D is, however, getting a geometric reconstruction of a face with temporal cohesion. As their demo illustrates, the DI4D process is designed to capture a performance rather than just sample an actor’s appearance. DI4D is significant because it incorporates a temporal aspect, the 4th dimension.

DI4D is a service provider and as such they offer a range of options and ways of working with a facility’s own pipeline. Colin Urquhart explains, “A typical pipeline for us starts with a high res neutral stance of the actor. This gets retopologized with the client’s own topology. We will then motion capture their performance or capture ROMs (range of motion) and/or FACS poses – whatever the client needs.” DI4D then finds the expression in their 4D data set that matches the neutral scan, which “allows us to essentially copy the topology from the neutral scan onto our neutral expressions from our 4D data set. Then we can track that mesh through the data set that we’ve captured and we can apply it back onto a full head mesh.” This 4D data set allows cohesion through time, which produces a much higher fidelity result than just moving from FACS keyframe to keyframe.
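One concrete step in that description is finding the frame in the 4D data set that matches the neutral expression. As a toy illustration only (not DI4D’s tooling), and assuming the per-frame reconstructions have already been resampled to the same vertex count and ordering as the neutral mesh, that match could be found by minimum RMS vertex distance:

```python
# Toy sketch: pick the 4D frame closest to a neutral mesh by RMS vertex
# distance. Assumes a shared vertex layout across frames, which raw per-frame
# scans would not have -- this is an illustration of the idea, nothing more.
import numpy as np

def find_neutral_frame(frames, neutral):
    """frames: (F, V, 3) array of per-frame vertices; neutral: (V, 3) array."""
    rms = np.sqrt(((frames - neutral) ** 2).sum(axis=2).mean(axis=1))  # (F,)
    return int(rms.argmin())  # index of the best-matching frame
```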

The key to DI4D’s cohesion is what is essentially pixel-level optical flow. “We know where every pixel flows from frame to frame,” comments Urquhart. DI4D can output Maya point cloud caches, and the company is looking at possibly supporting Alembic files. “The traditional VFX houses we mainly work with take Maya caches,” he adds. DI4D traditionally does no rigging work, so they joined forces with Snappers for this demo.
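The idea of dense, per-pixel flow can be shown with a generic off-the-shelf method; the sketch below uses OpenCV’s Farneback algorithm purely as a stand-in for DI4D’s proprietary solver, to show what a frame-to-frame flow field looks like:

```python
# Generic dense optical flow between two consecutive greyscale frames.
# Illustrative only -- not DI4D's solver.
import cv2

def dense_flow(prev_gray, next_gray):
    """Returns an HxWx2 array: per-pixel (dx, dy) displacement between frames."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=21,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)
```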

Snappers is a high-end animation studio that provides character-based services for video games and feature films. Snappers is best known for their Snappers Facial Rig and their work on a real-time, markerless facial MoCap system. They also provide character art and look development services. For this project, they provided the rig that controls the final digital face.

Protagon provides 3D scanning and visual effects consulting services for all motion picture media, from feature films to commercials, music videos, reality and AR/VR. They work internationally, taking projects from concept through shooting & capture, editorial, visual effects and finishing. For this project, co-founder Brian Adler and his team produced the digital scans of Colin Urquhart and integrated their work with Snappers and DI4D.

Brian Adler has been an award-winning writer, producer, director and technologist for nearly 20 years. Previously he was at Gentle Giant, where Adler oversaw global production and technology development for feature films, TV and video games, including scanning, digital modelling, photography and 3D printing for visual effects, production design and costume departments. Recent titles include Furious 7, Guardians of the Galaxy, Batman vs. Superman, The Hunger Games series, the X-Men series, The Flash and Supergirl.