Epic face work with Ninja Theory

Last week at GDC 2016, Epic Games and game developer Ninja Theory showed a real-time puppeteering system for a human character, driven live from the stage at near-final game-level visuals. Melina Juergens, Ninja Theory's video editor, became the star first of the game back at the studio and then at GDC, featuring as the face of Senua in this stunning live performance. The demo had many hurdles to overcome: the subtle facial expressions, the quality of the skin rendering and, of course, doing all of this at a trade show with several thousand cell phones and their interference possibilities. The result, however, seemed effortless. We caught up with the team at GDC after the demo to discuss this remarkable advance in interactive digital human performance.

The demo is both a proof of concept and a very real production advance. Senua is a full game asset, and this new approach of live high quality capture will now become the standard for Ninja Theory, according to Tameem Antoniades, the company's Chief Creative Officer.

The whole project as a live performance piece came together in just seven weeks. Epic's CTO Kim Libreri was working on an idea for this year's GDC. The team at Epic was already working on demo material for GDC based around the Unreal Engine game Paragon (Paragon will be released for PC and PS4 in 2016). Paragon is a Multiplayer Online Battle Arena in which two teams of five players are pitted against each other in a contest of strategy and action. The facial work of the 'gunslinger' human character Twinblast seemed like a good option, as the Epic team had special new hair tools, a new SSS solution and an advanced eye editor and shader that let them produce results that were both realistic and highly customizable. Furthermore, the character had been calibrated from a full Lightstage scan at Otoy for Paragon.


Unfortunately, while the asset renders very well in Unreal, the team did not have a complex facial rig suitable for this sort of detailed FACS-based real-time application. Libreri happened to be in London at the time visiting customers when he met with Ninja Theory and realized that the company had just done a very detailed and highly accurate facial rig for Senua with Vladimir Mastilovic and the 3Lateral team. Epic and Libreri had worked with 3Lateral on last year's highly successful Kite GDC demo.

Beyond just 3Lateral, Libreri was pleased to discover that the team at Ninja Theory was also working with Cubic Motion, who would become a key company in making this year's live demo work. The Manchester company's computer vision technology can track more than 200 facial features at over 90 frames per second and automatically map this data onto extremely high-quality digital characters in real time. (At GDC the whole system ran at 30fps for the demos.) Their contribution was to manage the live capture and apply advanced analysis of the spatiotemporal data to drive the 3Lateral rig of the Ninja Theory character.
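In outline, the solving stage maps tracked 2D feature positions to the animation controls of the facial rig, frame by frame. Below is a deliberately simplified, hypothetical C++ sketch of that mapping idea – a linear solve from feature offsets to normalized rig control values. Cubic Motion's actual solver is proprietary and far more sophisticated; every name here and the linear model itself are our own assumptions, used only to illustrate the data flow.

```cpp
#include <vector>
#include <algorithm>

// A tracked 2D facial feature position for one video frame.
struct Feature { float x, y; };

// Hypothetical rig control (e.g. a FACS-style blend shape) driven by a
// weighted sum of feature offsets from a neutral pose.
struct RigControl {
    std::vector<float> weights;  // one (x, y) weight pair per feature, flattened
    float value = 0.0f;          // solved control value, clamped to [0, 1]
};

void solveFrame(const std::vector<Feature>& tracked,
                const std::vector<Feature>& neutral,
                std::vector<RigControl>& controls) {
    for (auto& c : controls) {
        float v = 0.0f;
        for (size_t i = 0; i < tracked.size(); ++i) {
            v += c.weights[2 * i]     * (tracked[i].x - neutral[i].x);
            v += c.weights[2 * i + 1] * (tracked[i].y - neutral[i].y);
        }
        c.value = std::clamp(v, 0.0f, 1.0f);  // rig expects normalized values
    }
}
```

At the quoted tracking rate, a solve like this would have well under 11 milliseconds per frame to complete, which is one reason the per-frame mapping stage has to stay lightweight.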

Thus it was seven weeks out from GDC when, sitting in the board room of Ninja Theory in the UK, Kim Libreri thought to himself: "This was like a bunch of the work already done, and it looks really good, and they are a super cool team – and it will be great to spotlight a customer rather than just do something in-house."

Ninja Theory's Tameem Antoniades jokes that it took his team only about 10 minutes to think about it and commit to producing a breakthrough new demo for the world to see in less than two months. And so the Epic team 'flipped the switch' and moved from a Paragon live demo to a Hellblade GDC demo. Interestingly, one of the big factors that helped the team was that Melina Juergens was always available (when she was not working at her day job as Ninja Theory's video editor). Having her on hand for scanning, reference, lighting tests and performance proved invaluable given the schedule. For example, for her 3Lateral scan she did over a hundred FACS expression poses, with and without makeup, as her character wears distinctive cracked face make-up in the final sequence.

"We started from their asset from a texture, model and rig perspective, and we also took the work we had done – in particular Twinblast and Paragon's Sparrow – and ported it over – and made it better!" explained Libreri. The modelling process for Senua was different from that used for Epic's own Twinblast. While Ninja Theory had done an earlier model and texturing of Melina Juergens' face, for this latest work they started over pretty much from scratch using 3Lateral's own proprietary scanning system.

But there was work that could be transferred over from the tech developed for Paragon, especially with regard to the eyes. While Senua did not need any of the special new Epic hair shaders, the new eye work and skin shaders were very relevant. The Epic team was led in this area by Senior Graphics Programmer Brian Karis, working with, amongst others, Haarm-Pieter Duiker (formerly of ESC and ICT, and known for his ACEScg work with the Academy). This allowed the 3Lateral Senua to use the same eye tech, customized and edited to match Melina Juergens' own eyes.

The team also deployed a new SSS model, which again added to the striking realism of the performance. "The skin was mostly not changed from the Paragon characters – we did some work on more accurate pore detail and specular roughness across the face – but most of the tech we focused on was in the eye area. You can see a lot of the emotion in this piece was coming through the character's eyes," explains Karis.

The Paragon eyes were significantly improved for Senua, in particular the eye shading, the eye materials and the region around the eyes – "the tear duct, the area around the eyes, everything you need to do to tie the eyes in to the rest of the face," adds Duiker. While the eyes were not based on detailed eye scans of Juergens, the team did do a detailed reference shoot, and Senua's eyes were made to match back to Juergens' as faithfully as possible. The work was finely detailed, including, says Karis, "better occlusion around the eye, better integration of the eyeball with the flesh around it (making that a smoother transition), making a wet tear line along the bottom lid of the eye, a smoother transition between the tear duct and the eyeball, and softening the transition in both the top and bottom lids (there is a sort of 'blurring' that happens to soften the transition line)."


Naturally, all of the base technology from the Epic Paragon models had to be transferred and added to the 3Lateral rig. While Senua represents some major advances that will now feed back into Epic's other work, some parts of the live demo leaned directly on the work already done for Paragon, including the skin shaders and new SSS the team had developed. The model is still fundamentally a screen-space sub-surface diffusion, but with some optimizations done for the PS4 that change the way the team separates the specular shading from the diffuse shading, for more isolated control. There is also a micro-detail map that goes over the skin, in addition to the high-resolution scan data, and this dovetails with 3Lateral's very complex facial wrinkle maps to produce the correct fine skin detail.
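As a rough illustration of how such detail layers can combine, here is a minimal sketch of blending a base scan normal with an expression-driven wrinkle normal and a tiling micro-detail normal, using a simple UDN-style blend. This is a generic technique and our own assumption of the structure – not Epic's or 3Lateral's actual shader code, which runs on the GPU.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Hypothetical per-pixel detail blend in tangent space: the wrinkle normal
// is faded in by an expression-driven mask (e.g. from the facial rig), and
// a tiling micro-detail normal is layered on top at reduced strength.
Vec3 blendSkinNormals(Vec3 baseN, Vec3 wrinkleN, Vec3 microN,
                      float wrinkleMask, float microStrength) {
    // UDN-style blend: perturb only x/y, keep the base z, then renormalize.
    Vec3 n = {
        baseN.x + wrinkleN.x * wrinkleMask + microN.x * microStrength,
        baseN.y + wrinkleN.y * wrinkleMask + microN.y * microStrength,
        baseN.z
    };
    return normalize(n);
}
```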

One level above the skin shading sits a new facial-expression blend shape system. The normals are handled much better now: "It used to compute the normals by just blending the normals for each one of the shapes to get the final result, but now the mesh is deformed to the new pose and then the normals are recalculated – at that pose – and it is a much better result," explains Karis.

"It means you can now get a movie-style rig in an Unreal character," adds Libreri, "looking nice and defined and creased. You get the clarity and sharpness of a visual effects render now for in-engine blend shapes. For example, they now match what an artist would see in a Maya viewport during development."
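A simplified sketch of the difference Karis describes: rather than blending pre-baked per-shape normals, the mesh is first deformed to the final blend-shape pose and the vertex normals are then recomputed from the deformed triangles. This is an illustrative reconstruction of the idea only, not Epic's engine code.

```cpp
#include <vector>
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0 ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
}

// 1) Deform: base vertices plus weighted blend-shape deltas.
std::vector<Vec3> deform(const std::vector<Vec3>& base,
                         const std::vector<std::vector<Vec3>>& shapeDeltas,
                         const std::vector<float>& weights) {
    std::vector<Vec3> out = base;
    for (size_t s = 0; s < shapeDeltas.size(); ++s)
        for (size_t v = 0; v < out.size(); ++v) {
            out[v].x += weights[s] * shapeDeltas[s][v].x;
            out[v].y += weights[s] * shapeDeltas[s][v].y;
            out[v].z += weights[s] * shapeDeltas[s][v].z;
        }
    return out;
}

// 2) Recompute smooth vertex normals from the *deformed* pose by
//    accumulating face normals -- rather than blending per-shape normals.
std::vector<Vec3> recomputeNormals(const std::vector<Vec3>& verts,
                                   const std::vector<unsigned>& tris) {
    std::vector<Vec3> normals(verts.size());
    for (size_t i = 0; i + 2 < tris.size(); i += 3) {
        unsigned a = tris[i], b = tris[i + 1], c = tris[i + 2];
        Vec3 fn = cross(sub(verts[b], verts[a]), sub(verts[c], verts[a]));
        auto acc = [&](unsigned idx) {
            normals[idx].x += fn.x; normals[idx].y += fn.y; normals[idx].z += fn.z;
        };
        acc(a); acc(b); acc(c);
    }
    for (auto& n : normals) n = normalize(n);
    return normals;
}
```

Recomputing at the posed mesh means creases and folds pick up correct shading as the expression changes, which is what gives the "defined and creased" look Libreri mentions.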

On the day of the demo, Juergens' head rig was provided by LA-based Technoprops. A stereo rig was used for the actual offline (non-demo) game capture, but for the GDC show the team went with the mono rig. This mono rig used a Point Grey RGB computer vision camera (1288 x 964) with bright lights also mounted on the rig. Given the 'live' nature of the performance, the team helped themselves by adding some facial tracking dots and colouring Juergens' top and bottom lips different colours. While this may not have been essential, it understandably helped reduce the risk of an incomplete solve on the day.

Given that the performance was not only live but also being broadcast, recorded and mixed (picture in picture), there was a 275 millisecond lag between the performer and what Antoniades, on stage, saw on the Barco projector. This was matched with audio delays to maintain lip sync – critical to delivering a believable performance.
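Conceptually, holding lip sync under a fixed pipeline delay just means delaying the audio by the measured video latency. A minimal sketch of such a delay line, assuming the 275 ms figure quoted above and a simple ring buffer (the class and its interface are hypothetical):

```cpp
#include <vector>
#include <cstddef>
#include <algorithm>

// Hypothetical audio delay line: buffer samples for the measured video
// pipeline latency so the heard voice lines up with the rendered face.
class DelayLine {
public:
    DelayLine(double delaySeconds, double sampleRate)
        : buffer_(std::max<size_t>(1,
              static_cast<size_t>(delaySeconds * sampleRate)), 0.0f) {}

    float process(float in) {
        float out = buffer_[pos_];           // sample from delaySeconds ago
        buffer_[pos_] = in;                  // store the incoming sample
        pos_ = (pos_ + 1) % buffer_.size();  // advance the ring buffer
        return out;
    }
private:
    std::vector<float> buffer_;
    size_t pos_ = 0;
};

// e.g. DelayLine delay(0.275, 48000.0);  // 275 ms at 48 kHz = 13,200 samples
```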

Once the face was solved in pre-production, the team extended out to include Ikinema and Xsens, so that not only was Juergens' face solved and rendered in real time but her full body as well, adding to the complexity of the mocap. "Stabilizing the Xsens, integrating it with Ikinema – and making it all work – those guys did a fabulous job. They came a bit late to the game – and especially Ikinema, their live plugin is already in Unreal Engine, which was great," commented Libreri.


The lighting of the scene was also complex: there were nine shadow casters in the scene, and the character moves in a foggy, misty environment. What was the team's reference for the demo? "Well, it just happened that The Revenant had just been nominated for an Oscar, and we were looking for reference – and we thought what movie has a look we could go for, something natural. And there is a scene in The Revenant that we saw online and we thought – yeah, let's do that!" recalls Libreri.

Ninja Theory also crafted an extension to UE4 that does volumetric lighting – shafts of light with levels of diffusion and atmosphere. "It is a participating media, volumetric lighting implementation," explains Antoniades. The team placed a series of lights where the fire is (as the fire itself is not emissive), and these lit up the volume and produced the cinematic shafts of light seen live in the demo.
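Participating-media light shafts of this kind are commonly produced by ray marching through the fog volume and accumulating in-scattered light at each step. The sketch below is a generic, heavily simplified illustration of that idea – homogeneous fog and a single point light standing in for the fire – and not Ninja Theory's actual UE4 extension.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// March from the camera along a view ray, accumulating light scattered
// toward the camera by homogeneous fog, attenuated by extinction.
float inscatter(Vec3 rayOrigin, Vec3 rayDir, Vec3 lightPos,
                float maxDist, float sigmaS, float sigmaT, int steps) {
    float dt = maxDist / steps;
    float transmittance = 1.0f, result = 0.0f;
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) * dt;  // sample at the middle of each segment
        Vec3 p = { rayOrigin.x + rayDir.x * t,
                   rayOrigin.y + rayDir.y * t,
                   rayOrigin.z + rayDir.z * t };
        float dx = lightPos.x - p.x, dy = lightPos.y - p.y, dz = lightPos.z - p.z;
        float distSq = dx * dx + dy * dy + dz * dz;
        float lightEnergy = 1.0f / (1.0f + distSq);          // inverse-square falloff
        result += transmittance * sigmaS * lightEnergy * dt; // in-scattered light
        transmittance *= std::exp(-sigmaT * dt);             // fog extinction
    }
    return result;  // scale by light colour/intensity; shadowing omitted
}
```

In a production implementation, each step would also sample the light's shadow map; it is that per-step occlusion which carves the glow into visible shafts.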

At DICE Europe in October 2014, Antoniades commented that the Hellblade project was "an effort to make game development a fun, exciting and open process – it is not a drip-feed, PR-controlled strategy". As such, the company has been publishing a great series of production diaries, some of which are included below, covering the facial design and the mocap set-up back at their offices.

He believes that we as an industry are moving into a digital self-publishing era, but Ninja Theory does not want to compromise on quality. This leads to a model of collaboration, but with a core team of only sixteen people at the company making the actual game – although Epic's Kim Libreri interjected that "they are being humble – this is not an ordinary 16 person team – each one of them is awesome!"

The previous (non-Epic) teaser version of Senua.

Antoniades' company has aimed to work in what he calls the 'independent AAA' space, which sits between the big AAA franchises and 'indie' games. As such, the company has looked to collaborate with others to bring breakout visuals to its games without the mega budgets of the AAA franchises, and it was this very spirit that led to working with Epic Games, Cubic Motion, 3Lateral, Xsens and Ikinema.

The face scanning of Melina at 3Lateral.

A look behind the regular mocap set-up for Hellblade.