ENDGAME: The Remarkable Faces of Avengers: THANOS (Part 1)

FACING THE ENDGAME: THANOS

Avengers: Endgame is a remarkable visual effects collaboration of artists and companies around the world, led by production VFX Supervisor Dan DeLeeuw. Within the vast amount of VFX work done for the film, one finds three of the world’s leading visual effects houses: ILM, Weta Digital and Digital Domain 3.0, all demonstrating their unique and sometimes quite different approaches to facial character animation.

In this article, the first in a two-part special series, we discuss THANOS with Weta Digital and then Digital Domain. In part 2 we discuss the new face pipeline at ILM and how it evolved while animating the HULK.

SPOILERS WARNING

Josh Brolin returns as Thanos in Endgame, having been on screen for 40 minutes of Avengers: Infinity War’s 2½-hour running time. This time Brolin plays Thanos 4 years younger, as well as ‘farmer’ Thanos, and a Thanos determined to defeat the Avengers in vast fight sequences.

WETA DIGITAL: Death of Thanos

As we commented in our coverage of Avengers: Infinity War, one of the most remarkable aspects of the Avengers films is how much focus, screen time and empathy there is for the film’s antagonist, Thanos.

Digital Domain and Weta Digital provided the complex and subtle performances of Thanos in Endgame. Digital Domain maintained much of the same pipeline as it had on Avengers: Infinity War, but with more machine learning. Weta Digital, in comparison, was making Alita: Battle Angel at the same time as Avengers (see our fxg story on Alita here), and was also planning for the multiple upcoming Avatar films. All of this meant that the Weta team was investing enormous resources in its already impressive face pipeline. As Weta’s Endgame Supervisor Matt Aitken commented to fxguide during the making of Avengers: Endgame, “this was a really great time to be doing hero facial performance work at Weta”.

For Endgame, Aitken said the Weta team “fixed a couple of things that we were not quite happy with but didn’t have a chance to fix first time around, and we also took advantage of some new developments that were happening in our facial animation pipeline”.


 

The new pipeline

Weta’s pipeline is still based on FACS, unlike Digital Domain, which has been moving away from FACS in its facial pipeline. “We are still working with an incremental development from the original FACS pipeline that we’ve put in place for Gollum and have continued to refine with Kong, Caesar and so on. It still works for us to do things that way. We have solvers that we use to apply the track data procedurally to the puppet, and the character’s rig,” explains Aitken. Weta has several solvers, one of which is based on machine learning, which the company has used on several films. These solvers go straight from the sparse data set of tracked facial dots to the first stage of the digital puppet.
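Weta does not disclose the internals of these solvers, but the underlying idea, a mapping learned from tracked marker positions to puppet control values, can be sketched roughly as below. Everything here (the ridge-regression model, the array shapes, the class and method names) is a hypothetical stand-in for illustration, not Weta’s actual implementation.

```python
import numpy as np

class SparseMarkerSolver:
    """Toy learned solver: flattened marker positions -> rig control values."""

    def __init__(self, ridge=1e-3):
        self.ridge = ridge
        self.weights = None

    def train(self, marker_frames, control_frames):
        # marker_frames:  (frames, n_markers * 3) tracked dot positions from a ROM session
        # control_frames: (frames, n_controls) hand-matched puppet control values
        X = np.hstack([marker_frames, np.ones((len(marker_frames), 1))])  # add a bias column
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.weights = np.linalg.solve(A, X.T @ control_frames)

    def solve(self, markers):
        # markers: (n_markers * 3,) dots tracked from the head-mounted camera for one frame
        return np.append(markers, 1.0) @ self.weights

# Synthetic data, purely to show the call pattern.
rom_markers = np.random.rand(500, 150 * 3)
rom_controls = np.random.rand(500, 60)
solver = SparseMarkerSolver()
solver.train(rom_markers, rom_controls)
frame_controls = solver.solve(np.random.rand(150 * 3))   # control values for one frame
```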

The first stage, in this case, was a normal, fully matching digital Josh Brolin. This first actor puppet was a new approach for Weta. They solve to the digital Josh and then, when the team is happy that they have matched the performance of the real Josh, they retarget this to Thanos.

As Weta has built the rigs to match, after a calibration stage it is a relatively easy operation to retarget from the digital Josh to Thanos. This approach came out of some frustration the team had with the work Weta did on The BFG: “we were just not comparing ‘apples to apples’. We couldn’t be sure we were capturing all the nuances of Mark Rylance’s performance,” explains Aitken. After that film, the team introduced the idea of first solving to a hero digital version of the performance actor. Weta then does a lot of iteration to match the performance between the sparse tracked data of Josh’s on-set performance and their digital Josh Brolin. This actor puppet is fully built to the same level as any digital puppet for Weta’s creatures or characters.
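Because the actor puppet and the Thanos puppet are rigged to match, the retarget itself can be thought of as a calibrated remapping of control values between the two rigs. The sketch below assumes, purely for illustration, that the two rigs share control names and that calibration has stored a per-control gain and offset; Weta’s real retargeting is considerably richer than this.

```python
import numpy as np

def retarget(actor_controls, calibration):
    """Map solved actor-puppet control values onto the creature rig."""
    creature_controls = {}
    for name, value in actor_controls.items():
        gain, offset = calibration.get(name, (1.0, 0.0))
        # Clamp so an extreme solve cannot push the creature rig out of range.
        creature_controls[name] = float(np.clip(gain * value + offset, 0.0, 1.0))
    return creature_controls

# Hypothetical example: a jaw-open value solved on the digital Josh,
# remapped for Thanos' elongated jaw and oversize chin.
calibration = {"jaw_open": (1.15, 0.02), "lip_corner_pull_L": (0.9, 0.0)}
print(retarget({"jaw_open": 0.6, "lip_corner_pull_L": 0.3}, calibration))
```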

The Josh Brolin puppet was built using both Lightstage scanning, for detail and appearance, and a set of models derived from Medusa 4D data captures. “We were provided with Medusa data to help build our Josh facial puppet. The production provided a lot of Medusa data of Josh Brolin doing a whole lot of ROM (Range of Motion) tests, and that was incredibly useful,” explains Aitken. “The data enabled us to build what we felt was a very accurate facial puppet for Josh that captured all of the nuances of his facial performance”.

Changes and New Technology

While the Thanos work Weta delivered for the first film was incredibly impressive, the team were unhappy with the way Thanos’ mouth animated. Weta ended up animating around the corners of Brolin’s mouth in Infinity War, and they took the opportunity between films to address this. Aitken explains: “when we got to the end of Infinity War, we felt that there was some work needed around the corners of his mouth and we just didn’t have time to make this part of the base facial rig. We were not getting, automatically, the correct behaviour out of the corner of his mouth with Thanos’ elongated jaw and his oversize chin. We patched it for Infinity War, but we were able to use the time between the two films to rectify that and correct it”.

The new technology came in the form of Weta’s new Deep Shapes.  

Deep Shapes is one of the reasons the trailer for Gemini Man looks so good. It is a new approach to improving the animation and the look between key blendshapes. The approach is analytical, and it produces an extra level of high-fidelity animation that can be dialled in or out by the artist.

When Thanos is moving between two expressions, Deep Shapes does not change either expression, but it produces more subtle detail in between. These small movements add a sense of inertia or tissue behaviour without there being flesh simulations. The artist has one control, effectively between 0 and 100%, that they can use to adjust this intermediary-stage contribution. The effect is to introduce what Aitken calls “movement a bit like jiggle and pop”, but he is at pains to point out this is not “simulation, because we want to retain complete control over the shape of the face at all times.” The shapes themselves are created through an analytic process and they are applied automatically. They don’t affect the endpoints of the blendshapes. “So if you go from one expression to another, those two end points will stay exactly the same,” he adds. The effect is subtle, and Weta is not prepared to fully disclose how it generates the Deep Shapes until after Gemini Man is released.
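Weta is not saying how the Deep Shapes are derived, but the behaviour described above, extra detail that appears only during the transition, scaled by a single 0-100% dial, and vanishing at the endpoints, can be sketched like this. The bell-curve falloff and the vertex arrays are illustrative assumptions only.

```python
import numpy as np

def blend_with_inbetween(shape_a, shape_b, weight, inbetween_delta, dial=1.0):
    # shape_a, shape_b:  (n_verts, 3) the two endpoint expressions
    # weight:            0..1 blend between the two expressions
    # inbetween_delta:   (n_verts, 3) corrective offsets for the transition
    # dial:              the single artist control, 0 = off, 1 = full contribution
    base = (1.0 - weight) * shape_a + weight * shape_b
    falloff = 4.0 * weight * (1.0 - weight)          # 0 at either endpoint, 1 at the midpoint
    return base + dial * falloff * inbetween_delta

# At weight 0.0 or 1.0 the result equals the endpoint exactly, as described.
a, b = np.zeros((10, 3)), np.ones((10, 3))
delta = 0.05 * np.random.randn(10, 3)
assert np.allclose(blend_with_inbetween(a, b, 0.0, delta), a)
assert np.allclose(blend_with_inbetween(a, b, 1.0, delta), b)
```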

As mentioned, Avengers was running in parallel with Alita: Battle Angel. In some respects, Alita was more demanding, as her skin was much more human. This meant that while Thanos had animated displacement maps to emulate the stretching and compressing of the skin pores, the Endgame team did not have to go as far as the Alita team did in this respect. Alita also drove the need to develop complex eyes, both in modelling and rendering. Both Thanos and Alita share similar new eye technology at Weta.




Performance

Perhaps the greatest testament to the quality of the animation was how strong a performance Thanos gives as he is removed from reality. Brolin delivered a key performance that made Thanos seem sympathetic even as he is defeated. The shot had very heavy scrutiny from both the Studio and Weta’s own team. Not only is it the last time we see Thanos, but the emotional performance is played out slowly and without dialogue. Aitken and the team worked hard to make Thanos “accessible emotionally, because while you don’t want to agree with what he’s doing, you need to understand his motivations. He believes what he’s doing is right, and it is really important that the audience get a sense of that. It makes for a more interesting story arc across the two films”, says Aitken. It is interesting that, for the death of a mega-villain, most people report that the cinema is very quiet at this point in the film; there are no cheers or applause for the death of the villain. Instead, the shot is quite moving and sombre.

Fighting for their lives

Weta Digital did most of the final act, with the exception of Captain Marvel destroying the alien ships and the water dam magic sequence.

As Josh Brolin is considerably shorter than the imposing Thanos, he would often stand on a riser for dialogue, or wear a pole on his back with a cardboard Thanos cutout above his head for the other actors to use as an eye-line. None of this helps in the complex and elaborate fight sequences, so much of the fighting was just skilful keyframe animation.

Thanos was built with Weta’s muscle system. This system can automatically look ahead in the animation keyframes and fire muscles to flex a few frames before a movement, just as a real body does. But as Thanos was a larger-than-life figure, the animators also had full control over muscle twitches and flexes. They often activated muscles in Thanos when he was standing still, just facing off against the Avengers.
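The ‘look ahead and fire early’ behaviour can be illustrated with a small sketch: a muscle’s activation at the current frame is driven by how much motion is coming a few frames later, so the flex starts just before the limb moves. Weta’s actual muscle system is far more sophisticated; the window size and the velocity-to-activation mapping here are assumptions made for the example.

```python
import numpy as np

def anticipatory_activation(joint_curve, lookahead=3, gain=5.0):
    # joint_curve: (frames,) keyframed joint values, e.g. an elbow angle per frame
    frames = len(joint_curve)
    activation = np.zeros(frames)
    for f in range(frames):
        ahead = min(f + lookahead, frames - 1)
        upcoming_speed = abs(joint_curve[ahead] - joint_curve[f]) / max(ahead - f, 1)
        activation[f] = np.clip(gain * upcoming_speed, 0.0, 1.0)
    return activation

# A held pose followed by a fast strike: activation begins ramping up
# a couple of frames before the motion itself starts.
curve = np.concatenate([np.zeros(10), np.linspace(0.0, 1.5, 10)])
print(np.round(anticipatory_activation(curve), 2))
```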




Lighting

It was important for Thanos to have the correct hue, but as there were so many digital characters this was difficult. Weta wanted a single base lighting set-up for any shot in the end fight (normally based on HDRs from on set), so they decided to adjust Thanos’ hue and saturation in the context of a shot, rather than adjust his lighting. This meant that Thanos looks consistent, even where the environmental lighting would, in the real world, have made his skin tones look different in different environments. “For this film, as well as the first, we dialled in his skin colour so that it works and reads as the correct hue under the lighting scheme that we wanted to work with,” comments Aitken.
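In effect this is a per-shot colour dial on the character rather than a relight. A minimal sketch of that idea, with purely hypothetical offset values, might look like the following.

```python
import colorsys

def adjust_skin_tone(rgb, hue_shift=0.0, sat_scale=1.0):
    """Shift the base skin albedo's hue/saturation so it reads correctly in a given shot."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + hue_shift) % 1.0, min(s * sat_scale, 1.0), v)

# Nudge a purple base tone slightly and desaturate it for a warm-lit shot
# (the numbers are placeholders, not production values).
print(adjust_skin_tone((0.45, 0.30, 0.55), hue_shift=0.02, sat_scale=0.9))
```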

Pre-Viz and Final comparison

DIGITAL DOMAIN 3.0:

Both Weta and Digital Domain (DD) shared the same on-set data and facial capture material. Both companies worked off the same digital maquette from Marvel Studios’ Art Department. Actor Josh Brolin wore a vertical stereo Technoprops head rig and had dots on his face. The Thanos material was shot back to back with Infinity War, so nothing about the on-set data collection or filming changed between the films.

The main Digital Domain VFX supervisor was Kelly Port. Scott Edelstein was Associate VFX Supervisor and Jan Philip Cramer, Senior Animation Director.

As Phil Cramer explained, “both of these movies were filmed back to back and for DD the work was also somewhat back to back”. For the first film’s Thanos, the DD team, including Cramer, won the VES Award for Outstanding Animated Character in a Photoreal Feature. “The biggest innovation for us on the first film was to develop the machine learning techniques,” he explains. “We did this to generate very detailed, high-resolution facial data on a per-shot level”. This process is part of DD’s Masquerade software tool.

Masquerade takes the roughly 150 sparse facial tracking points from the MoCap data and predicts a new high-resolution 3D face of roughly 70,000 points.

Josh Brolin as part of the Masquerade pipeline at Digital Domain
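DD’s machine-learning model is proprietary, but the sparse-to-dense idea can be sketched in a simplified form: compress the dense meshes with PCA and learn a mapping from the markers to the PCA coefficients. The dense vertex count is scaled down here, and the PCA-plus-linear-map formulation is an assumption for illustration, not Masquerade’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_markers, n_dense = 300, 150, 7_000    # dense count scaled down for the demo

# Synthetic training pairs: sparse marker frames and matching dense mesh frames.
sparse = rng.normal(size=(n_frames, n_markers * 3))
dense = rng.normal(size=(n_frames, n_dense * 3))

# Compress the dense meshes with PCA so the regression target stays small.
mean_dense = dense.mean(axis=0)
_, _, Vt = np.linalg.svd(dense - mean_dense, full_matrices=False)
basis = Vt[:50]                                   # keep 50 principal shapes
coeffs = (dense - mean_dense) @ basis.T

# Linear least-squares map: sparse markers -> PCA coefficients.
W, *_ = np.linalg.lstsq(sparse, coeffs, rcond=None)

def predict_dense(markers):
    """Reconstruct a dense face frame from one frame of sparse markers."""
    return mean_dense + (markers @ W) @ basis

print(predict_dense(sparse[0]).shape)             # (n_dense * 3,)
```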

While DD didn’t have much downtime between the films, it did have some. Given the success of Masquerade, the team decided to apply a similar machine learning approach to how the company tracks markers on the face of the actor. “We used to do this manually, and that took between a few days and a week or two per shot,” he adds. “The solution we used on Endgame is now an automated process. The new software we wrote is called Bullseye, and it ends up basically taking an hour or so for what used to take a few weeks. For us, this was pretty critical”.

The Masquerade software is still used just as it was on Infinity War in terms of the low-resolution mesh reconstruction of Brolin’s face. Bullseye processes the HMC footage, tracking the markers in 3D space and feeding the solver. Bullseye trains on new training data each day as part of the calibration process, since even with guides, the dots are never in exactly the same position from day to day.
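The marker-labelling problem Bullseye automates can be pictured with a much simpler stand-in: match the dots detected in each frame against a layout calibrated for that shooting day. The nearest-neighbour matching below is only an illustration of the problem, not DD’s learned solution; the marker names and coordinates are hypothetical.

```python
import numpy as np

def calibrate_layout(named_positions):
    """Store today's reference position for each named marker (re-done each shoot day)."""
    return {name: np.asarray(pos, dtype=float) for name, pos in named_positions.items()}

def label_frame(detections, layout, max_dist=0.05):
    """Assign each detected dot to the nearest calibrated marker name."""
    labels = {}
    for det in detections:
        det = np.asarray(det, dtype=float)
        name, ref = min(layout.items(), key=lambda kv: np.linalg.norm(kv[1] - det))
        if np.linalg.norm(ref - det) <= max_dist:
            labels[name] = det
    return labels

# A day's calibration pose, then one frame of detections (hypothetical values).
layout = calibrate_layout({"brow_L": (0.30, 0.80), "brow_R": (0.70, 0.80), "chin": (0.50, 0.10)})
print(label_frame([(0.31, 0.79), (0.69, 0.81), (0.50, 0.12)], layout))
```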

The team also looked at how they could improve pipeline efficiency and increase the level of fidelity around Brolin’s lips.

The DD pipeline is no longer built on FACS; it seeks to gain greater fidelity using a series of deep learning approaches. The system still produces a rig control system that presents FACS-like controls to the animators, but this is now more a way of providing a front end into the complex ML data set, one that is intuitive for artists and their understanding of emotional responses.

The overall pipeline does not utilize FACS shapes. “Basically we’re using our direct drive scenario. This is where the capture data directly drives the final match.” Normally this would lose all sorts of control for the animator, since at this point “you just have thousands of vertices or ride-along clusters that you’ve used to build it with,” explains Cramer. The DD team therefore also solves the performance data onto a FACS rig. “We make that FACS rig and put it underneath the direct drive performance. The animators can then interact with a very classical control UI, while they are actually controlling a much more complicated data-driven solution.”
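One way to picture that layering: the capture data supplies the dense mesh directly, a FACS-style rig is fitted underneath it so the animator sees familiar control values, and any animator tweak on those controls is added as a delta on top of the data-driven result. The shape data, control names and simple additive blend below are assumptions for illustration, not DD’s implementation.

```python
import numpy as np

n_verts = 1_000
facs_shapes = {                                   # per-control vertex deltas (toy data)
    "browRaiser": np.random.randn(n_verts, 3) * 0.01,
    "lipCornerPuller": np.random.randn(n_verts, 3) * 0.01,
}

def final_mesh(direct_drive_mesh, solved_controls, animator_offsets):
    # direct_drive_mesh: (n_verts, 3) dense mesh driven straight by the capture data
    # solved_controls:   FACS-like values fitted to that mesh (what the animator's UI shows)
    # animator_offsets:  tweaks the animator makes on those same controls
    mesh = direct_drive_mesh.copy()
    for name, offset in animator_offsets.items():
        mesh += offset * facs_shapes[name]        # only the adjustment is layered on top
    ui_values = {k: solved_controls.get(k, 0.0) + animator_offsets.get(k, 0.0)
                 for k in facs_shapes}
    return mesh, ui_values

captured = np.random.randn(n_verts, 3)
mesh, ui_values = final_mesh(captured, {"browRaiser": 0.4}, {"lipCornerPuller": 0.2})
print(ui_values)
```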

Modelling and Performance

In the film, Thanos is severely damaged by the double snap he uses to destroy the Infinity Stones. “We really wanted to showcase a different level of detail on the face to show the damage, and also introduce a paralysed part on the left side of his face. We wanted to show that his nerve endings are not fully functioning. He is experiencing the misfiring of muscles, especially around the eyes”. The team at DD added a lot of detail to the area beside the eyes and around the lips. This was intended to indicate to the audience that the tendons in the face don’t fully connect anymore.

 

DD have many tools to improve upon what the direct drive system generates. “We call them WPSD (Pose Shape Deformation) shapes. These are shapes that are like corrective shapes to the process.” Overall, DD was able to process and turn around shots faster during production and maintain more control in final animation.
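Pose-driven corrective shapes of this general kind are often blended in by how close the current pose is to the pose where each corrective was sculpted. The radial-basis weighting and the data below are illustrative assumptions; DD’s WPSD shapes are not publicly documented in this detail.

```python
import numpy as np

def apply_correctives(mesh, pose, corrective_poses, corrective_deltas, sigma=0.3):
    # mesh:              (n_verts, 3) direct-drive result for this frame
    # pose:              (n_controls,) current rig/control values
    # corrective_poses:  (n_keys, n_controls) poses at which correctives were sculpted
    # corrective_deltas: (n_keys, n_verts, 3) the sculpted fixes
    dists = np.linalg.norm(corrective_poses - pose, axis=1)
    weights = np.exp(-(dists / sigma) ** 2)       # strongest near each sculpted pose
    return mesh + np.tensordot(weights, corrective_deltas, axes=1)

n_verts, n_controls = 500, 12
key_poses = np.random.rand(3, n_controls)
key_deltas = np.random.randn(3, n_verts, 3) * 0.005
fixed = apply_correctives(np.zeros((n_verts, 3)), key_poses[0], key_poses, key_deltas)
print(fixed.shape)                                # (500, 3)
```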

As we have reported, DD has also been working on Digital Doug. While many of the techniques and approaches appear similar to Thanos, Digital Doug is a real-time process and very much a test sandbox for new ideas. Perhaps one day these ideas will be used in a production just like Avengers.