Audiences first learned of the events that hardened the heart of Disney’s most notorious villain with the release of “Maleficent” in 2014 — the events that drove her to curse a newborn baby princess to sleep forever. The film put a fresh spin on the traditional fairy tale with a villain as the story’s protagonist and went on to gross more than $750 million worldwide. In the new film, directed by Joachim Rønning, Maleficent and her goddaughter Aurora begin to question the complex family ties that bind them as they are pulled in different directions by impending nuptials, unexpected allies, and dark new forces at play.
Gary Brozenich (Pirates of the Caribbean: On Stranger Tides, Clash of the Titans) served as visual effects supervisor on Maleficent: Mistress of Evil. The new film opens up the world of the fairy tale with massive visual effects and an even greater variety of magical creatures, all of them entirely computer-generated. The film includes a total of 2,168 visual effects shots, primarily created by MPC out of London.
According to Brozenich, the majority of the film’s effects were for the enormous battle sequences taking place in and around Castle Ulstead, many of which involved a secret weapon created under the supervision of Queen Ingrith. Her creation, a mix of fairy dust and iron which turns into a red powder-like substance, destroys the creatures of the Moors and is brilliantly realized on-screen as explosions of red smoke in the sky.
When visualizing the film, director Joachim Rønning pictured the red dust bombs exploding over Berlin during World War II. “Two years ago, those were the images in my head,” he says. “That’s what I was trying to create.”
The wings for Maleficent, who had three different looks, were created with CG effects in post-production, but the simulated flying was completed during principal photography. Working closely with stunt coordinator Simon Crane (“Rogue One: A Star Wars Story,” “World War Z”), McLaren and team were determined to make all the flying sequences look as effortless and real as possible, while keeping the actors safe. The actors wore a ‘tuning fork’ rig that attached at the hips and was controlled by operators off set, giving them the ability to hover and dive and making the action look fluid and natural.
MPC Character work
At MPC we spoke to Tom Reed, Head of Character Lab (formerly a lead rigger), and Jake Harrell, facial modeler. MPC took two different approaches to the character work in the film, depending on how realistically human the faces needed to be.
For the three Pixies (Aurora’s Aunts), MPC used Disney’s Anyma, developed at Disney Research Studios in Switzerland. Anyma is a geometry-based temporal photogrammetry approach that evolved from Disney’s Medusa system, which was also still used extensively.
The workflow for the Aunts targeted the animation first at a perfect digital copy of each actress; once that was approved, the animation was retargeted to the slightly modified and stylized pixie version of each actress.
MPC received a faithful re-target of each performance, “as a baked geometry cache from ILM – which does the processing of Anyma these days for Disney projects,” explains Reed. “They send us a Maya file, which is basically a long blend shape sequence of the actress’s face, which we then put through a processing rig that we developed for each of the Pixie characters to do the stylization.” MPC would then move on to their FACS facial rig. At this stage, it is still very easy for the MPC animators to blend various portions of the face in and out relative to the original performance. “We’ve got the original processed Anyma data moving the face at the beginning of the FACS animation process. The animator can then add on more FACS if they want to, but they can also dial down regions of the face, if they want to take out the (capture) performance and just do pure keyframe animation,” he adds.
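MPC’s rigs are proprietary, but the idea of dialing capture in and out per facial region can be illustrated with a minimal Python sketch. Everything here — the region names, the values, and the `blend_regions` helper — is invented for illustration, not MPC’s actual code:

```python
# Hypothetical sketch: per-region blending between captured (Anyma) and
# keyframed FACS animation. Region names and values are illustrative only.

CAPTURE = {"brows": 0.8, "eyes": 0.6, "mouth": 0.9}   # baked capture values
KEYFRAME = {"brows": 0.2, "eyes": 0.6, "mouth": 0.0}  # animator's keyframes

def blend_regions(capture, keyframe, weights):
    """Linearly blend capture and keyframe values per facial region.

    weight = 1.0 keeps the full capture performance for that region;
    weight = 0.0 dials it out in favour of pure keyframe animation.
    """
    return {
        region: weights[region] * capture[region]
                + (1.0 - weights[region]) * keyframe[region]
        for region in capture
    }

# Dial the capture out of the mouth only; keep brows and eyes as captured.
blended = blend_regions(CAPTURE, KEYFRAME,
                        {"brows": 1.0, "eyes": 1.0, "mouth": 0.0})
print(blended)  # {'brows': 0.8, 'eyes': 0.6, 'mouth': 0.0}
```

In a production rig the same dial would sit on top of hundreds of FACS shapes rather than three scalar regions, but the layering order — capture first, keyframes and regional weights on top — is the point.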
As the editing process cut together different captured performances, MPC had to develop a new part of their face pipeline to allow blending between takes. “You might get a single shot that was comprised of four different takes of different sections of Anyma with different bits of dialogue,” Reed explains. MPC had to come up with a process that would take the editorial master sheets they got from the client, with all the various bits of information about which take was used, and then run scripts that would parse these PDFs, log the edits, and automatically apply them to the various original Anyma takes to produce the right composite ‘shot’. “Each time we got a new master sheet from editorial with a different edit of the Anyma performances, we’d run these processes and it would edit together and blend the different sequences from the original Anyma data to produce the one final shot that would match with the audio of the edited dialogue”.
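The splicing step described above can be sketched in a few lines of Python. This is not MPC’s pipeline — the take names, the stand-in per-frame data, and the fixed cross-fade length are all assumptions — but it shows the shape of the problem: concatenate segments from different takes and smooth over each cut:

```python
# Hypothetical sketch: splice several capture takes into one shot from an
# editorial edit list, cross-fading a few frames at each cut. In practice the
# per-frame data would be mesh geometry, not a single float per frame.

TAKES = {
    "take_01": [float(f)       for f in range(100)],  # stand-in frame data
    "take_04": [float(f) + 0.5 for f in range(100)],
}

# (take, source_start, source_end) in cut order, as logged from a master sheet
EDITS = [("take_01", 0, 40), ("take_04", 10, 50)]

def splice_takes(takes, edits, blend=4):
    """Concatenate take segments, cross-fading `blend` frames at each cut."""
    out = []
    for take, start, end in edits:
        seg = takes[take][start:end]
        n = min(blend, len(out), len(seg)) if out else 0
        for i in range(n):
            w = (i + 1) / (n + 1)  # ramp from the previous take into this one
            out[-n + i] = (1 - w) * out[-n + i] + w * seg[i]
        out.extend(seg[n:])
    return out

shot = splice_takes(TAKES, EDITS)  # 40 + 36 frames after the 4-frame overlap
```

Each new master sheet from editorial would simply regenerate `EDITS` and re-run the splice, which is what made the process repeatable as the cut evolved.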
Once the edited Anyma version was approved internally, MPC would ‘pixify’ that for the final face. All the data was tracked with metadata to the end, which allowed an animator to trace the original source of any word or phrase and pull up the reference footage of the actual actress delivering that line when it was originally recorded. This was extended to include a library of Anyma data of the actresses between lines, when an actress might have just been listening or reacting — a key resource of incidental, non-shot-specific captured data. This mattered because none of the Pixie actresses recorded their dialogue together; due to scheduling clashes, they all recorded their performances on different days.
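That metadata trail amounts to an inverse of the splice: given a frame in the edited shot, recover which take (and which source frame) it came from. A minimal sketch, with the edit log and frame numbers invented for illustration:

```python
# Hypothetical sketch: map a frame of the composited shot back to its source
# take, so an animator can pull up the original reference footage of that line.

EDIT_LOG = [
    # (shot_start, shot_end, take, source_start)
    (0,  40, "take_01", 0),
    (40, 76, "take_04", 14),
]

def source_of(shot_frame, edit_log):
    """Return (take, source_frame) for a frame of the composited shot."""
    for shot_start, shot_end, take, source_start in edit_log:
        if shot_start <= shot_frame < shot_end:
            return take, source_start + (shot_frame - shot_start)
    raise ValueError(f"frame {shot_frame} is outside the edit")

print(source_of(45, EDIT_LOG))  # ('take_04', 19)
```

With a lookup like this attached to every composited performance, “where did this phrase come from?” stays answerable no matter how many times editorial recuts the shot.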
As the pixie Aunts were not recorded collectively and were often seen flying, the MPC team had to work hard to get the right eye lines. While the Anyma data is very accurate, the actresses would rarely be looking in exactly the right three-dimensional place. “The Anyma data is great, and it’s improved markedly from the early Medusa days or even the old MOVA days (Digital Domain) – today the quality of the lips and the eyelids is getting so much better, but we still had to do quite a lot of keyframing on each setup,” says Reed.
Harrell points out that Anyma and Medusa scans do not provide any data about the inside of an actor’s mouth: “we spent quite a lot of time making sure that we got the volumes of the teeth, the positioning of the teeth right, and their position relative to the lips. We found that we had to pay really close attention to this to make the character work correctly”. Harrell found that the believability of the whole performance could be broken if the teeth and the inside of the mouth were handled incorrectly.
A digital double of Maleficent was also built to work in conjunction with the flying footage. While there was a Medusa scan of Angelina Jolie, the team tried to rely nearly entirely on preserving Angelina Jolie’s original facial performance, even if there was a need for digital double work, especially in relation to her wings and flying.
For the other characters, MPC used Dynamixyz, a high-end facial capture system with a head-mounted camera (HMC) and specific training profiles per performer. MPC built up three custom Dynamixyz profiles using training data. Once processed, the system generated a set of animation curves. “With the system, we trained profiles against three performers. Then each of these could work with any of our face rigs, since we have a consistent base rig control system. You could plug any of the three performers’ performances into driving any of the secondary characters”. While MPC did not use it on any of the hero characters, once the animators got the animation curves they were free to add to or modify the rigs to allow for the specifics of different characters and polish the performances.
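The reason a consistent base rig control system makes this retargeting trivial can be shown with a small sketch. The control names, curve values, and per-character gain table below are all invented; the point is that solved curves keyed on shared control names can drive any rig that exposes those names:

```python
# Hypothetical sketch: apply one performer's solved facial curves to any
# character rig that shares the same base control names, with optional
# per-character scaling for creature-specific proportions.

SOLVED_CURVES = {  # capture-solver output: control name -> per-frame values
    "jaw_open":     [0.0, 0.5, 1.0],
    "lip_corner_L": [0.1, 0.2, 0.1],
}

def apply_to_rig(curves, rig_controls, gains=None):
    """Map solved curves onto a rig sharing the same control names,
    scaling each channel by an optional per-character gain."""
    gains = gains or {}
    return {
        ctrl: [v * gains.get(ctrl, 1.0) for v in values]
        for ctrl, values in curves.items()
        if ctrl in rig_controls  # controls the rig lacks are simply skipped
    }

# A secondary character with a bigger jaw and an extra, undriven control.
goblin_rig = {"jaw_open", "lip_corner_L", "ear_twitch"}
result = apply_to_rig(SOLVED_CURVES, goblin_rig, gains={"jaw_open": 1.5})
```

Because the mapping is purely name-based, swapping in a different performer’s curves or a different character’s rig requires no per-pairing setup — which matches the “plug any of the three performers into any secondary character” workflow Reed describes.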