From face replacements to shattering soldiers, from CG trolls to magic metallic mirrors, Snow White and the Huntsman uses an incredible and diverse range of visual effects to re-tell the classic fairytale. fxguide has interviews with all the lead vendors – Double Negative, Rhythm & Hues, Pixomondo, Lola, The Mill, Hydraulx, Baseblack, BlueBolt and Nvizage, plus overall visual effects supervisor Cedric Nicolas-Troyan.
Working together with co-visual effects supervisor Phil Brennan, Nicolas-Troyan co-ordinated a wide range of shot-types – 1300 in all – for director Rupert Sanders’ first feature film. “I think besides spaceships and exploding New York, we’ve done everything that’s in the visual effects book!” he jokes. “We’ve done creatures, transformations, animals and plants, buildings, fire and so much stuff. Cloth simulations, liquid metal, dynamics, face replacements. You name it – it’s somewhere in the movie.”
The film tells the story of Queen Ravenna’s (Charlize Theron) dark rule after she has imprisoned Snow White (Kristen Stewart), until White teams up with The Huntsman (Chris Hemsworth). Some of the overriding principles for the visual effects work were the ideas of the Queen being ‘death’ and Snow White being ‘life’. “For example,” says Nicolas-Troyan, “we wanted these dark soldiers in the film to break like obsidian shards, which matches the Dark Fairy at the end of the movie. So it was an iteration of something else. When the Queen breaks into ravens they are dark and shiny. The ravens fall into a puddle of black oil and then she comes out of that. Everything has that type of look to it. It also transferred to her actual wardrobe – her crown and the suit of armor at the end.”
Attack of the Troll
A signature sequence in the film involves a troll and its encounter with the Huntsman and Snow White near a bridge, with visual effects by Rhythm & Hues. Nvizage previs’d the scene and acquired mocap for the troll (see below). On set, production filmed without any troll stand-in but used eyeline poles and air cannons for interaction, including for watery splashes. “I was holding a tennis ball on a pole in front of Kristen,” says Nicolas-Troyan. “I knew when the troll was grunting and roaring. I was giving Kristen a cue so she knew what it would be doing at any given time. She did a great job because there was not much there to look at.”
Although Rhythm used some mocap for the body motion, all facial work was hand-animated, with the rest of the troll’s body made up of root-like muscles and a coarse skin texture. “The skin had to feel like different calluses had developed on it,” says Rhythm visual effects supervisor Todd Shifflett, “and that they would read more like different roots and rocks that he had developed.”
The studio also created scores of creatures and plant life for the film, including many that populate the woods. There was a mystical stag, a badger, a fox, squirrels, rabbits, hedgehogs, birds, mushrooms, a snake made of moss, butterflies, the tortoise, scarabs which glow in the dark, ferns and even fairies. The fairies, in particular, were inspired by the director’s wish that they be child-like. “Rupert wanted them to be kids,” explains Nicolas-Troyan. “The fairies come out of the animals in the woods, basically. They were like eight inches tall. We were going to make them asexual so they could be naked, with deer eyes and bigger eyes and covered with a little bit of hair, and have iridescent skin, with white peach fuzz on them. They ended up having the features of two kids who were mo-capped for their faces and expressions. The body was mo-capped from two girls who were dancers. They had a graceful feel to them.”
Another of Rhythm’s visual effects contributions was the dark birds, seen when Ravenna transforms and also after she has been in battle. The studio relied on some flock animation software but was also able to adapt the birds to perform specific motions. “It was a little bit morbid in one whole section where there are some dying crows,” recalls Shifflett. “After the Queen is in a fight and has to go back to the castle to re-generate herself, the crows slam into the ground, and a lot break their necks. You’d be shocked and amazed at how much YouTube footage there is of real birds dying – disturbing but of course we had to use that as reference.”
The Mill’s Mirror Man
The Queen asks the famous question, ‘Who is the fairest of them all?’ to a metallic and liquidy Mirror Man which emerges from her wall. The Mill, under visual effects supervisor Nicolas Hernandez, completed the shots of the being coming out of the wall, dribbling along the floor and into a humanoid hooded figure. “Rupert had seen a sculpture by Kevin Francis Gray called Face-Off,” says Nicolas-Troyan. “He reached out to him and asked if he might re-design it for the movie and he did! We scanned that and The Mill started working on it. The tricky part was how was he going to come out of the mirror?”
After determining that some early looks seemed a little too close to Terminator 2, “we thought more about cloth,” recalls Nicolas-Troyan. “What about trying to make it look like pulling a veil of silk? So that’s what The Mill started to work with and doing a cloth sim with a liquidy feel. That was very successful and made it very different from T2 and more delicate and less alien.”
On set, Theron acted against a crude prop version of the Mirror Man with a mirror surface. Inside, production hid a RED EPIC camera turned sideways that gave a full 5K head-to-toe image of the actress to be comp’d in later. “Then behind the stage we hooked up a mic and some speakers,” adds Nicolas-Troyan, “and the voice of Mirror Man was there looking at Charlize on the monitor and talking to her.”
“We organized a studio shoot to film a lot of liquids in slow motion to study the right look,” says Hernandez about The Mill’s approach to the fluid and cloth sims. “We had the Phantom camera and did a lot of tests with fabric, paint and corn starch on a speaker. Cedric pointed to what he liked. He wanted really slow but in normal speed. We did a lot of tests but stepped away from it once we went with the cloth look.”
As the liquidy form of the magical creation stops on the floor, it rises up into the solid figure. The Mill used Maya nCloth with many custom forces for the final look of Mirror Man upright, and to control the cloth shape and motion. “On top of that we used Houdini to add a layer of fluid deformation – we’d bend the edges to control the shapes,” adds Hernandez. “To control the performance we added subtle shoulder movement, added some breathing and tried to make him more human than just a statue.”
To comp the reflection obtained with the RED EPIC, The Mill had to cheat the look of Theron to ensure she was readable. “We also developed a shader for the Mirror Man that had a lot of different blur reflections, oxidations and little scratches as little layers,” explains Hernandez.
Replacing faces: the dwarfs
The Huntsman’s dwarfs were played by full-sized actors, including Nick Frost and Ray Winstone. Production used both old-school and digital techniques to help achieve their unique look. “There was the good old trick of everybody raised up first,” says Nicolas-Troyan. “We built platforms to make it work for those types of shots. Kristen and Chris would walk on risers and everybody else would walk on the ground. We used wide lenses and gave a lot of head space when we shot the dwarves so all of a sudden they look really small. It was very challenging to do, say, Ray Winstone next to Kristen Stewart as you have to make Ray look like he’s 4’11”. But if you look at old movies, they were doing that type of stuff and they didn’t have CG.”
Other shots required motion control or greenscreen shoots where arms and limbs would be removed or reduced in size. Body doubles were also used. “The most important part of all of that was the research prior to the shoot – I researched dwarfism,” notes Nicolas-Troyan. “Besides their shorter limbs, the selling point from an acting point of view is the way they walk. I found out that the center of gravity of a little person – they offset their center of gravity left and right but full-size people offset it forward and backward. So what makes the trick is the wobble that dwarves have when they walk.”
The visual effects supervisor created a test – both as a still image and then as a short video – to show to producers to convince them of the on-set and VFX techniques. “And we also cast dwarf doubles who were actually proportionate dwarf doubles who could do exactly what a full grown person could do,” says Nicolas-Troyan. “We paired them with our actors and we sent them to our dwarf camp. They actually learned how to wobble together and had the same style of wobbling. So when you cut from one to another, you buy it.”
Lola VFX and Rhythm & Hues both contributed dwarf shots to the film, with Lola initially adopting a similar approach to their previous work in face replacements. “On the surface,” says Lola visual effects supervisor Edson Williams, “the work was very challenging, but we had developed a technique on The Social Network that would serve us well.”
Lola’s approach to the dwarf shots was to utilize face projections. “We developed the face projection technique on The Social Network to project Armie Hammer’s facial performance onto the body double’s face,” explains Williams. “This allowed ‘twins’ to interact in new ways and the twins were even able to row a boat together. The face projection technique was a perfect fit for Snow White because you can shoot principal photography first, then once an edit is locked, you re-shoot the hero actor’s performance in a controlled environment. Face projection starts by 3D tracking (PFTrack) the body double’s face, then carefully analyzing the lighting changes on the face of the double. The next step is to pre-program these lighting changes into the computer controlled lighting dome, which is done with DMX lights and software.”
“Once the original scene lighting is programmed into the dome, we bring in the hero actor,” continues Williams. “The hero actor sits in a stationary chair (with slight head restraint) and recreates the performance that is being played back on a reference monitor. The recreated performance is recorded with four RED EPIC cameras placed strategically around the dome. The final step involves projecting the RED footage back onto a cyberscan of the hero actor’s face that has been 3D tracked back into the original scene.”
Williams admits that the dwarfs were much more of a challenge than the twins in The Social Network. “In Snow White,” he says, “the goal was to get as many shots in camera as possible. To facilitate this, the hero actors had their heads re-created as latex masks; the masks were then placed over the body doubles’ heads with only the eyes and mouth of the double showing through. Unfortunately for Lola, when you stretch a latex mask of the hero actor’s head over the body double’s head, the proportions of the mask are greatly distorted. Initially we only replaced the front facial area of the double, and utilized the latex mask for the rest of the head. The test did not look good, and we had to develop new tricks to correct the problems. We began to 3D track the masked body double with a cyberscan of the hero actor, and then using the cyberscan’s proportions we would deform the masked head back to its proper shape. Once the masked double was close to human proportions, we began our face projections again, but we included much more of the hero actor’s performance, including the ears, neck and hairline.”
Dneg’s Dark Forest and Dark Fairies
Double Negative, under visual effects supervisor John Moffatt, orchestrated an elaborate collection of forest animals and creatures for the Dark Forest, including twisted trees and branches, a bat creature, oozing muscles, beetles and butterflies. For a climactic sequence, Dneg also created the Dark Fairies conjured by the queen when Snow White and co. attack the castle. Made up of obsidian shard-like elements, the Fairies are humanoid in shape and move quickly against their opponents.
The actors performed sword swings against air, while Dneg crew gathered textures and lighting reference and witness cam views of the takes. Various stunts were also carried out on set. Back at Dneg, the shots and the movements of actors and stunt doubles were matchmoved and wires and body harnesses cleaned up.
The Dark Fairies were made up of 30,000 shards – the set-up began with an animation rig of the creature to block out actions. Animation was brought into Houdini, and a Dneg FX toolset – including an in-house tool called Bang that embeds the Bullet physics solver within Houdini’s SOP context – let artists choreograph the movement of the Fairies as well as allowing shards to detach or move about in specific ways. Each shard could be extracted to a point cloud, with the shard geometry point-instanced at render time.
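The point-cloud step described above can be sketched in a few lines: instead of carrying 30,000 heavy shard meshes through the pipeline, each shard is reduced to a single transformed point, and the renderer instances the real geometry back onto the points. This is an illustrative sketch, not Dneg’s actual toolset, and all the names here are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ShardPoint:
    """One lightweight point standing in for a heavy shard mesh."""
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float, float]  # quaternion
    shard_id: int  # which source shard geometry to instance here

def shards_to_point_cloud(shard_transforms) -> List[ShardPoint]:
    """Collapse per-shard transforms into a point cloud; the renderer
    later instances the real shard geometry onto each point."""
    return [
        ShardPoint(position=t["pos"], orientation=t["rot"], shard_id=i)
        for i, t in enumerate(shard_transforms)
    ]

# A Fairy of 30,000 shards becomes 30,000 points, not 30,000 meshes.
cloud = shards_to_point_cloud(
    [{"pos": (0.0, float(i), 0.0), "rot": (0.0, 0.0, 0.0, 1.0)} for i in range(3)]
)
```

The payoff is that the simulation and the render stay decoupled: the sim only ever moves points, and the expensive geometry appears just once, at render time.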
To hear more about Dneg’s Dark Fairies, fxinsider members can watch a video interview with visual effects supervisor John Moffatt below.
Pixomondo aided in bringing to life the battles seen in Huntsman, including a battle between the Dark Army and the knights, overseen by visual effects supervisor Bryan Hirota. A key feature of the Dark Army is that they shatter on impact into black shards – as discussed above this is a precursor to the obsidian Dark Fairies. “In some shots they had knights in place that would get struck and would fall, others we painted knights on top of them and just dealt with paint-outs,” says Pixomondo visual effects producer BJ Farmer. “For others we added knights in the frame when they weren’t there at all.”
Pixomondo relied on 3ds Max, V-Ray and Thinking Particles to achieve the look of the shattering soldiers. “It took us a while to arrive at the fragmentation,” notes Pixomondo digital effects supervisor Andrew Roberts. “Should they be wispy? Smoky? We tried some ink effects and something more fluid, and ended up with the obsidian look. Each of the fragments had to appear as shards and be sharp and dangerous looking.”
“We tried pre-fragmenting the soldiers and then shattering them,” adds Roberts, “but then it would just feel like a CG effect as if he’s frozen and falls apart. But Thinking Particles allowed us to set up a volume break system where, based on the matchmoved sword and the angle and velocity that the sword hit the surface, it would just dynamically break whilst the knight continued to animate, and have that fragmentation go all through the body. For the trailer shot, we had over 130,000 fragments contributing to that dynamic sim.”
“A good portion of it was procedural, but our FX TD put in some control so you could specify the time when certain limbs would shatter and then various vector nodes to direct which direction those shards would actually fly. Sometimes we would even hand animate certain fragments – for the hero trailer shot Cedric wanted a fragment to fill frame right at the end, so that was one that had to be hand-animated.”
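The core of that impact-driven break – fracturing propagating outward from the sword contact rather than the whole knight shattering at once – can be sketched with a simple distance-based release delay. This is a minimal illustration of the idea, not the Thinking Particles setup itself; all names and units are hypothetical:

```python
import math

def fragment_release_frame(frag_pos, impact_pos, impact_frame, wave_speed=0.5):
    """Delay each fragment's release so the break propagates outward
    from the matchmoved sword's impact point: fragments close to the
    hit let go first, distant ones (the feet, say) shatter later."""
    distance = math.dist(frag_pos, impact_pos)   # how far from the hit
    return impact_frame + distance / wave_speed  # farther = later frame

hit = (0.0, 1.5, 0.0)    # sword contact on the knight's chest
head = (0.0, 1.8, 0.0)   # close to the impact
foot = (0.0, 0.0, 0.0)   # far from the impact
```

On top of a rule like this, the per-limb timing and directional vector controls Roberts describes would simply override the computed release frame or bias the fragment velocities for art direction.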
In terms of compositing, one of the biggest challenges was to make the CG soldiers look the same as the live action men around them. “The guys did a great job of matching that with Nuke,” says Pixomondo compositing supervisor Randy Brown. “We could paint and project something onto them, and then that wouldn’t shatter so we would have to transition into the shattering render.”
Pixomondo also created grand army scenes for sequences on the beach and in the forest. “In one shot,” notes Brown, “we had an original plate shot from a helicopter. It’s basically just the battlefield of the black army standing there and then the white army riding up to them. On set they just had the first two rows and then the back row was horses – the rest was a rectangle we would fill in later. We went back and forth on which shots would work. They also wanted the ground to look more scorched and added some netting they had used to cover some fauna. Then we had to get rid of the ground the white army was riding up on. They had some fire elements too. Then there was a camera car we had to get rid of. Eventually we ended up getting rid of almost all of the plate.”
“For the further away helicopter shot we had a lot of cards set up with 2D pyro on them and smoke elements,” adds Brown. “But when the camera’s over the top, the cards were just flat and didn’t work, so those elements had to be rendered in camera specifically for that shot.”
Pixomondo used both Massive and a targeted hand-animation approach to achieve some of these army shots. “We looked at Massive to drive those actions, but we needed very specific control over each of the knights,” he says. “So after the shot was match moved we did an object track of each of the silver knights and their swords, so we’d have the correct sword swings. Then we had a small group of talented animators that hand-animated 300 unique characters ducking and defending themselves, and that worked out really well.”
“The overall army was made up of 1500 knights and so for the surrounding army we had a series of scripts that would randomly select animation clips from a 1000 frame master source and then it would apply that animation to the knights that weren’t directly affected in the battle, and distribute them over the Lidar’d terrain at a random scale and rotation, so that it all felt unique. Then we could also render out some unique character IDs so we could color-correct them and adjust their luminance and hue just slightly.” Additional pyro elements for torches and fireballs were created in FumeFX and Krakatoa, with Thinking Particles and Krakatoa used to disintegrate the hair of the knights. For one of the castle attack scenes on the beach, horse agents inside Massive were exported as .obj objects for their hoof elements, then brought into 3ds Max to trigger particle splashes.
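The background-knight distribution script described above follows a common crowd pattern: pick a random window from a master animation source, scatter the agent over the scanned terrain, vary scale and rotation slightly, and tag each agent with a unique ID for per-agent grading in comp. A hedged sketch of that pattern, with all names and ranges invented for illustration:

```python
import random

MASTER_FRAMES = 1000  # length of the master animation source
CLIP_FRAMES = 120     # illustrative per-knight clip length

def place_background_knight(agent_id, terrain_points, rng):
    """Assign one background knight a random clip window, a position
    snapped to the scanned terrain, subtle scale/rotation variation,
    and a unique ID usable for per-agent color correction."""
    start = rng.randrange(0, MASTER_FRAMES - CLIP_FRAMES)
    return {
        "id": agent_id,
        "clip": (start, start + CLIP_FRAMES),
        "position": rng.choice(terrain_points),
        "scale": rng.uniform(0.95, 1.05),
        "rotation_deg": rng.uniform(0.0, 360.0),
    }

rng = random.Random(42)  # seeded, so the layout is reproducible per shot
terrain = [(x * 1.5, 0.0, z * 1.5) for x in range(60) for z in range(25)]
army = [place_background_knight(i, terrain, rng) for i in range(1500)]
```

Seeding the generator matters in production: the same seed reproduces the same army layout on every re-render of a shot.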
“One shot I really liked working on,” says Roberts, “was right towards the end of the prologue when the King has his foot on the helmet of a decapitated Dark Knight and as he moves his foot forward over it and rolls it. It crumbles and disintegrates. They shot a live action helmet and then we matchmoved that and painted it out from the plate and used Thinking Particles to do a progression of fragmentation. It had the knight’s brains spilling out but it was this obsidian look so it doesn’t actually look gory.”
Snow White set extensions
Various set extensions for the film were carried out by Baseblack and BlueBolt.
BlueBolt crafted scenes of Snow White’s castle and the royal village, with supervision by Angela Barson. “The village was partially built and then we extended it with additional streets,” says Barson. “The castle was a very small set built at Pinewood in a courtyard and then extended. We had to do the castle and the village in multiple states, so it starts off in the Magnus reign, which is all pristine and happy, nice colors and everything, and then goes through the Ravenna reign, which is in disrepair and has veins done in SpeedTree growing over all of it, and holes everywhere, much more grunge-y and dirty and dark.”
The castle, situated on an island off the coast, was a full 3D build in Maya and textured in Mari. “There are helicopter plates and battle scenes there shot in Wales,” notes Barson. “We had a Lidar of the coastline but not really much of the island itself. There were no distinctive points to lock onto. Every time we matchmoved a shot, that fed into information for the other shots.”
The village also featured in two reigns, including in a dark and stormy setting that required tricky bluescreen rain keys and additional digital precipitation. Plates for the village were shot at Pinewood with only a partial set. “We also had a lot of snowy shots so we had to add snow atmospherics to the castle – some CG snow with live action, and lots of smoke,” recalls Barson.
Baseblack’s first sequence was for the prologue, which covers the marriage of Ravenna. Production shot on a minimal set with greenscreen backing, and then extended the area. “They started off asking us to reference Durham Cathedral, which is where they wanted to shoot,” says VFX supervisor Steve Moncur, who headed up the work with Rudi Holzapfel. “But the costs associated with hiring the cathedral were going to be astronomical.” The cathedral was scanned and photographed, but access was only granted at night. Baseblack added crowds and put in new roof beams and back walls in proxy geometry, plus atmosphere, and then used Nuke to project shot elements into the existing plates. The studio also worked on the epilogue cathedral shots.
Other Baseblack sequences included shots for the castle of Duke Hammond, which involved extensions to partial courtyard sets filmed at Pinewood, matching the half-built walls and adding some smoke and movement into the plates. One shot featured horses exiting the castle entrance, filmed with only a minimal frontage and greenscreen backing. Shots from various angles, including smaller comps at ground level, relied on a topographical map to ensure consistency and use one asset set-up.
“We also did some distance shots of the castle,” adds Moncur, “based on plates shot in Wales. For a procession sequence that we shared with R&H, since they were doing face replacements, we had to lower the hillside, add paths, add in shot elements.” Other contributions from Baseblack included exterior views looking out from castles, a helicopter fly-over (requiring a 2,000 frame track), a CG castle and a custom-built collection of different mountainous landscapes from live action elements and work in Terragen.
Queen on fire
Hydraulx, under visual effects supervisor Colin Strause, completed shots of the Queen burning towards the end of the movie, relying on CG fire and a CG face replacement. There was also a full 3D transition shot from Prince William to the Queen, and several volumetric fog and CG sword extensions during the film.
For the Prince William transition, laser scans of the actors were used to build digital versions of the characters, using an inbetween morph character with no face (no nose or mouth) and then relying on cloth and hair sims before rendering out in Mental Ray. For the burning Queen Ravenna sequence, Theron was filmed with interactive lighting, then small tracking dots were painted on her face and neck. Using a laser scan and the dot data, Hydraulx rigged an animated model of the Queen’s skin and muscles. A Mental Ray shader allowed them to blister the skin and reveal burnt muscles underneath, with practical and CG fire and heat distortion comped in.
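The heart of a morph transition like the William-to-Queen shot is a vertex blend between two corresponding meshes. This is only a sketch of that blend step under the assumption of 1:1 vertex correspondence; the actual shot, as described above, also used an in-between faceless character plus cloth and hair sims:

```python
def blend_shapes(verts_a, verts_b, t):
    """Linearly interpolate two corresponding (x, y, z) vertex lists.
    t=0 gives mesh A, t=1 gives mesh B, values between give the morph."""
    assert len(verts_a) == len(verts_b), "meshes must correspond 1:1"
    return [
        tuple(a + (b - a) * t for a, b in zip(va, vb))
        for va, vb in zip(verts_a, verts_b)
    ]
```

In practice the two laser scans would first be wrapped to a shared topology so that vertex i on one head matches vertex i on the other; without that correspondence, a linear blend produces garbage.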
Nvizage began previs for the film by modeling the main sets such as Ravenna’s castle and the royal village. The studio relied on its OptiTrack Insight virtual camera system (VCS), a system that enabled production to establish set builds and lens sizes before shooting. The VCS was used for two battle scenes, the troll bridge encounter, various helicopter shots and other sequences inside the castle. (Halon Entertainment also provided previs services).
“The VCS could be operated as a hand held camera or as a car or heli mounted camera depending on what was required,” notes Nvizage in its press materials. “Our system allowed us to easily adjust the speed of the camera in the virtual environment, as well as automatically smooth out accidental bumps and jolts. One of the favored methods of shooting by the DOP was using a wheeled swivel chair – that way he could create steady camera moves whilst using the chair to take some of the weight of the camera.”
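Smoothing the accidental bumps and jolts out of a handheld virtual-camera track is, at its simplest, a moving-average filter over the recorded positions. A minimal sketch of that idea, not Nvizage’s actual system (which would also filter rotation and likely use a more sophisticated filter):

```python
def smooth_camera_path(positions, window=5):
    """Moving-average filter over a list of (x, y, z) camera positions.
    Each output sample averages its neighbors within the window, which
    damps sudden spikes while preserving the overall move."""
    half = window // 2
    smoothed = []
    for i in range(len(positions)):
        lo = max(0, i - half)                    # clamp at the ends
        hi = min(len(positions), i + half + 1)
        n = hi - lo
        smoothed.append(tuple(
            sum(p[axis] for p in positions[lo:hi]) / n
            for axis in range(3)
        ))
    return smoothed
```

A wider window gives a steadier but more sluggish camera, which is the same trade-off an operator feels with any stabilization rig.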
Nicolas-Troyan acted out the mocap for the troll sequence. “This animation was incorporated into the sequence along with existing animation for Snow White and the Huntsman. The sequence was long and complex, so we broke it down into four parts, allowing us to work concurrently – working the VCS and generating the animation content. Upon completion of this sequence we provided scene diagrams of the Troll’s steps and hand impact points to give to SFX, QuickTime schematics and technical plans showing camera moves in relation to actors and set.”
All images copyright 2012 Universal Pictures.