Weta Digital recently gave an incredible full-day 'Making of Avatar' presentation on James Cameron's film as part of this year's AnimfxNZ conference in Wellington. Here is a rundown of some of the major points brought up by Weta supes and artists during the day.

Weta's accomplishments

'The Making of Avatar' was actually one of the first times Weta Digital had been able to present background material on the film to a more general public, outside of SIGGRAPH and related conferences. The venue for the event was the in-house cinema at Park Road Post Production in Miramar. Earlier in the week, attendees of the AnimfxNZ 10 conference had heard from accomplished digital media speakers and had the opportunity to participate in detailed VFX masterclasses.

Senior visual effects supervisor Joe Letteri opened the presentation by outlining some of the ground-breaking visual effects and filmmaking advancements - from the performance capture to the stereo rig to Weta's digital creature and environment work. Letteri said James Cameron approached him and Peter Jackson after the 2005 Oscars ceremony to talk about the development of a new stereo camera rig. Cameron came down to Wellington during the shooting of The Water Horse and wanted to pursue a new style of filmmaking. Letteri noted that Cameron had ultimately de-coupled the camera from the actors' performances - the camera moves could happen at a different time from the performances, for instance.

The film served as a great opportunity for Weta to explore new visual effects techniques, including a FACS approach to facial animation, spherical harmonics for lighting, new fluid solvers and deep compositing. Still, despite all the advancements, Letteri said the film still felt like a James Cameron picture, because a lot (perhaps all) of the shots were filmed by the director and carry his signature style. This was evident even in the 'template' that was turned over to Weta Digital, which effectively replaced the traditional concept of previs for this kind of filmmaking.

Interestingly, Letteri said that Weta's experience with Steven Spielberg on the upcoming Tintin film, which also relies on performance capture, has differed slightly from that on Avatar. Cameron would set up a master shot and shoot his actors playing out a scene from just one angle, then use the data to mix and match his desired shots. Spielberg, on the other hand, would use the virtual camera to film more traditional angles and setups, having his actors repeat the dialogue. Letteri said this was the new paradigm, and perhaps why the studio was slightly nervous every time they visited the Avatar set, saw the real-time template performances and asked, "Is that what it's going to look like?".

The virtual stage

Motion capture supervisor Dejan Momcilovic was the next presenter and talked about Avatar's virtual stage. We saw a lot of behind-the-scenes footage showing the set design, asset creation, scouting phase, the shoot and motion editing. Momcilovic said that the first test of the virtual production equipment - the suits and the facial cameras - in front of Cameron was also the first time everything worked; a big relief for all.

Scouting the scene once assets had been created turned out to be an interesting exercise, as Cameron was able to move trees and even mountains, which sometimes had to be re-designed on the spot. They also had to be tracked through production. Although the motion captured for the template was somewhat crude, things like tails occasionally had to be hand-animated if they were important for the scene. Sometimes the virtual camera ran at half speed, which also meant the audio collected was slowed down, to comical effect. For some of the live action scenes featuring the digital avatars, like the shot of Jake's avatar awakening in the lab, a simulcam technique was developed. We saw how the scene was shot first as virtual production and then shot live action, with the captured performance comped in.

Facial animation

Mark Sagar gave an enlightening presentation on Weta's use of the Facial Action Coding System (FACS) on Avatar. The system, developed by Ekman and Friesen in 1978, provides a way to categorise facial behaviours based on the underlying facial muscles: each action unit corresponds to the action of a particular muscle or muscle group, and combinations of action units describe different expressions. Sagar said that traditional facial animation can become quite complex when different geometry is involved, but that FACS gives you geometric independence. A simple combination of action units can give you multiple expressions, and you can use the system to build smiles or, say, angry expressions, in this way. Without FACS, said Sagar, the process would be like making face sculptures at 24 frames per second that would all have to be consistent. Interestingly, the aspect Weta did have to add in was speech, as this is not something dealt with in FACS.
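To illustrate the geometric independence Sagar described, here is a minimal sketch (not Weta's implementation) of driving a face mesh with weighted FACS action-unit shapes. The AU numbers follow Ekman and Friesen's coding; the mesh data, weights and function names are invented for illustration.

```python
import numpy as np

# Minimal sketch of FACS-style blendshape combination. Each action unit
# (AU) stores a vertex delta from the neutral mesh, so the same AU weights
# can drive any face geometry that has matching AU shapes - the geometric
# independence mentioned above. All data here is invented stand-in data.

def combine_aus(neutral, au_deltas, weights):
    """Return a posed mesh: neutral + sum of weighted action-unit deltas."""
    posed = neutral.copy()
    for au, weight in weights.items():
        posed += weight * au_deltas[au]
    return posed

n_verts = 5000                                   # hypothetical mesh size
neutral = np.zeros((n_verts, 3))                 # stand-in vertex positions
au_deltas = {au: np.random.randn(n_verts, 3) * 0.001
             for au in (4, 6, 9, 12)}            # stand-in AU shapes

# AU6 (cheek raiser) + AU12 (lip corner puller) builds a smile;
# AU4 (brow lowerer) + AU9 (nose wrinkler) reads as anger.
smile = combine_aus(neutral, au_deltas, {6: 0.8, 12: 1.0})
anger = combine_aus(neutral, au_deltas, {4: 1.0, 9: 0.6})
```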

On Avatar, the helmet camera was integral in recording the actors' expressions with freedom of motion, along with accurate eye tracking. Weta tested the camera with a proof of concept, using a digital model of Gollum to map expressions from an actor's markered face onto geometry. Ultimately, the camera provided two important benefits: live feedback and accurate offline processing of facial data. The facial animation pipeline involved acquisition, marker tracking, marker re-assignment, eye tracking, tuning and baked-out animation curves. Sagar thought in the future they might be able to do away with tracking markers altogether on the face, so long as the recording was hi-res enough.
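As a rough picture of how those stages fit together, here is a hedged sketch of the pipeline as a simple chain of functions. The stage names come from the talk; the signatures and data fields are invented purely for illustration.

```python
# Hedged sketch of the facial pipeline described above. Stage names come
# from the talk; the data structures are placeholders, not Weta's formats.

def acquire(footage):      return {"frames": footage}
def track_markers(d):      return {**d, "tracks": "raw marker tracks"}
def reassign_markers(d):   return {**d, "tracks": "cleaned, re-labelled tracks"}
def track_eyes(d):         return {**d, "gaze": "per-frame eye directions"}
def tune(d):               return {**d, "solve": "tuned facial solve"}
def bake_curves(d):        return {"anim_curves": d["solve"]}

PIPELINE = (acquire, track_markers, reassign_markers,
            track_eyes, tune, bake_curves)

def process_take(footage):
    """Push one helmet-camera take through each stage in order."""
    data = footage
    for stage in PIPELINE:
        data = stage(data)
    return data   # baked-out animation curves for the facial rig
```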

Modeling a Jungle: Building Pandora

Shawn Dunn and Marco Revelant provided details on Pandora's digital jungle. Although much of the main jungle plant life existed in the template provided to Weta, Cameron would move ferns and trees around, and these had to be tracked. Temporary cards used in the template as backgrounds or even as main plant props also had to be replaced with proper CG versions. We saw a breakdown of a shot from the film of Jake and Neytiri running along a tree branch. The production company, Lightstorm, provided Weta with the template render and MotionBuilder files for the animation. Weta then carried out their own render and brought the scene into Maya, where they could switch between the previs assets and the final CG ones to check line-ups and action. Assets were moved around on a per-shot basis in order to adapt to the master file. Artists would then do a 'shot-sub' of what hadn't been built yet, ending up with a list of required assets - particular trees like a California oak or a banana tree that would be crucial for the action if the characters had to interact with them.
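As a toy illustration of that audit step, the sketch below compares a template scene's asset list against what has already been built as final CG, flagging anything the characters interact with. The names and fields are hypothetical, not Lightstorm or Weta data formats.

```python
# Illustrative sketch of a per-shot asset audit ("shot-sub"): compare what
# the template scene references against what exists as final CG, and
# prioritise anything the characters interact with. All names are invented.

template_assets = {
    "california_oak_03": {"interactive": True},   # the branch they run along
    "banana_tree_11":    {"interactive": False},
    "fern_cluster_27":   {"interactive": False},
}
built_library = {"fern_cluster_27"}               # already built as final CG

to_build = {name: meta for name, meta in template_assets.items()
            if name not in built_library}

# Interaction-critical assets go to the top of the build list.
for name, meta in sorted(to_build.items(),
                         key=lambda kv: not kv[1]["interactive"]):
    tag = "PRIORITY" if meta["interactive"] else "standard"
    print(f"{tag}: {name}")
```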

To build complex plants, Weta developed a shrub workflow. Often canopies were represented by spheres initially and then converted using particle systems. There was also a strong reliance on extracted displacement maps to continue to add detail once the asset was working in the shot, such as a fallen log that the characters run over, or rock faces. Artists even collected rocks from near Wellington airport and scanned them in 3D back at Weta, using photogrammetry techniques to apply the textures.
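A minimal sketch of the sphere-to-particles canopy idea follows, under the assumption that a canopy is first stood in for by a simple sphere and then populated with instance points for a particle system to replace with leaf geometry. The sampling method, counts and radii are illustrative choices, not Weta's shrub workflow itself.

```python
import numpy as np

# Sketch of standing in a canopy with a sphere, then scattering instance
# points over its surface for a particle/instancing system to replace
# with leaf geometry. Parameters are illustrative.

def scatter_on_sphere(center, radius, count, seed=0):
    """Uniformly sample `count` instance points on a sphere's surface."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(count, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # project to unit sphere
    return center + radius * v

canopy_points = scatter_on_sphere(center=np.array([0.0, 12.0, 0.0]),
                                  radius=4.0, count=2000)
# Each point would then carry an orientation and a leaf-card or
# leaf-cluster instance in the particle system.
```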

Compositing Avatar: Stereo and Deep Images

Peter Hillman and Erik Winquist delved into Weta's new approach to compositing on Avatar, centred around deep compositing and a stereo workflow. They recounted the good old days (only two years ago) when much compositing could be achieved with A over B and plenty of cheats. For Avatar's stereo environment, even though the film was roughly 40% live action and 60% all-digital, there could be no cheating. The complexity of the shots also meant that Weta could not simply render whole scenes, as changes needed to be accommodated, and things like atmosphere, God rays, filtration effects, depth of field, lens aberrations and sensor noise were all crucial.
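For reference, "A over B" is the classic Porter-Duff over operation on premultiplied RGBA images; the sketch below is the standard formulation, not Weta-specific code.

```python
import numpy as np

# The classic "A over B" operation for premultiplied RGBA images
# (Porter-Duff "over"). Standard compositing maths, shown on toy data.

def over(a, b):
    """Composite premultiplied RGBA image `a` over `b`."""
    alpha_a = a[..., 3:4]
    return a + b * (1.0 - alpha_a)

fg = np.zeros((4, 4, 4)); fg[..., :] = [0.5, 0.0, 0.0, 0.5]  # 50% red
bg = np.zeros((4, 4, 4)); bg[..., :] = [0.0, 0.0, 1.0, 1.0]  # solid blue
comp = over(fg, bg)   # half-covered red over blue
```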

To composite in stereo, Weta initially relied on their existing Shake set-up, but proposed a new stereo EXR (SXR) format. It has since been incorporated into Nuke, RV, Silhouette, SGO Mistika and Fusion. Slight adjustments to macros and nodes were needed to deal with polarising effects from the lens set-up and any convergence issues if the cameras were not aligned perfectly or didn't match scenes shot earlier. The whole idea was to avoid having to composite everything twice, once for each eye.
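The spirit of that idea can be sketched as keeping both eyes together in one stereo image, much as SXR's multi-view files do on disk, and applying every comp operation to both views in lockstep. The StereoImage class and grade node below are invented for illustration.

```python
# Hedged illustration of "don't composite twice": hold both eyes in one
# stereo image and run each comp node over both views with identical
# parameters. The class and node are invented, not a real SXR API.

class StereoImage:
    def __init__(self, left, right):
        self.views = {"left": left, "right": right}

    def apply(self, op, *args, **kwargs):
        """Run one comp operation over both eyes in lockstep."""
        return StereoImage(*(op(v, *args, **kwargs)
                             for v in (self.views["left"],
                                       self.views["right"])))

def grade(image, gain):
    return [pixel * gain for pixel in image]

plate = StereoImage(left=[0.2, 0.4], right=[0.21, 0.41])
graded = plate.apply(grade, gain=1.5)   # one script, two eyes
```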

Deep compositing techniques allowed artists to combine tens of thousands of elements for the jungle scenes over multiple layers. We saw some nice breakdowns of the Hammerhead Titanothere charging, with its feet properly incorporated into the undergrowth thanks to deep alpha maps. Another breakdown of Neytiri lifting a live action Jake showed that much of Jake's body was CG to deal with camera alignment and hand placement issues.
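Conceptually, a deep image stores a list of depth-sorted samples per pixel rather than one flattened value, so separately rendered elements merge by simply interleaving their samples by depth. The minimal sketch below shows the idea on invented sample data, not Weta's actual deep format.

```python
# Minimal deep-compositing sketch: each pixel holds samples of
# (depth, premultiplied colour, alpha); merging elements is just sorting
# their combined samples by depth and compositing front to back.

def flatten_deep(samples):
    """Composite deep samples (depth, color, alpha) into a flat pixel."""
    color, alpha = 0.0, 0.0
    for _, c, a in sorted(samples):            # nearest sample first
        color += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
    return color, alpha

creature_foot = [(3.0, 0.40, 0.8)]                   # one hard-surface sample
undergrowth = [(2.5, 0.05, 0.3), (3.5, 0.06, 0.4)]   # leaves in front and behind

# The foot slots between the leaf samples automatically, with no
# held-out mattes - the effect described for the Titanothere shot.
print(flatten_deep(creature_foot + undergrowth))
```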

Physical Textures and Lighting in a Digital World

Senior prosthetics supervisor and visual creature effects art director Gino Acevedo started this presentation by showing how highly detailed skin displacement maps could be obtained using alginate, plaster, silicone, a flat-bed scanner and digital image processing techniques. We were actually given some of this mixture to harden on our hands, and it really did pick up an amazing amount of detail. The techniques helped with skin creation on the characters in Avatar, as well as for things like tree bark.
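On the digital side of that workflow, the image processing presumably includes normalising the scanned cast into a displacement signal; the sketch below is an assumption-laden illustration, not Acevedo's actual recipe.

```python
import numpy as np

# Hypothetical sketch: map a grayscale flat-bed scan of a skin cast into
# a normalised displacement signal centred on zero (mid-gray = no
# displacement). The range handling is an assumption for illustration.

def scan_to_displacement(scan, low=None, high=None):
    """Map a grayscale scan to a [-0.5, 0.5] displacement signal."""
    scan = scan.astype(np.float64)
    low = scan.min() if low is None else low
    high = scan.max() if high is None else high
    return (scan - low) / (high - low) - 0.5

scan = np.random.rand(256, 256)   # stand-in for the scanned cast
disp = scan_to_displacement(scan)
```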

Martin Hill, head of shading at Weta, then discussed the studio's use of spherical harmonics in lighting Pandora and the development of PantaRay, the GPU-optimised ray-tracing system Weta developed with NVIDIA. PantaRay precomputes ray-traced sparse, directional occlusion caches, which proved particularly useful for the highly complex jungle and floating mountain environments. Hill lamented the fact that when they decided to go with a spherical harmonics system, they pretty much had to delete their entire existing shader library.
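To give a sense of why spherical harmonics are attractive for lighting, diffuse irradiance along any surface normal can be evaluated from just nine SH coefficients per colour channel, following Ramamoorthi and Hanrahan's 2001 formulation. The sketch below shows that standard evaluation, not PantaRay's internals.

```python
import numpy as np

# Standard order-2 spherical harmonic irradiance evaluation
# (Ramamoorthi & Hanrahan 2001). Nine coefficients per colour channel
# suffice for diffuse lighting - textbook maths, not Weta's shader code.

def sh_irradiance(L, n):
    """Irradiance along unit normal n = (x, y, z) from 9 SH coefficients.

    L is ordered [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22].
    """
    x, y, z = n
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    return (c4 * L[0]
            + 2.0 * c2 * (L[1] * y + L[2] * z + L[3] * x)
            + 2.0 * c1 * (L[4] * x * y + L[5] * y * z + L[7] * x * z)
            + c3 * L[6] * z * z - c5 * L[6]
            + c1 * L[8] * (x * x - y * y))

L = np.random.rand(9)                        # stand-in lighting coefficients
print(sh_irradiance(L, (0.0, 1.0, 0.0)))     # irradiance for an up-facing normal
```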

Putting it All Together: The Supervisor's Perspective

Visual effects supervisor Wayne Stables, one of a number of Weta supes on Avatar, concluded the day with a look at his role in co-ordinating so many people and assets and just making the shots happen. Stables was primarily responsible for the Well of Souls sequences (centred on the Tree of Souls). The first words he heard from James Cameron were, "What did you do to deserve that?" The sequence was, he said, "complicated" - from story, aesthetic, visual and technical perspectives. There were crowds (always hard) and the need for a digital Sigourney Weaver, an actress Stables revered.

What was most interesting about Stables' talk was a discussion of the shots that did not work, at least at first. We saw the sequence that has Weaver's character being overtaken by growing root-like structures as she is absorbed into the Tree of Souls. The template version used a simple card texture to communicate the effect. Weta embarked on research into the right look, referencing things like sea life, and creating a number of tests.

After some months, the shot was not working - it had too much noise and jitter - and Stables decided to go back to square one. He ultimately settled on something simpler using animation curves to give a more stable growth-like pattern. In the end, the shot is only seven seconds in the film, and the actual element is only a small part of the shot. But Stables noted that the detail, the story-point and the artistry mattered, especially in a film like Avatar.
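As a rough sketch of the curve-driven approach, one might let a hand-authored ease curve drive how far each root strand has grown; because the curve is monotonic, the result is free of the noise and jitter of a simulation. The smoothstep shape and timing values below are invented for illustration.

```python
# Illustrative sketch of curve-driven growth: an animation curve, not a
# simulation, drives how far along each root strand has grown. The
# smoothstep ease and the timing values are invented assumptions.

def smoothstep(t):
    """Classic ease-in/ease-out curve on [0, 1]."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def growth_fraction(frame, start, end, strand_offset=0.0):
    """Stable, monotonic growth value for one root strand."""
    return smoothstep((frame - start - strand_offset) / (end - start))

# Seven seconds at 24 fps, with strands staggered slightly:
for frame in range(0, 168, 24):
    print(frame, round(growth_fraction(frame, start=0, end=168,
                                       strand_offset=12.0), 3))
```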

