Audiences are reeling in amazement at the artistry and technical polish of David Fincher’s The Curious Case of Benjamin Button. But even those with an appreciation of the power of visual effects will be stunned to learn that for the first 52 minutes of this epic motion picture, the head of Brad Pitt’s character is a CGI creation. Bill Dawes speaks to DD VFX Supervisor Eric Barba about the process Barba dubbed “emotion capture”.
fxg: Eric, congratulations on The Curious Case of Benjamin Button. You must be very pleased how it all turned out. It’s a stunning film.
Eric Barba: I actually am. It’s taken a long time to get to that point where I can relax and be happy with it. Now that people are really responding warmly to it, it’s great.
fxg: Eric, before we talk about how you created the stunning shots for this movie, can I ask you to take us back to your initial discussion with David Fincher about Benjamin Button. How did you think you would be able to achieve the visual effects in this film?
Eric Barba: I’ve been working with David since 2002, working on commercials and music videos and bidding on various feature film projects. There were a dozen or more projects before Zodiac, but since the very early days, we have been talking about Benjamin Button. I’d heard about the project because Digital Domain had looked at it in a previous incarnation. David told me more and eventually I got a script, then David and I would talk about it more and more while we were working on other projects, for instance, “How are you going to do the tracking?” and “What do you think of this technology?”
At that point, David was pushing the digital workflow and we were using the Viper camera. We got really excited about Benjamin Button in about 2004. We were paid to do a test, which we had a really short amount of time to do. We had a maquette built and did a head replacement with another actor in about five weeks. People thought we were crazy to attempt this. But because he did not have to talk or emote too much, we could build it specifically for one shot: the scene where you first see Benjamin Button as a kid sitting at the dining table banging his spoon.
The test went pretty well and it was received very warmly by the studio. I remember showing it to producer Frank Marshall and he said that it was the most exciting thing he’d seen since the first dinosaur test walk for Jurassic Park. So then we knew we were onto something. But the movie wasn’t greenlit at that point, so I went back to doing a music video and more commercials with David and ultimately Zodiac. When Zodiac finished we found that Benjamin wasn’t dead, and these commercials came up that were the perfect test bed for Benjamin Button.
(These were three ads for the iconic US popcorn brand Orville Redenbacher. The ‘Colonel Sanders’ of the popcorn world, Orville was famous for his direct-to-camera promotional spots that aired on US television in the 70s and 80s. After his death, Fincher used CGI techniques to bring him back for a spot in 2007.)
We thought we could use those spots as R&D for Benjamin Button, testing out rendering and tracking technologies. Principal photography for Benjamin Button had already begun in November of 2006. We learned what ideas wouldn’t work and what would need further refinement. We started working with the Mova Contour guys (developers of the Contour Reality Capture System) and we were really excited to be working with them. They were using similar techniques to those we had used to bid the film initially, but these guys had a really good handle on how it all worked.
Our use of Mova Contour on the Orville Redenbacher commercials didn’t get the warmest reception. Part of the reason was we were doing a guy who was already dead, so there was no way to capture a performance from him. We cast a voice that was similar and then different actors to do the face and the body, which ended up as three very disparate performances. Another problem was that as you art direct a commercial you start to get away from the original performance, and you tend to move into that “Uncanny Valley” territory. Doing that commercial was a great learning experience, and it also humbled my team, which had previously worked with David Fincher to put out some pretty amazing work. Up until that point we thought we had a handle on it, and then we did Orville and realized we still had a lot to do to make it all work.
fxg: What was the difference between the approach you developed for the test shot in 2004 and how you achieved the character in Benjamin Button?
Eric Barba: The part that carried over from 2004 was taking a life cast of the actor and from there a maquette is sculpted. With Orville, the maquette was not very good so we knew we would need some fantastic maquettes showing Brad at age 80, 70 and 60 that the studio could sign off on.
In the 2004 test, because we did not have the character doing a major performance, the rigging system was fairly primitive. We knew we’d have to build a much more elaborate system. When we started Orville our character supervisor, Steve Preeg, developed the rig that the animator used for Orville’s performance. We were going to use the Mova Contour data to help us on that project but it wasn’t fully mature, and we still needed to do a lot of homework on it.
For Benjamin Button we could not create the performance in animation; we had to translate it from Brad’s performance. So we thought about using Mova in a different way, to capture facial expressions. Steve was the character supervisor for Benjamin Button too, and he worked with a team to design the animation system and emotion capture process. We knew what we needed to get out of the Mova system and they knew how to get it all working. Six months later, when we got Brad into the rig, we were able to get everything we needed, which was basically a series of facial expressions.

The Mova system is designed to get 24fps data that can then be used to identically recreate a performance on a CGI version of the character. But because we were going to be using Brad Pitt’s performance to drive older versions of his physiology, basically re-targeting the performance to an older version of himself, that wasn’t really going to work for us. And because the Mova capture system requires the actor to be seated and nearly still, and we needed to put him in an environment where he is moving around and interacting with other actors, there were going to be some limitations on what we could use that data for. So we used the Mova rig to capture Brad’s facial shapes, and we then built systems to use Brad’s performance to drive Benjamin.
fxg: Can you describe the rig that was used to capture Brad Pitt’s performance?
Eric Barba: To clarify, we didn’t use Mova to capture his performance, but to capture facial expressions volumetrically. The Mova Contour Capture rig is designed to hold 28 cameras in an array around the actor. They are mounted on a speed-rail-like structure that surrounds about 150 degrees of the actor. The cameras are all aimed at the actor’s face, which is covered with phosphorescent make-up. This allows for frame-by-frame tracking of patterns, and each point can be tracked in 3D space. This is the first system to truly capture someone’s face moving in realtime and provide a moving mesh that can be subdivided, rebuilt, then retargeted to another mesh to drive a CGI performance. Since Orville Redenbacher, Mova has been constantly working to refine the Contour capture system and we had been working with them on this. It was a collaborative effort. They were invited in to collaborate on Benjamin Button and, like all of the vendors, were incredibly excited.
To capture Brad’s performance we shot him performing the role on a soundstage with four HD cameras and used image analysis technology to get animation curves and timings that drove our proprietary deformation rig.
fxg: So where does the digital Brad begin and end in this film?
Eric Barba: The baby was a mixture of live action and a robotic maquette, which was enhanced by Hydraulx, I believe. But for the first 52 minutes of the film it’s a full 3D head; there were 325 individual shots. There’s no projection and there are no 2D techniques. Once our work stops, about 52 minutes in, Brad takes over in makeup. Ultimately, as he gets younger, he wears less and less makeup until it’s just Brad. As he gets very young, Lola did touchup work on his physical makeup, then when he gets back to the dance studio Lola takes over doing the younger version of Brad. Ultimately there are a couple of child actors and a baby at the end. The bulk of the work in head replacement happens in the first 52 minutes. We previsualized the U-boat battle and planned it out with David, and built a bunch of the assets, but ultimately we handed that off to Asylum because of time and economics, and I think they did a good job of it.
The first “digital head” shot is the one we did for the test, where there’s a long dolly and pan until the audience sees Benjamin sitting at the table banging his spoon. That’s the first body actor for Ben in his 80s; as he grows younger, another body actor takes over for him in his 70s, when he goes out on the tugboat with Cap’n Mike and goes to the bar. The bulk of our work is the “Ben 70” character, and “Ben 60” when he leaves home. One of our last shots is when he is reading the letter from Daisy on the back of the tugboat. The line where he tells the Captain, “Well, you do drink a lot”, that’s where the real Brad takes over.
fxg: Did the number of head replacement shots grow during the filmmaking process, or was it always intended to be that many?
Eric Barba: David always said that he didn’t want to shoot this movie around the fact that it was a CG character. He wanted to shoot it like he was shooting an actor. That means we don’t shy away from difficult shots such as seeing him naked in the bathtub or getting a haircut or getting drunk and stumbling. It allowed David to tell the story the way he wanted to tell it which I think is the right way to do it. We did end up with a higher number of head replacement shots than was originally planned.
fxg: Were there many challenges in the job of attaching the digital Brad to the various bodies used during the live shoot?
Eric Barba: The body actors had different neck lengths and shoulder sizes, and even the arch of the neck was different, so there was a lot of massaging that had to go on to make those heads feel like they belonged. Our tracking supervisor, Marco Maldonado, had to create a tracking system that was far more robust than traditional tracking, because traditional tracking doesn’t give you accurate Z depth. It doesn’t give you a spine, which we needed to have so the head would follow the body exactly as it moved around. During the shoot we would have four of the principal Viper cameras running in perfect sync. We devised the software to take any of the different cameras and, once the plates were locked, triangulate and retrack to get the accuracy we needed. This was a huge undertaking, but we learned with Orville that there wasn’t anything out there that could do the tracking the way we needed it, so that you buy the illusion every time.
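The multi-camera setup Barba describes rests on a classic computer-vision idea: once the same marker is seen by two or more synchronized, calibrated cameras, its 3D position (including accurate Z depth) can be triangulated. The sketch below shows the standard direct linear transform for that triangulation step; the camera matrices and observations are toy values, not Digital Domain's actual tracker.

```python
import numpy as np

def triangulate(projections, points_2d):
    """Triangulate one 3D point seen by 2+ synced cameras via linear
    least squares (the direct linear transform). `projections` is a
    list of 3x4 camera matrices; `points_2d` the matching (x, y)
    image observations. An illustrative sketch, not production code."""
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        # Each view contributes two linear constraints on X = (X, Y, Z, 1)
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    # Homogeneous solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras looking down +Z, offset along X (a stereo baseline)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0, 1.0])
obs = [(P @ point)[:2] / (P @ point)[2] for P in (P1, P2)]
print(triangulate([P1, P2], obs))  # recovers roughly [0.5, 0.2, 4.0]
```

With four synced Vipers instead of two, the same least-squares system simply gains extra rows per camera, which is what makes the recovered depth robust enough to lock a CG head to a moving spine.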
fxg: In many of the shots the Brad Pitt character is wearing a hat or glasses? Did that present additional challenges?
Eric Barba: My intention when we started shooting was to always shoot Benjamin with the actor wearing a hat so that the body was shadowed correctly. My character supervisor was really worried about that approach, but I thought that it would help with the illusion, so the audience would not know where the plate stopped and Benjamin started. Most of the hats in the movie are the live hat.
There were some issues with lineups where Brad’s head proportion didn’t exactly match the body actor’s. We didn’t want to distort Brad’s head to fit the hat, because we didn’t think that was the right way to go. So sometimes we would just add in a CG hat and hope the audience wouldn’t notice, but I thought the hats needed to be in the shots so that Claudio Miranda (Director of Photography) and David could frame their shots. It did help us sell the illusion ultimately. Glasses, however, were a huge challenge. Even back in 2004, when we were doing the first test scene, David said he wanted these glasses to be as thick as Coke bottles. He wanted the audience to be able to see the eyes enlarged through refraction so they got the sense that he cannot see. At that stage I was going “Uh oh”, because refraction adds problems, the main one being that when we look through thick refracting lenses it often distorts our perception of where that person is looking. So maintaining focus and an eyeline was very challenging.
fxg: Was there a lot of work in compositing as well?
Eric Barba: It was massive. One of the things we learned from the initial test scene and from Orville, where we had 15 shots across the three spots, was that even among really good compositors (and I had my A-team on that project) there’s only a handful of people with the right sensitivity to colour: the ones who can take these head renders, put them in a scene, and keep the skin tones from coming apart or looking ashen, too pink, or too red. One of my compositing supervisors for the Orville spot, Janelle Croshaw, had that sensitivity, although the spots didn’t end up working that well for other reasons. So part of our plan going into Benjamin Button was to build a lighting system that would give us as close to perfect out of the box as possible; then we needed to build a compositing template system that allowed us to hand off shots to 20 other compositors and get the same results. That’s a really tall order with normal rendered CG, but for Benjamin Button it was off the meter in terms of difficulty. My hat is off to our lighting supervisors Jonathan Litt and Dan Abrams, who did an amazing job.
We used a lot of common techniques, such as HDR for set reconstruction, but our lighting system was different from the normal workflow. Traditionally your tracking team hands over their data, the lighting team renders off the pieces, and those get handed off to the compositing team, so compositing comes in last. On this show our compositing actually came in first, and they really enjoyed that. The integration team that was on set with us for seven months surveyed every piece of equipment on set; we took HDRs for all the different head positions as we were shooting, and all that information was catalogued as we went. We then built a system in 3D that would ingest that data and build the HDRs, and we had modellers rebuild the set from the survey data. We built shader systems that could take the HDRs and project them back onto the set geometry. Because we had the tracking of the head, we could relight it within Nuke and reproject those HDRs onto the CG head, recreating a corrected HDR for the head position.
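The relighting workflow Barba outlines, capturing HDRs of the set and using them to light the CG head, is a form of image-based lighting. Below is a toy sketch of the core integral: the diffuse irradiance a surface normal receives from a lat-long (equirectangular) HDR map. This is the textbook approximation, not the production shader system.

```python
import numpy as np

def diffuse_irradiance(hdr, normal):
    """Integrate a lat-long HDR environment, cosine-weighted and
    solid-angle corrected, to approximate the diffuse light reaching
    a surface with the given normal. Illustrative sketch only."""
    h, w, _ = hdr.shape
    # Direction vector for every pixel of the lat-long map
    theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth
    sin_t = np.sin(theta)[:, None]
    dirs = np.stack([sin_t * np.cos(phi)[None, :],
                     np.cos(theta)[:, None] * np.ones((1, w)),
                     sin_t * np.sin(phi)[None, :]], axis=-1)
    cos_term = np.clip(dirs @ np.asarray(normal, dtype=float), 0.0, None)
    # The solid angle of each texel shrinks toward the poles by sin(theta)
    weight = cos_term * sin_t
    domega = (np.pi / h) * (2.0 * np.pi / w)
    return (hdr * weight[..., None]).sum(axis=(0, 1)) * domega

# Sanity check: a uniform white environment integrates to roughly pi
hdr = np.ones((64, 128, 3))
print(diffuse_irradiance(hdr, [0.0, 1.0, 0.0]))
```

A uniform radiance of 1 over the hemisphere integrates to pi, which is a handy sanity check for any image-based-lighting implementation; per-head-position HDRs, as described above, would swap in a different `hdr` for each tracked head transform.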
Digital Domain’s Emotion Capture Approach
Eric Barba: “Since our goal was not to create Benjamin’s performance in animation, but rather to ‘xerox’ Brad Pitt’s performance onto this CG head, we had to develop a brand new process that we call emotion capture.”
The overall process included:
1. Working from life-casts of Brad Pitt and body actors to create three photo-real maquettes representing Benjamin in his 80s, 70s and 60s, then shooting them in different lighting conditions using a light stage. (Rick Baker and Kazu Tsuji created the maquettes).
2. Creating 3D computer scans of each of the three maquettes.
3. Shooting scenes on set with body actors in blue hoods.
4. Creating computer-based lighting to match the on-set lighting for every frame where Benjamin appears.
5. Having Brad perform facial expressions while being volumetrically captured (with Mova/Contour), and creating a library of ‘micro-expressions.’
6. Shooting Brad in high definition performing the role, from four camera angles, and using image analysis technology data to get animation curves and timings.
7. Matching the library of expressions to Brad’s live performance of Benjamin.
8. Re-targeting the performance and expression data to the digital models of Benjamin (created from scanning the maquettes) at the specific age required in the shot.
9. Finessing the performance to match current-Brad expressions to old-Benjamin physiology using hand animation.
10. Creating software systems for hair, eyes, skin, teeth, and all elements that make up Benjamin.
11. Creating software to track the exact movements of the body actor and the camera, to integrate the CG head precisely with the body.
12. Compositing all of Benjamin’s elements to integrate animation, lighting, and create the final shot.
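Steps 5 through 8 above amount to expressing each captured frame of Brad's performance as a weighted blend of the library expressions, then replaying those weights on the older Benjamin models. A minimal sketch of the weight-solving step follows, using projected gradient descent to keep the blend weights non-negative; the tiny two-expression "library" is illustrative, not the real expression set or Digital Domain's solver.

```python
import numpy as np

def solve_expression_weights(library, captured, iters=200, lr=0.1):
    """Fit a captured facial shape as a non-negative weighted blend of
    library expressions. `library` is an (n_expressions, n_coords)
    matrix of flattened expression shapes; `captured` is one flattened
    performance frame. Hypothetical sketch, not the production solver."""
    A = library.T                    # (n_coords, n_expressions)
    w = np.zeros(library.shape[0])
    for _ in range(iters):
        grad = A.T @ (A @ w - captured)       # least-squares gradient
        w = np.clip(w - lr * grad, 0.0, None)  # project onto w >= 0
    return w

# Toy library: two orthogonal "expressions" in a four-coordinate face
library = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0]])
captured = 0.7 * library[0] + 0.3 * library[1]
print(solve_expression_weights(library, captured))  # roughly [0.7, 0.3]
```

Once solved against the library captured from Brad, the same weight vector can be applied to a matching expression library built for the 60-, 70-, or 80-year-old head, which is the essence of the re-targeting in step 8.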
An article and video by Lee Stranahan at the Huffington Post – The Amazing Effects Of Benjamin Button
CBS News story on YouTube