How MPC used real and digital on Batman v Superman

Guillaume Rocheron was MPC’s visual effects supervisor on Batman v Superman: Dawn of Justice. In this detailed Q&A with fxguide, Rocheron recounts some of the major design and VFX planning challenges behind the live action/digital character transitions, the creation of Gotham, and bringing the hideous Doomsday mutation to life during the film’s final battle.

fxg: One thing that struck me in the film was how much there seemed to be this interchange between the real actors and digital doubles, in the same shots. Can you talk about the transitions you achieved?

Rocheron: That’s something (overall visual effects supervisor) DJ Des Jardin and I have been working on for a while now. We initiated it with Sucker Punch and took it to the next stage on Man of Steel, but Batman v Superman was really the opportunity for us to blur the line.

It wasn’t just about good digi-double work, it was about finding ways to transition between real and digital and back to real again, and combinations of all of these. I think audiences are getting used to the actors doing the drama, then it cuts to a wide shot, which is CG action, then it cuts back to drama. One thing we really pushed was, how do we put the camera pretty close on the actor and get some drama, then transition to something that is unfilmable, and maybe keep going and transition again? So those digital take-overs are something we’ve been pushing quite a lot and using a lot on the movie.

Finishing at IMAX resolution was really the question mark here, though. It had been done before – you can be pretty photorealistic with your digital actors as long as you don’t hold on them for drama for too long – but doing the very same thing in 4K IMAX means you’re literally working with an image that is four times the size, with four times as many pixels that have to hold up.
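For a rough sense of those numbers – assuming the standard DCI 2K and 4K container sizes, which may not be the production’s exact delivery resolutions – the pixel math works out like this:

```python
# Pixel counts assuming standard DCI 2K and 4K container sizes
# (an assumption -- the production's exact delivery spec may differ).
pixels_2k = 2048 * 1080    # ~2.2 million pixels
pixels_4k = 4096 * 2160    # ~8.8 million pixels
print(pixels_4k / pixels_2k)   # 4.0 -- four times as many pixels for every
                               # groom, texture and render detail to hold up at
```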

Knowing that early on, we devised new rigging techniques and methodologies so we could push the digi-doubles to the point where, in the Batman v Superman fight, there are moments where you’re watching a full-frame digital Superman face being punched by a digital Batman, and I don’t think the audience is aware of it. The reason we did it was all about shot design – getting the most out of every shot and maximizing its impact.

fxg: So what are some of the technical approaches you followed in making the digi-doubles?

Rocheron: Well, we know we can do photorealistic skin and photorealistic hair and photorealistic-looking characters, but the hard part is, how do you maintain likeness at all times? If you move the eye, say, 5mm higher, you know you’re going to have something that looks like the actor but is not the actor. If the ears are half a per cent too big, your brain is going to question the difference – is it the actor, or is it someone who looks like the actor?

So, what happens when you smile? Certain muscles are moving. We work on muscle and fat sims. In order to maintain the likeness to the actor, you have to make sure that not only is everything moving the way it should, but also that you are hitting the specifics of each actor’s anatomy and features. Ben Affleck’s smile is very different from Henry Cavill’s smile. These are things we put a lot of emphasis on. What we would do this time is have targets for the specific actors – so in a smile we had the target of a certain muscle position. We used real-world data to drive the more dynamic systems – we were forcing the systems to reach certain expressions or poses.
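A minimal sketch of that idea – pulling a dynamic sim toward an actor-specific scanned expression shape – might look something like the following. The function and data names are hypothetical and this is not MPC’s rig code:

```python
import numpy as np

def constrain_to_actor_target(sim_verts, target_verts, weight):
    """Pull a muscle/fat-sim result toward an actor-specific scanned
    expression shape (e.g. a FACS smile target), so the dynamic system
    is forced to hit that actor's anatomy. Hypothetical sketch only.

    sim_verts, target_verts: (N, 3) arrays of vertex positions.
    weight: 0.0 = raw simulation, 1.0 = exactly the scanned target.
    """
    return (1.0 - weight) * sim_verts + weight * target_verts

# Usage sketch: as the animated smile ramps in, 'weight' (driven by the
# facial animation curves) pulls the sim onto that actor's scanned smile,
# so Ben Affleck's smile stays distinct from Henry Cavill's.
```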

Then at IMAX resolution it’s even harder. You get an insane amount of detail. I remember sitting in dailies and we were basically counting the eyelashes on Superman’s face because we were going, ‘There’s something in the eyes that just doesn’t work’, and we realized on the other edge of the right eye we had 20 per cent too many eyelashes – at that resolution you just capture that.

To do all this we knew we needed a lot of real-world data. We captured the actors on set with Gentle Giant Studios’ new portable scanning setup, built in conjunction with Light Stage. We were the first to use this. It meant we could do very high-res scans of Batman, Superman and Wonder Woman. We then got MOVA to come to Detroit as well, while we were filming, so that we could shoot faces and movement. That meant back in our facial rigs we could always go back to real data. When an actor opens his mouth, the skin and fat is moving not only realistically but exactly the way it does on the real actor, which I think is a tremendous challenge.

fxg: Tell me more about this new portable Light Stage rig?

Rocheron: With the Light Stage in a studio, well, that’s a massive rig, so Gentle Giant happened to develop a portable version of it which they could take on stage. We jumped on the opportunity and got them to come to Detroit for a few days so we could do the scanning. The resolution is incredible because you basically get down to the skin pore level. So as a base you have the highest quality you can get. Then on top of that we took a lot of photographs and used all the usual capture methods. Then we did a FACS session, which captures the actors in different poses in the MOVA rig – that gives us moving scans and lets us study how the ears are moving when you open your mouth, how much the nostrils are flaring, and how the muscle contraction on the cheek is different on every actor.

fxg: On set, what was being done to capture actor performances, or the performances of the stunties, for the scenes where there would be digital doubles?

Rocheron: We capture standard HDRIs and gray and chrome balls. We also have a rig that we call the Envirocam, which we created on Man of Steel. It’s basically a motorized panoramic head that allows us to capture panoramas at almost 60K resolution in the lighting of every shot. When we transition to digital characters we often transition the entire shot to a digital shot, so we have control over the camera move. You don’t want to bake in a camera move on set that doesn’t exist. So to give us that freedom, we basically shoot a little vignette with the actor. Then we bring in the Envirocam to capture the background. That means we can transition the entire background, with the lighting, to a digital version of it using 2.5D projection, and we have full control over the choreography of the shot. We’re then relying on our lighting references and digital doubles to match as closely as possible to the real thing.
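As a small illustration of how a stitched panorama like the Envirocam’s gets looked up when relighting or projecting an environment, here is the standard direction-to-lat-long mapping. The axis convention is an assumption and this isn’t MPC’s pipeline code:

```python
import math

def latlong_uv(direction):
    """Map a normalized 3D direction to (u, v) in an equirectangular
    (lat-long) panorama -- the usual lookup when a stitched pano/HDRI
    is used for image-based lighting or a simple projection.
    Assumes +Y is up; conventions vary per pipeline.
    """
    x, y, z = direction
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)   # longitude
    v = 0.5 - math.asin(y) / math.pi                # latitude
    return u, v

# A ray toward the horizon straight ahead samples the middle of the pano:
print(latlong_uv((0.0, 0.0, -1.0)))   # (0.5, 0.5)
```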

In terms of the stand-in and stunt performers, well, in the Batman v Superman fight Ben is in the mech suit. Often the suit was a bit too constraining. It’s a bit like the Iron Man suit – you can’t really act in it. So we used the gray tracking suits for movement. Either we replaced Ben or the stunt performer completely, or we just kept his face and put the armour around it. It depended on the shot. We used some performance capture for these guys as well. We did a few shots with performance capture suits and multiple witness cameras on set. We did the same thing with Doomsday as well. We had a stunt performer, Ryan Watson, and we did both facial capture and body capture on set for him. We also did it for some Superman shots where we knew we couldn’t really film Henry but we wanted to capture his facial performance. We used a helmet-mounted camera with tracking markers on the face so we could capture the facial performance.

fxg: How difficult was it to match the digital mech suit to the real version?

Rocheron: There was a really beautiful real suit for on-set use. It was just very constraining because of the bulk – it was picture ready but not practical for movement. We did a lot of scanning and photography of it and we created a perfect replica of Batman in his mech suit at different levels of damage. We started by shooting the scene – Batman would throw a punch and the plate is of Ben just moving his arm, so we might keep the armour on Ben’s body but free his arm so he could actually move it – up to the point where the movements became too complex, and then we would just shoot with a performance capture suit and put the armour back as CG.

There’s that one scene where Batman and Superman are outside in the rain and Superman pushes Batman with a flick of the hand and Batman is sent 200 metres back. It’s the typical example of a very difficult take-over. It’s in your face, the camera is not moving very much and you literally have a one-frame transition. We basically shoot Henry and Ben having the dialogue, and Henry pushes Batman back, and in that plate obviously Batman doesn’t move very far. So at the moment Henry pushes Batman we paint him out, so it’s like we have a clean shot. Then we put a CG version of Batman on top of this, and what we do is matchmove and roto-animate the real Batman very accurately so they look perfectly similar. Then the transition is perfectly seamless.

We do the same thing with lighting, putting them side by side and making them look exactly the same. If you look at one and then the other you can barely tell the difference. That’s how we do all the transitions, because it lets us do it over one frame. Interestingly, in that process, when you make a digital double with lighting and integration as good as the real thing, it basically means you’re ending up with something that is photoreal to the point where you can’t tell the difference.

But after working on that particular shot for a while, in the end Batman is actually completely digital all the way through the shot. We just made it so close to the real one, we realized we didn’t even need to do the transition – we could just keep him CG all the way through. We copied Ben’s performance and all the lighting, so even before Henry pushes him it’s actually a CG Batman in the shot.
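A crude sketch of the kind of side-by-side check Rocheron describes – reducing the plate-versus-CG comparison at the takeover frame to a single number – could look like this. It is a hypothetical QC helper, not the production compositing tooling:

```python
import numpy as np

def takeover_error(plate_rgb, cg_rgb):
    """Mean absolute per-pixel difference between the plate and the
    matchmoved/roto-animated CG render at the takeover frame,
    in linear light. plate_rgb, cg_rgb: float arrays of shape (H, W, 3).
    """
    return float(np.mean(np.abs(plate_rgb - cg_rgb)))

# If this is close to zero at the switch frame, the one-frame transition
# from the real Batman to the CG Batman reads as seamless.
```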

fxg: What were you rendering in to do this digital work?

Rocheron: We upgraded our version of RenderMan to RIS, which meant we could take advantage of full path tracing. It really takes RenderMan into a different category of renderer. We did that mainly because we knew for Gotham City we would have a tremendous amount of data to render, and the new RIS is very good at churning through a lot of geo and textures. It also helped us with the quality of lighting and shaders, because you’re fully raytracing everything. You don’t have to deal with cheats or shortcuts or an interpretation of what a surface looks like – you can really replicate the real-world lighting from the set.
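Conceptually, the “no cheats” point is that a path tracer estimates the light-transport integral directly by sampling real incoming light. A toy cosine-weighted Monte Carlo estimate of diffuse irradiance – massively simplified compared to what RIS actually does – looks like this:

```python
import numpy as np

def diffuse_irradiance(incoming_radiance, normal, samples=256):
    """Toy cosine-weighted Monte Carlo estimate of diffuse irradiance:
    average real incoming light over the hemisphere instead of relying
    on an ambient cheat. Purely conceptual -- not RIS code.

    incoming_radiance(direction) -> float, e.g. an HDRI lookup.
    normal: unit-length array-like of 3 floats.
    """
    n = np.asarray(normal, dtype=float)
    # Orthonormal basis (t, b, n) around the shading normal.
    a = np.array([0.0, 1.0, 0.0]) if abs(n[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    t = np.cross(a, n); t /= np.linalg.norm(t)
    b = np.cross(n, t)

    total = 0.0
    for _ in range(samples):
        r1, r2 = np.random.rand(), np.random.rand()
        phi, r = 2.0 * np.pi * r1, np.sqrt(r2)
        d = (r * np.cos(phi)) * t + (r * np.sin(phi)) * b + np.sqrt(1.0 - r2) * n
        total += incoming_radiance(d)
    # Cosine-weighted sampling folds the cos(theta)/pdf term into a factor of pi.
    return np.pi * total / samples
```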

Zack Snyder loves very graphic images. And the DP, Larry Fong, crafts frames that are very complex – there’s a lot of lights and it’s always very intricate and comic booky. For the Batman v Superman fight we actually went to a real location in Detroit and had real rain in there with rain towers. Some of the environment is CG, but we tried to make a lot of the imagery and environment around them practical. You could really craft frames for real that we’d match to, and then on top of that we’d put our CG characters.

fxg: Let’s talk more about the digital city work – how did you realize that part of the visual effects?

Rocheron: Well, Gotham, for example, is a city with a lot of history, in both the comic book world and the film world. In the Tim Burton film it’s very fantastic and gothic. Then the Chris Nolan version is a lot more grounded in reality, so it’s basically Chicago or Pittsburgh with a few tweaks to certain buildings. In this movie, because we see both Metropolis and Gotham, they had to be very distinctive. There’s always something very interesting in urban development – you look at New York, Chicago and Pittsburgh and they’ve all evolved differently because of their history, what they’ve been through.

So in building Gotham, the one thing we wanted to avoid was building all the same kinds of buildings in grid patterns – you’d produce a photoreal-looking city, but I don’t think you’d give it that sense of history that is very typical of Gotham, unlike Metropolis, which is a newer, more developed city. The way we approached it was, instead of building it from scratch and trying to invent an urban development history – which would have been impossible to do – we thought, how about we find sections of different cities that are appropriate for how we want Gotham to look? Production designer Patrick Tatopoulos’ original design for Gotham was a U-shaped city, where we had what we called the city belt – a series of skyscrapers that form a U shape – and the interior, the center of the U, is more the slums, the port and the abandoned areas.

We scouted different cities. To create that skeleton shape, for example, we found that Hell’s Kitchen in Manhattan was very appropriate – it’s a pretty flat part of Manhattan surrounded by a U-shape of buildings. We found the port in Detroit was very industrial and interesting, so we used that. Then we found an oil refinery in Detroit, and a couple of abandoned city areas that we could use for those kinds of districts. We went on helicopter shoots and sent photographers to take thousands of photos of these areas. Then we re-assembled everything like a big LEGO set: we took all these areas and plugged them into our photogrammetry workflow to re-create complete city blocks. We took some of the buildings that were too recognizable and created some unique architecture on them.
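That “LEGO set” boils down to a catalogue of photogrammetry-reconstructed blocks from real cities, placed into slots in the U-shaped Gotham layout. A purely illustrative sketch of that data – the asset names and structure are made up, not MPC’s pipeline:

```python
# Illustrative only: reconstructed city blocks grouped by Gotham district.
gotham_sections = {
    "city_belt":  ["manhattan_hells_kitchen_block_01", "manhattan_hells_kitchen_block_02"],
    "port":       ["detroit_port_block_01"],
    "industrial": ["detroit_oil_refinery_01"],
    "slums":      ["detroit_abandoned_block_01", "detroit_abandoned_block_02"],
}

def assemble_city(layout, sections):
    """Pair each reconstructed block with a placement slot in its district.
    'layout' maps district names to lists of world-space transforms."""
    placed = []
    for district, blocks in sections.items():
        for block, transform in zip(blocks, layout.get(district, [])):
            placed.append((block, transform))
    return placed
```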

So it doesn’t feel as computer generated, because it has an organic feel to it. There’s not a single building or city block that looks the same – everything is unique, because we basically re-created those massive chunks of cities and combined them together for the CG version. There was so much data and geometry – it’s almost unrenderable – but path tracing let us be geometry heavy.

fxg: Then there’s the massive fight scene with Batman, Superman, Wonder Woman and Doomsday. Can you talk about Doomsday’s design, in particular?

Rocheron: It’s an interesting piece of design. It’s a very established comic book character with very distinct features. If you look at him on the comic pages he has all these bones – he looks like a creature engineered to look like this. The difficulty is, how do you actually integrate a creature like that into a world that is not very magical, that is grounded and realistic?

In the story Doomsday doesn’t come from another planet – he’s not an alien like in the comic books, he’s a genetic mutation of Zod. So the design idea was that Doomsday wasn’t engineered by nature to be somehow aesthetically balanced – it’s more like a mutation, because it’s a weapon. In the film, Lex uses it on Zod’s corpse and it’s a transformation that’s amplified to make it a weapon of destruction.

We looked at Zod’s proportions and how the body was, and literally did some very crude transformations to it. There were also some very extreme versions, barely… he’s not like a character with a race. He’s really just a mutation. We were balancing between a completely malformed monster that’s just there to be efficient, and the comic book version. There were two stages to Doomsday. He starts more like an embryo – a soft-looking blob who happens to be very powerful.

That was stage 1, and then we used the story point of the army firing at him and throwing a nuclear missile at him – that feeds him energy, and it builds up to the point where he can’t contain it any more and he just explodes and bursts and his body over-develops. That’s when we move onto stage 2: he crashes on Stryker’s Island, his body mass is increasing, his muscles outgrow his skin and his bones are going through the body. There’s too much energy flowing through his body. That’s how we introduced some of the comic book character.

fxg: How were you building Doomsday? What was going on in terms of his muscle and skin sims?

Rocheron: If you look at a gorilla or a heavy character, in the motion you make sure he looks heavy – that’s how he is anatomically designed. Our question for Doomsday was, does he move as fast as Superman does? He’s not really bound to any physical limitation; he’s engineered to be without constraints. But he needs weight, and that grounds him in the world. You can take the route of animating him slower and with more muscle mass, like a gorilla. It was a challenge to balance that out. We’d have him move very fast but then have these slower recoveries, which gives your eyes time to see the jiggle of the fat and the muscle movement – he could move fast but we’d still have those details in there.
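The jiggle that reads during those slower recoveries is, at its simplest, secondary motion lagging behind the animation – the classic damped-spring idea. A minimal sketch with illustrative values, not production settings (the real fat and muscle sims are of course volumetric):

```python
def jiggle_step(target, pos, vel, dt, stiffness=60.0, damping=8.0):
    """One step of a damped spring chasing an animated target point --
    the simplest version of secondary motion that trails a fast move
    and settles during the slower recovery.
    """
    accel = stiffness * (target - pos) - damping * vel
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

# Usage sketch: feed in the animated surface point each frame; during a fast
# punch the spring trails behind, then overshoots and jiggles as he recovers.
```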

The skin had a certain amount of subsurface scattering and wetness, but then we used the fact that you could sometimes see through to the muscles and the tendons to show an unusual amount of scattering in those parts. It was about giving him some interesting texture, because anything that looks too shiny affects the sense of scale. And the exposed muscles and tendons helped us create something more interesting than just a thick-skinned character.

fxg: I thought that during the fight there were some images that were almost like paintings, especially when, say, Superman and Doomsday are firing their heat vision at each other.

Rocheron: Yes, we had to ask ourselves, how do you represent two people in a heat vision battle? When you look at comic book frames, there’s something very interesting about how they need to use a static frame to convey something that is violent or threatening. You have to use colors and composition to really crank them up. In the hero fight at the end, it was something we played with a lot, so you could stop on a frame and understand whether something is violent or big or small.

It’s always difficult to realistically convey these moments that are not realistic. You know that people don’t fly, you know nobody has heat vision. You need to use extreme action and find elegant ways to represent it and use that to your advantage. It needs to be grounded in reality but then pushed a little further to create some unique and iconic imagery.

In that heat vision battle, we knew the heat vision would collide, and the idea was that when it collides it creates this really red atmosphere around them, so we could actually still see the characters – it was our opportunity to silhouette them and still understand what’s happening. We did a lot of things like this with fires and such. When you get such fast-moving action and a massive scale difference between characters, making everything readable is important, because very quickly it can end up like a blur – motion blur of people moving fast at nighttime and you just don’t see anything! So you have to craft the frames and find interesting composition tricks – it’s part of the fun of it.