Paul W.S. Anderson’s take on The Three Musketeers mythology is a steampunked reinvention of the swashbuckling tale, complete with airborne battles and all in native stereo. Visual effects studio Mr. X Inc. delivered the bulk of the work with 274 shots, and we got the details from VFX supe Dennis Berardi. Warning: contains spoilers.
Click here to read our fxinsider interview with Mr. X’s James Cooper on the opening titles of The Three Musketeers.
fxg: In some ways, the shot count doesn’t sound like many given the scope of the film and the airship battles and environments. How did the visual effects process start for you?
Berardi: The scope of the work was deceptively large. Paul Anderson basically wanted a sense of spectacle. The DOP, Glen MacPherson, and I joked that the W.S. in Paul’s name stood for ‘wide shot’. He really loves establishing shots and we worked a lot with an 18mm lens.
For the film, we had to create the illusion that we were in Venice, Paris and London. It was a period movie, obviously, but it was mostly shot on stages and locations in Bavaria, Germany. This did mean, however, that we had great locations – castle exteriors and old buildings that could be used for real. We had a fair amount of digital environment work, though, to either augment the shots or create the landscapes entirely digitally.
Our biggest challenge was creating the Paris environments for the climactic fight between Rochefort and D’Artagnan, which happens on the rooftop of Notre Dame Cathedral, where you see Paris in 360 degrees in every shot during the fight. That was shot all greenscreen on a small set piece. We created the remainder of Notre Dame, and the surrounding Paris was all digital. We also had an ambitious all-CG opening titles sequence.
Above: Watch a breakdown of Mr. X’s visual effects for The Three Musketeers.
fxg: What was your approach in terms of planning a relatively limited number of shots, in terms of previs’ing things out?
Berardi: Paul and I have worked together on a lot of movies, including the Resident Evil films, for more than 10 years. So we’ve developed a good shorthand. We like to previs the big action sequences or anything that’s got a cool factor to it. We would sketch out a thumbnail to block out a sequence – there are not a lot of people involved at that point. Then we edit that together and put sound to it and get a sense of feeling and tempo and get a sense of whether it will play in the film. Then a couple of versions later we have a piece we’re happy with – and it’s at that point we figure out how to shoot it, or whether it needs to be entirely CG.
The first round of previs is shot-making, the second round is technical with a perspective view of the scene and we can show the DP where the camera has to go or we can talk to the art department about how much set they have to build. At that point it’s a full collaboration. We then end up issuing supporting technical previs which shows lenses and heights and fields of view and set pieces.
fxg: Probably the most obvious effects work was for the airships – can you talk about the early considerations you had for how they would be achieved?
Berardi: The biggest consideration we had was – actually we were paranoid about miniaturization, because they are very large vessels. The Buckingham airship is 50 meters long and the Cardinal’s airship is 80 meters long. We were concerned that if we lensed them incorrectly or didn’t have the fine level of detail in them, that they would just look like miniature toys. So our biggest challenge was building in the fine detail and the scale.
Also, lensing was a huge deal. We tried to treat them like we would photograph a real vessel of this size, and move the camera like we were really photographing them. There’s an instinct with the camera to do things that you otherwise wouldn’t do in real life, and we tried to avoid that instinct. We tried to mimic air-to-air photography, and we usually didn’t move the camera very fast. We were on a lens range of 18mm to 35mm.
fxg: How did you approach the animation for the airships?
Berardi: We didn’t move the airships very fast either, seeing as they were large vessels and shouldn’t move or turn too quickly. So in our early animation tests, whenever we rocked them too much or made a quick turn – instantly it felt like a toy. That was one of the lessons we learned early on – the inertia and momentum meant they didn’t slow down too quickly either. Getting the weight right was a real challenge. We had over 100 digital airship shots and they’ve all got ropes and burners and so many details, all with independent motion done in Maya.
Part of the animation also fell to our cloth team in terms of the rigging of the balloons. Even at a distance, the fine motion helped with the scale. We introduced a low-level bulging and warping into the balloons themselves. We studied a lot of balloon movement in hot air balloons and airships. What you get is not a static shape – that shape is dynamically moving all the time. They are affected by cross-winds and you get these changing shapes. The structure is always there because they’re internally ribbed with bracing, but the subtle changes in shape helped with scale. The same went for the rudder: we found if it moved too quickly the ship looked too small – it’s a 12 foot rudder, and if it moved slowly and felt the wind resistance, you could believe the scale.
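Mr. X animated this by hand in Maya, but the weight principle Berardi describes – heavy vessels ease into turns and carry momentum out of them rather than snapping – can be sketched as a toy spring-damper model. This is purely an illustration of the idea, not the studio's rig; all the names and constants here are invented:

```python
# Toy inertia model (an illustration, not Mr. X's actual Maya rig): a heavy
# airship's heading eases toward a target instead of snapping, so a turn
# reads as massive rather than toy-like.
def step_heading(heading, rate, target, inertia=0.95, responsiveness=0.0005):
    """Advance one frame: turn rate builds up slowly and decays slowly."""
    rate += (target - heading) * responsiveness  # gentle steering force
    rate *= inertia                              # momentum: no sudden stops
    return heading + rate, rate

heading, rate = 0.0, 0.0
for frame in range(240):  # ten seconds at 24 fps
    heading, rate = step_heading(heading, rate, target=30.0)
# after 240 frames the ship is still easing toward the 30-degree heading
```

With heavily damped settings like these, the vessel never overshoots or stops abruptly – the same "don't slow down too quickly" lesson the animation team learned on their early tests.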
fxg: Did things around the airships, like clouds and the sky, help with that sense of scale?
Berardi: Whenever we could add in visual cues we would to help with the scale. So in the cityscapes, in Paris, we would introduce ground level detail – rocks and trees and puddles and people walking around. In 17th century Paris, banners and flags were big in the day so we had those too. We had a ‘poor-man’s’ motion capture session in-house and got various walk cycles so for any shot we could grab people casually walking around or whatever we needed.
fxg: Can you talk about the shot of Buckingham’s airship arriving in Paris.
Berardi: The location was a Palace in Herzberg, Germany. We have 14,000 of the Cardinal’s guards in formation outside of the Palace. We introduced very subtle animation cycles, even though they were in formation – we had a wind simulation for example going through the feathers in their helmet. The banners were all moving in the wind as well, the cobble texture underneath their feet was done to scale.
For the aerial shot flying in, all we kept there was the Palace itself. That piece was shot from a helicopter and we flew in as if the crowd was there. I had previs’d that shot so we knew exactly what to shoot. I kept the camera move of course and it feels real because it is real. We tracked the move and replaced everything except the Palace. So there were set extensions and the troops, but the physical camera move was a great starting point and helped with the suspension of disbelief. I always push for a real element to start with. And actually we even did have that debate – ‘Why do we have to shoot anything since we’re going to replace it?’ – but I felt like we needed to start with something real. And on Paul’s films with his wide establishers, I’ll always try and find a location, or shoot an element. I always get the composition and the camera move approved, and then start adding in from there.
fxg: How were the soldiers inserted – did you have some kind of crowd system?
Berardi: We have an in-house solution. Our senior TD Jim Price and the Houdini team created individual soldier assets in our normal asset pipeline. Then we had a library of animation cycles that were loopable. We wrote our own set of Houdini scripts and plugins which randomize all of those animation cycles, and then we put this through our rendering pipeline.
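Mr. X's actual Houdini scripts and plugins aren't public, but the idea Berardi describes – a library of loopable cycles, randomized per agent so a 14,000-soldier formation never looks cloned – can be sketched in a few lines. Everything below (cycle names, frame lengths, field names) is hypothetical:

```python
import random

# Hypothetical sketch of per-agent cycle randomization, not Mr. X's actual
# pipeline: each soldier gets a loopable cycle, a random start offset and a
# slight playback-speed variation.
CYCLES = {"idle_a": 96, "idle_b": 120, "shift_weight": 80}  # name -> loop length

def assign_cycles(num_agents, seed=0):
    rng = random.Random(seed)  # deterministic, so re-renders are repeatable
    agents = []
    for i in range(num_agents):
        name = rng.choice(sorted(CYCLES))
        agents.append({
            "agent": i,
            "cycle": name,
            "offset": rng.randrange(CYCLES[name]),  # start phase in frames
            "speed": rng.uniform(0.9, 1.1),         # subtle rate variation
        })
    return agents

def cycle_frame(agent, shot_frame):
    """Which frame of its loop an agent samples at a given shot frame."""
    length = CYCLES[agent["cycle"]]
    return (agent["offset"] + shot_frame * agent["speed"]) % length

soldiers = assign_cycles(14000)
```

Seeding the generator is the key production detail: the randomization has to be identical every time the shot goes back through the render farm.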
fxg: What about where you had to create cities and buildings like Notre Dame – can you talk about making that architecture?
Berardi: We did a lot of photo surveys. We sent a team to Paris for three days to build a photo library of Notre Dame – thousands and thousands of detailed pictures for textures and just for modeling. A lot of it is image-based. We’ll also scan if possible – we used XYZ RGB to scan sets and props. For Notre Dame, though, we decided not to do a LIDAR scan, but instead went with an image-based modeling approach.
The photo survey team went to Paris with two 5Ds. They shoot textures by shooting perpendicular to every surface at super high-res and with bracketed exposures and sizes. They’ll also shoot the same point on a building from two different perspectives to help with the modeling.
fxg: For Notre Dame in particular, what was your approach to how dirty something like that should be – because in the scheme of things it’s relatively new isn’t it?
Berardi: It really came down to the weathering. That church was built in the 1300s and it’s actually been under a perpetual state of restoration. We had big debates with the art director, production designer and Paul about the weathering on the practical set piece (even though it was very small). I was pushing for a more weathered look – we rationalized that in 17th century Paris maybe they hadn’t restored the building, or didn’t have the ability to clean one that was 300 to 400 years old.
Also, in that era everything was heated by fire and there would be a build-up of pollution and soot. If you go there today, it’s actually pretty clean-looking because it’s been restored. But we felt from an art direction point of view and to sell the digital asset, it needed to be dirtier. We wanted to add water stains and soot marks. The roof is made out of lead, so we wanted to introduce a patina of soot, dirt and oxidation that would give it a texture.
fxg: There are a number of big environments, including one of Paris where Rochefort and d’Artagnan are fighting atop Notre Dame. How did you deliver the wider assets for that sequence?
Berardi: We created a continuity-based sky on a dome – based on the fight sequence, we could rotate our sky dome, put a camera on it and know exactly where we would be looking. We mapped out the whole fight and camera directions. Paul wanted a very foreboding sky – the lighting scenario was very ambient and subdued – there weren’t a lot of hard shadows.
In middle-ground we had wispy pieces of mist to sell the idea we were 250 feet in the air, which were effects elements generated out of Houdini. At the horizon, we did matte painting and projection for the distant Paris. As we got closer to middle and foreground, we had hero models lit and rendered as 3D assets. They would sometimes be touched up by our matte painter as well. Then the compositing team used Nuke to composite all of the elements.
fxg: Were the characters digital doubles for some of that fight?
Berardi: There was some digi-double work there, and for the Tower of London shot where D’Artagnan leaps from the burning tower. We did hero scanning with XYZ RGB. They did a body scan with the actors, and then a secondary hero facial scan of the actors. We did our own photo surveys and created our digi-doubles from the two main scans. We created our own hair system and ran a cloth sim for the clothes. The thing about the musketeers is that their clothing is very loose – they’ve got very large lapels and frills, so we had to get the sims working to cut with the real footage.
fxg: Then there’s the final armada shot with Buckingham at sea and scores of ships and airships. How did that come together?
Berardi: Paul Anderson likes to do pull-backs, and in that shot it starts on Milady and pulls all the way back to reveal the armada. On set, we had the actors on a greenscreen stage and I had a 50 foot Technocrane and we started tight on our actors and dollied back on the Technocrane and boomed up and telescoped back. We went from one corner of the stage to the other corner until we ran out of room. Then we took over the shot completely digitally, including with digital doubles.
Then we snap-zoomed back to the armada. The water was done in Houdini. The airships were the Buckingham style, which was a very complicated asset. The ships in the water – there’s cloth simulations and digital doubles as sailors as well as flags and the sails. We ended up with over 50 ships and airships in the 700 frame shot, in stereo. The ships were modeled, UV’d, rigged and animated in Maya. The renderer was V-Ray, which worked out really well. V-Ray handles daylight exteriors, with that global illumination kind of look, really well – especially the wood finishes.
fxg: The film was shot in stereo, which you’ve worked with before of course on the latest Resident Evil film and on TRON: Legacy, but can you talk a little about the shoot?
Berardi: It was shot on the PACE rigs, on ARRI ALEXAs, on a mirror rig. We shot stereo HD, 1920 x 1080. The rule was that we didn’t want any 3D that was too big or uncomfortable for anybody. But Paul wanted to have the 3D help put the audience in the period movie by wrapping the world around them. So we used it as a period enhancement to help sell the idea that we were there. We had some fun, more aggressive gags with swords coming right at us, and cannon balls, but in general the 3D was not as aggressive as Resident Evil 4. This was more about helping to sell the space and the period feel of the movie.
We watched full stereo dailies in the PACE truck every night. We had an on-set digital lab with PACE and a stereographer doing dailies and dailies corrections for us. So every night at wrap, we’d go to the portable lab – a portable truck with a mini-theater in it which had a seven foot screen and there we watched the dailies every night.
fxg: What sort of impact did stereo have on you in post-production?
Berardi: We used Ocula in Nuke to do some line-ups and fixes. We have a RealD in-house screening facility, with a Christie projector and a Stewart silver screen and passive RealD glasses. We found that a larger viewing environment was essential.
Some of the fixes are easy, such as a vertical off-set. If it’s a polarization issue, sometimes you’ve got to go in and paint down some highlights in one of the eyes, because, by definition, the mirror we shoot through is polarized and reflections don’t always match. Sometimes it gets more tricky when the geometry of one eye to the other eye doesn’t match, and that’s when you go in and do some mesh or grid warping in Nuke and Ocula.
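The "easy" vertical-offset case Berardi mentions can be pictured with a minimal NumPy sketch – estimate the vertical disparity between eyes by brute force, then shift one eye back into alignment. This is a toy illustration of the concept, not how Ocula works internally, and all function names here are invented:

```python
import numpy as np

# Toy vertical-alignment fix (an illustration of the concept, not Ocula):
# try integer vertical shifts of the right eye against the left eye and
# keep the shift with the lowest mean-squared error.
def estimate_vertical_offset(left, right, max_shift=8):
    best, best_err = 0, np.inf
    for dy in range(-max_shift, max_shift + 1):
        err = np.mean((left - np.roll(right, dy, axis=0)) ** 2)
        if err < best_err:
            best, best_err = dy, err
    return best

def align_vertical(left, right, max_shift=8):
    """Roll the right eye so it lines up vertically with the left eye."""
    return np.roll(right, estimate_vertical_offset(left, right, max_shift), axis=0)

# synthetic example: the right eye sits 3 pixels low
left = np.random.default_rng(0).random((64, 64))
right = np.roll(left, -3, axis=0)
fixed = align_vertical(left, right)
```

The geometry mismatches Berardi describes are the hard generalization of this: instead of one global shift, a per-region mesh or grid warp moves different parts of the frame by different amounts.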
fxg: Any final thoughts on the film and your visual effects work?
Berardi: It was a model of employing visual effects in a supporting role where we needed to be invisible, but then utilizing VFX unapologetically for the hero action with the airships. In general it was a very nice collaboration between art design, real locations, cinematography, stunts and visual effects. It was around 300 hero shots – it wasn’t an 800 or 900 visual effects shot movie, but we would choose them very carefully working with Paul.
All images and clips copyright © 2011 Summit Entertainment and Constantin Film. Courtesy of Mr. X Inc.