Our fxguide Batman Movie DesignFx piece with our media partners WIRED magazine:
Animation: Keeping it real!
The Lego movies, such as The Lego Batman Movie, are built to capture a child’s imagination, and filmed as if a team of adults had spent months, or even years, doing stop-frame animation of real Lego blocks.
In Animal Logic’s new film for Warner Bros, a secondary character steps up to lead his own film (something this character seems very confident to do). But moving a secondary character into the role of carrying the majority of the story and acting required some major technical innovations and refinements of the animation process.
The Lego pieces are very complex to animate. All of the performances come from replacement animation on rigid, hard-surface ‘plastic’ characters. This is especially true for complex facial animation and emotional performances. For example, Batman only has a glowing area under a cowl for eyes. For Batman to deliver his lines, every expression means one cowl pops off and a new cowl is added. Animation Director Rob Coleman and the team at Animal Logic (AL) used no traditional Disney-style ‘squash and stretch’ on the characters.
Coleman points out that even the bending of an elbow is replacement: “if an arm bends then it does so by digital Lego blocks being swapped in and out to make the bending shape”. It is all animated brick replacement. Coleman even points out that when Batman is, say, driving the Bat-car or plane, and his arms would not reach the wheel, rather than just stretching the arms, a second set of Lego arms is put in the shot. Batman’s normal arms disappear down and a whole second set of arms comes up from below to hold the wheel. This gives the illusion that his arms are flexible enough to drive, but it does not violate his approach, and that of director Chris McKay, that everything AL does could be done in the real world on a miniature set.
Given how limiting this stepped, key-frame approach is for facial performance, Coleman and the team at Animal Logic tried to give the audience every hook to believe in the characters. This really shows in the non-verbal acting, when, say, a character is listening. The team had various tricks and approaches to help. Firstly, the angling of the characters is key; this helps with blocking but also signals, for example, that a character is listening. Static characters actually shift their rigid bodies left and right slightly, as if shifting their weight. Under Batman’s cowl there are no eyeballs, or even black dots such as Emmet had in the first Lego movie, but there is a subtle fall-off in light. If Batman suddenly looks to his left, a very small change in intensity across the glowing region can be seen. This is matched with a slight hue shift towards blue on the outside of his glowing ‘eyes’ area, all of which are tiny clues to the audience about where his eye line is pointed.
Central to Rob Coleman’s work is understanding what the subtext of a scene is all about. While the characters may be saying one thing, we the audience may need to be aware that other Lego characters in the scene don’t believe this, or are focused on something else. The subtext of a scene is often complex and it is very hard to get clear, even with experienced live action actors. “And yet audiences love subtext,” explains Coleman, “it is what the story is really about”. As with most animated films, the voice actors are filmed and given to the animators as visual reference. Rosario Dawson could not record her lines in the same physical location as actor Will Arnett, who played Batman. One was in New York while the other was in LA. But as Rob Coleman explains, Rosario Dawson was extremely valuable to the animators, as “even though she only heard the other lines in her headphones, she was so in the moment, and in-character, that she was making brilliant acting choices even when she was just listening to their lines”. As such, her ‘performances’ when she wasn’t speaking are just as much in the film as her performances when she was delivering a line.
The level of nuanced acting and comic timing that the AL animators were able to capture is even more remarkable when you remember that it is done with only nine points of movement on any character.
Lensing Batman: Gotham Procedural Lens Flares
The notion of subtext was picked up in the lighting design. The team asked: is Batman moving towards family? If so, he might move into light. Or is he going his own way, and thus falling into shadow or darkness? Every shot had its emotional beats underscored by the lighting.
Coleman was keen to see the world with strong DOF, as it was a key indicator of size and realism. While everyone agreed, one of the visual problems the team faced was working out what the correct lensing would be if this had been ‘shot for real’. As the characters are only 1.5″ high, it quickly became apparent that normal, real-world lenses would produce too shallow a depth of field, and literally almost everything would fall out of focus. To solve this the team produced an ‘out of the box’ solution: they decided that The Lego Batman Movie was filmed by a Lego-sized film crew with, say, a tiny 50mm Lego lens. This produced a shallow depth of field that obeyed the laws of physics, just on a smaller scale.
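The ‘Lego-sized crew’ idea falls straight out of the textbook thin-lens depth-of-field equations: scale the focal length, circle of confusion and subject distance down together and the in-focus zone shrinks in exact proportion, so the depth of field relative to a minifig matches what a full-size 50mm gives on a human. A minimal sketch of that scaling argument (standard formulas, not Animal Logic’s actual camera maths; the 1:45 scale factor here is illustrative):

```python
import math

def dof_limits(f, N, c, s):
    """Thin-lens near/far limits of acceptable focus (all lengths in metres).
    f: focal length, N: f-number, c: circle of confusion, s: focus distance."""
    H = f * f / (N * c) + f                    # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else math.inf
    return near, far

# Full-size 50mm f/2.8 framing a human at 3m
n1, f1 = dof_limits(0.050, 2.8, 0.000030, 3.0)

# The same physical lens pushed in close to frame a 1:45-scale minifig
n2, f2 = dof_limits(0.050, 2.8, 0.000030, 3.0 / 45)

# A "Lego crew" camera: every length scaled by 1:45
k = 1 / 45
n3, f3 = dof_limits(0.050 * k, 2.8, 0.000030 * k, 3.0 * k)

print(f"human, full-size lens  : {f1 - n1:.4f} m in focus")
print(f"minifig, full-size lens: {f2 - n2:.6f} m in focus")  # almost nothing
print(f"minifig, Lego lens     : {f3 - n3:.4f} m in focus")  # = (f1 - n1) / 45
```

The full-size lens at minifig distance keeps only a fraction of a millimetre sharp, while the scaled camera keeps the same proportion of the subject in focus as the human setup does.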
As this film was naturally darker and shot with a lot of back light, it became apparent early on that the team would need a new solution to lens flares. On the original Lego Movie, lens flares were handled in a pretty traditional manner, either live action elements added in comp, or CG ones generated from specific points on demand. “For The Lego Batman we knew pretty early on we wanted something a bit more scalable, and more flexible,” explained Compositing Supervisor Alex Fry.
In a real camera there is no specific threshold at which the lens begins to flare. Everything in front of the lens is flaring all of the time; it is just that a dim object produces an almost infinitesimally dim flare. For this film, AL wanted every pixel in the frame to produce a corresponding flare. Unlike the DOF, which was almost entirely produced in the Glimpse renderer, the lens flares would be produced almost entirely in Nuke as part of compositing.
“So we started developing a fully procedural lens flare system using raw Nuke nodes, really just an exotic combination of Convolves, Blurs, Transforms and Vector warps. The look was developed and tuned using a black frame with a tiny super bright circle, but done in such a way that any pixel in the frame will contribute the same way”.
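In scene-linear light, ‘every pixel flaring in proportion to its brightness’ is exactly a convolution of the frame with a flare kernel, which is also why the look can be tuned on a black frame with one super-bright dot: the dot’s response *is* the kernel. A toy NumPy sketch of that idea — the kernel shape here is invented, not Animal Logic’s actual Nuke graph of Convolves, Blurs, Transforms and VectorWarps:

```python
import numpy as np

def flare_kernel(size=64, streak_gain=0.02, halo_gain=0.005):
    """A toy flare kernel: a horizontal streak plus a soft ring halo
    (hypothetical shape, for illustration only)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2].astype(float)
    r = np.hypot(x, y) + 1.0
    streak = streak_gain * np.exp(-np.abs(y)) / r         # horizontal streak
    halo = halo_gain * np.exp(-(r - size / 4) ** 2 / 8.0)  # ring-shaped halo
    k = streak + halo
    k[size // 2, size // 2] = 1.0                          # keep the base image
    return k / k.sum()

def apply_flare(img, kernel):
    """FFT-convolve a scene-linear image with the kernel, so every pixel
    contributes a flare proportional to its brightness."""
    ph, pw = kernel.shape
    padded = np.pad(img, ((ph // 2, ph // 2), (pw // 2, pw // 2)))
    kpad = np.zeros_like(padded)
    kpad[:ph, :pw] = kernel
    kpad = np.roll(kpad, (-(ph // 2), -(pw // 2)), axis=(0, 1))  # centre kernel
    out = np.real(np.fft.ifft2(np.fft.fft2(padded) * np.fft.fft2(kpad)))
    return out[ph // 2:ph // 2 + img.shape[0], pw // 2:pw // 2 + img.shape[1]]

# Tune the look on a black frame with a tiny super-bright source, as described
frame = np.zeros((128, 128))
frame[64, 64] = 1000.0
flared = apply_flare(frame, flare_kernel())
```

Because convolution is linear, the same graph applied to a full plate makes a dim street lamp flare dimly and a backlit sun flare brightly, with no threshold anywhere.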
The team knew they wanted to use flares aggressively as part of the look of the film, but “we also didn’t want to back ourselves into a corner creatively,” he explains. “So rather than baking them into the comps and have everyone get gun shy about their intensity in dailies, we made their generation a back end process”.
When any shot was sent to Digital Intermediate (DI), a new EXR sequence was created that combined the latest comp, all of the grade mattes and a new procedural lens flare pass on its own layer. This innovative approach allowed grading to be done ‘under’ the flare. The DI team had the flexibility to grade the base image without interference from the flare, and to tweak the colour and intensity of the flare very late in the game, perhaps boosting it in some areas. Importantly, it also allowed the flare to be backed off when it ran over a character’s face. “It also meant we could be much braver in our use of the flares, knowing we could back it off if we received a late in the game studio note,” he commented.
In the vast majority of cases, flares created from the pure scene linear values did the job, but occasionally DI would call specific shot elements out for a “B Flare”. In these cases, the lighter or comper would isolate the element, create a new procedural flare from that element, and pack it into an extra EXR layer.
Being able to cleanly add the flare in DI has become possible due to increased colourspace flexibility in DI. In years past, DI would have been stuck with just a Log flavour, which would have meant a compositing operation, such as adding light, could have had undesirable results. But “in the Baselight we’re now able to flip between different working spaces at will in the stack. So in this instance, despite most of the grading stack running in ACEScc (the ACES Log working space), we’re able to flip to ACEScg to do compositing operations in a scene linear colourspace, giving us the same look we would have achieved in Nuke”.
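ACEScc and ACEScg share the same AP1 primaries, so flipping between them is purely a matter of the log transfer curve, and compositing maths such as ‘plus’ only behaves like adding light on the linear side. A sketch of the two transforms, per the ACES S-2014-003 specification:

```python
import math

def lin_to_acescc(x):
    """Scene-linear (ACEScg, AP1 primaries) -> ACEScc log encoding."""
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

def acescc_to_lin(y):
    """ACEScc log -> scene-linear; inverse of the above."""
    if y <= (9.72 - 15.0) / 17.52:
        return (2.0 ** (y * 17.52 - 9.72) - 2.0 ** -16) * 2.0
    if y < (math.log2(65504.0) + 9.72) / 17.52:
        return 2.0 ** (y * 17.52 - 9.72)
    return 65504.0

# Adding a flare is a linear-light operation: summing the log encodings is
# a very different (and wrong) operation from summing linear energy.
base, flare = 0.18, 0.05
correct = lin_to_acescc(base + flare)               # flip to linear, add, flip back
naive = lin_to_acescc(base) + lin_to_acescc(flare)  # log-space 'plus': wrong
```

This is why the grading stack can run in ACEScc while the flare layer is composited after a flip to ACEScg, matching what the same ‘plus’ would do in Nuke.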
In the end, essentially every shot in the film has some level of flare dialed in without restricting the grading.
Effects in Batman
The film has a lot of smoke and similar effects done as ‘digital miniature effects’.
“The rule of thumb was: if Christopher Nolan, who likes his miniature work, could have made the effect on his miniatures, I will happily take it,” comments FX Supervisor Miles Green. This meant that the team ended up watching a lot of The Dark Knight Rises, studying the smoke rising up and all the practical light effects. The team also referenced old Thunderbirds episodes and Fantastic Mr Fox.
The team tried doing the smoke coming up from manholes as Lego-brick smoke puffs, but they looked like something other than smoke. This was distracting from the story, as the audience would need to guess what the object was; it was never immediately readable as smoke. In the end, “as long as it looked like a miniature, and that maybe there was an incense stick burning under the table, he (McKay) would buy it – but it had to look like a miniature,” Green adds.
The real problem came when Batman goes swimming. The water was deemed to be a computer fluid sim in this film, even after the successful brick-built Lego water of the previous Lego film.
But at what scale should the water sim be calculated? Batman is 1.5 inches high. In years gone by, if one was shooting real water as a miniature, there were all sorts of classic tricks, such as overcranking the camera frame rate and using air jets, to make that water seem bigger on film. The question AL had to answer was: did they need to make the water look real-world ‘large’, small, or small but shot to look bigger? Fluid surface tension and spray droplet size are vastly different at each of these fluid simulation scales. After all, they wanted to shoot the film as if it could have been shot for real as a miniature. The team even decided to shoot some real test Lego pieces to explore this… in real bathroom sinks at Animal Logic.
In the end, the team tried a bunch of approaches, and some of these ‘accurate’ solutions just looked wrong; for example, they decided that the audience didn’t want a drop of water on Batman to be almost as big as his whole face. “We tried to get it at the right scale – we were constantly fighting to get it looking right… and not take the viewer out of the story by having something odd,” adds Green. “Really the director was happy for us to play with it to make it right”. Related to the same problem was motion blur. In a stop-frame animated film there would be no motion blur, but it is not possible to shoot stop-frame real water, and even when the team simulated this by turning off all motion blur it made the fluids, and especially the splashes, look wrong. The final solution, when Batman is swimming with his pet dolphins, was to introduce some partial motion blur to the water, even with the Lego dolphin characters often animated on 2s and with no motion blur.
Animal Logic’s in-house renderer Glimpse was extended to accommodate the new film. On the previous Lego movie the team had used a hybrid rendering scheme with a mix of RenderMan and Glimpse. The Batman film was the company’s first full Glimpse-only show.
As with the previous film, sub-surface scattering (SSS) was important to making the Lego bricks look real, but in part due to the heavy backlighting in this film the team built a new single-scatter extension. “Otherwise the bricks could look sort of waxy,” explained Luke Emrose, digital artist, technical director and rendering technology expert. “We actually used a system that is both a Jensen dipole model and the single scatter. It works much better for light behind the object. A typical dipole gives a more milky appearance. For very small, backlit objects the single scatter gives a much better solution,” he adds.
In the new system, the rays that are high energy primarily pass through the object. “These are blended in a stochastic way to make it seem all like one SSS,” he says. The team only got this combination of single and multiple scattering working at the very end of the previous Lego movie.
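One way to read the ‘stochastic blend’ is as a one-sample estimator: per shading sample, pick either the dipole or the single-scatter model with some probability and divide by that probability, which stays unbiased for the sum of the two lobes. A sketch under that assumption only — Glimpse’s real heuristic for choosing the probability (e.g. from ray energy) is not public:

```python
import random

def sss_estimate(dipole_eval, single_eval, p_single):
    """One-sample stochastic blend of two scattering models.
    Choosing a model at random and dividing by its selection probability
    keeps the estimator unbiased for (dipole + single scatter)."""
    if random.random() < p_single:
        return single_eval() / p_single
    return dipole_eval() / (1.0 - p_single)

# Toy check with constant lobes: the expectation should be 2.0 + 3.0
random.seed(42)
mean = sum(sss_estimate(lambda: 2.0, lambda: 3.0, 0.5)
           for _ in range(100_000)) / 100_000
```

Averaged over many samples this converges to the full two-model response while only ever evaluating one model per sample, which is the practical appeal of blending stochastically.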
Dealing with huge numbers
The Batman Movie is technically bigger than The Lego Movie. For example, the Gotham set was 220 million distinct bricks. The Batcape had 100,000 curves and was made of solid geometry. All the capes in the movie are made from solid objects; they are not textures. They are essentially woven, and “every single cape in the movie is modeled that way using our in-house tool Weave,” explained Emrose.
- The shot with the largest number of quad primitives had over 3.6 trillion – 3,630,509,160,171 primitives to be exact (“you have to double that for triangles,” joked Emrose)
- The number of rays cast over the whole show was 13 quadrillion (give or take a bit)
- The Gotham set would be 6.6 football fields of Lego, and yet it would render automatically, without manual intervention
- The film has multi-level instancing, and Glimpse is very fast at loading data. The team did a test to see if they could load 66,000 instanced versions of the entire six-football-field-sized model of Gotham. In animation mode, “without anything like full lighting, you can still move around the vast model in real time,” proudly points out AL’s Director of R&D, Guy Griffiths.
All the blocks have patina. Each brick is given a unique identifier when it is published, and this looks up into scratch maps, thumb-print maps, dent maps, edge maps and other effects to make each block look unique. “There is so much instancing in a Batman or Lego movie that when other shows come along – whatever it is, sand on beaches, whatever – we are like ‘yeah, we can handle that’… everything else seems easy,” comments Emrose.
The lighting complexity that Glimpse allows makes a big difference to the production pipeline. For example, the Batcave has 5,000 area lights. To render that effectively is difficult: “traditionally, just sampling a dozen area lights would be a handful, so sampling is a whole different story,” he adds. To solve light complexity the team used Russian roulette for path termination – “so we are not tracing too many paths. As well as light culling, we have methods for sampling subsets of lights, so rather than sample all the lights we sample a subset dynamically, depending on where we are in the scene,” comments Emrose. Glimpse figures out, based on intensity, position, direction and visibility (back-face culling), if a light is too small or too dim; it can then make a determination that its solid angle is too small to make a difference.
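The subset-sampling idea can be sketched in a few lines: cull lights whose subtended solid angle from the shading point is negligible, then draw a handful of the survivors with probability proportional to a cheap importance weight, dividing each contribution by its pick probability so the estimate stays unbiased. The power/distance² weight below is an invented stand-in — Glimpse’s actual heuristics (direction, visibility, back-face culling) are richer:

```python
import random

def importance(light, p):
    """Cheap weight for a light seen from shading point p: power / distance^2."""
    d2 = sum((a - b) ** 2 for a, b in zip(light["pos"], p))
    return light["power"] / max(d2, 1e-6)

def sample_lights(lights, p, n_samples=4, min_solid_angle=1e-6):
    """Cull negligible lights, then pick a weighted subset to actually shade."""
    live = []
    for l in lights:
        d2 = sum((a - b) ** 2 for a, b in zip(l["pos"], p))
        if l["area"] / max(d2, 1e-6) >= min_solid_angle:  # ~ solid angle test
            live.append(l)
    if not live:
        return []
    weights = [importance(l, p) for l in live]
    total = sum(weights)
    picked = random.choices(live, weights=weights, k=n_samples)
    # Monte Carlo weight: 1 / (pick probability * number of samples)
    return [(l, total / (importance(l, p) * n_samples)) for l in picked]

lights = [
    {"pos": (0.0, 2.0, 0.0), "power": 100.0, "area": 0.5},   # nearby key light
    {"pos": (0.0, 900.0, 0.0), "power": 1.0, "area": 1e-4},  # distant speck
]
samples = sample_lights(lights, (0.0, 0.0, 0.0))
```

Only a few lights are ever shaded per point, yet averaged over many samples the estimate matches summing every light, which is what makes a 5,000-light Batcave tractable.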
Part of this attention to light complexity handling has come about from a demand for more lights in assets. “Assets are published with lights in them, so a lot of the lighting team are opening up the assets before they even hit shots and putting lights in buildings, in cars etc… so when the shot is laid out by the environment artist there are already tens of thousands of lights there before they even start lighting,” comments Miles Green. And of course Batman is one of the only characters with two area lights in his face.
Glimpse is a uni-directional path tracer, but there has always been an extension for better defocusing, and this has been extended into progressive refinement rendering. It is used specifically for DOF. “In order to resolve highlights you need to be able to trace back through the lens, in order to get the correct shapes of the DOF bokeh on the image plane. This takes into consideration the shape of the lens,” says Emrose. Animal Logic’s lenses are non-linear, so they can fire a ray back through the lens: “to do this the renderer has to be bi-directional for this part of the program”. With tiny Lego figures, DOF is vitally important for scale as well as aesthetics. The system can thus handle anamorphic or spherical lens characteristics, which produce quite different bokeh shapes.
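Conceptually, tracing back through the lens for bokeh amounts to sampling points across the aperture and re-aiming each ray at the in-focus plane, so defocused highlights smear into the aperture’s shape; squeezing the sampled opening is what turns round spherical-lens bokeh into anamorphic ovals. A thin-lens sketch of that idea, not Glimpse’s non-linear lens model:

```python
import math
import random

def aperture_sample(radius, squeeze=1.0):
    """Uniform point on the lens opening. squeeze < 1 narrows it
    horizontally, a toy stand-in for an anamorphic element."""
    r = radius * math.sqrt(random.random())  # sqrt gives uniform area density
    t = 2.0 * math.pi * random.random()
    return (r * math.cos(t) * squeeze, r * math.sin(t))

def thin_lens_ray(pixel_dir, focus_dist, radius, squeeze=1.0):
    """Offset the ray origin across the aperture and re-aim it at the point
    kept in focus; out-of-focus points then blur into the aperture shape."""
    ox, oy = aperture_sample(radius, squeeze)
    fx, fy, fz = (focus_dist * c for c in pixel_dir)  # the in-focus point
    return (ox, oy, 0.0), (fx - ox, fy - oy, fz)

# Bokeh footprint for a defocused point: the set of sampled aperture points
random.seed(1)
pts = [aperture_sample(1.0, squeeze=0.5) for _ in range(5000)]
```

Plotting `pts` gives an oval half as wide as it is tall; with `squeeze=1.0` it is a disc, which is the spherical-versus-anamorphic difference Emrose describes in the bokeh.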
There was a lot of testing of the energy-conserving 3D DOF. For example, a render test had a light pass behind a pole, and this meant that the pole seemed to compress horizontally at that point. “And we got into this big argument about the renderer being wrong – so we went out and shot it with a lens similar to the 3D camera – and it was exactly the same. The energy redistribution DOF predicted a result that was exactly the same as what happened in real life,” explains Emrose.
There are only about three shots in the film that don’t have 3D DOF rendered by the renderer… and they involved a love-heart shaped bokeh. Emrose laughingly explains, “we could have done that, but we didn’t have time to implement importance sampling for love hearts”.
Green points out that once the DOF had been set, “every department sees everything with rendered DOF”. AL has an automatic rendering system called ‘Renderboy’, and all the clips were delivered with rendered DOF. With AL’s approach of ‘Glimpse everywhere’, all the previews use Glimpse. “It really allows us to validate our assets from start to end – that is the main reason we were really pushing for it,” comments Green. “Without it, when you hand assets on to the next department that data is unreliable… if you have seen it with DOF, then you know it will work. You know it will render”.