Industrial Light & Magic visual effects supervisor Scott Farrar and digital production supervisor Nigel Sumner discuss key scenes from Transformers: Dark of the Moon, including Bumblebee catching Sam and the Chicago battle, as well as working in stereo.
Having delivered groundbreaking work on the first two Transformers films, ILM upped the ante in many areas of this third outing - from the complexity of the characters and the number of transformations to the simulations - and then made it all work in stereo. Nowhere is this more evident than in the spectacular shot 'SD160'. Here, a tumbling Bumblebee, who is evading Decepticons on the highway, ejects Sam from his front passenger seat to avoid a gas bottle truck. As the two heroes launch into the air, Bumblebee continues to knock objects out of the way and protect Sam, before grabbing his screaming friend at the last second, re-transforming into car form and driving on.
Watch ILM vfx supe Scott Farrar discuss the Catching Sam shot in more detail
- Watch part of the 'Catching Sam' shot in this promo clip for 'Dark of the Moon'.
"This shot is a good example of why I enjoy working with Michael Bay so much," notes ILM visual effects supervisor Scott Farrar, who says he relishes the process of working out what elements to shoot to make the shot come together. "It's like a big magic show where each and every shot is a little bit different in terms of problem solving."
The shot was planned out directly using animatics rather than storyboards, with Farrar insisting that the key parts of the scene would be shot with the real actor, Shia LaBeouf, and that a digi-double only be utilized for the impossible moment of the transformation. On a backlot bluescreen set-up, LaBeouf performed his stunts in a harness. "He gets hoisted up and we shoot him at 120 frames per second on a film camera," says Farrar. "Shia does this 'arghhhh!' in the air, but not at real time, he does it for high speed, so later on we can time it for slow-motion."
Backgrounds and other live action reference were then filmed in Long Beach, including the on-set Bumblebee Camaro for color and paint looks. "And then we start putting all the pieces of the puzzle together," says Farrar. "We do Bumblebee as a rough transformation and we comp Shia in and then start putting all the thousands and thousands of bottles flying from the bottle truck, glints on everything, dust, debris. Almost all of that had to be built in the computer. But we did run outside and shoot things just to see what they should look like in the light, to make sure that your references are right."
Shooting for real
References were also key for Dark of the Moon's epic Chicago battle, which spans the film's entire third act. The battle also displayed the full gamut of transforming robots, environment work and simulations - in stereo. Central to Michael Bay's imagining of the sequence was that plates be shot for real in the actual city. "We try and shoot everything real," says Farrar. "You may have seen some films recently where the entire city has been destroyed and it's entirely CG. Well, for a Transformers film, it's different because we actually went to Chicago."
Watch a Chicago scene breakdown
"If you start with the real thing, you have a lot more to work with to make it look real," adds Farrar. "So for a couple of months there, I was in a helicopter shooting aerial plates of the real buildings. And we'd add destruction to all the backgrounds - smoke, fire, debris, fighter planes, war, battles, torn up streets - to real cityscapes."
Even scenes involving incredibly elaborate computer graphics and simulations, such as the scene of the Driller (Decepticon robot Shockwave's menacing pal) destroying a glass skyscraper, relied on reference of the real thing. "There is a building we perfectly copied that is on Wacker Drive in Chicago," says Farrar. "I shot aerial plates for that at every time of the day so that we'd know - what does a reflective, mirror-type glass building look like with front light, side light, warm light, cold light? All these different things we roll into our shots."
In that scene, the Driller wraps itself around the skyscraper and tears it apart before crashing onto another building. ILM relied significantly on its internal proprietary physics simulation engine for the sequence, which included breaking concrete floors and walls, windows, columns and pieces of office furnishings.
"We did a lot of tests early on to figure out how to break the building apart, exploring a lot of the procedural options," explains ILM digital production supervisor Nigel Sumner. "For a building that's 70 stories tall, to go in and hand-score the geometry so that it fractures or falls apart would be a time-consuming, laborious process. The floor of a building may be made of concrete. How does concrete fracture when it tears apart? The pillars would be made of a similar material but reinforced with rebar or other engineering components. We'd look at how a building would blow apart and then choose the best tool to achieve those properties during a simulation."
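The hand-scoring problem Sumner describes is why fracturing is usually handled procedurally. As a rough illustration (a minimal sketch, not ILM's proprietary engine), a Voronoi shatter assigns every cell of a slab to its nearest randomly scattered seed point, and each resulting region becomes a fracture fragment:

```python
import random

# Toy Voronoi shatter of a 2D concrete slab: each grid cell joins the
# fragment of its nearest seed point. Real tools work on 3D geometry,
# but the principle - seeds define fracture regions - is the same.

def voronoi_fracture(width, height, num_seeds, seed=42):
    rng = random.Random(seed)
    seeds = [(rng.uniform(0, width), rng.uniform(0, height))
             for _ in range(num_seeds)]
    pieces = {}
    for x in range(width):
        for y in range(height):
            # nearest seed wins; squared distance avoids a sqrt per cell
            nearest = min(range(num_seeds),
                          key=lambda i: (seeds[i][0] - x) ** 2 +
                                        (seeds[i][1] - y) ** 2)
            pieces.setdefault(nearest, []).append((x, y))
    return pieces  # fragment id -> list of cells in that fragment

fragments = voronoi_fracture(40, 40, num_seeds=12)
print(len(fragments))  # at most 12 fragments, covering every cell
```

Varying the seed density and distribution per material (dense, jagged clusters for concrete; sparser, stringier regions for rebar-reinforced pillars) is one way tools approximate the different fracture properties Sumner mentions.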
"We had a pre-defined library of chairs, tables, desk lamps, bookshelves," continues Sumner. "All of these things would be rigid-body sim'd to slide out of windows during a glass-breaking sim, adding more complexity to the shots."
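The core idea behind rigid-body sim'd props fits in a few lines. This toy integrator (an illustration with hypothetical prop data, not ILM's simulation code) advances props freed by a glass-break one film frame at a time:

```python
# Minimal sketch of the rigid-body idea: once the glass-break sim frees a
# window, each library prop is integrated forward under gravity per frame.

GRAVITY = -9.8   # m/s^2
DT = 1.0 / 24.0  # one film frame at 24 fps

def step(props):
    """Advance every freed prop one frame with simple Euler integration."""
    for p in props:
        if not p["free"]:
            continue  # window still intact; prop stays put
        p["vel"][1] += GRAVITY * DT          # gravity accelerates it down
        p["pos"][0] += p["vel"][0] * DT      # drift sideways out the window
        p["pos"][1] += p["vel"][1] * DT      # and fall

# a chair ejected sideways from a broken window 200 m up
chair = {"free": True, "pos": [0.0, 200.0], "vel": [3.0, 0.0]}
for _ in range(24):  # simulate one second
    step([chair])
print(chair["pos"])
```

A production engine adds rotation, collision and friction on top of this, but the per-frame integrate-and-update loop is the same backbone.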
The Driller, a complex mix of an eel-like body with spinning rotator blades, knives and teeth, was ILM's most dense model on the show. "It caused challenges in both animating him and pushing him through the pipeline as a single asset, especially in conjunction with the skyscraper he's trying to tear apart," says Sumner. "He's the heaviest asset we've had to render, both in terms of number of polygons and pieces of geometry, but he also has a lot of hidden details in him that a lot of people may miss - just because of the sheer scale of him."
Sumner calls his work on the Driller a "love-hate relationship," especially in conjunction with the skyscraper it tears apart. Ultimately, the most complicated shots of the sequence, which featured more than 70,000 Driller parts and the falling glass skyscraper, approached 36 hours a frame to render on eight cores. "When you work that out in a linear time fashion," says Sumner, "it actually approached 12 days in terms of linear rendering time."
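Sumner's arithmetic checks out: 36 wall-clock hours per frame across eight cores is 288 core-hours, or 12 days of single-core ("linear") time per frame:

```python
# Sumner's render-time arithmetic: 36 wall-clock hours per frame,
# with eight cores working on that frame in parallel.
hours_per_frame = 36
cores = 8

core_hours = hours_per_frame * cores  # 288 core-hours per frame
linear_days = core_hours / 24         # expressed as single-core days
print(linear_days)  # 12.0
```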
Shooting for 3D
Dark of the Moon was shot partially with Cameron-Pace Group's 3D Fusion camera rigs - developed by James Cameron's team for Avatar - as well as other film and camera systems, and was partially post-converted. Scott Farrar, for one, embraced the detail and fidelity inherent in creating giant robots for stereo. "I did some tests with the robots," he says, "where we were close-up on a robot - and you know Optimus Prime has 10,000 pieces - and if you get close-up you see all the details in the nooks and crannies of these pieces. It's totally unlike a plain surface subject like a human head or an animated head. You see this detail and there's truly not been a film that looks as cool as what we're doing right now."
Shooting scenes with giant robots to be added in and making them work for stereo was always something on Farrar's mind. "Let's say you've got five robots distributed down the street fighting whatever or running, and want to give it a lot of volume," he says. "Well you can make that space look very deep - the problem is if you do, the robots start to look very very tiny - it's almost counter-intuitive. So you've got to be very careful about how you build your depth in the shots. The safest thing in all cases - and this is why working on a Michael Bay film worked pretty well most of the time - Michael is keen on having foreground/midground/background depth in his shots, even in normal live-action shots. He'll say, 'Put some stuff hanging here!' It could be women's stockings or forks and knives dangling from a string out of focus - it doesn't matter, but it gives you depth, and focus depth, and makes it more interesting."
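The shrinking effect Farrar describes is the classic hyperstereo problem: widening the camera interaxial beyond the human interocular distance to get more depth volume makes the scene read as a miniature. A common rule of thumb (an assumption here, not ILM's actual stereo math) puts perceived scale at roughly the ratio of eye separation to camera interaxial:

```python
# Hyperstereo rule of thumb: perceived scale ~ eye separation / interaxial.
# Widening the camera baseline deepens the stereo volume but shrinks
# apparent object size - the "robots look tiny" problem Farrar describes.

EYE_SEPARATION_MM = 65.0  # average human interocular distance

def perceived_scale(interaxial_mm):
    """Approximate factor by which the scene appears scaled to the viewer."""
    return EYE_SEPARATION_MM / interaxial_mm

print(perceived_scale(65.0))   # 1.0 -> natural, life-size stereo
print(perceived_scale(130.0))  # 0.5 -> deeper volume, robots read half-size
```

This is why building depth with layered foreground/midground/background elements, as Bay does, is safer than simply widening the rig.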
Ultimately the film came together in what Farrar describes as a 'checkerboard combo' of formats and stereo techniques: "We actually shot with all kinds of formats, knowing that later on we were going to not only have stereo, where we render robots from two views, but we'd also do single-eye and hand those off to a company that would do the conversion. But it was the only way to do this film. I mean, we had crash cameras - that's a single lens and there's no way to do that in stereo, because it's a camera that's OK to destroy - they're cheap. You've got to convert that. You've got to convert the helicopter images. So that's why it's a very interesting combination of formats."
All images and clips copyright © 2011 Paramount Pictures. All rights reserved.