Invictus Draws a Crowd

Invictus tells the story of South African President Nelson Mandela’s efforts to unite his torn country under the auspices of the 1995 Rugby World Cup Championship. Director Clint Eastwood collaborated with visual effects supervisor Michael Owens and CIS Vancouver to create virtual stadiums and crowds, plus key digital make-up effects, for the pivotal rugby matches seen in the film.

[Images: plate before / final shot – nz254_185]

Stadiums and crowds

Owens realised early on that the crowds would function as a separate character in the film, and that digital crowds would be the most practical way to create them. “The first thing was, you just can’t rent 60,000 fans,” explained Owens. “You can get extras on the first day, and the next day fewer show up, and then even fewer the next. And then there’s the expense of dealing with it all. So it was obvious from the script that we would need to augment all the crowds.”

Owens and CIS Vancouver visual effects supervisor Geoffrey Hancock settled on an approach in which the entire stadiums would be replaced from the sidelines upwards and populated with motion-captured digital crowds. Deciding against a bluescreen or greenscreen solution in favour of rotoscoping – a method he had also used successfully on Flags of Our Fathers – Owens convinced Eastwood to shoot the rugby matches with the intent of adding virtual stadiums and crowds in post-production. “It turned out great on Flags,” recalled Owens. “Your lighting looks better and it’s more natural. And your situation for the actors and camera operators is better because they can see everything without being constrained by a bluescreen.” In-house teams at CIS Vancouver and CIS Hollywood contributed to the extensive rotoscoping effort, along with outsourced companies from around the world.

To re-create the 1995-era stadiums, Hancock and his team flew to South Africa before shooting, surveying the two stadiums used in the film to represent six different locations. CIS Vancouver then began building the stadiums and other virtual locations in Maya and Photoshop before porting the models into Houdini. “The stadiums were effectively our tracking device,” said Owens. “If we were placing digital people back into a real stadium there would have been lots of subtle lens distortion and things that don’t match up. We didn’t have to worry about that because we were going to put CG people into a CG environment. Tracking-wise it works out great, and any discrepancies where a row of seats doesn’t quite match up don’t matter because you’re not using it. Most people might think we only put the crowd in it, but we put in everything. It also let us adjust the stadiums a little, because period-wise they were not quite correct.”

[Images: plate before / final shot – nz217_100]

On set, Owens would ensure that the camera operators knew to frame the shots for the non-existent crowd as well. “I would say to the operators, ‘You might want to pan off the players now’ – just reminding them that there was another character out there. They really got into it. Occasionally we re-framed things in post, and we’d tilt up a little bit.”

Adding thousands of spectators to the stadiums necessitated a new approach to motion capture and to the crowd software the team had used previously. “On Changeling we had used Massive for street crowds,” explained Owens. “Massive is an amazing tool, but it has one limiting feature: if you have to change something you have to re-sim it. And then you get something different! We wanted to change the demographics and change certain people. On top of that the crowds had to react to certain things. It wasn’t like you just had them cheering. We knew we had to take it to a whole new level.”

Motion capture sessions featured people of different shapes and sizes, cast to satisfy all the demographics necessary for a large crowd. “We were really strict about making sure that motion capture recorded for a specific body type would only be used for that actor’s virtual body double,” noted Hancock. “In the past we’ve found that if you start taking motion capture from one body type and applying it to a different body type, something is lost in that translation, and those subtle things are what people pick up on.”

[Images: plate before / final shot – nz244_400]

At CIS Vancouver, a custom Houdini plug-in was written to allow artists to interface the motion capture data with Massive. “All of our motion capture data was being edited in Massive and pre-simulated cached libraries of animation were being created in Massive, then fed into Houdini,” said Hancock. “The procedural nature of Houdini’s workflow was really beneficial because we could leverage these pre-simulated animation libraries and all these models and textures of our performers and the locations, and interchange people’s wardrobe, appearances or their animation – whether on an individual basis or for a group of people. That flexibility of not needing to go back to simulation allowed for many, many more iterations quickly, which I think gave us the detail and art direction that let the crowd be a character in the film in its own right. It could react to the gameplay and carry the emotion along in the game.”
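The CIS plug-in itself was proprietary, but the underlying idea – agents referencing pre-simulated clip caches and wardrobe variants that can be re-assigned without re-simulating – can be sketched in a few lines of Python. Everything below (agent attributes, clip and wardrobe names) is hypothetical and purely illustrative:

    # Hypothetical sketch (not the actual CIS plug-in): crowd agents reference
    # pre-simulated animation caches and wardrobe variants, so art-directing the
    # crowd only re-assigns references and never re-runs a simulation.
    import random
    from dataclasses import dataclass

    @dataclass
    class Agent:
        seat_id: int
        body_type: str   # placeholder names, e.g. "male_large"
        clip: str        # pre-simulated, cached mocap clip, e.g. "cheer_01"
        wardrobe: str    # clothing/texture variant, e.g. "springbok_jersey_02"

    # Clip libraries are keyed by body type, so capture from one body type is
    # never retargeted onto another (as Hancock describes above).
    CLIP_LIBRARY = {
        "male_large":   ["idle_01", "cheer_01", "flag_wave_01"],
        "female_small": ["idle_02", "cheer_02", "clap_01"],
    }
    WARDROBE = ["springbok_jersey_01", "springbok_jersey_02", "plain_shirt_01"]

    def populate(seat_ids, seed=0):
        """Fill the stands by picking cached clips and wardrobe variants per seat."""
        rng = random.Random(seed)
        agents = []
        for seat in seat_ids:
            body = rng.choice(sorted(CLIP_LIBRARY))
            agents.append(Agent(seat, body,
                                rng.choice(CLIP_LIBRARY[body]),
                                rng.choice(WARDROBE)))
        return agents

    def redirect(agents, seat_ids, prefix):
        """Art-direct a section ('they just scored!') by swapping clip references only."""
        targets = set(seat_ids)
        for a in agents:
            if a.seat_id in targets:
                options = [c for c in CLIP_LIBRARY[a.body_type] if c.startswith(prefix)]
                if options:
                    a.clip = options[0]

Because every change is just a re-assignment of cached data, the many extra iterations Hancock describes come essentially for free.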

Featured in both live-action composites and completely virtual sweeping camera moves, the digital characters sometimes had to hold up at a third of the frame height, something the visual effects practitioners believe could not have been achieved without the variety in the captured performances. “It made for two things,” added Owens. “You could change things without having to go backwards, and it gave you the freedom of art direction – directing the extras to do what you need to do. Just like an AD would yell out, ‘OK, they just scored!’, and the crowd would go ape-shit. I don’t think we would ever have got through the movie if we had done it traditionally.”

[Images: plate before / final shot – UMD_07144]

Blood and guts

As the stadium and crowd shots came together at CIS Vancouver, Eastwood and Owens felt that some of the rugby gameplay needed a heightened sense of violence and effort. To achieve this, CIS tracked scrapes, scratches, bruises, bloody cuts, dirt marks and grass stains onto the rugby players and generally dirtied up their wardrobe. “This all started on Gran Torino,” recalled Owens. “In that film, when Clint cries as an actor, he gets all stuffy and runny, and it’s not quite what he had in mind for the scene. He was thinking that if he could just have a single tear come out of his eye, that would work better. He asked me if we could do that and I said of course – we could time it exactly how we wanted, make it a big tear, a little tear, whatever we want. In addition, he re-shot his hand going through some glass in frustration; prior to that he had just hit a cabinet door, but he liked it so much he thought he should cut his hand out of frustration. So what we ended up doing was putting blood on his hands after that sequence, and then carrying scars on his hands throughout the rest of the movie.”

“So, for Invictus,” continued Owens, “Clint wanted to be able to art direct the dirt and injuries being built up on these players. It was part of saving time on the shoot too. We probably ended up doing about 200 of these make-up effects. What’s really neat is that it adds a subtle but very dramatic difference. If you see a before, you kind of buy the scene, but you put the blood and guts in the scene and you go, ‘Holy Shit! These guys have really gone through the wringer.'”

Artists at CIS Vancouver used Nuke, Shake and Mocha to track on the various scrapes and bruises. “Mocha tracking software was really instrumental in the way that it could track planes of the faces very three-dimensionally,” said Hancock. “It could track all the different portions of the body in separate pieces. We would begin by tracking on chequerboards to check that the track was solid and sticking to people running and tumbling in the frame. Then we would go back in and add the more visceral scrapes. Clint just kept saying, ‘More, more, more. Keep it going throughout the game. Make it more noticeable and make it look brutal!’ So we kept pouring it on.”
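As a rough illustration of that checkerboard-first workflow (not the studio’s actual pipeline), the sketch below assumes per-frame planar track data exported as 3x3 homographies and uses OpenCV to warp either a QC checkerboard or the scrape element onto the plate; all file names and data are placeholders:

    # Hypothetical sketch: warp an RGBA element by an exported planar-track
    # homography and comp it over the plate. A checkerboard is substituted for
    # the element first to confirm the track "sticks" before adding the scrape.
    import numpy as np
    import cv2

    def make_checkerboard(h, w, square=16):
        """Grey/white checkerboard with full alpha, used only to QC the track."""
        yy, xx = np.mgrid[0:h, 0:w]
        pattern = (((yy // square) + (xx // square)) % 2).astype(np.float32)
        rgb = np.dstack([0.5 + 0.5 * pattern] * 3)
        alpha = np.ones((h, w, 1), np.float32)
        return np.concatenate([rgb, alpha], axis=2)

    def comp_tracked_element(plate, element_rgba, homography):
        """Warp the element by the tracked homography and 'over' it onto the plate."""
        h, w = plate.shape[:2]
        warped = cv2.warpPerspective(element_rgba, homography, (w, h))
        rgb, alpha = warped[..., :3], warped[..., 3:4]
        return rgb * alpha + plate * (1.0 - alpha)

    # Usage sketch (track data and frames are placeholders):
    # homographies = np.load("nz244_track.npy")   # one 3x3 matrix per frame
    # for i, plate in enumerate(plate_frames):
    #     qc    = comp_tracked_element(plate, make_checkerboard(*plate.shape[:2]), homographies[i])
    #     final = comp_tracked_element(plate, scrape_rgba, homographies[i])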

More roto

The rugby players received even further visual effects treatment when it was realised that a film stock issue resulted in the South African Springboks’ jerseys turning an unattractive shade of brown instead of the green and gold they should have been. “If the camera panned and you go from backlit to frontlit,” explained Hancock, “the jerseys would turn brown then green again in the shot. In addition, the blacks on the New Zealand All Blacks jerseys in the final game went red. It’s a real story point that the Springboks wear this green and gold uniform which has been a symbol of apartheid. So it needed to be changed in the film.” Ultimately, the film stock problem – the result of a high sensitivity to the infrared spectrum – was solved with further rotoscoping and keying of the jerseys and socks where necessary.
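The actual fix was done with rotoscoping and keying in the compositing packages mentioned above; purely as an illustration of the idea, a roto-masked hue correction might look something like this (the thresholds, hue shift amount and mask are all hypothetical):

    # Hypothetical sketch of a roto-masked hue fix: key the off-colour "brown"
    # jersey pixels inside a roto matte and push their hue back toward green.
    import numpy as np
    import cv2

    def correct_jersey(plate_bgr, roto_mask, hue_shift=30):
        """Shift hue only where the roto mask and a crude brown key overlap."""
        hsv = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        h, s, v = cv2.split(hsv)
        # crude brown key: low hue with some saturation (placeholder thresholds)
        key = ((h > 5) & (h < 25) & (s > 60)).astype(np.float32)
        matte = key * (roto_mask.astype(np.float32) / 255.0)
        h = (h + hue_shift * matte) % 180.0      # OpenCV hue range is 0-179
        corrected = cv2.merge([h, s, v]).astype(np.uint8)
        return cv2.cvtColor(corrected, cv2.COLOR_HSV2BGR)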

Additional visual effects for scenes of South Africa in 1995, a 747 stadium fly-over and views of Nelson Mandela’s trip to the UN rounded out a total of about 600 Invictus shots for CIS Vancouver in what is ostensibly not a visual effects film at all. “I think this may be the biggest rotoscope job in the history of cinema,” concluded Owens. “We had to final the shots multiple times – once for the crowd, once for the jersey colour correction, and finally again for the overlay of the blood and guts – without anybody knowing we were really there.”

Images courtesy of Warner Bros. Pictures and CIS Vancouver.