VES commercials: crowds and cars by Framestore

Two of the commercials up for nomination tonight at the VES Awards are by Framestore. We take a look at Pepsi’s ‘Crowd Surfing’, featuring some unique crowd animation done in Houdini, and the Nissan Altima ‘Wouldn’t it be cool?’ spot which deconstructs a car.

Pepsi – Crowd Surfing

fxg: Can you talk about the origins of the Houdini crowd system that was developed and used on the spot? How do you capture the mocap data, and how is that transferred into Houdini?

Martin Aufinger (3D) and Diarmid Harrison-Murray (CG supervisor): We developed the Houdini crowd system specifically for this commercial. There were two main motivations for going down this route.

Firstly, we were keen to have a crowd system suitable for commercials work. Primarily, we were looking for an efficient system that allows us to populate environments with people, control geometry and texture variations (their outfits) and assign behaviour such as cheer, idle or dance, without spending time simulating thousands of crowd agents.

Just think of a typical commercial crowd project with stadium shots: all you need is efficient crowd layout and rendering. Agent interaction, and therefore expensive simulation, is hardly ever necessary.
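
To make the idea concrete, here is a minimal sketch of that kind of non-simulated layout, in plain Python rather than Houdini. Nothing below is Framestore’s actual tool; the attribute names and variation counts are made up. Each agent is just a point carrying a handful of attributes (position, outfit variant, behaviour label, cycle offset), which is all an instancer needs to resolve it to cached geometry and an animation cycle at render time.

```python
import random

# Illustrative only: a non-simulated crowd layout where every agent is a point
# with a few attributes. No agent is ever simulated; an instancer resolves the
# attributes to cached geometry and animation cycles at render time.

OUTFIT_VARIATIONS = 8                      # assumed number of outfit variants
BEHAVIOURS = ["cheer", "idle", "dance"]    # assumed behaviour labels

def layout_crowd(num_agents, width, depth, seed=0):
    rng = random.Random(seed)
    agents = []
    for i in range(num_agents):
        agents.append({
            "id": i,
            # random placement inside a rectangular ground region
            "position": (rng.uniform(0.0, width), 0.0, rng.uniform(0.0, depth)),
            # per-agent look and behaviour stored as plain attributes
            "outfit": rng.randrange(OUTFIT_VARIATIONS),
            "behaviour": rng.choice(BEHAVIOURS),
            # phase offset so identical animation cycles don't sync up
            "cycle_offset": rng.uniform(0.0, 1.0),
        })
    return agents

crowd = layout_crowd(num_agents=5000, width=60.0, depth=40.0)
```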

Secondly, on this particular project we had to create quite complex crowd animation, as we needed our party crowd to raise their arms, support footballers (soccer players) running on top of them, anticipate the footballers’ performance and react to their actions.

To meet these demands, we built a system that allowed us to use our basic non-simulated crowd layout and split off groups of people in order to add more sophisticated behaviour. We simulated their behaviour and got them to react to the footballers. Once those hero crowd agents (usually around 100-300) were simulated, we merged them with the other non-simulated agents and rendered them all in one. In terms of lighting and rendering there was no difference between the two types of agents.
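
As a rough sketch of that split-and-merge approach (again illustrative Python, not the actual system; the radius, attribute names and helper functions are assumptions), agents within reach of the footballers’ path are flagged as hero agents and handed to a simulation step, while everyone else stays as cheap layout, and the two groups are merged again afterwards:

```python
import math

# Illustrative split of a layout crowd into hero agents (to be simulated) and
# background agents (left as non-simulated layout), then merged back together.

def split_hero_agents(agents, footballer_path, radius=3.0):
    hero, background = [], []
    for agent in agents:
        ax, _, az = agent["position"]
        near = any(math.hypot(ax - px, az - pz) < radius
                   for px, pz in footballer_path)
        (hero if near else background).append(agent)
    return hero, background

def simulate(hero_agents):
    # Stand-in for the expensive step: raising arms, supporting feet,
    # anticipation and reactions would only ever be computed for these agents.
    for agent in hero_agents:
        agent["behaviour"] = "support"
    return hero_agents

# toy layout: agents are plain points with attributes, as in the sketch above
agents = [{"position": (float(x), 0.0, float(z)), "behaviour": "cheer"}
          for x in range(30) for z in range(20)]
path = [(10.0, 10.0), (14.0, 10.0), (18.0, 10.0)]   # footballers' route (x, z)

hero, background = split_hero_agents(agents, path)
final_crowd = simulate(hero) + background   # lit and rendered as one crowd
```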

For capturing mocap data we are lucky to have an optical motion capture studio in-house. We piped our mocap data through Maya into Houdini. Primarily we used Maya as a bridge in order to keep our capture data usable for other, non-Houdini projects. One of the biggest pipeline hurdles was the fundamental difference between Maya’s joint-based skeletons and Houdini’s bone system.
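
To give a feel for that mismatch, here is a hand-rolled illustration (not Framestore’s converter): a Maya-style skeleton is essentially a hierarchy of joint positions, whereas a Houdini-style bone carries an explicit length and aim direction, so each parent/child joint pair has to be turned into one bone:

```python
import math

# Illustrative joint-to-bone conversion: Maya stores joints as points in a
# hierarchy, while a Houdini bone needs an origin, a length and a direction.

def joint_pair_to_bone(parent_pos, child_pos):
    dx = child_pos[0] - parent_pos[0]
    dy = child_pos[1] - parent_pos[1]
    dz = child_pos[2] - parent_pos[2]
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    # unit aim direction from the parent joint towards the child joint
    direction = (dx / length, dy / length, dz / length)
    return {"origin": parent_pos, "length": length, "direction": direction}

# e.g. an upper arm: shoulder joint to elbow joint, 0.3 units apart
bone = joint_pair_to_bone((0.0, 1.5, 0.0), (0.3, 1.5, 0.0))
print(bone["length"])   # 0.3
```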

fxg: How did you plan out some of the shots with the crowds – could you lay out the scenes with temporary crowd animation or geo?

Martin Aufinger and Diarmid Harrison-Murray: Yes, this is one of the key benefits of our crowd solution. We could lay out the shots, adjust the behaviour and look of the crowd, set the level of detail and animate cameras and lights, without spending time on crowd simulations. The shots were pretty much locked before we started any complex tasks. In the end about half of the shots did not require any bespoke crowd simulation at all. You can imagine our system as a sophisticated instancing tool that gives you a lot of control over crowd behaviour, look and layout without simulating agents.
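
As a toy illustration of the instancing side of that (the distance thresholds and level names below are invented, not from Framestore’s tool), each agent can simply be resolved to a geometry variant whose detail level depends on its distance to camera, with no simulation involved:

```python
import math

# Invented example of distance-based level of detail for instanced agents:
# pick a cheaper geometry variant for agents far from camera.

LOD_LEVELS = [(15.0, "hi"), (60.0, "mid"), (float("inf"), "lo")]

def lod_for_agent(agent_position, camera_position):
    distance = math.dist(agent_position, camera_position)
    for max_distance, level in LOD_LEVELS:
        if distance <= max_distance:
            return level

camera = (0.0, 8.0, -20.0)
positions = [(x * 2.0, 0.0, z * 2.0) for x in range(50) for z in range(50)]
instances = [{"position": p, "lod": lod_for_agent(p, camera)} for p in positions]
```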

fxg: What are some of the controls Houdini gives you over the agents, and what were some of the differing characteristics you added to each person?

Martin Aufinger and Diarmid Harrison-Murray: Houdini is all about data processing, in our case point attributes representing agents. It is an open system with easy access to all the data at any time. Our crowd system was developed with the same principles in mind.

We were able to define behaviour or an agent’s clothing using human-readable attributes that could be altered throughout the network. Depending on the shot requirements we set the crowd’s general behaviour (a simplified example would be 50% dance, 40% jump and 10% idle) and adjusted it by either painting attributes on the ground or animating locators that forced nearby agents to change their behaviour. If necessary we also overrode individual agents manually. The same principles applied to geometry and texture variations.
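
A minimal sketch of that kind of attribute-driven assignment might look like the following (the distribution comes from the example above; the locator override rule and all names are our own assumptions, purely for illustration):

```python
import math
import random

# Illustrative behaviour assignment: a global weighted distribution
# (50% dance, 40% jump, 10% idle), overridden for agents near a locator.

DISTRIBUTION = [("dance", 0.5), ("jump", 0.4), ("idle", 0.1)]

def assign_behaviours(agents, locators=(), seed=0):
    rng = random.Random(seed)
    names = [name for name, _ in DISTRIBUTION]
    weights = [weight for _, weight in DISTRIBUTION]
    for agent in agents:
        agent["behaviour"] = rng.choices(names, weights=weights)[0]
        # animated locators force nearby agents into a specific behaviour
        for position, radius, behaviour in locators:
            if math.dist(agent["position"], position) < radius:
                agent["behaviour"] = behaviour
    return agents

agents = [{"position": (float(x), 0.0, float(z))} for x in range(40) for z in range(40)]
locators = [((20.0, 0.0, 20.0), 5.0, "cheer")]   # e.g. a cheer zone near the stage
assign_behaviours(agents, locators)
```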

For this commercial we focused on generic behaviour such as different types of cheering, clapping, dancing, jumping and idling. An additional behaviour category was used for agents close to the footballers, anticipating their arrival. On top of that we enabled agents to grab the footballers’ feet and support their run on top of the crowd. This was again controlled through point attributes.

There are definitely more features to develop but it is an open system, very efficient and definitely a lot of fun to work with.

fxg: How was the beach and concert environment realised – where did you source lighting information from?

Chris Redding (VFX supe): The beach environment was photographed in San Sebastian as a huge HDR panorama. This was then mapped onto an accurate topographical model of the real location, so that defocuses and atmospherics would behave in a realistic manner. In theory this HDR could have been used for lighting information, but the light from the stage lights was so dominant that it was pretty irrelevant.

fxg: Can you talk about some of the other effects such as the lasers and additional animation, and re-lighting that was required?

Chris Redding: In terms of additional work, in the end we used very little of the crowd plates that were shot, using our CG crowd instead, as it looked considerably better! And every single wide shot of the footballers running over the crowd was roto-animated. This was so that we could re-project the body-double action plates onto geometry and relight them, so that they integrated better with the crowd below them.

There was a considerable amount of ‘hidden’ animation work, adding hands coming up from the crowd to support the footballers’ feet or, for instance, to catch Messi as he falls into the crowd.

The lasers were originally going to feature slightly less, and at one point there was some discussion about not having them at all. But once the guys had a few early versions working really well, everyone loved them and it was decided to feature them even more than was originally intended.


Nissan Altima – Wouldn’t it be cool?

fxg: There are some great almost-organic looks in this spot – what were some of the overriding design principles behind it, and the manner in which the car broke apart and formed various pieces?

Aron Hjartarson (VFX supe): We established principles for the effects at the very beginning. They had to be rooted in reality, so we studied how matter changes form on a molecular level through sublimation, melting, freezing and boiling, and used that to inform our approach to the disintegration and reassembly. We also drew inspiration from higher-level state changes; organic processes like spore growth laid the foundation for the tyre birth, for example.
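
Purely as a generic illustration of a state-change-driven dissolve of this kind (this is not Framestore’s ICE setup; every name and value below is invented), each point of a surface can be given a per-point ‘sublimation’ time and blended between its assembled and dispersed positions as the effect progresses:

```python
import random

# Generic illustration of a dissolve/reassembly driven by per-point trigger
# times, standing in for a noise field over the surface. Not production code.

def build_points(count, seed=0):
    rng = random.Random(seed)
    points = []
    for _ in range(count):
        assembled = (rng.uniform(-1, 1), rng.uniform(0, 1), rng.uniform(-2, 2))
        points.append({
            "assembled": assembled,
            # where the point drifts to once it has 'sublimated'
            "dispersed": tuple(c + rng.uniform(-0.5, 0.5) for c in assembled),
            # staggered trigger time controls when this point lets go
            "trigger": rng.uniform(0.0, 1.0),
        })
    return points

def evaluate(points, t, transition=0.2):
    """Return every point's position at normalised time t in [0, 1]."""
    frame = []
    for p in points:
        # 0 before the point's trigger time, 1 once it is fully dispersed
        blend = min(max((t - p["trigger"]) / transition, 0.0), 1.0)
        frame.append(tuple(a + (d - a) * blend
                           for a, d in zip(p["assembled"], p["dispersed"])))
    return frame

points = build_points(10000)
mid_dissolve = evaluate(points, t=0.5)
```

Played forwards this disintegrates the surface; run with time decreasing, the same blend reassembles it.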

fxg: How were the various break aparts and reformations designed and planned out?

Aron Hjartarson: We established a roundtrip from Animation to FX and back to evolve the story, which was then fed into the edit until the pacing and feel were right. We tried hard to maintain flexibility in this process to make sure the FX had time to breathe – they are the character of the piece. Still, the basic process we use on every job was adhered to: starting with boards, to blocking, to animation, to FX. The feedback loops we stuck into the process made it less linear and more conducive to creativity, which we needed given the tight turnaround.

fxg: What kind of car data or photography or other information did you have to help you model the car and interior pieces and engine parts?

Aron Hjartarson: We received detailed CAD models from Nissan of both Altima generations. Those models required a vast amount of cleanup – and it was not enough to make sure the models just rendered. The tools we developed to break apart the models relied to a great extent on the underlying topology being of a certain standard, so the rebuild effort was far greater than usual.

fxg: Can you talk about some of the modelling, animation, effects and rendering tools you relied on?

Aron Hjartarson: We put our faith in XSI and ICE, as well as Arnold for rendering. For our purposes there was nothing that came close to the speed and interactivity of XSI, ICE and Arnold for the massive amount of data we needed swirling around, so it was an easy decision to go that route for this particular project. We also relied heavily on Eric Mootz, who built a lot of custom tools for us.

fxg: What were some of the major lighting challenges in realizing metallic, yet particulate parts of the car as it re-forms?

Aron Hjartarson: By and large the subject matter lent itself very well to CG rendering, but the crystal growth was very challenging – we had screenfuls of refracting, transparent or reflective surfaces with shallow depth of field and untold millions of triangles. However, I have to admit that Arnold took the usual challenges out of the lighting process. It allowed our artists to concentrate more on making pretty pictures and gave us the opportunity to dial in a look we were all very happy with.