Puss in Boots: head in the clouds

Director Chris Miller’s Puss in Boots revisits the popular character, voiced by Antonio Banderas, previously seen in the Shrek films. We talk to visual effects supervisor Ken Bielenberg about the challenges in Dreamworks Animation’s latest feature, while effects lead Brett Miller delves into the lighting and rendering approaches for a major clouds sequence.

Puss in Boots tells the story of Puss before he meets Shrek and Donkey, and his adventures with Humpty Dumpty (Zach Galifianakis) and Kitty Softpaws (Salma Hayek). Although the Dreamworks Animation team had ample experience in the Shrek world, this time around there were all-new characters and environments to create, along with updated technologies to build them.

Watch artists from Dreamworks Animation at work on ‘Puss in Boots’.

With several of the Shrek outings under his belt, visual effects supervisor Ken Bielenberg oversaw a number of departments on the film, including modeling, rigging, surfacing, character FX, effects, lighting, matte painting and crowds. “In my position I also work really closely with the production designer,” says Bielenberg. “The two of us are responsible for delivering the look of the film and seeing through the director’s vision. The production designer works on the art side and I’m partnering with him to make sure it’s all that it can be.”

On Puss, Bielenberg notes that the project was initially under the radar both internally and externally, but ended up being one of the most complex Dreamworks films from a technical point of view. “From an effects standpoint, the biggest technological challenge was the clouds,” says Bielenberg. “In the past, doing big cloud sequences in full volumetric 3D has been doable, but incredibly taxing from a data, rendering and artistic standpoint.”

“We’ve developed systems in the past that have produced good results,” adds Bielenberg, “but they’ve been a bit unwieldy – so we’d only be able to handle a couple of shots. For this film we had a big sequence that takes place in the clouds. Three of our main characters are in the cloud, interacting, and the cloud is pretty much the entire environment – cloud landscapes, trees being formed from the clouds, plus the characters are walking through it. And then it’s also in stereo. So that really drove us to do 3D clouds rather than the primarily matte painting approach we might typically have used.”

See further below for our interview with effects lead Brett Miller on the clouds sequence.

Humpty, Puss and Kitty - storyboard
Animation pass.
Final shot.

Puss and the other feline characters have, of course, fur that needed to be efficiently groomed and simulated. “We utilized the fur system that had been done before,” says Bielenberg, “but overhauled it because Puss had to support the whole movie, along with Kitty this time. We worked on different simulation passes for them, and optimization so that the character FX department could do the 1400 shots that have cats in them. We also adapted the smoosh systems for the belt around Puss and his boots, and the Kitty character has that as well, plus his hat, so they could all interact. And we had wind dynamics, which we hadn’t done much of in the past.”

“I also challenged the FX department to do fur-on-fur interaction,” continues Bielenberg. “Not just solid object interaction pushing fur around. So if Puss is scrunched up and leaning on the table, the fur from his forearm had to interact with his bicep in the right way.”

On prior films, Dreamworks had relied on a proprietary fur system, but this time around artists used Houdini for much of the fur. “We found that Houdini could handle an order of magnitude more curves than we’d been able to in the past,” says Bielenberg. “We had a one-to-one representation of curves for the fur that were interacting with other objects like the belt. The character FX artists could pull up Houdini and really get a WYSIWYG representation. You could see how the curves were interacting with any forces in Houdini. And four or five simulations could happen in the one package, rather than a serial process.”

Another major tech challenge for the Dreamworks artists was a jungle sequence involving much character interaction and ambient motion, along with significant tree and grass geometry and translucency and GI issues. “Just after that jungle sequence they drop down into a moat,” describes Bielenberg, “so we had to do a raging river with all these vines, destruction and splashing, that leads up to a whirlpool.”

Leading to the clouds sequence, the characters plant some magic beans before a big tornado forms. For that, Bielenberg pushed for his team to look to real-world tornado reference – often on YouTube. “We’d pull together lists for dailies and pull them apart – ‘this shot has great speed and sense of scale, this shot I love the way the dust picks up and rolls around as if a helicopter is landing’ – we picked different pieces.”

Other scenes also called for as much real reference as possible, such as for splashes of the characters diving into water. “Luckily we have a pool at our PDI facility,” says Bielenberg, “so we took underwater shots of effects guys diving into the water to see what types of splashes, aeration and bubble patterns there were.”

The beanstalk that grows and propels the characters into the cloud world relied on procedural modeling techniques set up by the FX department. “That was a challenge for surfacing,” recalls Bielenberg, “because they didn’t have stable UVs to work with. They had to figure out ways to proceduralize that and then be able to go in shot by shot and customize the surfaces when they started to stretch.”

In terms of toolsets, Dreamworks used Maya for modeling, layout, camera and set dressing, its proprietary tool for character animation, and a mix of off-the-shelf and in-house systems for cloth, fluids, lighting and crowds. Nuke was the compositing tool of choice.

All this, as well, had to be completed in stereo. For the first time on a feature, Dreamworks rendered in stereo every time in the lighting department. “That was a big win for us,” notes Bielenberg, “because in the past, if you’re rendering one eye and then at the last minute you go to hi-res and stereo, invariably things are out of sync. Also, it was great because we never knew when Jeffrey Katzenberg was going to grab a sequence and show it somewhere!”

15,000 units deep

In one enchanting scene, Puss and his gang hop onto a growing beanstalk that shoots them up into the clouds. As they journey to the legendary giant’s castle, which contains the goose that lays the golden eggs, they discover a vast cloud environment. Dreamworks Animation effects lead Brett Miller tells us how they created the clouds for the sequence.

Kitty, Humpty and Puss on the beanstalk as they enter the cloud world.

fxg: Can you tell me what design aspects of the clouds had to be solved from a technical point of view?

Miller: The texture and material of the cloud world are very important for the sequence. It’s designed to give us a sense of wonder and awe at the scale of the world, and at how it’s made out of this dynamic flowing element instead of a static construction. But at the same time it’s supposed to evoke a pastoral setting with rolling hills and clouds that feel like trees. So we still have that connection with the idea of a landscape.

We had already solved a big part of our volume issues with the work we had done on How To Train Your Dragon, but we knew we had to increase its scale tremendously. We had built a suite of volume modeling tools, and reduced that down to an extendable library system so that we could build cloud modelers fairly easily. We also knew we were going to use frustum buffers, which for us was going to be the most efficient way to store really, really big landscapes. But we knew we would have to break them down to very small parts.
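
To make the frustum buffer idea concrete, here is a minimal Python sketch of how such a buffer can address space: voxels follow the image plane on two axes and camera depth on the third, so resolution is spent where the camera is looking. The function, its parameterization and the linear depth mapping are illustrative assumptions, not Dreamworks’ actual code:

```python
import numpy as np

def world_to_frustum_voxel(p_world, world_to_cam, fov_y, aspect,
                           near, far, res):
    """Map a world-space point to integer voxel coordinates in a
    camera-frustum-aligned buffer (x, y follow the image plane,
    z is linear in camera depth). Illustrative only."""
    # Move the point into camera space (camera looks down -Z).
    p_cam = world_to_cam @ np.append(np.asarray(p_world, float), 1.0)
    x, y, z = p_cam[:3]
    depth = -z
    # Standard perspective projection to normalized device coords.
    half_h = np.tan(fov_y / 2.0)
    ndc_x = x / (depth * half_h * aspect)
    ndc_y = y / (depth * half_h)
    # Linear depth parameter between the near and far planes.
    ndc_z = (depth - near) / (far - near)
    # Scale [-1, 1] x [-1, 1] x [0, 1] into the buffer resolution.
    u = int((ndc_x * 0.5 + 0.5) * res[0])
    v = int((ndc_y * 0.5 + 0.5) * res[1])
    w = int(ndc_z * res[2])
    return u, v, w
```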

fxg: How did you prepare for the cloud sequence in terms of R&D work?

Miller: We had just recently hired engineer Ken Museth, who had done some important work at Digital Domain on the DBGrid format, which is a sparse volume format. We had him extend that so that it would work with our frustum buffer systems and our volume modeling tools. The first big thing that we asked R&D to do was to create a sparse modeling format. He built something called VDB.
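
The core trick of a sparse volume format can be shown in toy form: allocate small dense leaf blocks only where the volume actually has data, and return a background value everywhere else. This sketch is purely illustrative – VDB proper is a shallow hierarchical tree with far more machinery than a Python dictionary of blocks:

```python
import numpy as np

LEAF = 8  # leaf blocks of 8^3 voxels; real VDB uses a deeper hierarchy

class SparseGrid:
    """Toy sparse voxel grid: dense 8^3 leaf blocks allocated on
    demand, so empty space costs nothing to store."""
    def __init__(self, background=0.0):
        self.background = background
        self.blocks = {}  # (bx, by, bz) -> np.ndarray of shape (8, 8, 8)

    def set(self, i, j, k, value):
        key = (i // LEAF, j // LEAF, k // LEAF)
        block = self.blocks.get(key)
        if block is None:
            # Allocate a leaf block only on first write.
            block = np.full((LEAF, LEAF, LEAF), self.background)
            self.blocks[key] = block
        block[i % LEAF, j % LEAF, k % LEAF] = value

    def get(self, i, j, k):
        block = self.blocks.get((i // LEAF, j // LEAF, k // LEAF))
        if block is None:
            return self.background  # untouched space stays implicit
        return block[i % LEAF, j % LEAF, k % LEAF]
```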

The next thing we needed to be able to do was efficient multiple scattering, because we didn’t want to rely on matte paintings for the clouds. We knew at a certain distance matte paintings would take over, but we wanted to push that threshold distance as far back into the scene as we could. This meant we knew we’d have to do cloud/light scattering, which of course is a very difficult problem.

fxg: How did you approach that lighting?

Miller: The cloud lighting system was authored by Nafees Bin Zafar. Nafees was key to the development of our strategy and was the principal designer for our lighting system. It was a three-fold attack. We broke down the light scattering into two shaders, one that would handle first-order scattering, and then another shader that would handle all the rest. The first-order shader basically did just simple look-ups on what’s called a ‘double Henyey-Greenstein model’, which is a simplified way of measuring light scattering using a phase function for single-order scattering. That means when a photon enters the cloud, statistically speaking it will hit one water droplet and scatter out of the cloud. So we just measure how much of that scattered light is hitting the camera lens and how much is just scattering into the atmosphere. That’s a totally analytic approach based on the statistical model that we set up. It makes for a very fast and efficient render time for our volumes.
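
The double Henyey-Greenstein model is a published phase function: a weighted blend of two standard Henyey-Greenstein lobes, one forward-scattering and one back-scattering. A minimal Python version follows; the lobe parameters and blend weight are illustrative defaults, not the film’s production values:

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Single HG phase function; g in (-1, 1) controls anisotropy
    (g > 0 scatters forward, g < 0 scatters backward)."""
    return (1.0 - g * g) / (4.0 * np.pi *
           (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def double_henyey_greenstein(cos_theta, g_fwd=0.8, g_back=-0.3, w=0.7):
    """Blend of a forward and a backward HG lobe. Lobe shapes and
    blend weight here are assumed values for illustration."""
    return (w * henyey_greenstein(cos_theta, g_fwd) +
            (1.0 - w) * henyey_greenstein(cos_theta, g_back))
```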

A scene from the film.

The second half of that equation is a little more complicated. A lot of people have tried tackling multiple scattering, and it usually comes down to photon-mapping or simulating packets of photons. But we decided that we would set up a system that used wavefront modeling on a grid. We would set up a regular grid that represents our cloud as density. We would pick a point, or sometimes up to ten or 15 points, where we would seed the clouds with light – in other words, where the light would enter the cloud, and usually that would be the point on the cloud that was closest to the sun. Then we would model wave propagation out of that point or points as they bounced around in the cloud.

What we ended up with was a grid that contained a value representing at what time-step the wavefront coming out of that light seed crossed over that point in the grid. That implicitly encodes the length of the path the light took, and then we can effectively map that to light attenuation. Even though attenuation in clouds is fairly low, it gives you just enough so that it feels like the light is just spilling through. It doesn’t give you any detail – we get that from the first-order scattering – but it gives you that look you get on the underside of clouds when you see them backlit. In the end it worked really well, super-efficiently, and it was multi-threaded.
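
As a rough sketch of that idea, the code below propagates a front outward from a light-seed voxel through a density grid – here with a simple Dijkstra-style march, a stand-in for whatever wave solver Dreamworks actually used – records the path length at which the front first crosses each voxel, and maps that to Beer-Lambert attenuation with an assumed extinction coefficient:

```python
import heapq
import numpy as np

def wavefront_arrival(density, seed, step=1.0):
    """Record, per voxel, the path length at which a front expanding
    from the light-seed voxel first crosses it. Illustrative only."""
    arrival = np.full(density.shape, np.inf)
    arrival[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (i, j, k) = heapq.heappop(heap)
        if t > arrival[i, j, k]:
            continue  # stale entry; a shorter path already reached it
        for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            n = (i + di, j + dj, k + dk)
            if all(0 <= n[a] < density.shape[a] for a in range(3)):
                # The front only travels through occupied voxels.
                if density[n] > 0.0 and t + step < arrival[n]:
                    arrival[n] = t + step
                    heapq.heappush(heap, (t + step, n))
    return arrival

def attenuation(arrival, sigma=0.05):
    """Beer-Lambert falloff from path length (assumed mapping)."""
    return np.exp(-sigma * arrival)
```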

fxg: So what were the other aspects of the scene that needed to be solved as well?

Miller: We would run the lighting passes before rendering, bake the grid and refer to that during our rendering stage. We also used spherical harmonics for environment lighting to get the inter-cloud balance, the skylight and the light boosted in the crevices of the clouds to emphasize the cloud look. And then the other thing we knew we’d have to do was have widespread gas simulations throughout the entire system that would have to react very efficiently to moving characters, which are basically deforming meshes.
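
For reference, looking up a baked low-order spherical harmonic environment is very cheap at render time. The sketch below assumes order-1 SH (four coefficients per color channel) with the standard real SH basis constants; projecting the environment into those coefficients during a bake pass is omitted:

```python
import numpy as np

def eval_sh1(coeffs, d):
    """Evaluate an order-1 (4-coefficient) real spherical harmonic
    expansion in unit direction d. Basis constants are the standard
    real SH values; coeffs would come from a bake pass."""
    x, y, z = d
    basis = np.array([
        0.282095,       # Y(0, 0): constant term
        0.488603 * y,   # Y(1,-1)
        0.488603 * z,   # Y(1, 0)
        0.488603 * x,   # Y(1, 1)
    ])
    return float(coeffs @ basis)
```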

fxg: Did you look to real-world clouds as reference or was it more art-directed?

Miller: It was an art-directed approach, but we began our development by taking real-world reference. The clouds are orangey-goldy, which was more art-directed, but the lighting itself was photorealistic. Right outside our windows at the office we’ve got a small mountain range that almost daily would get massive cumulonimbus clouds boiling over it. Any time we needed a reference you could just put your head out the window and see them. Occasionally we would pull things from the net that either the art department or VFX supe had referred us to.

fxg: How would a cloud shot be built up, in terms of layout and having it interact with the characters and other things in the scene?

Miller: We knew that Layout was going to need to know what these clouds looked like as they were laying out the cameras. That meant when we were creating the clouds, we needed to have them look as much like the layout clouds as we could. We sat down with modeling and layout, and we had modeling build clouds that, if you lit them, weren’t just approximate clouds made out of ovals – they were the real thing.

Modeling created a library – about 20 or 30 of these things – and then Layout would drop them into the shot and frame the cameras and compositions in order to line it all up. We then took the models that were set-dressed into the shot and converted them to level sets, and one of our volume modeling tools would convert level sets into a noise depth volume representation. Once we had that, that formed the basic shape and we could dress it with wisps and cover it with little fluid sims and the sorts of things that make it feel alive. Ultimately, if you compare what layout was looking at when they set up the composition of the shots and what the final shots looked like, they were the exact same shapes.
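
One plausible minimal version of that level-set-to-cloud step: displace the signed distance field with noise, then ramp density up inside the displaced surface. The sinusoidal ‘noise’ below is a cheap stand-in for the fractal noise a production volume modeler would use, and the function is our assumption rather than the actual tool:

```python
import numpy as np

def density_from_levelset(sdf, coords, band=2.0, noise_amp=0.8):
    """Turn a signed-distance grid (negative inside) into a soft cloud
    density: displace the zero crossing with noise, then smoothstep
    the density up inside the displaced surface. Illustrative only."""
    x, y, z = coords  # voxel-center position arrays (e.g. np.meshgrid)
    # Cheap pseudo-noise stand-in for production fractal noise.
    noise = noise_amp * np.sin(1.7 * x) * np.sin(2.3 * y) * np.sin(1.3 * z)
    displaced = sdf + noise
    # 0 outside, ramping to 1 once we are 'band' units inside.
    t = np.clip(-displaced / band, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep falloff
```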

fxg: What were the actual tools you used?

Miller: We used Houdini as our package of choice to contain our system. It’s really well-suited for that. We could manipulate the geometry if we needed to, blow it around et cetera. All of our volume modeling tools live inside Houdini. We could take any piece of geometry – point clouds, curves, whatever – and rasterize them into volumes and visualize them inside Houdini. But all the tools we were using, including our gas solver called Flux, were developed in-house. We would export those things in our proprietary VDB file format – the sparse file format – and then used our proprietary volume renderer – MFrender – to render them into the final images.

Kitty and Puss.

fxg: What were some of the other challenging shots in the sequence?

Miller: There’s a couple of beauty shots – classic wide establishing shots. For those, we were basically dealing with frustum buffers that were 900 x 500 on two of the axes, but 15,000 units deep – gigantic grids. On the face of it, getting really accurate multiple-scattering lighting in volumes that size would be really daunting. But we anticipated all that, so we did everything with optimization in mind, and it turned out the shots were not a big deal. We could turn them around on our renderfarm at least overnight and get an additional iteration in during the day.
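
The back-of-the-envelope arithmetic shows why the sparse format and the optimization work mattered: a dense 900 x 500 x 15,000 grid is 6.75 billion voxels, roughly 25 GiB for a single float32 density channel before any lighting data is added:

```python
# One establishing-shot frustum buffer, stored dense, one channel:
voxels = 900 * 500 * 15_000        # 6.75 billion voxels
dense_bytes = voxels * 4           # 4 bytes per float32 voxel
print(dense_bytes / 2**30)         # ~25.1 GiB for density alone
```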

There’s also a shot where Puss and Kitty have a chase scene through the clouds, and they dive under the clouds and tunnel around in them, kind of like Bugs Bunny in the old WB cartoons. There’s absolutely enormous gas sims going on in that sequence. And these all have to work in stereo, which actually helped give you a greater sense of the depth of the fog. It helped that we had a compositing-heavy pipeline to allow for big, deep compositing.

fxg: What was the compositing pipeline for shots like this?

Miller: Well, we have three different renderers, and possibly four with Mantra for effects, and the way we get all the render pipes to talk to each other is with the file format – a deep raster image. We have a tool that can take two or more deep raster images and cut a set of deep rasters out of another one. So we do our cut-outs before entering into compositing. We generate a whole bunch of deep images and select a combination of them, and then produce flat rasters which we then bring into Nuke to composite. We are investigating a true deep compositing pipeline but for Puss in Boots we decided not to quite switch over yet, but we’re definitely looking into it, especially with OpenEXR 2.0 coming out.
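
A deep raster stores multiple depth-ordered samples per pixel, and flattening it to a conventional image is a front-to-back ‘over’ composite. A toy single-pixel illustration follows; real deep image formats such as OpenEXR 2.0 also carry per-sample thickness and many more channels:

```python
def flatten_deep_pixel(samples):
    """Flatten one deep pixel, given as a list of (depth, rgb, alpha)
    samples, into a flat (rgb, alpha) by front-to-back 'over'
    compositing. A toy model of deep raster flattening."""
    out_rgb = [0.0, 0.0, 0.0]
    out_alpha = 0.0
    # Nearest samples composite first; later samples are increasingly
    # occluded by the alpha accumulated in front of them.
    for depth, rgb, alpha in sorted(samples, key=lambda s: s[0]):
        weight = (1.0 - out_alpha) * alpha
        out_rgb = [c + weight * s for c, s in zip(out_rgb, rgb)]
        out_alpha += weight
    return out_rgb, out_alpha

# Example: a semi-transparent cloud wisp in front of an opaque surface.
print(flatten_deep_pixel([
    (10.0, (0.9, 0.1, 0.1), 1.0),   # opaque red surface at depth 10
    (4.0,  (0.7, 0.7, 0.7), 0.35),  # thin cloud sample at depth 4
]))
```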

All clips and images © 2011 Dreamworks Animation LLC. All Rights Reserved.
