How to Paint a Digital Car

Cars are the backbone of the advertising industry, and car finishes are equally important in films. Just this month, the new releases Iron Man and Speed Racer both required photorealistic car finishes. In this special technical report, we explore how the best facilities in the world go about creating car finishes that are beyond ‘photoreal’ and are now ‘damn near perfect’.

For some aspects of computer graphics, the motion or animation is the key to realism. But for one class of CG objects it often comes down to just how real it looks standing still. Long, slow-moving shots over digital cars give the lighting team nowhere to hide. A car is an object we all know; we have all seen beautifully lit cars a hundred if not a thousand times in real ads. It is here that subtle highlights and broad reflections rule.

But the issues and lessons are not restricted to cars per se. Cars that transform into robots, superhero suits with car-like finishes, and the look of car paint work and metal paneling are the focus of our article this week. To find out the tricks of the trade we spoke to some real experts at Digital Domain, ILM, Pixar and The Orphanage. With complex car finishes required for blockbusters such as Speed Racer, Iron Man, Transformers, Cars and others, these teams have researched and developed some of the most accurate car shaders in both RenderMan and Mental Ray.

Speed Racer

We start by talking to two of the people behind the hero digital cars, the Mach 6 and others: Kim Libreri and Richard Morton at Digital Domain.


While Speed Racer used digital HDR ‘environment bubbles’ for the now famous Speed Racer environments, the actual car sequences were fairly standard 3D digital cars in digital environments. The ‘bubble’ technique was used in some of the races, such as in the work done by BUF and Sony Pictures Imageworks for the mountain rally race (see our separate Speed Racer story). The Thunderhead race, for example, is fantastic – but it still uses ‘traditional’ 3D animation techniques, with cars that needed to look completely real.

Even with the complex, saturated lighting environments, making the hero Mach 5 and Mach 6 cars look real was of paramount importance to the extremely experienced and award-winning team at Digital Domain.

Two prop cars (the Mach 5 and the Shooting Star) were made for the film before the actual designs of the cars were finalized, but these were only used in a few static scenes. As such, the filmmakers were not primarily focused on matching to a real car and intercutting the paint work and details of a digital car with a live-action car. Instead, they were focused on producing the best digital car they could. This is in stark contrast to the paint work in, say, Iron Man, where ‘car-like’ paint work was digitally created both to intercut with live action and to extend the real with digital in the same shot (see below).

The racing cars in Speed Racer, such as the Mach 6, however, had no real-world equivalents. To make the computer generated cars look real they needed to be built with correct suspension and dynamics. But to achieve plot points, the animators also had to be able to override these ‘correct’ dynamics and let the cars do the impossible – namely car-fu: a ballet of leaping and rolling car choreography.

Cars on the Thunderhead track

Yet as crazy as the animation was to become, the team at Digital Domain started with real world references. At the beginning of the process they got four different Corvettes. “We shot them in different lighting environments – it was most definitely the most comprehensive car study I have been involved in,” explains Richard Morton, who heads DD’s CGI car efforts and is a veteran of many digital cars from DD’s commercials division. “It was interesting to look at the data we got and compare that to the last ten years of our digital cars, ’cause up until now we’ve been doing it pretty much by eye”.

For Digital Domain, the secret behind digital cars and finished painted metal work is ray tracing (Whitted 1980). “It has to reflect the environment – that is the number one thing,” explains Morton. On Speed Racer the team used Mental Ray, “which is a very good ray tracer. All the shaders were completely physically accurate; they were calibrated to work in the same lighting conditions – direct lighting, indirect lighting, night lighting. It was interesting because some of the artists had not worked in a ray tracer before, with physically accurate lighting.” The team had to walk through the process of lighting a car much the way a DOP does, relying less on tricks and traditional 3D approaches. In the same way, the surrounding environments (such as the road surfaces) needed to be handled properly, as they would both reflect the cars and be reflected in them.

This led to a two-tiered approach. The first stage was to calibrate all the shaders and light the set. The second stage was beauty lighting. This is the stage “that makes it look not only real but beautiful,” says Morton, and here the team used techniques learned from their experience with car commercials. So the team used long digital lighting boxes, “which is what makes a car look so great, producing these long gradated reflections.”

In the races there were two very different environments for DD. The Thunderhead was “pretty much one type of look. It was old, it was rusty, it was very much just warm tones,” while in the later Grand Prix the team had a lot more variety of lighting setups, says Morton. “The approach I took was to work out the primary lighting and then look for some secondary complementary lighting that would bring out a cyan or green, but always with those lovely lighting box reflections.”

DD’s render pipeline is set up as full HDR lighting – not a partial solution that simulates HDRs with a few hundred point lights, but a true image-based lighting and global illumination model, built with help from Autodesk’s Mental Ray team. The standard Mental Ray process needed to be augmented for the DD system.

Accurate and artistic lighting was needed

Too often, lighters spend their time trying to make things look photo-real, says Kim Libreri. But with their fully physically based HDR lighting model, DD’s lighters get “photo-real out of the box,” he says. “It is then about getting lighters that know how to make things look beautiful. They should not have to fight for photo-real; they should not have to fight with ambient occlusion and spot lights – these old-world tricks – spending 3/4 of their time just trying to make it look real. That should be guaranteed…we want to make our lighters more like directors of photography and gaffers than traditional computer scientists,” relates Libreri.

Actually getting the sheet metal work correct is much, much harder than leather or rubber tires. “The only way to make car paint work look real,” explains Libreri, “is to model the clear coat.” As the name implies, clear coat is a final layer of clear paint that protects the actual paint below on any real car. And it is the ray tracing interaction and specular highlight properties of the clear coat that DD feels are vital for good CGI car paint work. The team used what they called a multilayered approach: treating the clear coat as a fairly thick layer of clear paint and bending rays as the light refracted through it. Beneath the clear coat, the team would then deal with the metallic flakes that are often mixed into real-world car paints. Libreri says they threw out previous models of how to solve these problems and focused on a new, very accurate solution.
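DD’s actual shaders are proprietary, but the general idea – a Fresnel-weighted clear-coat highlight sitting over a refracted, flake-bearing base layer – can be sketched in a few lines. The sketch below is purely illustrative: the IOR, gloss exponents, flake term and the simplistic lighting model are assumptions, not DD’s implementation.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def refract(d, n, eta):
    # Snell refraction of direction d (pointing into the surface) with relative IOR eta.
    cos_i = -dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n))

def schlick(cos_theta, f0):
    # Fresnel approximation: reflectance climbs towards 1 at grazing angles.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def clear_coat_paint(n, v, l, base_colour=(0.8, 0.05, 0.05),
                     coat_ior=1.5, base_gloss=30.0, flake_gloss=120.0):
    """Toy two-layer car paint: a sharp clear-coat lobe over a metallic base."""
    n, v, l = normalize(n), normalize(v), normalize(l)
    h = normalize(tuple(a + b for a, b in zip(v, l)))

    # 1. Clear coat: a dielectric, Fresnel-weighted, very tight highlight.
    coat_f = schlick(max(dot(v, h), 0.0), ((coat_ior - 1) / (coat_ior + 1)) ** 2)
    coat_spec = coat_f * max(dot(n, h), 0.0) ** 400.0

    # 2. Light not reflected by the coat refracts (bends) into the paint layer.
    eta = 1.0 / coat_ior
    v_in = refract(tuple(-c for c in v), n, eta) or tuple(-c for c in v)
    l_in = refract(tuple(-c for c in l), n, eta) or tuple(-c for c in l)
    h_in = normalize(tuple(-a - b for a, b in zip(v_in, l_in)))

    # 3. Base metallic layer: broad coloured lobe plus a tighter "flake" glint.
    ndl = max(-dot(n, l_in), 0.0)
    base_spec = max(dot(n, h_in), 0.0) ** base_gloss
    flake_spec = 0.3 * max(dot(n, h_in), 0.0) ** flake_gloss
    transmit = 1.0 - coat_f

    return tuple(transmit * (c * ndl * 0.5 + c * base_spec + flake_spec) + coat_spec
                 for c in base_colour)

if __name__ == "__main__":
    # Grazing view: the clear-coat Fresnel dominates; facing view: base colour dominates.
    print(clear_coat_paint((0, 0, 1), (0, 0.9, 0.2), (0, -0.5, 0.8)))
    print(clear_coat_paint((0, 0, 1), (0, 0.1, 1.0), (0, -0.5, 0.8)))
```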

In general terms, global illumination works by estimating how much light arrives from the lighting environment and the rest of the scene at each surface point. In large part, this is a matter of visibility: light arrives from the visible parts of the environment (lights, sky, sun, etc.) and is blocked by other parts of the scene, and to this is added an estimate of light reflected from other surfaces. That incoming light, combined with the surface’s response, becomes the colour of that pixel in the shot. It is the last part – the estimate of reflected light – that leads to modeling radiosity, as every object is also considered a bounce light source. The process is very computationally expensive. For further reading, see the book High Dynamic Range Imaging by Reinhard, Ward, Pattanaik and Debevec.
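A minimal sketch of that estimate might look like the following. The scene, the sky function and the one-bounce “reflected” value are stand-ins, and the surface normal is assumed to be +Z to keep it short; production renderers use far more sophisticated sampling and caching.

```python
import math, random

def sample_hemisphere():
    """Cosine-weighted direction on the upper hemisphere (normal assumed +Z)."""
    r1, r2 = random.random(), random.random()
    phi = 2.0 * math.pi * r1
    r = math.sqrt(r2)
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - r2))

def gather_light(point, trace, environment, samples=256):
    """Estimate the light arriving at 'point': visible environment plus an
    estimate of light reflected (bounced) from whatever surfaces block it."""
    total = 0.0
    for _ in range(samples):
        d = sample_hemisphere()
        hit = trace(point, d)              # None means the environment is visible
        if hit is None:
            total += environment(d)        # direct light from sky / lights
        else:
            total += hit.reflected         # light bounced off the blocking surface
    return total / samples                 # cosine weighting is in the sampling

class Hit:                                 # hypothetical hit record for the sketch
    def __init__(self, reflected):
        self.reflected = reflected

sky = lambda d: 1.0 if d[2] > 0.7 else 0.2               # bright overhead, dim horizon
scene = lambda p, d: Hit(0.3) if d[2] < 0.3 else None     # grazing rays hit grey ground

print(gather_light((0.0, 0.0, 0.0), scene, sky))
```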

Each shot could have up to 40 layers

For most shots there was a complex set of passes rendered out for compositing. From 3D, a file would be delivered with image pixel data (RGB) containing multiple renders for multiple layers (background, mid-ground, cars, etc.), along with the alphas, a z-depth pass and full 3D motion vectors for each pass. Motion blur needed to be dealt with creatively, allowing the filmmakers to blur different parts of the shot at different rates, so the final CG files that were passed to the compositors had multiple data sets.

Each image had about 40 buffers in Nuke, and a Nuke Gizmo called “the car relight tool” was built to allow the Nuke compositors to tweak the cars. The renderings from 3D were completely real, but then in Nuke the “Speed Racer” look was applied, allowing the Nuke artist to add selective motion blur and, for example, re-colour the reflections in the side of the cars, explains Morton.

The team decided to split out the primary key lights in a scene and the final gather – which is effectively the global illumination component – so a Nuke artist could balance key and fill light. There would also be extra passes, for say the light tunnel, so extra interesting lights could be added to the cars – technically unmotivated, but giving “extra bling and liquid reflection lines” on the surface of the car, says Libreri.
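DD’s relight Gizmo itself is in-house, but the underlying idea – keeping the key, fill/GI and reflection contributions as separate buffers and re-weighting them in comp – is simple to illustrate. The pass names, gains and purely additive recombination below are assumptions for the sketch, not the actual tool.

```python
import numpy as np

# Stand-in passes: in production these would come from the multi-layer renders.
h, w = 4, 6
key_pass  = np.random.rand(h, w, 3)   # primary key light contribution
gi_pass   = np.random.rand(h, w, 3)   # final gather / global illumination (fill)
refl_pass = np.random.rand(h, w, 3)   # extra "lighting box" reflection pass

def relight(key, gi, refl, key_gain=1.0, gi_gain=1.0, refl_gain=1.0, refl_tint=(1, 1, 1)):
    """Re-balance key vs. fill and tint the reflections, the way a comp artist
    might tweak a car without going back to 3D."""
    return key_gain * key + gi_gain * gi + refl_gain * refl * np.asarray(refl_tint)

# Push the key up, pull the fill down, and cool off the reflections slightly.
tweaked = relight(key_pass, gi_pass, refl_pass, key_gain=1.2, gi_gain=0.8,
                  refl_tint=(0.9, 0.95, 1.05))
print(tweaked.shape)
```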

The techniques that Digital Domain has developed – including the car racing motion simulators and the lighting techniques – are expected to be generalized and re-introduced into their commercials pipeline, feeding back into their general car commercial work.

ILM and Iron Man

While Iron Man is not a digital car film, it does deal with many of the same issues in its use of car-like, highly polished metallic paint work and trim. We spoke to Ben Snow, ILM’s VFX supervisor on Iron Man.

In ILM’s early tests for the film, the team created a “hot rod” look, not unlike “a gleaming red Ferrari”. This was around the time of Transformers at ILM, and although visual effects supervisor Ben Snow was not on that film himself, he found the same issues had already been looked at for its cars and trucks.

Iron Man uses car-style finishes

For Snow, clear coat was part of the original plan for Iron Man, as Transformers had “perfected a really nice clear coat”, he explained. It was used in the test, but for the majority of the actual metal suit in the film, Snow did not use clear coat. This was influenced by the need to match the practical on-set suit made by Stan Winston Productions. The real prop suit was not made with clear coat, but “with an auto paint finish. It was an extremely expensive red auto paint…but it did not have that clear coat effect,” Snow points out.

Clear coat does provide a little interaction between the top and bottom coats, but according to Snow it is the variation caused by the two different reflections and specular highlights that is really key, rather than refraction. While the concept of a clear coat was not used, the idea of having “two, possibly nearly always three” different specular highlight contributions was key to ILM achieving close-ups of the suit, explains Snow.

“We call them three lobes of specular; three different looks to the highlight. One might be soft and not quite as intense, the other might be sharp and give you a hot glint when it hits the sun. You layer those things together, and when you have a clear coat on top of the normal surface you might have two lots of that happening, so it can get pretty complicated.” Snow points out in our podcast interview that not all of them play all the time. “Attempting to model a specular highlight, you can really get a sense of (the complexity) from a BRDF.”
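The exact lobe shapes in ILM’s shaders aren’t published; the snippet below only illustrates the layering idea, using Blinn-Phong lobes as stand-ins with made-up exponents and intensities for the soft, glossy and “sun glint” contributions.

```python
import math

def blinn_phong_lobe(n_dot_h, exponent, intensity):
    """One specular lobe: the exponent controls tightness, intensity its strength."""
    return intensity * max(n_dot_h, 0.0) ** exponent

def three_lobe_specular(n_dot_h):
    """Layer a soft sheen, a mid gloss and a hot 'sun glint' into one highlight."""
    soft  = blinn_phong_lobe(n_dot_h, 12.0,   0.15)   # broad, not very intense
    gloss = blinn_phong_lobe(n_dot_h, 90.0,   0.45)   # the main paint highlight
    glint = blinn_phong_lobe(n_dot_h, 1200.0, 2.0)    # sharp, hot when it catches the sun
    return soft + gloss + glint

for angle in (0.0, 2.0, 5.0, 15.0, 45.0):             # degrees between N and H
    ndh = math.cos(math.radians(angle))
    print(f"{angle:5.1f} deg -> {three_lobe_specular(ndh):.4f}")
```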


ILM used RenderMan

A BRDF is a Bidirectional Reflectance Distribution Function. While a perfect mirror reflects light in a single direction and a completely diffuse surface scatters it evenly in every direction, most real surfaces lie somewhere in between, and a BRDF captures the real properties such as anisotropy and retroreflection. In other words, partial reflection that can be directionally dependent AND can bounce back at odd angles rather than simply mirroring along the angle of incidence – just as we see with metallic car paint, due to the metallic flakes in the coat under the clear coat. The BRDF describes how light bounces at all angles, and in that way it captures other very important aspects of car paint, such as the mirror-like quality non-mirror finishes show at oblique angles: things we would otherwise need to fake with tricks such as a Fresnel CG pass.
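That oblique-angle behaviour – the part a Fresnel pass normally fakes – is easy to see with the common Schlick approximation. The base reflectance below (0.04, typical of a dielectric clear coat) is an illustrative assumption, not a value measured for the film.

```python
import math

def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation: the reflectance of a non-mirror finish climbs
    towards a mirror-like 1.0 as the viewing angle becomes more oblique."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

for angle in (0, 30, 60, 75, 85, 89):                  # angle from the surface normal
    print(f"{angle:2d} deg -> {schlick_fresnel(math.cos(math.radians(angle))):.3f}")
```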

ILM sent paint chips from Stan Winston Studios to a lab for BRDF measurement. The real brushed metal proved too complex for the lab to provide a BRDF, but the red paint chips did provide a meaningful scan. The results were incorporated into a complex ILM in-house visualisation tool that graphically displayed the specular highlights.

Unlike Digital Domain, who used Mental Ray, ILM primarily used RenderMan. Snow points out that the brickmapping Pixar introduced to RenderMan helped a lot with render times. Brickmapping uses a voxel-type approach to store irradiance from photon scattering passes, increasing render performance dramatically. A brickmap is a “3D, sparse, mip-mapped octree,” which basically means it is a fast, memory-efficient 3D texture (as opposed to point clouds / kd-trees, which are fast but are memory hogs and hard to filter). The advantage of brickmaps over baked 2D textures is that they are independent of UVs and can be blurred spatially (3D blur). Using spatial blurring instead of ray tracing can get you killer speed wins (think rough reflections or SSS). (Source: CGTalk, 06-01-2006)

Pixar and Renderman

Pixar had to solve similar issues with Cars. Although the cars in the film were not photo-real, they still required ray tracing and extensions to RenderMan. Since the release of Cars, Pixar’s RenderMan has moved further toward technologies for implementing color bleeding effects in film production, from new methods like point-based color bleeding to optimizing traditional photon map techniques for production.
We asked Per Christensen at Pixar about the differences between ray tracing and photon tracing.

Christensen: In ray tracing, the rays are traced starting at the *camera*. In a pure ray-tracing renderer, one or more rays are traced from the camera through each pixel in the image that we’re rendering, and further rays are traced from the points that those rays hit (to compute reflections, shadows, etc.). The end result is a color value for each pixel in the image.

In photon mapping, the photons are traced from the *light sources*. In other words, the photons are shot in the opposite direction to rays from the camera. Each time a photon hits a diffuse surface it is stored in the photon map. The result is a photon map – a collection of little points (stored photons) in 3D space.

Other than that, there are many similarities between ray tracing and photon tracing. We actually use the same algorithms to compute the intersection of a photon with a surface as we use to compute the intersection of a ray with a surface. The main difference is the direction of the rays/photons and whether the goal is to compute pixels or 3D points (photons).

As DD mentioned, ray tracing alone only gets one so far in producing photo-real cars. To get the level of quality they wanted, DD moved to a full global illumination model that incorporated ray tracing, using Mental Ray. This is similar to what Pixar discovered: to get the realism and scene complexity needed for feature-film ray tracing, Pixar combined photon tracing and global illumination with the addition of the brickmaps mentioned above. Christensen describes ray tracing as global visibility, and lighting calculations that make use of this facility are called global illumination calculations.

There are a few global illumination methods:

Finite element methods

  • radiosity

Monte Carlo simulation

  • distribution ray tracing
  • path tracing
  • bi-directional path tracing
  • photon mapping

Pixar’s chosen method was to build on the Monte Carlo approach of photon mapping, which they then extended with a brickmap representation of the photon information.

Assume a complex scene with complex shapes such as a car with lots of other objects around it. The first step is photon tracing. The photons are stored in a collection of photon maps that together cover the entire scene. Pixar calls this collection of photon maps a photon atlas and importantly at this stage it is camera independent. In the second step, the irradiance is estimated at each photon position and for each photon map, a brickmap representation of the irradiance is constructed. Pixar calls this collection of irradiance brick maps an irradiance atlas. The last step is rendering using final gathering, with the irradiance atlas providing a rough estimate of the global illumination.

A standard photon mapping method is very general and flexible, but it is constrained by memory limits on the number of bounces that can be done – as Snow mentions when discussing ILM’s ray tracing in our fxpodcast.

Historically, the photon map method for the computation of global illumination was introduced by Henrik Wann Jensen. It is a three-pass method (a toy sketch follows the list below):

  • First, photons are emitted from the light sources, traced through the scene, and stored in a photon map at every diffuse surface they hit.
  • Next, the unorganized collection of stored photons is sorted into a kd-tree – a k-dimensional space-partitioning search structure developed by Jon Louis Bentley. Photon mapping gives each photon an “energy” attribute, and each time the photon collides with an object this attribute is stored in the photon map and then lowered. Once the energy of a photon falls below a pre-determined threshold, the photon stops reflecting.
  • Finally, the scene is rendered using final gathering: a single level of distribution ray tracing. The irradiance at the final gather ray hit points is estimated from the density and power of the nearest photons. Irradiance interpolation is also used to reduce the number of final gathers; this can speed up the final gathering by a factor of 5 to 7.
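As a toy illustration of those three passes, the sketch below traces photons from a single point light onto one diffuse floor plane, sorts them into a kd-tree (using scipy’s cKDTree as a stand-in for a production structure) and then estimates irradiance by density. The light power, photon counts and gather radius are arbitrary, and the bounce/energy attenuation step is omitted for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Pass 1: photon tracing.  A point light at (0, 0, 4) emits photons uniformly;
# each downward photon is stored where it hits the diffuse floor z = 0.
n_photons = 200_000
light_pos = np.array([0.0, 0.0, 4.0])
light_power = 100.0                                 # total watts, split across photons

dirs = rng.normal(size=(n_photons, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
dirs = dirs[dirs[:, 2] < 0.0]                       # keep only downward directions
t = -light_pos[2] / dirs[:, 2]                      # ray/plane intersection with z = 0
hits = light_pos + t[:, None] * dirs                # stored photon positions
photon_power = light_power / n_photons              # energy carried by each photon

# Pass 2: sort the unorganised photons into a kd-tree for fast nearest lookups.
photon_map = cKDTree(hits[:, :2])                   # the floor is 2D, so index (x, y)

# Pass 3 (the density estimate used by final gathering): irradiance at a point
# is the power of the nearby photons divided by the area they were gathered over.
def irradiance_estimate(point_xy, radius=0.25):
    idx = photon_map.query_ball_point(point_xy, radius)
    return len(idx) * photon_power / (np.pi * radius**2)

print(irradiance_estimate([0.0, 0.0]))   # brightest directly under the light
print(irradiance_estimate([3.0, 0.0]))   # dimmer further away
```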

Cars require reflections to look real, but at this stage, shots for complex film projects would still be prohibitively expensive to render. So to this photon mapping approach, Pixar added the brickmap. The brickmap is a tiled, 3D MIP map representation of surface and volume data, built on an adaptive octree with a brick in each node. A brick is a 3D generalization of a tile, with each brick holding 8^3 voxels of sparse irradiance values. This representation is designed to enable efficient caching. The data stored in the brickmaps are radiosity values (radiosity is watts per square meter). The brickmap approach can be used to provide an irradiance map of a car, and unlike pure ray tracing, which hits memory limits, it allows ray tracing and global illumination to be rendered in production scenes with the same complexity as, say, scanline rendering.
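Pixar’s actual brickmap format (tiled, mip-mapped, disk-cached) is far more involved; the sketch below only shows the core idea of an adaptive octree whose nodes each hold a small, sparse voxel “brick” of averaged irradiance, so a lookup can read coarse or fine data as needed. The class name, depths and values are illustrative.

```python
class BrickNode:
    """One octree node: a sparse 8x8x8 'brick' of irradiance over its bounding box."""
    BRICK = 8

    def __init__(self, lo, hi, depth=0, max_depth=3):
        self.lo, self.hi = lo, hi
        self.depth, self.max_depth = depth, max_depth
        self.voxels = {}            # sparse: only voxels that receive data are stored
        self.children = None        # created lazily (adaptive refinement)

    def _voxel_index(self, p):
        return tuple(min(self.BRICK - 1,
                         int(self.BRICK * (p[i] - self.lo[i]) / (self.hi[i] - self.lo[i])))
                     for i in range(3))

    def _child_for(self, p):
        if self.children is None:
            self.children = {}
        mid = tuple((a + b) / 2 for a, b in zip(self.lo, self.hi))
        key = tuple(int(p[i] >= mid[i]) for i in range(3))
        if key not in self.children:
            lo = tuple(self.lo[i] if k == 0 else mid[i] for i, k in enumerate(key))
            hi = tuple(mid[i] if k == 0 else self.hi[i] for i, k in enumerate(key))
            self.children[key] = BrickNode(lo, hi, self.depth + 1, self.max_depth)
        return self.children[key]

    def insert(self, p, irradiance):
        # Average into this level's voxel (a coarse mip level), then refine downwards.
        idx = self._voxel_index(p)
        count, value = self.voxels.get(idx, (0, 0.0))
        self.voxels[idx] = (count + 1, value + (irradiance - value) / (count + 1))
        if self.depth < self.max_depth:
            self._child_for(p).insert(p, irradiance)

    def lookup(self, p, depth):
        # Read at the requested level of detail; fall back to coarser data if absent.
        if depth > self.depth and self.children:
            mid = tuple((a + b) / 2 for a, b in zip(self.lo, self.hi))
            key = tuple(int(p[i] >= mid[i]) for i in range(3))
            if key in self.children:
                return self.children[key].lookup(p, depth)
        entry = self.voxels.get(self._voxel_index(p))
        return entry[1] if entry else 0.0


root = BrickNode((0, 0, 0), (1, 1, 1))
root.insert((0.2, 0.3, 0.7), 2.0)
root.insert((0.21, 0.31, 0.71), 4.0)
print(root.lookup((0.2, 0.3, 0.7), depth=0))   # coarse level: the averaged value 3.0
print(root.lookup((0.2, 0.3, 0.7), depth=3))   # fine level: the local value 2.0
```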

(per Per H. Christensen, Pixar, June 2005, DTU, and Dana Batali, Pixar)

The Orphanage and Iron Man

As stated above, Mental Ray can also address the issue of ray tracing complex scenes to give a car “liquid reflections” and accurate spec highlights. Another company that worked on Iron Man was The Orphanage in California. “We developed an Orphanage shader called Omega for Iron Man,” says CG Supervisor Jonathan Harman. “Omega is a Mental Ray shader that spits out a text file readable by a series of scripts we have written for Nuke. The Omega text file contains information about what passes have been output and allows a compositor to either use only the beauty render or rebuild the beauty out of all the passes, with all the proper math operations, to allow complete control in comp.”

“The shader we used had a two-layer reflection component – a blurry reflection and a sharper ‘clear coat pass’ – as well as a paint-only pass separated from the diffuse lighting pass. We rendered environment reflections separate from a ‘sun reflection pass’ and broke the sun out of HDR images to allow for manipulation of the sun’s reflection on Iron Man. We ran a separate smudge matte pass that allowed us to add additional blur to reflections where necessary in comp,” relates Harman.

Early on in the Iron Man project, Orphanage VFX supervisor Jonathan Rothbart wanted a way to “control the sun” in their reflection pass, so they removed the sun from their HDRIs and replaced it with a spec pass generated from a point light. The spec pass was then added on top of the “sunless” reflection pass and would be broken up by the smudge matte pass in comp. If they needed additional highlight reflections they would put HDRI bounce cards in the scene to add a rim or additional highlight kicks. The Orphanage shot HDRIs of various lights on set to generate maps for the bounce cards.
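The Orphanage’s own tools for this aren’t public, but the basic “remove the sun, keep the sky” step can be sketched as a simple luminance threshold on a lat-long HDRI, with the hole filled so the reflection pass doesn’t inherit a black disc. The threshold, fill strategy and the tiny stand-in image below are all assumptions.

```python
import numpy as np

def remove_sun(hdri, threshold=50.0):
    """Split a lat-long HDRI into a 'sunless' environment and a sun-only image.

    Pixels brighter than `threshold` (in luminance) are treated as the sun,
    clipped out of the environment and returned separately so the highlight
    can be re-created and art-directed as its own specular pass.
    """
    luminance = hdri @ np.array([0.2126, 0.7152, 0.0722])
    sun_mask = luminance > threshold
    sunless = hdri.copy()
    # Fill the hole with the median of the surrounding sky so the reflection
    # pass does not inherit a black disc where the sun used to be.
    sky_fill = np.median(hdri[~sun_mask], axis=0) if (~sun_mask).any() else 0.0
    sunless[sun_mask] = sky_fill
    sun_only = np.where(sun_mask[..., None], hdri, 0.0)
    return sunless, sun_only

# Tiny stand-in HDRI: a dim sky with one very hot pixel acting as the sun.
env = np.full((4, 8, 3), 0.6, dtype=np.float32)
env[1, 2] = [2000.0, 1800.0, 1500.0]
sky, sun = remove_sun(env)
print(sky[1, 2], sun[1, 2])
```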

“Since we were using Mental Ray’s ray-tracing capabilities we were able to use HDRIs as our reflection environment as well as for our image-based lighting,” says Harman. “We converted all texture maps to Mental Ray’s .map file format, which is a float, mip-mapped, tileable texture file format, in order to optimize memory usage and disk read times. We did not utilize any fancy BRDF measurement tools. We did, however, use an energy-conserving relationship between reflection and diffuse light, which aided in quickly getting a photo-real metal surface. This utilized a fresnel/facing-ratio color ramp to change the hue and intensity of reflections as the normals face away from camera. We used Mental Ray’s final gather technique as our global illumination method to calculate bounced diffuse light within our scenes.”
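The Omega shader itself is not public; the sketch below is only a generic illustration of what an energy-conserving reflection/diffuse split driven by a facing-ratio ramp can look like, with made-up ramp and tint values. Whatever weight goes to reflection is taken away from diffuse, so the surface never returns more light than it receives.

```python
def facing_ratio_ramp(facing, grazing_value=1.0, facing_value=0.04, tint_shift=(1.0, 1.05, 1.15)):
    """Reflection weight and a subtle hue shift driven by how much the normal
    faces the camera (facing = N.V: 1.0 head-on, 0.0 at a silhouette edge)."""
    fresnel = facing_value + (grazing_value - facing_value) * (1.0 - facing) ** 5
    tint = tuple(1.0 + (t - 1.0) * (1.0 - facing) for t in tint_shift)
    return fresnel, tint

def shade(facing, diffuse_colour, reflection_colour):
    """Energy-conserving mix: whatever goes to reflection is removed from diffuse."""
    refl_weight, tint = facing_ratio_ramp(facing)
    diff_weight = 1.0 - refl_weight
    return tuple(diff_weight * d + refl_weight * r * t
                 for d, r, t in zip(diffuse_colour, reflection_colour, tint))

for facing in (1.0, 0.7, 0.3, 0.05):       # head-on through to silhouette edge
    print(facing, shade(facing, (0.55, 0.06, 0.05), (0.9, 0.9, 0.9)))
```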

During look development, the team at The Orphanage shot turntables of the Winston Iron Man suit as well as suit reference images for all shots. They then set out to develop a shader that was optimized for anisotropic, or directionally dependent, multi-layer ray-traced reflections. According to Harman, there were a few requirements for this new “Omega Shader”.

First, render times needed to remain under one hour per frame for an average full-screen 2K, 3D motion-blurred Iron Man. “Due to the massive number of reflection rays, our shader utilized an in-house adaptive ray tracing technique developed by Will Anielewicz to intelligently reduce the render time,” Harman adds. This technique casts a minimum number of rays and then tests each new ray for how much it would change the final color. Once this threshold is reached, no more rays need to be cast (Adaptive Ray Tracing Omega).
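Will Anielewicz’s implementation is in-house, but the stated idea – cast a mandatory minimum batch of rays, then keep adding rays only while each new one still changes the running result by more than a threshold – can be sketched as below. The sample counts, threshold and stand-in ray functions are assumptions.

```python
import random

def adaptive_glossy_reflection(sample_ray, min_rays=8, max_rays=256, threshold=0.002):
    """Accumulate reflection samples until a new ray stops changing the result.

    `sample_ray()` stands in for tracing one jittered reflection ray and
    returning its radiance; in the real shader this would be a ray cast.
    """
    total, count, mean = 0.0, 0, 0.0
    while count < max_rays:
        radiance = sample_ray()
        count += 1
        total += radiance
        new_mean = total / count
        change = abs(new_mean - mean)
        mean = new_mean
        # After the mandatory minimum, stop as soon as one more ray barely matters.
        if count >= min_rays and change < threshold:
            break
    return mean, count

# Stand-ins: a smooth reflection converges quickly, a noisy one needs more rays.
smooth = lambda: 0.5 + random.uniform(-0.01, 0.01)
noisy  = lambda: random.choice([0.0, 4.0]) * random.random()
print(adaptive_glossy_reflection(smooth))
print(adaptive_glossy_reflection(noisy))
```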

Next, the shader needed to be “deep raster” so the scene could be computed once and then all the necessary passes could be split out (MR Omega). The passes used were:

  • Beauty
  • Paint
  • Lighting (diffuse)
  • Self Shadows
  • Refraction
  • Motion Vector (optional)
  • Ambient Occlusion
  • GI
  • Smudge Maps
  • Surface Normals
  • Fresnel
  • Dirt
  • RGB XYZ map
  • Tinted Blurred Reflection
  • Clear Sharp Reflection
  • Sun “spec pass”
  • Four “special passes” for mattes, each with RGBA = 16 mattes

The team needed the ability to easily read in all passes and reconstruct the beauty pass within Nuke for complete control (Nuke O-pass). This multi-pass approach is a popular use of Nuke and one of the reasons the industry has started moving to Nuke.
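The actual Omega text file and the Nuke scripts that consume it belong to The Orphanage; the sketch below only illustrates the general idea of rebuilding a beauty image from a subset of the passes listed above. The recombination math is a plausible stand-in, not Omega’s actual operations.

```python
import numpy as np

def rebuild_beauty(p):
    """Plausible (not Omega's actual) recombination of some of the passes above:
    lighting is modulated by the paint, shadows and occlusion, then the two
    reflection layers and the sun 'spec pass' are added on top."""
    lit = p["paint"] * (p["lighting"] * p["self_shadows"] + p["gi"]) * p["ambient_occlusion"]
    refl = p["tinted_blurred_reflection"] + p["clear_sharp_reflection"] * p["fresnel"]
    return lit + refl * p["smudge"] + p["sun_spec"]

h, w = 4, 6
rng = np.random.default_rng(0)
passes = {name: rng.random((h, w, 3)).astype(np.float32) for name in
          ["paint", "lighting", "self_shadows", "gi", "ambient_occlusion",
           "tinted_blurred_reflection", "clear_sharp_reflection",
           "fresnel", "smudge", "sun_spec"]}

beauty = rebuild_beauty(passes)
print(beauty.shape)

# In comp, any term can now be graded independently before the sum, e.g. cooling
# the blurred reflections without touching the paint or the sun glint.
passes["tinted_blurred_reflection"] *= np.array([0.9, 0.95, 1.05], dtype=np.float32)
print(rebuild_beauty(passes).mean())
```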

Clearly the various solutions range from highly accurate, full global illumination to more creative, pass-based solutions. Either way, the industry is now expected to be able to achieve a level of realism that means a real reduction in the amount of time spent trying to light and film real cars in real studios, and a move to CG cars in both CG and real environments…even if those car finishes are not always used on cars!