Day 4: Siggraph and Out

Day 4 of Siggraph and the last of our reports. While there are some good sessions planned for Thursday of the conference, it is also the day the show finishes early and we will all be traveling back to our respective countries, so this will be our final day-by-day report. But watch for much more from the various interviews we have recorded here in the coming week or so at fxguide.com.

 

Mixed Grill

This talk began with Animal Logic CG supervisor Aidan Sarsfield discussing his studio’s pipeline development on the full-length animated feature Legend of the Guardians: The Owls of Ga’Hoole. Sarsfield walked us through the atomic asset concepts of Animal’s production methodology and gave a demo of their actual production management system. One interesting story was that one night someone erroneously set the entire film to render, instead of just one scene, which brought the render farm to a grinding halt. But Animal would also do this on purpose over scheduled holiday breaks to test the system and to do QA.

This session also featured great presentations from Andrea Arghinenti, senior technical artist at Guerrilla Games, on the facial animation workflow for their Killzone 3 title, from Ivan Neulander of Rhythm & Hues on reducing noise during ray gathering operations, and a further talk on high resolution relighting of existing buildings from photographs.

 

 

Directing Destruction

MPC’s Ben Cole began this talk with a look at Kali, his studio’s finite element-based destruction tool, which was used heavily on Sucker Punch and on the yacht destruction scene in X-Men: First Class. Developed in collaboration with Pixelux, Kali replaced MPC’s existing rigid body system. They found that, generally, rigid body pipelines became very complicated, that you have to pre-fracture geo, and that things would break almost instantaneously, with no sense of effort. With Kali, MPC relied on tetrahedral sims, which gave them appropriate fracturing, non-adaptive tessellation, a range of material properties and collision detection.

Modelers would create the relevant geo in Maya, but then TDs would wrap the geo in cages and use tetra meshes in the sim with Kali. The result was a huge cut in render times (1 to 2 hours for most sims and 20 hours for more complex ones). We saw some great scenes from the Sucker Punch pagoda sequence, where Baby Doll and the giant samurai tear up the building until it ultimately collapses. One shot has Baby Doll crashing through a stone floor – for this, artists could treat all the tiles as one big slab with Kali, rather than as individual stones as they may have had to do before. It also meant different angles of that scene could rely on the same sim.

Unfortunately we couldn’t stay for the other talks in this session – Digital Domain on the character destruction effects in TRON: Legacy (presumably the great de-rezz work) and Disney’s hair sims and dam break effects for Tangled – but they sounded incredibly interesting.

 

 

Presentations on the Exhibition Floor

Although many of the Talks are in dedicated rooms and halls at SIGGRAPH, a number of shorter but still in-depth presentations were also conducted at booths on the Exhibition Show Floor. Companies like NVIDIA, Autodesk, Pixar, MPC and Digital Domain all hosted talks at their booths. Today we saw DD present on the use of Mari to help create their Jotunheim environments for Thor, and a great breakdown of the Platige film Paths of Hate at the Autodesk booth by director Damian Nenow, who used mostly 3ds Max. We’ll soon have an interview with Damian about this amazing textural film on fxguide.

 

Tokyo Race Lighting for ‘Cars 2’

Pixar’s Mitchell Kopelman spoke twice on Tuesday, once at the 1000 Points of Light talk and then at a dedicated Tokyo Race Lighting Studio Session about the special requirements of lighting Tokyo for Cars 2. His session covered how to balance the creative needs of the scene with the reality of limited rendering resources. One might think that a company such as Pixar would have enough render capacity that they wouldn’t have to worry about such things. But the reality is that even at their large scale — and maybe even because of it — they need to take those things into consideration.

The race sequence had an incredibly large set, surrounded by lights. It was a daunting task, so he broke the problem up into segments so that he could more effectively find solutions. His basic approach was to look at several areas separately: diffuse lighting, reflections, organization, and optimization.

Copyright Disney/Pixar 2011

For the diffuse lighting, there were three main areas of concern: track lights, illuminated signs, and headlights. For the track lights, they ended up using line lights, which are paths evenly illuminated over their entire length. They basically draw a curve for the light, which can be bent to fit the path of the course, and these lights provided the main diffuse light source. The lights are very efficient, but shadowing is a very expensive part of the equation. Initial ray tracing tests took well over 100 hours per frame, which definitely ruled out that solution. While drawn as a single line in the 3D scene, the light was actually segmented to represent the many individual overhead lights. He then tried a ray trace shadow for only the segments, but this still took too long to render even with caching.
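
As a rough sketch of how a segmented line light like this might be evaluated (purely illustrative, not Pixar’s implementation; all names below are hypothetical), the curve can be sampled into short segments whose point samples each contribute Lambertian diffuse light:

```python
import numpy as np

def sample_polyline(points, samples_per_segment=4):
    """Approximate a line light by point samples spaced along its polyline."""
    samples = []
    for a, b in zip(points[:-1], points[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            samples.append(a + t * (b - a))
    return np.array(samples)

def diffuse_from_line_light(p, n, light_samples, intensity=1.0):
    """Sum Lambertian contributions from every sample, with inverse-square falloff."""
    total = 0.0
    for lp in light_samples:
        to_light = lp - p
        dist2 = np.dot(to_light, to_light)
        wi = to_light / np.sqrt(dist2)
        total += intensity * max(np.dot(n, wi), 0.0) / dist2
    return total / len(light_samples)

# Example: a straight run of overhead track lighting above a point on the road.
curve = np.array([[0.0, 5.0, 0.0], [10.0, 5.0, 0.0], [20.0, 5.0, 0.0]])
samples = sample_polyline(curve)
print(diffuse_from_line_light(np.array([5.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), samples))
```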

The next approach was to try simple perspective shadow maps, placing one on each section of the track. But with hundreds of segments, computing the textures still took far too long. They ended up placing one shadow map per block, one for the inside of the track and one for the outside on each side. This worked great for the road and buildings, but the car shadows looked horrible.

In the end, they built a small model they called a gizmo, which was placed under the car to create a shadow. The gizmo was attached at one end to the car, and the other end was computed from the nearest point on the line light as the car traveled along the track. So as the car travels down the road, the gizmo travels with it and is dynamically adjusted based upon the light position. This also has the side benefit that when the cars move side to side within the track, the shadow angle changes as well.
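
A minimal sketch of the nearest-point lookup such a gizmo depends on (hypothetical names, not the production code): given the car’s position, find the closest point on the line light’s polyline, then orient the shadow caster from that light position toward the car.

```python
import numpy as np

def closest_point_on_polyline(p, points):
    """Return the nearest point to p on the polyline defined by 'points'."""
    best, best_d2 = None, float("inf")
    for a, b in zip(points[:-1], points[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + t * ab
        d2 = np.dot(p - q, p - q)
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best

def place_shadow_gizmo(car_pos, line_light_points):
    """Anchor the gizmo at the car and aim it away from the nearest point on the line light."""
    light_pos = closest_point_on_polyline(car_pos, line_light_points)
    direction = car_pos - light_pos
    direction = direction / np.linalg.norm(direction)
    return car_pos, direction  # origin of the gizmo and the axis its shadow is cast along
```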

For the self-illuminated sign diffuse light, Kopelman started attacking the problem by using one area light with a texture for each sign. He set his first render off on Friday thinking it was a great solution, but when he got in on Monday the frame had still not come back from the render. So he then tried a simple point light on each sign, giving each a random color to provide some variation. But even though it’s the fastest light, and even with no shadow, it was still “ridiculously expensive” to render. The bottom line was that there were so many signs, and therefore so many lights, that it simply overloaded the renderer.

So he ended up doing something he called “completely counter-intuitive”. He attempted ray tracing, his thinking being that because it’s only a single light, even with the increased rendering time it would be more efficient. But his first test with straight-up ray tracing was even “scarier than before.” He still felt it was the solution, so he worked further on the ray tracing and relied on what they call “point-based irradiance lights,” which effectively bake out a point cloud of the casters of diffuse illumination. In the case of the Tokyo Race, it was a single light and all the signs, which was very simple geometry and very inexpensive to render. Once that was done, they simply did standard irradiance lighting, but instead of getting color from the shader, they got it from the baked point cloud.
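
As a toy illustration of the point-based idea (not RenderMan’s actual point-based machinery, and every name here is made up): the emissive signs are baked into a cloud of points carrying position, color and area, and shading then gathers diffuse illumination from that cloud rather than ray tracing the sign shaders at render time.

```python
import numpy as np

def bake_sign_points(sign_quads, samples_per_quad=16):
    """Bake each emissive sign quad into points storing position, color and area.

    sign_quads: list of (corners, color), corners being a (4, 3) array of quad corners.
    """
    cloud = []
    n = int(np.sqrt(samples_per_quad))
    for corners, color in sign_quads:
        o, u, v = corners[0], corners[1] - corners[0], corners[3] - corners[0]
        point_area = np.linalg.norm(np.cross(u, v)) / (n * n)
        for i in range(n):
            for j in range(n):
                pos = o + (i + 0.5) / n * u + (j + 0.5) / n * v
                cloud.append((pos, np.asarray(color, dtype=float), point_area))
    return cloud

def gather_irradiance(p, normal, cloud):
    """Approximate diffuse irradiance at p by summing contributions from the baked points."""
    irradiance = np.zeros(3)
    for pos, color, area in cloud:
        to_point = pos - p
        dist2 = np.dot(to_point, to_point)
        wi = to_point / np.sqrt(dist2)
        irradiance += color * area * max(np.dot(normal, wi), 0.0) / (np.pi * dist2)
    return irradiance
```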

For the headlights, each of the 12 cars needed a distinct setup because they were the characters of the film and needed to convey their personality. The setups themselves were relatively straightforward: point lights with cutters and cookies. What was expensive, however, were the shadow maps: with 12 racers it ended up being 24 shadow maps, each needing to cover the other cars and the entire set, since they didn’t know where the lights were going to hit on the set.

For the initial tests, the renders took an extremely long time due to the size of the set. However, the solution was quite simple. Since each set of headlights had a finite distance the light actually traveled, they set clipping planes for the lights just past the far end of that travel distance. This way, it didn’t need to render the geometry outside the influence of the lights, which turned out to be ignoring something like 18 of the 20 blocks of the set. This took the render time for the shadow maps down to about 45 minutes from the initial tests of 12 to 15 hours.
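
The speed-up boils down to culling everything beyond the headlight’s reach before the shadow map is rendered. A rough sketch of that kind of distance test, under assumed names:

```python
import numpy as np

def cull_for_headlight_shadow_map(objects, light_pos, light_dir, max_travel):
    """Keep only objects whose bounding sphere lies within the headlight's travel distance.

    objects: list of (center, radius); max_travel: far clip placed just past the beam's reach.
    """
    light_dir = light_dir / np.linalg.norm(light_dir)
    kept = []
    for center, radius in objects:
        depth = np.dot(center - light_pos, light_dir)  # distance along the light's axis
        if -radius <= depth <= max_travel + radius:    # inside the near/far clip range
            kept.append((center, radius))
    return kept

# Only the kept objects would be submitted to the shadow map render for this headlight,
# so most of the 20 blocks of the set never get touched.
```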

A big factor in speeding up the lighting workflow had to do with organization and optimization. They used a technique called “light rigs”. The rigs are a Python plugin for Pixar’s lighting tool, used to save and share snippets of lighting rigs between setups; one could think of it as a kind of copy and paste for lighting. This technique was developed around the time of the original Cars. They realized on jobs that they shared so many lighting parts and techniques that it made sense to come up with a way for one lighter to export something and another lighter to bring it into a different scene.

Another critical part of the light rig is an awareness of Pixar’s naming conventions, which facilitates copying by having the lighting rigs match the modelling naming. Thus when copying a rig from one character to another, as long as the conventions are the same, the rig will be updated to have relationships with the new character. It will update the illumination relationships, fix the shadow maps and link as much as possible to make it an automated process. The Tokyo pit areas were a great example of this in action: there were 12 pit areas, each needing to be lit in a very similar way. It was quite straightforward to create one rig and then quickly copy it to the other 11 pits as a starting point. This proved invaluable, as the actual shape of the master track for this great race sequence changed four times during production. Without this level of careful planning, procedural lighting and careful naming, Kopelman joked, the film would never have been done.
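
A simple sketch of how a convention-based rig copy could work (purely illustrative; this is not Pixar’s plugin and the data layout is assumed): because light names and links embed the asset they belong to, retargeting a rig to a new asset is largely a matter of consistent renaming.

```python
def copy_light_rig(rig, source_asset, target_asset):
    """Copy a light rig, rewriting every name and link that follows the naming convention.

    rig: {light_name: {"links": [...], "shadow_map": "...", ...}},
    with names like "pit01_keyLight" that embed the asset they belong to.
    """
    new_rig = {}
    for light_name, params in rig.items():
        new_params = dict(params)
        new_params["links"] = [link.replace(source_asset, target_asset)
                               for link in params.get("links", [])]
        new_params["shadow_map"] = params.get("shadow_map", "").replace(source_asset, target_asset)
        new_rig[light_name.replace(source_asset, target_asset)] = new_params
    return new_rig

# e.g. copy_light_rig(pit_rigs["pit01"], "pit01", "pit02") seeds the next pit's lighting setup.
```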

 

Tech paper:

Non-Rigid Dense Correspondence with Applications for Image Enhancement

This was just one of the Technical Papers we were interested in seeing at the conference, from Hebrew University and Adobe Systems. The paper presents an image enhancement method, a way to essentially ‘recover reliable local sets of dense correspondences’ between two similar, but not identical, images by analyzing shared regions of the images, which may have different lighting and backgrounds. The correspondences are then used to adjust the tonal characteristics of a source image to match a reference, to transfer a known mask to a new image, and for kernel estimation in image deblurring.
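
The paper’s method is far more sophisticated than this, but as a plain illustration of the general idea of tonal transfer between corresponding regions (our own sketch, not the authors’ algorithm, with hypothetical names), a per-channel mean and variance match over a shared-region mask looks roughly like this:

```python
import numpy as np

def match_tone_in_region(source, reference, src_mask, ref_mask):
    """Scale and shift the source so its shared region matches the reference region's statistics.

    source, reference: float images of shape (H, W, 3) in [0, 1];
    src_mask, ref_mask: boolean masks marking the shared (corresponding) region in each image.
    """
    out = np.empty_like(source)
    for c in range(3):
        s = source[..., c][src_mask]
        r = reference[..., c][ref_mask]
        gain = r.std() / max(s.std(), 1e-6)
        out[..., c] = (source[..., c] - s.mean()) * gain + r.mean()
    return np.clip(out, 0.0, 1.0)
```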

The short demo video below is definitely worth checking out: