SIGGRAPH 2010 Monday

Siggraph starts in earnest today, Monday, in LA. Avatar dominates the production sessions, but Siggraph delivered as usual with a stunning array of technical papers and creative sessions. In addition to the Avatar highlights, the Computational Photography session provided a timely insight into the cutting edge of photography. The parties also hit their stride, with the huge Foundry User party and the Mental Images User group event continuing into the night.

Monday: LA 2010

With so many sessions we can only offer a personal set of highlights, but the first full day was extremely successful.

The trade show opens tomorrow (Tuesday) – as does the job fair. Highlights scheduled for tomorrow include a major session on the new Tron Legacy with a panel consisting of Joseph Kosinski (director), Jeffrey Silver (producer) and effects & animation leads Eric Barba and Steve Preeg.

Computational Photography

Check back later in the week for a special feature on the Frankencamera Computational Photography tech.

The morning session on computational photography was remarkable; four papers were presented.
Later this week we will be doing a special profile on computational photography. We were so impressed with this area that we think it deserves a special focused feature of its own.


Avatar

In another packed session on Monday, Weta Digital presented ‘All About Avatar’. VFX supe Stephen Rosenbaum started with some set pics from the virtual production stage where James Cameron filmed the actors (and sometimes vehicles) in mocap suits and facial capture rigs with a virtual camera, before delivering a template to Weta. One cool piece of background footage showed Sam Worthington being pushed in an office chair for the shot of his character going down the river – a simple way to get Sam’s body motion approximated.


Weta lighter and shader writer Kevin Smith then discussed the approach to lighting the dense jungles in Avatar, with artists moving away from shadow maps to pre-computed shadows for things like trees. In some scenes like a tracking shot of a flying craft going over a waterfall, there could be 6000 objects and 60 million polygons, all of which had to be ray-traced. Weta partnered with Nvidia to develop PantaRay, effectively an occlusion caching engine that works with point clouds.
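The core idea of an occlusion caching engine can be sketched in a few lines. The function below is purely illustrative (the names and structure are our own, not Weta's PantaRay API): it approximates how occluded each shading point is by accumulating the contribution of nearby point-cloud samples, treating each sample as a small disc, and caches the result so the renderer never has to re-trace those rays.

```python
import math

def precompute_occlusion(shade_points, cloud_points, radius=0.1):
    """Toy occlusion cache: for each shading point, accumulate an
    approximate solid-angle contribution from every point-cloud sample
    (each treated as a disc of the given radius). Nearer samples
    occlude more; the result is clamped to 1.0 (fully occluded)."""
    cache = {}
    disc_area = math.pi * radius * radius
    for i, (px, py, pz) in enumerate(shade_points):
        occlusion = 0.0
        for (qx, qy, qz) in cloud_points:
            dx, dy, dz = qx - px, qy - py, qz - pz
            d2 = dx * dx + dy * dy + dz * dz
            if d2 < 1e-12:
                continue  # skip a cloud sample coincident with the point
            # inverse-square style falloff of the disc's solid angle
            occlusion += disc_area / (disc_area + math.pi * d2)
        cache[i] = min(occlusion, 1.0)
    return cache
```

The payoff of this pattern is that the expensive pass runs once over the point cloud and the renderer then does cheap cache lookups per shading sample – the same trade Weta made to cope with 60-million-polygon scenes.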
Other tools demonstrated included Ensemble, a tool to extract objects from Maya and visualise geometry efficiently, and xPalatte, which was used to store palettes live and then assign them at render time. Weta developed a data cache-oriented pipeline for the show, sometimes generating 50 terabytes a week, and at peak times up to 2 TB an hour.
The studio chewed through 2 petabytes for the entire film and had 1 PB online by the end.

Antoine Bouthors took us through Weta’s volume rendering tools, used for things like God rays seen in the jungle, waterfalls, clouds and even muzzle flashes. A volumetric renderer was built in Maya, allowing artists to tweak shapes in real time. For shading, Weta used pre-computed irradiance maps and developed a new RenderMan plugin to write volumetric deep shadow maps.

Matthew Welford and Peter Hilman finished up with a look at the deep compositing techniques developed for Avatar to deal with the number of shots and the stereoscopic elements. Overall, there were 2,300 shots and 50,000 rendered elements to deal with. Testing started in 2006, when it was concluded that a single flowpath and a stereo image format were required. Ultimately, a stereo EXR (SXR) format was proposed, which has since made its way into Nuke, RV, Silhouette, SGO Mistika and Fusion.
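The essence of deep compositing is that each pixel stores multiple samples at different depths rather than a single flattened colour, so elements can be merged correctly without holdout mattes. A minimal sketch of flattening such a deep pixel (our own toy version, not Weta's format, which also handles overlapping volumetric samples) looks like this:

```python
def composite_deep_pixel(samples):
    """Flatten a deep pixel: sort (depth, color, alpha) samples
    front to back, then apply the 'over' operator in order.
    Whatever is already accumulated in front attenuates samples
    that lie behind it."""
    color_out, alpha_out = 0.0, 0.0
    for depth, color, alpha in sorted(samples):
        color_out += (1.0 - alpha_out) * color * alpha
        alpha_out += (1.0 - alpha_out) * alpha
    return color_out, alpha_out
```

Because the samples carry their own depths, two deep images can be merged simply by pooling their sample lists before flattening – no re-rendering and no edge artifacts, which is exactly what made the approach attractive for 2,300 stereo shots.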

The Afternoon Avatar Session

Joe Letteri of Weta Digital led a summary overview of Avatar, starting with the earliest discussions Peter Jackson had with James Cameron around the time of King Kong. The team explored 3D and discussed using it on Kong, but it was not possible given the time frame. This led to Cameron approaching them with what Letteri called the best treatment document “he had ever read” for Avatar.

From the initial discussions to tomorrow, when Weta Digital expects to deliver the last DVD extra shot, almost exactly five years will have passed.


From the outset, Cameron wanted to be working with actors. In Letteri’s opinion, Cameron lays out a result and some key aspects, but the actual route is not so important to him, so long as the results are delivered.

Letteri expressed the opinion that “really there is no more post production”, and that facial rigs needed to be working by day one, as so much now needs to be worked out before principal photography. One key idea was a helmet camera for facial capture. After a meeting with Cameron in New Zealand, the team had just six weeks to test whether they could produce a real-time facial system. As it turned out, “we were still writing the code as Jim was having lunch”. As it happened, Andy Serkis, the famous motion capture actor who had worked with Weta on Kong and the Lord of the Rings films, was also visiting that day. The initial Weta test was to have a complete real-time facial capture system controlling a digital face. Weta had hired another actor for the Cameron test, so, Letteri joked, Serkis was able to join the presentation and see his own face controlled by another actor.

The first full Avatar shot was shown, along with the various stages of its development. The shot actually took a year and a half to complete, “as we knew what was coming and we needed a pipeline that could cope with the last 6 week crunch”, he explained. A theme of the talk was Letteri’s intent not to ‘fake’ the visual effects with the normal bank of tricks and techniques, and instead to light, model and animate as accurately as possible. This was illustrated by their strong use of ambient spherical harmonics, which provide “a really nice directional ambient light”, explained Letteri.
He explained that by decoupling the high and low frequencies of the light, the team achieved a realism in the final result – using almost normal ‘on set’ style lighting techniques in the digital domain.
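The general technique behind that “directional ambient light” can be sketched briefly. This is a minimal illustration of standard low-order spherical harmonic lighting, not Weta's implementation: the low-frequency environment is projected onto the first two SH bands (four coefficients), which evaluate to a soft ambient term that still varies with the surface normal; sharp high-frequency lights are handled separately, as conventional direct lights.

```python
import math

# Real spherical harmonic basis constants for bands 0 and 1
C0 = 0.5 * math.sqrt(1.0 / math.pi)   # Y_0^0 (constant term)
C1 = 0.5 * math.sqrt(3.0 / math.pi)   # Y_1^{-1}, Y_1^0, Y_1^1 (linear terms)

def sh_ambient(coeffs, normal):
    """Evaluate a low-frequency ambient term from four SH lighting
    coefficients (band 0 plus the three band-1 terms) for a unit
    surface normal. Band 0 gives a flat ambient level; band 1 adds
    the gentle directionality that distinguishes 'up' from 'down'."""
    x, y, z = normal
    l00, l1m1, l10, l11 = coeffs
    return C0 * l00 + C1 * (l1m1 * y + l10 * z + l11 * x)
```

With only the band-0 coefficient set, every normal receives the same flat ambient; adding band-1 terms makes, say, sky-facing surfaces brighter than ground-facing ones – the soft directional quality Letteri described.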

He also explained the strong use and benefits of depth-based compositing (similar to deep colour) to achieve the extremely complex shots required, and to account for the complex volumetric compositing needed in the film. Weta Digital developed a lot of new tools, stated Letteri – so much so that the Weta team started re-making one shot, a close-up of a small bomb falling and catching on the side of one of the ships. When asked by Cameron why they were re-doing this shot, the Weta team answered confidently that “it would look better” than the already 100% studio-filmed version – and sure enough it did. The best explanation Letteri could offer was that the spherical harmonic lighting approach produced more accurate exterior lighting than the professional studio lighting that had been used to film the original.


Letteri went on to discuss the use of Maya particles, Naiad and Weta’s in-house Deluge water system.

There is no doubt that the work of Weta Digital on Avatar is remarkable and Siggraph is a great opportunity to get an insight into the vast amount of work that was required to achieve it.

Volumes and Precipitation – Monday afternoon at SIGGRAPH

The ‘Volumes and Precipitation’ talk at SIGGRAPH on Monday started with Weta Digital’s Christoph Sprenger and Diego Trazzi talking about the amazing water effects in ‘Avatar’ – specifically Jake in the river, the dream paint, Neytiri drinking and the ocean by the cliff shot. Weta worked with Naiad for the water shots. We saw tons of test simulations and an insight into Weta’s Deluge and Sashimi tools used for rendering.

Ian Coony from Walt Disney Animation Studios then demonstrated the multiple kinds of snow his studio had to create for ‘Prep and Landing’, an animated holiday special. They started with 40 different snow sims and then developed a snow pipeline based on Python, MEL and Houdini scripts for light snow, blizzards, flurries and even a snow globe, rendering mostly at 1K on a dedicated GPU render farm.

Dr. Jerry Tessendorf from Rhythm & Hues discussed the ‘tank drop’ scene from ‘The A-Team’ which sees a C-130, predator drones and a tank flying through a 3D cloud set. Cloud detail ranged from 1cm to 20km, which meant that basically six orders of magnitude needed to be represented. Cumulo, a Felt script with a Houdini SOP UI, was used to achieve the right clustering, layers, smooth valleys, advected material and variation. We saw an early test of semi-Lagrangian mapping (SEMAL) and a one minute breakdown as the tank drops. Dr. Tessendorf also hinted that if you look closely in the clouds, Rhythm has hidden something, but he didn’t say what.

This session was rounded out by an in-depth talk on single scattering in heterogeneous participating media, essentially a proposed algorithm to compute light interaction based on real world environments.

CS 292: The Lost Lectures

A 30 year trip down computer graphics memory lane took place on Monday at SIGGRAPH in ‘CS 292: The Lost Lectures’, a Q&A with Pixar president Ed Catmull conducted by Richard Chuang. The session featured excerpts from a series of lectures given by Catmull at the University of California in 1980, while he was also working for Lucasfilm. As it turns out, Chuang was able to see the lectures via video feed at the time and very soon after went on to found PDI (of course now one of Pixar’s biggest competitors). Other key lecturers at the time included Jim Blinn, Alvy Ray Smith and Loren Carpenter.


The first lecture excerpt we saw was about polygons, which Catmull described in 1980 as ‘just a list of vertices’ and then on stage as really just the same thing. He also noted that whereas 30 years ago they’d talk about thousands of polygons, now they would just add more zeroes. In fact, that was the fascinating thing about the lectures – much of what Catmull candidly laid out for his students in 1980 still stands. Some of his telling comments in the lectures included:

– On the nature of his work: ‘Computer graphics is the science of tricks. You find yourself in a tug of war between complexity and being able to do it simply’
– On 16 bit address space: ‘I hate 16 bit address space’ (he said he still does, and that he’s not that fond of 32 bit either)
– On hidden surface algorithms: ‘You need to work out which polygons should be visible. It’s the best way to get rid of things that you can’t see’
– On CPU versus GPU rendering: ‘All we need really is to add a processor to this graphics device and you’ve got a computer – it would be nice if it were programmable!’
– On why animation is so hard: ‘For rigid objects, animation is a well-defined task, but for non-rigid body motion, it’s really hard to simulate a person doing something’ (it turns out Catmull’s first film was a face and hand animation).

What was clear from the lost lectures was Catmull’s love for art and animation even then, in addition to his technical ability. He said the most influential film on him was Pinocchio from 1940. In one of the lectures Catmull discusses animation and the number one point he writes on the blackboard is ‘story’ while at the same time saying, ‘Unfortunately that’s where most of the films fall down’. This was 15 years before ‘Toy Story’ came out.

Other fun pieces of discussion from the Q&A included the insane cost of framebuffers in the early days (up to $80,000 for a random access buffer with only 16 bit address space) and the cost of thousand-line monitors, with Chuang remarking that even today there is still not a reliable way of seeing colour. Catmull said he had grown up inspired by animation but did not know how to get into it, so went with the technology side. After college, he believed that something important was going to happen but that a lot of things needed to be done first, such as improving the quality of the imagery. He noted that while still at Lucasfilm (before the computer graphics department became Pixar) his counterparts at ILM could not at the time see any relevance in CG work for the film they were currently working on – The Empire Strikes Back.

In fact, both Catmull and Chuang noted that they spent the entire 80s convincing other people that things could be done with CG. ‘We spent the entire 80s spending’, said Catmull, who was grateful for the protection both George Lucas and then Steve Jobs gave him during stressful times at Pixar while there wasn’t really a business model in place. When asked whether he could relax when things came together in the late 80s, Catmull quickly interjected and said, ‘It finally came together in 1995!’

Finally, Catmull acknowledged he was at the right place at the right time, and that working at the frontier of computer graphics gave him and others more energy to continue. A question from the audience about Catmull’s thoughts on the future of computer graphics led to this answer: ‘I don’t see the future – just the direction. I’ve never been good at predicting the future, I’ve just been good at taking the next risk.’

Computer Animation Festival – Electronic Theatre

The Electronic Theatre is a two hour screening of CG work, animation and visual effects from feature films, short films, commercials, scientific visualizations and student works.

‘Loom’, a German short film by Jan Bitzer, Ilija Brunck, and Csaba Letay about a spider that attacks a moth, won the Best in Show award and definitely deserved it, not just for the stunning close-up animation and rendering but for taking the story to a mind-bending level. The Jury prize went to ‘Poppy’, a New Zealand World War I film featuring two soldiers on the Western Front. The quality of mo-cap and facial animation was incredible. Other highlights were ‘Dog Fish’ by Bitt Animation and VFX, ‘Sweet 16’ by boolab/The Ebeling Group and ‘Visualizing Empires Decline’ by Pedro Cruz.

There were some neat ‘making of’ reels from commercial work and recent films like Sherlock Holmes (Framestore), The Last Airbender (ILM), 2012 (DD and Scanline), Alice in Wonderland (Imageworks), A Christmas Carol (ImageMovers), Iron Man 2 (ILM), Prince of Persia (Framestore) and The Secret in Their Eyes. There are also separate screenings of commercials and cinematics, long shorts, and student animation, which make up SIGGRAPH’s Computer Animation Festival.

Party Party Party


As with every year at Siggraph there are a host of user events, parties and get-togethers. Last night was fxphd’s own member party at the Yard House, with some 50 members. Tonight there were at least six parties and functions, several of which are still in full swing as we post this, such as the Chapters Party and the Lighter/Darker party.


Foundry User Party

Held at the Maya Theatre, the Foundry event was the community’s chance to see both user breakdowns and new technology from The Foundry, including Nuke, Ocula, Mari, and Katana. The new technology demos included GPU performance enhancements and the new Disney PTEX technology.

There was a great turn out at one of LA’s coolest theaters.

Key demos included:
– Bruno Nicoletti’s impressive Blink GPU technology demo of Kronos processing a PAL resolution clip at 200fps on the GPU versus only 7fps in the older CPU version.
– Mari, the large complex 3D paint/ptex texture and paint system.
– A roadmap for Nuke, with the new 64-bit OS X Nuke entering public beta this week.

Mental Images


Mental Images had a party at the Marriott, highlighting their new cloud computing and iray GPU rendering on an iPad! Watch for Mental Images on fxguidetv later this week, with more details.