SIGGRAPH final day

Here is our third and final report from SIGGRAPH, but watch for fxguidetv episodes and podcasts rolling out this week and next.
Today we look at Day & Night, The Last Airbender and a summary of cool things around the show, including the mysterious Spheron HDRV camera.

The Last Airbender

ILM visual effects supervisor Pablo Helman and members of the ILM crew delivered a production session on The Last Airbender on Wednesday. It started with a great before-and-after reel that is also part of this year’s SIGGRAPH Electronic Theatre. Then we saw a bunch of concept art and a look at the CG characters in the film. This included work on human digital doubles, for which ILM took a more performance capture approach than on previous films – shooting the actors on a mocap stage with six cameras directed at their faces and going through the scenes.

For the elements in the film – fire, water, earth and air – ILM updated their toolset to rely on GPU renders. Fire, in particular, had to be 3D because it would be coming at camera and needed to be art directed in certain ways. With a new tool called Plume, artists got a render at sim time and did not need to save out any grid data.
Helman noted that one exciting aspect of the tools was that instead of talking about the fire in technical terms, they would instead say things like ‘it needs to be hotter’, and this reminded him why he had got into visual effects in the first place – to make art.

The Making of Day & Night

SIGGRAPH attendees were part of another packed session for Pixar director Teddy Newton’s talk on his short film Day & Night (which appeared before Toy Story 3 in cinemas). It’s a stunning stereo film that blends 2D techniques with CG animation. Newton discussed the genesis of the film as a story he wanted to tell about interpreting things differently. His initial inspiration was a show called Popeye Theatre hosted by Tom Hatten, who drew pictures based on squiggles sent in by viewers. Newton would also get his friends to draw squiggles and was fascinated by how different people saw different things in them. For The Incredibles, Newton himself had contributed artwork in a very different, Matisse-inspired style.

Encouraged by a colleague at Pixar, Newton was further inspired to pitch a film based on the idea of peering through a keyhole and seeing the action through it. This formed the basis of the film, where the characters Day and Night witness each other’s lives and the things happening in the world from their perspective at a different time of the day. Newton said it helped that the horizon looked like the pants of each character. He pitched the idea to John Lasseter and brainstormed with him. In the end the approach was to create the characters as vectorised 2D drawings, with the action seen through them rendered in 3D. Newton always wanted the story to have some substance, and he learned from director Brad Bird to always stay on point – it can be easy to be swayed by amazing ideas or designs.

Watch out for a video interview with Teddy Newton in an upcoming fxguidetv episode.

Technically, Newton initially thought this would be an easy film to make, but he was quickly reminded at Pixar that it would essentially be three movies in one – the daytime would have to be rendered, then the nighttime, and then the 2D animation incorporated. Early tests revealed timing issues in lining up the 2D work with the 3D. Each of the 18 virtual sets in the film was constructed as a large area (‘like a miniature golf course’). Newton noted that getting just one aspect right, either 2D or 3D, was like solving just the blue side of a Rubik’s cube – it doesn’t mean you have won. Ultimately, the pipeline at Pixar for the film changed four times as the technique was refined.

A recording by Dr. Wayne Dyer that Newton had listened to years earlier made its way into the film’s radio sequence, forming part of the contrast the director wanted to portray. After watching the film twice, Dr. Dyer wrote to Newton praising the work and said, fittingly: ‘When you change the way you look at things, the things you look at change.’

Show Floor


In New Orleans we first flagged the Spheron HDRV camera. It was not working last year; this year Spheron did have footage to show, but the company was still in a semi media blackout. We begged for a formal interview, but Spheron believes that any information on the camera will allow others to catch up. So while they have sold a second camera, they are still refusing to disclose much about how this new camera works.
This is what we do know:
• the camera has 20 f-stops of latitude
• it outputs a raw format that is then converted to an OpenEXR, which seems to have a very low noise floor
• the sample images have chromatic aberrations in the highlights
• the sensor has a global shutter – no jello effect
• the camera can run at up to 50 fps and as slowly as you like
• the shutter allows a 180-degree or narrower filmic shutter action
• the output is 1920×1080
• it takes PL-mount lenses

We don’t know if it uses a CMOS chip or a Bayer pattern, how the camera is recording such latitude, or the camera’s price.
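For a sense of scale on that latitude figure (our own back-of-the-envelope arithmetic, not anything from Spheron): each f-stop doubles the captured light, so N stops of latitude corresponds to a linear scene contrast ratio of 2^N to 1.

```python
# Illustrative arithmetic only: each f-stop is a doubling of light,
# so latitude in stops maps to a contrast ratio of 2**stops : 1.
def contrast_ratio(stops: int) -> int:
    """Return the linear contrast ratio implied by a latitude in f-stops."""
    return 2 ** stops

# Spheron's claimed 20 stops would span roughly a million-to-one range.
print(f"20 stops -> {contrast_ratio(20):,}:1")  # 20 stops -> 1,048,576:1
```

That million-to-one range is also why an OpenEXR output makes sense – the format’s floating point pixels can hold values far beyond what an 8- or 10-bit integer format could.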


William Shatner and Dick Van Dyke were at the NewTek stand – yeah, we don’t know why either… but it was popular with the fans.

(ED: People have emailed to point out that Dick Van Dyke is a real Lightwave user and a real supporter of both Lightwave and visual effects. Respect)

One of the things you can do if you find yourself with a clash of two sessions on at the same time is to hit the geek bar! This room is fitted out with multiple split screens, and attendees sit in complete silence, able to choose the audio of any displayed session on special audio headphones.
The net effect is that the room has the feel of a geek chapel, where uber geeks sit in silent homage to the SIGGRAPH gods.

Thursday’s final talks

The ‘Furs, Feathers and Trees’ talk featured reps from Tippett, MPC and Disney discussing pipelines and challenges on a couple of different shows. VFX supe Scott Liedtka started off Tippett Studio’s ‘Ways to Skin a Hairless Cat’ talk, about the very wrinkly and almost hairless Kitty Galore character in Cats and Dogs 2. Aharon Bourland and Michael Farnsworth continued by showing different attempts at simulating the wrinkles of the hairless sphynx cat on which the character was based. The approach they settled on relied on tangent space vector displacements, which allowed artists to track the animation and fill in wrinkles. A complicated shader had to be written, sometimes with 75 floating point maps firing at once. For the limited fur on Kitty Galore, Tippett utilised their Furator tool, occasionally relying on a ‘fudge factor’ to nudge things like hair origins and to fine tune peach fuzz. Perhaps the most interesting takeaway from the presentation was that multiple techniques were combined to produce a very cool final result.

Damien Fagnou from MPC then explored his studio’s Furtility tool and how it was used specifically on The Wolfman. We saw MPC’s software stack and the origin of the tool, including how it uses distributors to change things like hair length, noise, scrabble, clump and curl – all procedural, to allow for quick changes. MPC’s James Leaning then discussed the Pegasus wings and eagle shots in Clash of the Titans, which relied on a feather system where every single barb was modelled and generated at render time. Again, a base distribution allowed for changes to colour, length, inclination, curl and spine geometry of the feathers. Finally, Walt Disney Animation Studios presented their procedural and highly art-directable trees, based on a language of hierarchical curves, for the film Tangled.