VES 2002

This year the Visual Effects Society conference was held at the San Rafael Film Center instead of the usual LA base, to tap into the wealth of Northern Californian talent at places such as ILM and Pixar, both of which were heavily involved in this year's conference. Visual Effects Supervisor Peter Webb (Phenomena VFX, Australia) reports for fxguide on this year's sessions, including Spiderman and a retrospective on Ridley Scott's classic "Blade Runner".

The event started with a Pixar party on the Thursday night.

Pixar has a big imposing gate, large grounds and a big central building. We weren't allowed into the artists' animation areas but we did get to look all around the massive entry hall.

VES sessions – these are just points that I thought worth reporting. They're just my recollections, and I probably misheard a bunch of it – the presenters would laugh if they read this – but I think a lot of it is valuable nonetheless! I find the VES conference is so relevant to the work I do, and this year, as in past years, there were really concrete points and techniques that I can apply to projects we are working on here in Melbourne.

Spider Man – John Dykstra.

John outlined his strong creative liaison with the director on this film. He also pointed out that when breaking down the script he plotted how much money was being spent on VFX in the first act, second act, third act and so on. This was to monitor the dynamics of how the vfx "brushstrokes" were being applied to the story and make sure that they were in the right places.

John had a theory that Spiderman's grace and ability to control his movement should improve throughout the movie, as he got better at doing new things. This worked out well as they animated in story order, so the animators got better at animating him progressively through the film. This also applied to camera movement as the camera followed Spiderman through the city: the camera had trouble following him at the beginning but got progressively better throughout the film. Subconsciously this enhanced the audience's feeling that Spiderman was gaining control as well.

Tobey Maguire was cyberscanned and then “physiqued”. The spandex suit was revealing so this was an important stage. Spiderman was not modelled as a muscle system over a skeleton and needed to be explicitly animated.

Willem Dafoe's Green Goblin was modelled as a muscle system over bones, then a fat layer, then skin – similar to the approach on Hollow Man but much simpler. For both costumes they worked closely with the costume department. For some of Spiderman's finer texture detail they shared CG textures that were printed onto the fabric for real.

Motion capture was used, but mostly just for crowd movement. They found the mocap data was good but difficult to scale and blend with animated movement. For instance, a given movement might look right at real speed, but if they wanted to ramp it or make it faster or slower it didn't work well, and they needed that freedom.

The building texture detail was photographically acquired. Imperfections were very important in the buildings, both in geometry and texture. They didn't use global illumination or radiosity because the photographic textures already had that "baked in" – they also did a lot of matte painting on the textures. For the interiors of the rooms in the skyscrapers they did a kind of QTVR capture of various office interiors, creating a library that could be viewed from any angle outside the building.

They did what they called a "single pixel pass", which was black with some fine coloured lines and looked like an edge detection filter in Photoshop. They used this to detect detail that was too sharp – too CG – and created a matte from the single pixel pass to soften only those details that were too sharp.
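As a rough illustration of the idea (not ILM's actual pipeline), an edge-detection response can be thresholded into a matte and used to soften only the over-sharp pixels. Everything here – the Laplacian kernel, the threshold, the box blur – is an assumption for the sketch:

```python
# Hypothetical sketch of the "single pixel pass" idea: run an edge
# detector over a luminance image, then use the response as a matte
# that drives a blur only where detail is unnaturally sharp.

def edge_matte(img, threshold=0.5):
    """Return a 0/1 matte marking pixels whose local contrast
    (a simple Laplacian response) exceeds `threshold`."""
    h, w = len(img), len(img[0])
    matte = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (4 * img[y][x]
                   - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            if abs(lap) > threshold:
                matte[y][x] = 1.0
    return matte

def soften_where(img, matte):
    """Box-blur the image, but only where the matte is on."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if matte[y][x] > 0:
                out[y][x] = sum(img[y + dy][x + dx]
                                for dy in (-1, 0, 1)
                                for dx in (-1, 0, 1)) / 9.0
    return out
```

A real comp would do this per channel at film resolution; the point is simply that the edge image doubles as a selection matte.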

For that final shot with the huge camera move, they did shoot one practical element: a cable camera swooping down a road with taxi cabs and people moving around. They did a camera match move on it, then replaced the buildings and put Spiderman in there, blending from the match-move to their own camera moves either side of the practical shot. John commented that it was good to have a real element to help them with the look.
A great session overall.

Creating the Character. Panel session.

Phil Tippett – spoke about mythology, from cave paintings through the middle ages to the current day, talking about human psychology and how we relate to beasts. I thought it was pretty interesting, and I feel I got some insights into how our current-day psychology and responses to mythology contain echoes of fears and survival motivations. He also spoke of how cave art was often deep in the caves, so the artists and audiences would have had to carry simple oil lamps long distances just to sit and see the stories shown on the cave walls. He even theorised that if you held two lamps, one either side of the art, they would flicker in and out of phase, giving the art a kind of life and animation which he said would have been not unlike sitting in a movie theatre!

Ken Bielenberg – vfx supervisor on Shrek

Ken created "gestural maquettes", which were posed, in the development phase. They also created a "neutral sculpt", which was used for scanning into 3D CG data.
They scanned using a digitising arm on a gridded maquette. Loose cloth on the characters was done using Maya cloth; close-fitting cloth, however, was done using proprietary software.

Ken also went into a detailed discussion on the multi layer “sub surface” scattering techniques they used for skin shaders etc.

David Barclay – animatronic supervisor Jim Henson Creature Shop

David spoke for quite a while about projects he had worked on and showed examples of putting animatronic models together. His main examples were from "Cats and Dogs", which replicated real animals, so creature design wasn't an issue. It was pretty funny seeing the cat armature moving around and talking without any fur on. He also showed on tape a demo of new technology they're developing, where they use a Henson Animation Control System (two hands on a complex-looking device of levers, knobs and grips) to puppeteer a CG character in Maya. It worked well and looked like a great way to get the realtime performance of a puppeteer into a CG character.

Dan Taylor – animation director ILM

Dan spoke about Jurassic Park III and had some great tips. He used all the disciplines to bring the creatures alive and carry the audience along. He added numerous subtle details he called "life givers", and watched out for things like unnatural texture stretching on CG, which he called "life killers" – taking the trouble to create skin that slides over a sub-structure avoids that particular "life killer".

When designing a creature he has a list of things it must incorporate or invoke, such as "fear" or some other emotional response. They thought about the creature's nature or personality and took time to animate in keeping with that. For instance, the raptors were intelligent, so they had them pause to think, turning their heads, eyes scanning while they worked out what to do next – eye movement flicking from one thing to another, processing information. These pauses also weren't random; they were related to things that were happening. Sounds obvious, but I think it's all too easy to forget these aspects and just have the creatures barging around in a way that looks "cool" but doesn't really trigger the correct subconscious cues in the audience.

The raptors were prone to pause and then explode into movement. This was a conscious design choice. Other creatures had physical details that showed their "state", such as a crest of spines that sat almost flat against the head unless the creature was alert, when they would stand on end. Pulling them completely flat against the head showed submission.

Ray Harryhausen – animator

Ray talked about designing his Medusa. He showed early design drawings, reference images from past interpretations of the myth and work-in-progress images. He also had a physical model there on the stage – it was about the size of a maquette. He had one of the skeletons there too – an armature with articulated poseable joints, about 8 to 10 inches tall I guess. Ray showed a video version of the sequence and it really was great. He said that lots of people asked him how he did his effects, but when the films were popular he never told anyone. That was because he felt that with a fantasy movie it was better for the audience if they didn't know – if they were innocent of the techniques and just in awe of the film. He felt there was too much public discussion of moviemaking techniques these days – though he wasn't referring to industry discussions like VES!

Douglas Trumbull – visual effects pioneer

Douglas spoke about how he had been experimenting with digital technology as early as the late 1960s. He said he'd developed a 2000-line digital image processor in 1969, which he used on The Andromeda Strain – using a movie camera to shoot off the screen.

Silent Running was made for US$1.3 million and was shot in 32 days. They used an early MoCon system on the film – and he said that there is an excellent making-of documentary on the DVD.

All visual effects plates on Close Encounters were shot on 65mm. He also developed Showscan – a 65mm film format that ran at 60 frames per second. It was an excellent format and yielded a stunningly sharp result, in both image and temporal resolution. He said that this sharpness was arresting and didn't suit all subject matter. He used it for some of his motion-base ride simulations – it was used on "Back to the Future – The Ride" at Universal. He also developed "Magicam", a live realtime compositing system that he has used on puppet-animated television shows.

Black Hawk Down
Tim Burke – vfx supervisor
Olcun Tan – lead tech. Director. Mill Film

Ridley Scott (director) works closely with all disciplines, including vfx. He wanted no compromise on camera movement or shooting style, and no motion control for the main unit. Asylum was brought in later in the production process because after 9/11 the deadline was brought forward by 3 months.

Crash sequences

– they had 2 real Blackhawks, 2 real Little Birds, 50 real Ranger soldiers, armoured personnel carriers, tanks and trucks. Real choppers create a lot of rotor wash, which stirred up too much road dust, so they couldn't shoot much with the choppers down low near street level and those shots had to be computer enhanced.
– CG choppers had to seamlessly intercut with real choppers.
– It was shot in Morocco – they found a city that was very similar to Mogadishu in geographic and architectural detail, right down to the shape and size of the town square and a couple of other key locations.
– Practical set build of a major building in the town square, and set dressing of the surrounding streets for the "Alamo" crash scene. A location recce was done to establish the overshoot and how much of the set build could be digital.
– Set surveys were absolutely critical and very extensive.
– They went to the location and shot many digital stills, which they took back to their hotel that night and marked up in Photoshop, with explicit point detail they would need to survey for each building over the following days. They marked extremities of buildings, each floor, window and door apertures, and key ground detail like road edges and fences both low and high. These points needed to be collected either before or after the shoot – not during.
– These were points, not lines – i.e. vector intersects.
– Photogrammetry was used but was difficult due to the changing set dressing that was done.
– This location was 275 metres by 158 metres approx.
– Points were collected using standard survey techniques. Plotted into the computer to create point model that is compared to the photographs to test accuracy.
– Points can be translated into models and textured using photogrammetry.
– Some of this was done in the hotel rooms at night !
– Camera moves are reverse engineered / deduced from footage as a camera match move.
– Camera moves are run on the digital set version to test accuracy.

Back at the Mill

– rotor engineering of the choppers was done from engineering info and material gleaned from the internet. They found a guide to modelling the Blackhawk for physical modelmakers and found it invaluable for adding detail. They also shot stills and surveyed the choppers on set.
– The rotor engineering done in CG drove the dust simulation. The dynamic flow simulation they created for the chopper wash wasn’t 100% correct but it was controllable. They showed us a simulation they did – that was like a box full of little arrows with an invisible chopper inside. The arrows were like little weather vanes, moving around with the turbulence from the rotors.
– They found some footage of crashing helicopters on the internet – showed it to Ridley who said it looked like a dying beast. After it hit the ground it foundered, tilting sideways as the rotors hit the ground repeatedly, growing more bent and generating debris until they snapped off.
– The props guys built a physical model of the Blackhawk's body – no rotors. They ran it down a cable onto the ground. On the day it wasn't versatile enough to move realistically, but they did shoot plates of it that proved useful in post.
– An animatic was created to prove that the CG chopper was good enough.
– For one of the critical crash shots they used the practical chopper shell on the day, then motion tracked the chopper body and used that data to drive a MoCon model-mover on a miniature set. This was connected to a 1/6th scale model of the chopper with rotors (I think). Using this they could re-crash it over and over to get a good take of certain parts of the impact. Shot on a dirt base against bluescreen.
– They said they used polar co-ordinates to drive the model-mover, not Cartesian co-ordinates.
– The above element gave them some of the impact debris, some of the rotor contact with the dirt.
– Survey data allowed them to matte in parts of the background buildings as well as using the CG to provide collision objects for the swirling CG dust they added.
– Also they created a miniature model of parts of the buildings as a bluescreen, suspended a fan above the street then tipped dust into the fan. The dust dropped and swirled down onto the bluescreen miniature set and created a very convincing rotor wash. Much more complexity than you could get in CG.
– The final comp was a combination of many elements – CG dynamic simulation for the rotors breaking up – miniature dust hits, displacement of earth, debris – CG dust – CG chopper body – some parts of the practical chopper body – and a miniature foreground chair breaking up.
– add dust to soldiers on the ground
– camera match move done by hand with the help of the CG survey model
– create CG dust that reacts with the buildings – using the CG survey model
– hand roto the soldiers to enable compositors to drop them back into the dust

UN food drop on hillside – add tents and thousands of people, which required a massive location survey.
The first step was to carefully survey the only building on the site.
They then shot stills from choppers – maybe five or six key stills about 45 degrees apart (looking down onto the building). Using survey info of the building that is clearly in shot they could deduce the camera locations for each still.
Then they created these camera positions in one 3D scene. A line is then projected from a camera to the same recognisable landmark on at least two, preferably three, photos. Where these lines intersect is where that point is in 3D space.
Step two is to do this for a whole bunch of points, roughly 10 metres apart, to "grid" the landscape. These points should describe the landscape. To verify this they could run objects over the landscape, using a photo as the bg plate and viewing from that photo's camera position.
Only then could they put stuff anywhere on that hillside.
Step three: When the plate shots come in, camera track them using what are now known points on the building and landscape.
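The line-intersection step described above can be sketched as a least-squares triangulation: each photo contributes a ray from its deduced camera position toward the landmark, and the recovered point is the one closest to all the rays. This is a generic reconstruction of the textbook technique, not Mill Film's actual tool:

```python
# Sketch: recover a 3D landmark from rays cast out of known camera
# positions. With three or more photos the least-squares solution
# also absorbs small survey errors.

def triangulate(rays):
    """rays: list of (origin, direction) 3-tuples. Returns the point
    minimising the summed squared distance to all rays."""
    # Accumulate A p = b with A = sum(I - d d^T), b = sum((I - d d^T) o)
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in rays:
        n = sum(c * c for c in d) ** 0.5
        d = [c / n for c in d]          # normalise the ray direction
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += m
                b[i] += m * o[j]
    return solve3(A, b)

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    out = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = b[i]
        out.append(det(Ak) / D)
    return out
```

Two cameras looking down at the same ground point from different positions are enough to pin it in 3D, which is exactly the "at least two, preferably three photos" rule above.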

Stadium
This location required digital set extensions and the director wanted the camera free to shoot any way at any time – no motion control. Obviously careful site survey and texture acquisition was needed.

The crew placed fluoro tracking markers – self-illuminating – around the stadium to aid camera tracking, and surveyed the site as in previous shots.
They mapped the area from the survey info and gave a map to the on-set people to show them where the CG set extensions would be. This avoided having actors walk through where walls etc. would be added in post.
Photogrammetry textures were extracted as per previous shots.

Rob Coleman – ILM – Star Wars EP II

EP II shot in Sydney from June to Sept 2000, and it was Jan 2001 when the animation began. There were 70 minutes of character animation in the film and over 400 people worked on it for 2 years (60 animators, 2 animation supervisors, 14 lead animators). The throughput rate was counted in "feet per week" – 6 feet, or 96 frames. The big change was for the animators to think of themselves as actors and use that talent to give a performance to their characters. They used an in-house character animation system called "Cari". Softimage was used for moving the characters around, and they found that Softimage on Linux gave a much faster turnaround for their work.

Blade Runner Retrospective – Panel Session.

This was a great session – an all-star panel with many disciplines represented. Some artifacts, such as the acid-etched panels that were used on the big pyramid, were shown, along with some visual elements and reels. It was very interesting as the panel discussed and chatted rather than doing shot breakdowns or formal presentations. Blade Runner is a classic that has grown in status over time, and each of the speakers seemed to have really enjoyed the experience.

15 years of Pixar.
Pete Docter, Tia Kratter and Thomas Porter

Another great session. All the speakers told us what they had done prior to Pixar and described the process of interviewing and joining the company. They also spoke of what it was like to work there and about some specific projects they had done. They showed reels, including Pete Docter's short from film school; Tia showed some backgrounds she had painted at Disney; and there were reels of concept art etc. done for various Pixar projects. Like the Blade Runner retrospective, it was less of a formal session.

Pablo Helman – ILM – Star Wars EP II

Kaminosaurs.
They wanted them to have a language of gestures that resonated but seemed alien.
They used tai-chi-like gestures as inspiration for this movement. Care was taken over subtle things: making the eyes converge slightly on objects as they get closer to the character, incorporating pauses in their behaviour and performance, having their eyes scan around, flicking as they think, to show intelligence, and making their breathing complex, not just in-out.
They shot reference for the action – reference of themselves performing, as well as whatever else seemed worthwhile, e.g. other creatures.

Droid Factory – (hard surface models)
6 to 8 animators were working on any given shot. The team aimed to avoid the video game look as much as possible, and visited real factories for reference.

Clone Troopers
All the Clone Troopers were 100% CGI – no actors used or costumes made. 85% of their movement was motion capture; they made libraries of motion capture performances that were assembled into movement. Large groups of troopers were driven by particle systems in the big scenes. For example, a trooper moving in a queue towards a pickup area would shuffle forward, wait, shuffle forward, wait, and so on until they got to the front, then move to pick up their helmet. For this they would execute capture shuffle A, then cycle capture wait A (which might be moving slightly from foot to foot), then execute capture shuffle B, then cycle wait A again, etc., with each of these triggered.
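A minimal sketch of that clip-sequencing logic might look like the following – the clip names, lengths and trigger scheme are invented for illustration, not ILM's actual system:

```python
# Sketch: per-trooper schedule built from a library of captured clips.
# A trigger makes the trooper play a shuffle clip (alternating A/B);
# otherwise it loops a wait clip. Clip lengths are in frames.

CLIPS = {"shuffle_A": 24, "shuffle_B": 20, "wait_A": 16}

def trooper_timeline(triggers, total_frames):
    """Return a list of (start_frame, clip_name). A trigger that lands
    mid-clip fires at the next clip boundary."""
    timeline = []
    shuffles = ["shuffle_A", "shuffle_B"]
    pending = sorted(triggers)
    nth = 0      # counts shuffles so far, to alternate A/B
    frame = 0
    while frame < total_frames:
        if pending and frame >= pending[0]:
            pending.pop(0)
            clip = shuffles[nth % 2]
            nth += 1
        else:
            clip = "wait_A"
        timeline.append((frame, clip))
        frame += CLIPS[clip]
    return timeline
```

Driving many troopers this way from a particle system only needs each particle to carry its own trigger times, which matches the queue behaviour described above.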

There were about 2200 shots – 2000 of which made it into the film; the other 200 will be on some DVD someday(!)

John Knoll – ILM – Star Wars EP II

The film was shot on the Sony HD 900 with a video split to a big plasma screen and calibrated QC monitors for checking – hooked up to a framestore for capturing reference images. They used a vectorscope to check signal levels and ran tests on both blue and greenscreen, finding that both were OK, but that extreme care must be taken to light the screen well – all the things that are good ideas for film shoots become essential for HDCAM.

– They adjusted the shutter simulation on the HD cameras and set it to 180 deg. instead of the usual 360 deg. to emulate a film shutter.
– Some pyro elements were shot overcranked on film at 120 fps.
– Bit depth: images shot on tape were stored as 8-bit HDCAM.
– Comps were done in a proprietary floating point environment.
– Comprehensive pre-vis was done, so the camera movement style partly came from the pre-vis. The procedure was to shoot the plates "wild", with no motion control, do 3D camera tracking using proprietary software, then use the match-move data to drive MoCon for miniatures.
– The main limitation of HD was that it couldn't be altered much in post; otherwise it was fine. Final approvals were done on HD projection, not film projection.

Ben Snow – ILM – Star Wars EP II

Ben discussed the June pre-production and the October production. There were many partial sets in some shots, with actors shot on bluescreen. The big shots were a combination of actors on bluescreen, CGI sets, miniature sets, and CG smoke as well as real smoke and spark elements. The dream chamber was a 1/6 scale miniature.

Droid Factory
The team shot reference on DV at a car factory and a foundry. Many stills as well. Some stills used as basis for textures. Most of the animators went on the visit – and later, the DV was used as inspiration – they chose an outdated and cluttered factory as reference.
The factory was a CG build – and some of the CG shape files were laser cut and used in building the miniatures. To accommodate the complexity of the 3D factory, they built a Lego-like program for assembling components like conveyor belts, overhanging robots, support beams etc. – kind of like a "Sim City" for droid foundries.
They simplified some of the lighting by making "slide maps", which were basically textures that the lights projected onto the 3D – they could create one light that looked like many when rendered, then fake some cast shadows in the textures to save lighting and rendering.
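The slide map idea can be sketched as a light that projects a small texture: wherever the projection lands, the slide's brightness stands in for what would otherwise be many individual lights. The orthographic projector and all the names below are assumptions for illustration, not ILM's shader:

```python
# Sketch: one "slide" texture projected by a single light. Bright
# texels read as separate light sources in the render.

def sample_slide(slide, u, v):
    """Nearest-neighbour lookup into a tiny slide texture
    (a 2D list of brightness values), clamped at the edges."""
    h, w = len(slide), len(slide[0])
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return slide[y][x]

def light_contribution(point, spread=4.0, slide=None):
    """Project `point` orthographically into the light's slide plane
    and return the slide brightness there."""
    px, py, _ = point
    u = 0.5 + px / spread   # map x into [0,1] across the spread
    v = 0.5 + py / spread
    if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
        return 0.0          # outside the projector's coverage
    return sample_slide(slide, u, v)
```

A production shader would use a perspective projection and filtered lookups, but the saving is the same: one light, one texture fetch, many apparent sources.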
For the molten metal pouring shot, they shot a real element – Metacil poured into a blue cup, shot with UV light. The Metacil appeared to glow from within like molten metal, and they were able to colour correct it to match.

Big droid / clone battle
Location surveys were done in Monument Valley, Utah, and other places, and many photographs were taken both as reference and for textures / backdrops. USGS data of the areas they had surveyed for real was converted to 3D CG height information and used in Maya as a layout tool for the terrain. At first they used it as a kind of virtual "location survey" tool, flying around the USGS canyons to see what would be good terrain for the various shots.
They used a mix of procedural and photo-based textures. The procedural ones at first used height information to suggest stratification, and they also "texture bombed" to randomise the procedural textures.
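Texture bombing is a standard procedural trick: divide texture space into cells and, in each cell, place a stamp with a hashed random offset and rotation, so the pattern looks irregular but is identical from frame to frame. A generic sketch, not ILM's implementation:

```python
# Sketch of "texture bombing": a hash of the cell coordinates gives
# each cell a stable random placement for its stamp texture.

import math

def cell_hash(ix, iy, seed=0):
    """Cheap deterministic hash -> float in [0, 1)."""
    h = (ix * 73856093) ^ (iy * 19349663) ^ (seed * 83492791)
    return (h & 0xFFFFFF) / float(0x1000000)

def bomb_params(u, v, cells=8):
    """For a UV point, return the jittered stamp centre and rotation
    of the cell it falls in."""
    ix, iy = int(u * cells), int(v * cells)
    cu = (ix + cell_hash(ix, iy, 1)) / cells   # jittered centre u
    cv = (iy + cell_hash(ix, iy, 2)) / cells   # jittered centre v
    rot = cell_hash(ix, iy, 3) * 2 * math.pi   # stable random rotation
    return cu, cv, rot
```

Because the placement comes from a hash rather than a random-number stream, the bombed texture doesn't crawl as the camera moves.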
They did tests to see what they could “get away with” in terms of how real the textures looked depending on their distance from camera. They came up with four main levels of detail.

1. Sky – matte painted, often from photographic elements
2. Background plane / horizon – a flat plane with a large matte painting from tiled photographs.
3. Midground – textured CG modified from USGS data.
4. Foreground – miniature shot live action.
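That four-tier breakdown amounts to a distance-based level-of-detail lookup; a trivial sketch, with the distance cutoffs invented for illustration:

```python
# Sketch: pick the treatment for an element from the four tiers
# described above, based on its distance from camera.

LOD_TIERS = [
    (50.0, "foreground: miniature shot live action"),
    (500.0, "midground: textured CG modified from USGS data"),
    (5000.0, "background plane: tiled photo matte painting"),
    (float("inf"), "sky: matte painting"),
]

def pick_tier(distance):
    """Return the cheapest treatment judged to hold up at `distance`."""
    for cutoff, treatment in LOD_TIERS:
        if distance < cutoff:
            return treatment
    return LOD_TIERS[-1][1]
```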

Craggy rock spires were modelled in clay and photographed, then photogrammetry was used to create 3D models.
There were lots of layers of haze, smoke, ground haze, dust rising from the clones' feet etc.
Many smoke and haze layers were CG particle systems, which allowed the 3D clone models to sit in the atmosphere in Z space.

The exploding droids had two models – one for normal animation, which was switched to a breakaway model that was pre-broken. This model used rigid body simulation for debris etc., as was done in Pearl Harbor for the aeroplane crashes.

Compositing was done in "CompTime" – ILM's proprietary floating point compositor. Elements were converted to floating point colourspace before compositing.

In summary, another excellent in-depth conference, with all the speakers clearly spending a lot of time and effort to prepare extremely detailed and helpful talks. The conference will return to Northern California in 3 years.
