fxphd prof. and close friend of fxguide, Matt Leonard, recently attended the BFX Festival run by Bournemouth University - an animation and effects event showcasing films like Pacific Rim, Star Trek Into Darkness, The World's End, World War Z and more. Here Matt takes you inside BFX for fxguide.

The BFX is Bournemouth University’s first festival designed to celebrate and promote the visual effects and animation industry worldwide. The festival started with a six-week summer competition to find the UK’s best animation and visual effects students, followed this week by the main conference itself. Attended by both professionals and students, the five-day event included talks by the likes of Industrial Light & Magic, MPC, DreamWorks Animation and Double Negative. Layered throughout the week were other events such as master classes, workshops, screenings and various award ceremonies.

Alex Hope from Double Negative.

The event kicked off with an inspiring keynote by Alex Hope, MD and co-founder of Europe’s largest visual effects company, Double Negative. From there Willi Geiger (Computer Graphics Supervisor) from Industrial Light & Magic took to the stage to discuss the work his team carried out on some of this year’s biggest summer blockbusters. Geiger started his talk with a brief background on himself: he originally studied Music Technology and Computer Science at the University of York (UK), then worked at the BBC, the NCCA, Psygnosis and finally MPC before moving to ILM in 2001. From that introduction Geiger moved on to discuss some of the work ILM had carried out on Star Trek Into Darkness, The Lone Ranger and Pacific Rim.

Starting with Star Trek Into Darkness, Geiger talked about how the visual style of the second movie followed on from J.J. Abrams' original 2009 film and how ILM simply had to continue that look. ILM contributed about 1700 visual effects shots overall, and one of their biggest challenges was to keep the movie grounded in reality as much as possible. For the opening scene, which takes place on Nibiru, extensive previs was used to carefully plan the sequences. Only a partial set was built for the Red Forest chase scene, with ILM digitally building the rest of the environment around the background plates. These sections included the majority of the jungle, the volcano and the surrounding ocean. Many of the island’s natives were created by ILM, using motion capture data for the bulk of the performances.

The Nibiru volcano scene proved challenging as the team had little to go on outside of the initial concept art. Utilizing heavy effects sims, ILM created digital lava, smoke and fire which surrounded Spock, who was photographed on a small practical set. The DigiMatte team created the background environment, rocks and other elements using 3ds Max and V-Ray. The compositing team then layered in more smoke, sparks and atmospherics using Nuke.

ILM's Willi Geiger.

Meanwhile Kirk and Bones, trying to escape the natives of Nibiru, dive off a cliff into the sea where the submerged USS Enterprise is waiting for them. Stuntmen were filmed on wires against a blue screen pretending to swim and were then comped into the digital underwater environment. Once inside the ship's airlock area, ILM added digital water and extended the set around the filmed plate.

From here Geiger moved his focus to the work Industrial Light & Magic had carried out on Disney’s The Lone Ranger. Director Gore Verbinski had the classic western in mind, wanting to film as much in camera as possible and use the big vistas of the American West as the backdrop. The visual effects shots were split between ILM and MPC London, with ILM covering the main train chase sequences. The plan for each shot was to try to film 50% in camera and do everything else using visual effects.

Geiger started with the final train chase in the movie, explaining how over 5 miles of train track was laid in the desert of New Mexico and how two real diesel trains were dressed to look like steam locomotives. For some shots ILM would replace a train in part or fully, such as when one of the lead characters has to step from the front of one locomotive traveling forward to another just in front traveling backwards. The shot was deemed too dangerous for a stuntman so they filmed the actor leaning out from the first train as if about to step across, and then added the second train in CG.

For a shot of the Lone Ranger pursuing one of the trains moving backwards, a huge wide vista shot was called for. ILM’s DigiMatte team ended up building 95% of the shot in 3ds Max, rendering it in V-Ray and then handing it off to the compositing team to add the live action element of the Lone Ranger on horseback. Finishing up on The Lone Ranger’s effects, Geiger talked us through the final train crash sequence, which ended up being a combination of three live action plates, enhanced and sometimes completely replaced sections of digital environment, extensive wire removal, and various atmospheric effects, all seamlessly blended together into a fantastic climax to the film.

Willi Geiger.

Finally Geiger moved on to discuss the work ILM had carried out on Pacific Rim. Overall the team covered 2000 visual effects shots, with the main challenge of the show being the sheer scale and complexity of almost all of them. As often happens, the modeling started with physical models of the Kaiju which were then transferred into the computer and rebuilt using Maya and ZBrush. The team spent a considerable amount of time studying real world creatures for model, texture and animation reference before adding their own slant to the characters. From there the Creature Department took over, adding veins, muscles and flesh simulation on top of the keyframe animation.

For shots of the Jaeger wading through Hong Kong harbor, the production team filmed real actors inside the head set, which was mounted on a gimbal to create the feel of the robots moving. Everything outside the head was completely computer generated, with ILM relying heavily on Solid Angle’s Arnold render engine to produce the final images. Once the models were complete, the animation team created the robots' performance before choreographing the water around the robots' legs. At this stage the water was just a deforming surface, enabling the animators to quickly work out the timing and size of the sea. The simulation team then added dynamic water surrounding the robots which quickly blended off into the keyframed water. From there they added spray, foam and splashes, again using dynamics as well as real photographed elements. The 3D shots were rendered out with a huge number of render passes and then passed to the Nuke compositing team to layer everything together into the final shots. With so much detail, some of the final renders were clocking in at 90 hours a frame. During the following Q&A session Geiger mentioned that even if a shot is completely computer generated, adding in just a few real photographed elements can really help to sell it.
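
ILM hasn't published the details of that setup, but the keyframed-to-simulated hand-off is easy to picture. The sketch below is a minimal, hypothetical numpy illustration of blending a locally simulated water patch into a broader keyframed ocean surface with a distance falloff; the function and parameter names (`sim_radius`, `falloff`) are ours, not ILM's.

```python
import numpy as np

def blend_water_heights(keyframed, simulated, grid_xy, robot_xy,
                        sim_radius=15.0, falloff=10.0):
    """Blend a simulated water patch into a keyframed ocean surface.

    keyframed, simulated : (H, W) height fields over the same grid
    grid_xy              : (H, W, 2) world-space XZ positions of the grid points
    robot_xy             : (2,) XZ position of the robot's legs
    sim_radius           : distance within which the sim fully drives the surface
    falloff              : width of the blend band back to the keyframed water
    """
    # Distance of every grid point from the robot.
    dist = np.linalg.norm(grid_xy - robot_xy, axis=-1)

    # Weight is 1 inside sim_radius, fading to 0 across the falloff band.
    w = np.clip(1.0 - (dist - sim_radius) / falloff, 0.0, 1.0)

    # Simulated water dominates near the legs, the keyframed swell elsewhere.
    return w * simulated + (1.0 - w) * keyframed
```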

Dneg's Frazer Churchill.

During the afternoon there were a number of other presentations, including Frazer Churchill talking about the work Double Negative had done on the science fiction comedy The World's End.

Having just heard ILM talk, it was very interesting to hear how Dneg approached the effects on a significantly smaller budget, relying much more on practical effects, projection mapping in Nuke and clever compositing, and staying away from 3D as much as possible.

The second day of the conference was headlined by MPC talking about their work on The Lone Ranger, and specifically their in-house crowd and digital stunt double software.

Jo Plaete is MPC’s Lead Crowd Technical Director, and he started the presentation by discussing their proprietary crowd system ALICE (Artificial Life Crowd Engine). MPC have a small motion capture stage on site which animators can use, and that data can quickly be fed into ALICE. The software has been in use at MPC for the last 10 years, first seen in the effects work of films such as Kingdom of Heaven and Troy. If MoCap data has been recorded of a performer simply walking forward, it can be fed into ALICE and attached to a curve - ALICE understands the motion and removes the foot slippage which would often occur when the movement is changed so dramatically. The crowd team can promote certain objects, enabling the animation team to overwrite the motion capture data and add their own performances to the character. This can be a straight replacement or a layering process where additional animation is applied on top of the motion data.
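
ALICE itself is proprietary and its internals weren't shown, but the curve-following behaviour Plaete described can be sketched. The hypothetical Python snippet below simply advances a walk clip's root along a guide curve by the clip's own per-frame travel distance, which is one simple way to keep foot plants at their original ground spacing; all names here are illustrative, not MPC's API.

```python
import numpy as np

def root_along_curve(curve_pts, stride_per_frame, n_frames):
    """Place a walk clip's root along a guide curve by arc length.

    curve_pts        : (N, 3) points sampled densely along the guide curve
    stride_per_frame : ground distance the original mocap covers per frame
    n_frames         : number of frames to generate

    Advancing by the clip's own travel distance (rather than a uniform curve
    parameter) keeps the feet planting at their original spacing, which is
    the basic idea behind avoiding foot slippage when motion is re-pathed.
    """
    # Cumulative arc length along the curve.
    seg = np.linalg.norm(np.diff(curve_pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])

    roots = []
    for f in range(n_frames):
        s = min(f * stride_per_frame, arc[-1])      # distance travelled so far
        i = np.clip(np.searchsorted(arc, s), 1, len(arc) - 1)
        t = (s - arc[i - 1]) / max(arc[i] - arc[i - 1], 1e-9)
        roots.append((1 - t) * curve_pts[i - 1] + t * curve_pts[i])
    return np.array(roots)
```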

MPC's Tom Reed.

In a fierce battle the cavalry engage the Comanche, and for a number of shots MPC were asked to show the deaths of both horses and riders. This obviously involved something that couldn’t be motion captured safely, so MPC used another proprietary tool, PAPI, a physics-based dynamics engine similar to NaturalMotion’s endorphin. PAPI is built on top of the open source Bullet engine and enables the creation and animation of ragdoll digital doubles which can then be seamlessly integrated into MPC’s animation and ALICE pipelines. Digital characters can switch from system to system as the shot requires: for instance a character could start running using motion capture data, then trip and fall, at which point the animation is controlled by the PAPI system, and finally switch to keyframe animation for getting back up.
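
None of PAPI's code was shown, but the hand-off logic Plaete described maps naturally onto a small state machine. The sketch below is a hypothetical illustration of a character switching from motion capture to ragdoll to keyframe control; the state names and triggers are assumptions made for clarity, not MPC's implementation.

```python
from enum import Enum, auto

class Source(Enum):
    MOCAP = auto()     # playing back captured motion
    RAGDOLL = auto()   # pose driven by the physics engine
    KEYFRAME = auto()  # hand-animated, e.g. the character getting back up

class CharacterState:
    """Tiny state machine for handing a character between animation sources."""

    def __init__(self):
        self.source = Source.MOCAP

    def update(self, hit_detected, sim_settled):
        # A hit, trip or fall hands control over to the ragdoll simulation.
        if self.source is Source.MOCAP and hit_detected:
            self.source = Source.RAGDOLL
        # Once the body comes to rest, animators take over with keyframes.
        elif self.source is Source.RAGDOLL and sim_settled:
            self.source = Source.KEYFRAME
        return self.source
```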

For the cavalry vs. Comanche battle MPC used dynamic forces to enhance the animation controlled by PAPI. When a character was shot or hit by an axe, for instance, the animators would hit the dynamically controlled character with a blast of air or a radial force. This created a much more realistic simulation, especially if the force was applied to just a specific part of the character. The animators also had 2D curves drawn as guns in Maya which enabled them to fire on the 3D models, giving them more control over where the bullets or other weapons came from.
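
PAPI is in-house software, but since it sits on top of the open source Bullet engine, the "blast of air or radial force" idea can be demonstrated with Bullet's Python bindings. The snippet below pushes a stand-in rigid body away from an impact point; the assets are pybullet's bundled test models rather than MPC ragdolls, and the force magnitude is arbitrary.

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

p.loadURDF("plane.urdf")                                  # ground plane
body = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 1])    # stand-in for a ragdoll

# Push the body radially away from the "impact" point.
impact_point = [0.5, 0.0, 1.0]
pos, _ = p.getBasePositionAndOrientation(body)
direction = [pos[i] - impact_point[i] for i in range(3)]
force = [500.0 * d for d in direction]
p.applyExternalForce(body, -1, forceObj=force, posObj=impact_point,
                     flags=p.WORLD_FRAME)

for _ in range(240):            # one second at Bullet's default 240 Hz step
    p.stepSimulation()

print(p.getBasePositionAndOrientation(body)[0])
```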

The animation team also had the ability to decide whether the horse or rider died first and how that would affect the simulation. Complex layering techniques were used to add more realism, such as enabling the horse to still kick its legs after it had been shot and fallen down, but before it finally died. The whole system worked with ‘clips’, in a similar way to Maya’s Trax, with animation layers blending into each other. Key poses could also be introduced to force PAPI to match certain positions or poses if required. If this complexity wasn’t enough, the horse and rider could interact with each other and with all the other characters as well, so a rider could fall from a horse and knock off another rider, and other dynamic objects such as trees and rocks could be fed into the system. Plaete explained that the simulation time was not overly long: an average scene of 100 frames containing 20 horses and riders only took about four to five minutes to simulate.
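
MPC's clip system wasn't demonstrated in code, but the Trax-style blending described above boils down to cross-fading overlapping clips. Below is a deliberately simplified numpy sketch that blends joint positions across an overlap window; a production rig would blend rotations properly rather than raw positions, so treat this purely as an illustration of the layering idea.

```python
import numpy as np

def crossfade_clips(clip_a, clip_b, blend_frames):
    """Cross-fade two pose clips over an overlap window.

    clip_a, clip_b : (frames, joints, 3) arrays of joint positions
    blend_frames   : number of overlapping frames to blend across (>= 1)
    """
    head = clip_a[:-blend_frames]
    tail = clip_b[blend_frames:]

    # Linear weight ramps from clip_a to clip_b across the overlap.
    w = np.linspace(0.0, 1.0, blend_frames)[:, None, None]
    overlap = (1.0 - w) * clip_a[-blend_frames:] + w * clip_b[:blend_frames]

    return np.concatenate([head, overlap, tail], axis=0)
```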

Tom Reed on WWZ.

Next Tom Reed, Global Head of Rigging, took to the stage to talk about the work MPC did on World War Z. The company worked on four main sequences: the Israel attack, the plane attack, the World Health Organization, and the final epilogue. Overall they contributed 432 visual effects shots, ramping up to a maximum of 666 artists and covering 83 zombie crowd shots.

The first job was the modeling and texturing of the zombies. This was done using Maya and Mudbox, with the texturing carried out in Mari, Photoshop, Mudbox and ZBrush. The characters' hair was added using another in-house tool, GROOM, and the look development and rendering was achieved through Pixar’s RenderMan. For rigging, MPC works with a MEL-based scripting setup which has been altered to work more like Python, and the muscle rigging is again done using another proprietary tool, enabling a digital human to be put together in just a few days. Overall MPC produced 870 male zombies, 1000 females and 130 children.

Animators used an in-house system called ‘Blob Rigging’ which enabled them to quickly block out scenes of rampaging zombies before feeding the data into ALICE. They also used fluid dynamics to help control how the zombies should move and when to hand off between MoCap data, PAPI or more traditional keyframe methods (a rough sketch of this kind of field-driven steering follows the clip below). For the cloth animation MPC primarily uses Maya’s nCloth or Syflex, though they are beginning to look at Fabric Engine too.


- Above: see how MPC crafted the digital zombies in WWZ.
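
Returning to the crowd hand-off mentioned before the clip: MPC's implementation wasn't shown, but steering agents with a fluid velocity field is conceptually simple. The hypothetical sketch below advects zombie agents through a 2D flow field and flags the ones moving fast enough that a pipeline like this might hand them over to ragdoll; the threshold and all names are assumptions for illustration only.

```python
import numpy as np

def steer_agents(positions, velocity_field, cell_size, dt, ragdoll_speed=6.0):
    """Advect crowd agents through a 2D fluid velocity field.

    positions      : (N, 2) agent positions in world units
    velocity_field : (H, W, 2) grid of flow velocities
    cell_size      : world-space size of one grid cell
    dt             : timestep in seconds
    Returns the updated positions and a mask of agents moving fast enough
    that they might be handed over to a ragdoll simulation.
    """
    h, w, _ = velocity_field.shape

    # Look up the flow velocity in the cell under each agent.
    ij = np.clip((positions / cell_size).astype(int), 0, [h - 1, w - 1])
    vel = velocity_field[ij[:, 0], ij[:, 1]]

    new_positions = positions + vel * dt
    hand_off_to_ragdoll = np.linalg.norm(vel, axis=1) > ragdoll_speed
    return new_positions, hand_off_to_ragdoll
```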

The Airbus crash sequence was completed in three months, starting with animators blocking out the scene, working out where the trees should go and how they should break. The idea wasn’t for the scene to be physically accurate, but more to “look cool”. The ground dust simulation was done using Scanline VFX’s dynamics software Flowline. Maya fluids were used for the engine smoke and Maya dynamics (rendered as sprites) for the dust coming off the trees. The actual animation of the trees was first done in Maya, then moved to Houdini to add secondary motion, and finally fed back into Maya for the animation of the needles and other details.

The conference continued over the next few days with talks by Jellyfish, Studio AKA, The Mill, Framestore, and Rhythm & Hues. Various manufacturers also attended, such as Next Limit, Autodesk and the creators of HDR Light Studio. During the evenings the festivities continued with screenings of Rush, Finding Nemo, The World's End and the Best of Annecy 2013, plus an art show on the Friday night.

