FMX 2014 got underway in Stuttgart, Germany on Tuesday. Over the years, this has become Europe's premier VFX-related conference and this year is no exception. With over 260 speakers from 22 countries, the event has a wide range of presentations covering feature films, commercials, games, and research. There are also a large number of manufacturer sessions, including presentations from Adobe, Red Giant, Autodesk, and Side Effects.

We recap some of the highlights from the first day, with help from fxphd prof Matt Leonard.

Creating a Universe for Thor: The Dark World
Alex Wuttke, VFX Supervisor, Double Negative

FMX 2014 started strong with an excellent talk by Alex Wuttke (VFX Supervisor) from Double Negative discussing their work on Thor: The Dark World. Double Negative were brought in at the start of pre-production to advise on shoot planning and acquisition. From the start it looked like they would be handling over 900 shots, so it was imperative they got the correct data to streamline the visual effects process. In the end, Wuttke oversaw 920 shots covering a mix of environments, characters and ships. The shots were split into two groups, environments and main sequences, with Wuttke overseeing one side and Pete Bebb (VFX Supervisor) the other. A third unit was set up to act as a hub between the two main visual effects units, making sure everything worked smoothly and continuity was kept consistent.

Principal photography, which was shot anamorphic, was undertaken at both Shepperton Studios and Longcross Film Studios, before plates were handed over to Double Negative in Soho, central London. Overseeing the whole project was of course Marvel Studios, which kept a close eye on production, making sure everything stayed in step with the Thor comic books.

From there Wuttke examined four different aspects of the film. Starting with Asgard's dungeons, where Loki was being held, Double Negative initially received a number of artwork stills showing the overall look and feel of the environment. One of the key aspects was that time had passed since the first Thor movie and everything was now older and more worn. It was important to director Alan Taylor that the movie was very much steeped in reality, and this touched every aspect of the visual effects. For instance, virtual cameras were all based on real-world lens characteristics, and any move such as a dolly or crane shot required that jitter and vibration be added for realism.

The dungeon area itself was lit entirely by firelight and, rendered through Double Negative's new physically plausible pipeline, the final images were stunning. Key to the sequence was Loki's holding cell, which was protected by a futuristic forcefield. The brief was to create an effect that radiated out if the forcefield was touched, and Norse designs were used to keep the technology in keeping with the overall style of the film and its environment. Once the initial designs had been signed off, Wuttke's team worked out just how the forcefield could operate, including proximity sensors, tracers and other effects that came into play when someone came into contact with it. To help tie everything together, anamorphic flares, atmospherics and various depth-cueing effects were added.

The next sequence Wuttke discussed was the transformation of Kurse, a Dark Elf warrior. The transformation took place through heat effects, almost as if the skin was becoming molten lava. Wuttke's team looked at all kinds of reference, including various melting, burning and fusing effects. A reel was assembled of the best material and shown to the director for approval. To achieve the effect, Double Negative tracked a digital double of Kurse to the live action performer, played by Adewale Akinnuoye-Agbaje. Through a combination of 2½D projection mapping and paint, the live action performance was enhanced and in some situations completely replaced by a digital double.

The next sequence covered was the Throne Room in Asgard. A huge set was built in ‘H Stage’ at Shepperton Studios, currently being used for the filming of Into the Woods, and that was then digitally extended by Wuttke's team. As the Throne Room was to be partly destroyed by the arrival of Malekith the Dark Elf, a full 3D version was also built. Using Side Effects' Houdini, full destruction sims were employed when Malekith's ship later crash lands in the Throne Room. A key aspect of the crash was multiple pillars along the length of the room being smashed to pieces. To achieve a realistic simulation, Double Negative built the digital columns from realistic materials: concrete, a softer inner section, and gold supports. Lighting was also important, especially the interplay between the key light and bounce lights. Backlight was also used to emphasize scale and texture.

Double Negative crafted the vistas of Asgard.

The final section Wuttke discussed was the Asgard world itself. Taylor was keen to find a real-world location they could use to film, and eventually an island chain in Norway was chosen. The production team shot 5½ hours of footage on an Arri Alexa, which could be used not only as moving footage but also as stills thanks to the short exposures used to minimize motion blur. Double Negative then tracked the required shots and, using photogrammetry techniques, projected the footage back onto geometry as the starting point for the digital environments.
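For readers unfamiliar with plate projection, the core idea can be sketched in a few lines: once a camera has been solved for a shot, any point on the reconstructed geometry can be projected back into the photography to pick up its colour. The snippet below is a simplified illustration in plain Python/NumPy, with assumed names and camera conventions; it is not Double Negative's pipeline code.

```python
import numpy as np

def project_plate_onto_point(point_world, cam_to_world, focal_px, plate):
    """Project a world-space surface point into a tracked camera and return the plate colour there.

    Given a solved camera, every vertex of the reconstructed geometry can look up its
    colour in the photography -- the core of plate projection. The camera looks down -Z.
    Illustrative sketch only; names and conventions are assumptions.
    """
    h, w = plate.shape[:2]
    world_to_cam = np.linalg.inv(np.asarray(cam_to_world, dtype=float))
    p_cam = world_to_cam @ np.append(np.asarray(point_world, dtype=float), 1.0)
    if p_cam[2] >= 0.0:                                   # point is behind the camera
        return None
    x = focal_px * (p_cam[0] / -p_cam[2]) + w / 2.0
    y = focal_px * (-p_cam[1] / -p_cam[2]) + h / 2.0      # flip Y into raster space
    if 0 <= int(x) < w and 0 <= int(y) < h:
        return plate[int(y), int(x)]
    return None                                           # point falls outside the plate

# Usage: an identity camera at the origin looking down -Z at a solid-colour plate
plate = np.zeros((1080, 1920, 3)); plate[:] = (0.2, 0.4, 0.8)
print(project_plate_onto_point((0.0, 0.0, -5.0), np.eye(4), focal_px=1500.0, plate=plate))
```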

Great attention was paid to building the town and surrounding areas in as much detail as possible. This included taking into account the age of the buildings and their layout, both in the city itself and the surrounding fishing villages, along with how the buildings read from a distance. This became important during the battle sequences, when the dogfight takes place around the city. Because of the amount of complexity, Double Negative used procedural textures whenever possible and fed these into Pixar RenderMan co-shaders.

In the closing minutes of the talk, Wuttke touched on the ship Loki and Thor were piloting. The key to this sequence was that the director didn't want to lose the kinetic energy of the dogfight every time the film cut to the interior of the ship, so a 3D-style holographic map was employed in the cockpit to show what was happening outside. To achieve these graphics, Wuttke's team rendered the digital city using a fisheye lens shader in RenderMan and then created the holographic display using NUKE's 3D system.
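As an aside, the equidistant mapping a fisheye lens shader typically implements is compact enough to sketch. The function below is an illustrative Python version; the names and the 180-degree default are assumptions, not the actual RenderMan shader.

```python
import numpy as np

def fisheye_uv(direction, fov_degrees=180.0):
    """Map a view-space direction to equidistant-fisheye image coordinates in [0, 1].

    The camera looks down -Z, and the image-plane radius is proportional to the
    angle from the optical axis (r ~ theta), which gives a fisheye its
    characteristic all-around coverage. Illustrative sketch only.
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    theta = np.arccos(np.clip(-d[2], -1.0, 1.0))          # angle from the forward axis
    phi = np.arctan2(d[1], d[0])                          # azimuth around the axis
    r = theta / np.radians(fov_degrees / 2.0)             # equidistant projection
    return 0.5 + 0.5 * r * np.cos(phi), 0.5 + 0.5 * r * np.sin(phi)

# Straight ahead lands at the image centre; 90 degrees off-axis lands on the frame edge
print(fisheye_uv((0.0, 0.0, -1.0)))   # -> (0.5, 0.5)
print(fisheye_uv((1.0, 0.0, 0.0)))    # -> (1.0, 0.5)
```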

For more fxguide coverage of Thor: The Dark World, check out:

Feature story: The dark side: behind the VFX of Thor: The Dark World
The vfx show podcast: the vfx show #175: Thor: The Dark World

 

Plume: Digital Pyrotechnics at Industrial Light & Magic
Olivier Maury - Principal R&D Engineer

A multi-session track running throughout FMX this year is 'Lighting and Rendering', and in the second session Olivier Maury from Industrial Light & Magic talked about their in-house GPU-based simulation and volume rendering tool, Plume. Maury started out confessing that ILM still uses a lot of practical pyro effects; however, the limitations of lighting integration and of being tied to the original camera position have caused ILM to seek alternative digital methods over the years.

Olivier Maury, ILM

Maury outlined the key milestones in ILM's digital simulation tools, including shots from Pearl Harbor (2001) - smoke plumes, Terminator 3: Rise of the Machines (2003) - nuclear blast, Indiana Jones and the Kingdom of the Crystal Skull (2008) - nuclear blast, Avatar (2009) - explosions, and Harry Potter and the Half-Blood Prince (2009) - fire tornado.

The Harry Potter fire tornado was created on the GPU (in GLSL), which opened the GPU door for ILM. The team got beautifully detailed fire results, though they were somewhat limited in looks and in camera positioning and moves. Plume was created to leverage that GPU power while being as flexible as possible, much closer to ILM's existing CPU tools. Plume's simulation and rendering cores are written entirely in CUDA.

Plume has become ILM's main pyro tool and has now been used on over 20 movies, including Captain America: The Winter Soldier and Transformers: Age of Extinction.

Plume is entirely GPU based, using the CPU only for I/O, and fits into ILM's proprietary Zeno framework as a Python module running CUDA. Plume uses an Eulerian gas solver on a MAC grid, with sculpting controls for velocity and temperature fields. Although it can produce physically accurate simulations, it is more often than not used in a more art-directable setup, allowing artists to steer the simulations to meet the director's requirements. Sourcing quantities from particles helps the bulk motion of a flow stay the same across different resolutions, since the particles have world-space sizes instead of cell sizes. On one hand, artists can chain Plume grids together to create simulations that travel long distances, and Plume offers tools to do that more easily. On the other hand, they can output deep buffers in order to better integrate multiple elements in compositing, which is what was used on Winter Soldier.
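To make the particle-sourcing idea concrete, here is a minimal sketch in Python/NumPy. It is purely illustrative, not ILM's Plume code, and all names are hypothetical; the point is that because the particles carry world-space radii, the same emitter covers the same world-space region whatever grid resolution the simulation is run at.

```python
import numpy as np

def source_particles_into_grid(grid, origin, cell_size, particles):
    """Splat particle quantities (e.g. density or temperature) into a uniform grid.

    Particles carry world-space radii, so the same emitter covers the same
    world-space region regardless of the grid resolution chosen for the sim.
    Illustrative sketch only -- not ILM's actual Plume sourcing code.
    """
    res = np.array(grid.shape)
    for p in particles:
        pos = np.asarray(p["position"], dtype=float)   # world-space position
        radius = p["radius"]                           # world-space radius
        amount = p["amount"]                           # quantity to deposit
        # Bounding box of the particle in cell coordinates
        lo = np.clip(np.floor((pos - radius - origin) / cell_size).astype(int), 0, res)
        hi = np.clip(np.ceil((pos + radius - origin) / cell_size).astype(int) + 1, 0, res)
        for i in range(lo[0], hi[0]):
            for j in range(lo[1], hi[1]):
                for k in range(lo[2], hi[2]):
                    centre = origin + (np.array([i, j, k]) + 0.5) * cell_size
                    d = np.linalg.norm(centre - pos)
                    if d < radius:
                        w = 1.0 - d / radius           # smooth falloff to the particle edge
                        grid[i, j, k] += amount * w
    return grid

# The same particles sourced into a coarse and a fine grid cover the same world-space region
particles = [{"position": (0.5, 0.2, 0.5), "radius": 0.15, "amount": 1.0}]
coarse = source_particles_into_grid(np.zeros((32, 32, 32)), np.zeros(3), 1.0 / 32, particles)
fine = source_particles_into_grid(np.zeros((128, 128, 128)), np.zeros(3), 1.0 / 128, particles)
```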

Maury went on to state that Plume's renderer uses one ray per thread and can be controlled via color/density ramps. Plume includes a combustion/buoyancy model, and its renderer is physically plausible, often using area and environment lights with multiple scattering and deep shadow maps. It also fully supports motion blur per particle, and final renders can be output as deep image data with holdouts and additional AOVs (Arbitrary Output Variables) if required. To help artists, ILM have produced a number of templates which act as starting points when creating various dust, smoke, explosion and sticky fire effects.
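A minimal sketch of that one-ray-per-thread volume rendering loop, with a color/density ramp, might look like the following. This is an illustrative CPU version in Python (a single ray, nearest-neighbour sampling, assumed names), not ILM's CUDA renderer.

```python
import numpy as np

def sample_density(grid, pos):
    """Nearest-neighbour lookup in a unit-cube volume (a real renderer would filter trilinearly)."""
    res = np.array(grid.shape)
    idx = np.floor(pos * res).astype(int)
    if np.any(idx < 0) or np.any(idx >= res):
        return 0.0
    return grid[tuple(idx)]

def march_ray(density_grid, ray_origin, ray_dir, step, colour_ramp, density_scale=5.0, max_steps=512):
    """March a single ray front-to-back through a density volume, accumulating colour and opacity.

    In Plume each GPU thread handles one ray; here one ray is marched on the CPU, and a
    ramp maps density to emission colour, mimicking the artist-facing color/density ramps
    mentioned in the talk. Illustrative sketch only.
    """
    colour = np.zeros(3)
    transmittance = 1.0
    pos = np.asarray(ray_origin, dtype=float)
    direction = np.asarray(ray_dir, dtype=float)
    direction /= np.linalg.norm(direction)
    for _ in range(max_steps):
        dens = sample_density(density_grid, pos)
        if dens > 0.0:
            alpha = 1.0 - np.exp(-dens * density_scale * step)   # opacity of this segment
            colour += transmittance * alpha * np.asarray(colour_ramp(dens))
            transmittance *= 1.0 - alpha
            if transmittance < 1e-3:                              # early ray termination
                break
        pos += direction * step
    return colour, 1.0 - transmittance

# Example: a constant-density blob rendered with a simple fire-like ramp
ramp = lambda d: (1.0, 0.45 * d, 0.1 * d * d)
volume = np.zeros((64, 64, 64)); volume[16:48, 16:48, 16:48] = 1.0
print(march_ray(volume, ray_origin=(0.5, 0.5, -0.5), ray_dir=(0, 0, 1), step=0.01, colour_ramp=ramp))
```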

To process the vast amounts of data, ILM has a dedicated GPU render farm utilising 128 Quadro FX 5800 cards, each with 4GB of RAM, housed in NVIDIA Quadro Plex 2200 S4 units. This is controlled via ILM's proprietary batch-scheduling program ObaQ, which also runs their CPU-based farm.

Maury finished off his presentation with a few stats. Transformers: Dark of the Moon ran over 13,000 simulations during the show and used 50-60 Plume elements in some shots. Today a 65 million cell sim rendered out of Plume at 2K takes only 70 minutes for 90 frames on their current hardware setup. In addition to Plume, ILM also use Houdini and, in the past, Naiad, which has now been rolled into Maya 2015 under the new banner of Bifröst.

For more fxguide coverage of Plume and Zeno, check out:

Feature story: ILM's scientific solutions

 

Interactive GPU Ray Tracing for Physically Based Lighting Preview
Jean-Daniel "Danny" Nahmias, PhD - Technical Director

Following on from Olivier Maury, Jean-Daniel Nahmias from Pixar discussed their GPU-based interactive lighting preview tool, which runs inside The Foundry's KATANA and is built on NVIDIA's OptiX framework.

Jean-Daniel Nahmias, Pixar

Nahmias began by discussing the way Pixar used to light scenes using hundreds or even thousands of lights, each hand-placed by an artist. Today, RenderMan supports physically plausible lighting and rendering, which means lights are now energy conserving and shaders respond physically correctly within the scene.

This means that today Pixar can light their scenes using only a dozen lights instead of hundreds. This new physically plausible setup was first used in the animated feature Monsters University and the beautiful hyper-real short The Blue Umbrella.
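To illustrate what 'energy conserving' means in practice, here is a minimal sketch, assuming a simple Lambertian surface and a physical point light. It is not Pixar's code, just the standard textbook formulation: the BRDF divides the albedo by pi so the surface cannot reflect more light than it receives, and the light falls off with the square of distance rather than an artist-chosen curve.

```python
import numpy as np

def lambert_brdf(albedo):
    """Energy-conserving Lambertian BRDF: dividing by pi guarantees the surface
    never reflects more energy than it receives."""
    return np.asarray(albedo, dtype=float) / np.pi

def shade_point_light(albedo, normal, point, light_pos, light_intensity):
    """Direct lighting from a physical point light with inverse-square falloff,
    rather than an arbitrary artist-set decay curve. Illustrative sketch only."""
    to_light = np.asarray(light_pos, dtype=float) - np.asarray(point, dtype=float)
    dist_sq = float(np.dot(to_light, to_light))
    wi = to_light / np.sqrt(dist_sq)
    cos_theta = max(float(np.dot(np.asarray(normal, dtype=float), wi)), 0.0)
    irradiance = np.asarray(light_intensity, dtype=float) / dist_sq   # physical falloff
    return lambert_brdf(albedo) * irradiance * cos_theta

# A grey surface lit by a single light two units away
print(shade_point_light(albedo=(0.5, 0.5, 0.5), normal=(0, 1, 0),
                        point=(0, 0, 0), light_pos=(0, 2, 0), light_intensity=(10, 10, 10)))
```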

From there, the main part of Nahmias' presentation moved to demonstrating the integration of their lighting tool within KATANA. Using production assets, Pixar leveraged the power of the GPU to render scenes from Monsters University with the same illumination model as the final CPU renders coming out of RenderMan. Pixar often uses co-shaders, a modular way to complement other shaders, and these needed to be rewritten to take NVIDIA's OptiX framework into consideration.

After initial tests were done with hard-surface models, Pixar went on to add hair and fur functionality and then sub-surface scattering. One limitation is that the system currently doesn't support motion blur. Going forward, however, Pixar hopes to add bi-directional path tracing shaders (RenderMan 19) and the ability to share code between the CPU and GPU render systems.

In case you missed it, fxguide had coverage of a similar Pixar presentation at the NVIDIA GPU Technology Conference earlier this year; here's a recording of that talk. The total presentation is about an hour, with the real-time lighting research segment beginning about 27:00 in.

 

From Previs to the Final Image - Deconstructing Captain America: The Winter Soldier
Ron Frankel, President, Proof
Russell Earl, Visual Effects Supervisor

For this presentation Ron Frankel and Russell Earl jointly presented a session on Captain America: The Winter Soldier. First off, Frankel took to the stage to discuss the role Proof had played in bringing this Avengers hero to the screen. At the time this was the biggest movie Proof had undertaken, and they faced some interesting challenges along the way.

Marvel, the production company, requested that all pre-visualisation be done in black and white. This was because Marvel's storyboards weren't in colour and they felt the previz would be jarring when cut into the edit alongside the boards. One idea Proof had was to render and composite everything in colour and then send a desaturated version to Marvel. However, on testing this theory they found it didn't produce a pleasing image, so other ideas were tried. What transpired in the end was a system using Proof's in-house toon shader where most of the image was very desaturated but certain key areas, such as Captain America's shield, were kept in full colour.
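The selective-desaturation trick itself is simple to sketch. The snippet below is an illustrative Python/NumPy version, assuming a matte for the elements to keep in colour; Proof's actual look came from their in-house toon shader.

```python
import numpy as np

def selective_desaturate(rgb, hold_mask, saturation=0.1):
    """Desaturate an image everywhere except where `hold_mask` is 1
    (a key element such as the shield would be matted into the mask).

    `rgb` is a float image of shape (H, W, 3); `hold_mask` is (H, W) in [0, 1].
    Illustrative only -- not Proof's in-house toon shader.
    """
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])      # Rec.709 luminance
    grey = np.repeat(luma[..., None], 3, axis=-1)
    desat = grey + saturation * (rgb - grey)             # mostly grey, a hint of colour
    hold = hold_mask[..., None]
    return hold * rgb + (1.0 - hold) * desat
```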

Proof's Ron Frankel

Another interesting twist was that Marvel requested Proof begin with the previz of the third act of the movie. Initially there was a lot of discussion about whether to use motion capture for the bulk of the animation, but it was decided to use a completely keyframed approach, utilizing the skills of Proof's character animation team. The approach was to break the sequences down not into shots but into what Proof described as 'actions'. These could vary in length but formed coherent, continuous sequences within the movie. Once all the animation was done (in Autodesk's Maya), the team would then go in, place cameras in the correct positions and render out the action in time blocks, in effect creating the individual shots.

Previs frame from Proof

To aid in the setup, Frankel and his team created 'action maps', which were overhead diagrams of the entire set. Here they would add camera positions, props, timing data and so on, so the on-set crew would know where to place everything, how far any given action might travel (a jump or chase, for example), and the likely overall timings.

Proof worked closely with the stunt team, working out things like how high ramps should be and whether what they were imagining was safe or even possible. In the final act of the movie Proof had a single goal: 'destroy the bad guys, and the Helicarrier!' The two big challenges, however, were firstly trying to get across the sheer scale of the ships and their surroundings, and secondly how to break up the action but still enable the audience to understand what's going on. On top of all this, the entire sequence is 25 minutes long.

Proof also worked on the post-visualisation of the movie. This involves adding animation on top of the live action plates filmed during main production. It was a very important tool for the filmmakers, and especially the editor, as it let them see the final look of the movie more fully before the work of the main effects houses, led by Industrial Light & Magic, was added.

The whole style of the film was to look like a '70s thriller, such as The French Connection or Bullitt, and Marvel were very keen that there was massive object destruction but not massive civilian casualties. Proof were on the project for 17 months and grew from a small team of three to fifteen people over the course of production. Previz took up 60% of their time on the film, producing around 2,250 shots. Postviz made up the remaining 40%, producing 1,613 shots at 20-30 shots a day.

Following Frankel, Russell Earl from ILM took to the stage to briefly discuss the work ILM had done on the movie, including the pre-shot setups, the reconstruction of the Washington DC area, and various other aspects of the visual effects.

For more fxguide coverage of Captain America, check out:

fxguidetv:  Episode #188: Captain America: The Winter Soldier
quicktake: The Winter Soldier: ILM’s Helicarrier crash: Exclusive

