In this special fxinsider article, we take a look at the visual effects behind two separate scenes from Jonathan Liebesman’s Wrath of the Titans – Framestore’s enormous modeling work for the Labyrinth sequence and Nvizible’s intricate all-CG shot showing the armillary sphere.
Click here to read fxguide’s coverage of the creature effects work in the film.
The Labyrinth
The Labyrinth is a cavernous and ever-changing structure encountered by Perseus (Sam Worthington) and his followers as they search for weapons to combine to kill Kronos in the film. Inside, Perseus is separated from the group and must defeat a Minotaur, while also having to deal with shifting stone walls. Framestore, under visual effects supervisor Jonathan Fawkner, constructed the ‘smorgasbord of bits and pieces’ required to build up the passageways and doors of the Labyrinth.

“It’s essentially a central core, then a gap and another column – and they’re on the inside,” explains Fawkner. “There are doors up on both sides of it. So it’s a bit like the Colosseum in Rome, only 300 times as big. So all the doors had to look different and some had bridges going to them. I think we cracked the design when we started building in a modular fashion. What we would do was come up with a menu of different designs of hallways – some big, some small, some human-sized – and ginormous columns and plinths. Then we had a small team of people throwing these Lego bricks together and making variations.”
Various parts were built procedurally, but the complicated geometry made manipulating the elements in Maya particularly challenging. “Because it was being procedurally built,” notes CG supervisor Mark Wilson, “it was a slightly different set-up to our usual sets. We built the Labyrinth in rings and exported those as particles, then instanced the geometry onto those particles. That meant we could block out a shot that way. Once we’d done that, the modeling department worked on specific doorways and components and then they were procedurally assembled and replicated around the whole Labyrinth to build the entire world inside. We didn’t have a hero model asset – it was all about making components and sections and replicating those. We had internal tools to handle the instancing and convert our layouts into particles, which then fed into the instancer. That fed through to Arnold to instance all the geometry at render time.”
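Framestore’s layout and instancing tools are proprietary, but the underlying idea – store a layout as lightweight particles (a position, an orientation and a component ID) and expand the heavy modular geometry onto them only when needed – can be sketched roughly as follows. All names and numbers here are illustrative, not Framestore’s actual pipeline.

```python
import math
from dataclasses import dataclass

@dataclass
class Particle:
    position: tuple      # (x, y, z) world-space anchor
    y_rotation: float    # radians, spin around the vertical axis
    component: str       # which modular piece this particle stands in for

# A tiny library of modular components, each stored once as local-space points.
COMPONENTS = {
    "column": [(0.0, 0.0, 0.0), (0.0, 5.0, 0.0)],
    "door_small": [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)],
}

def instance(particles):
    """Expand each particle into a transformed copy of its component's points."""
    out = []
    for p in particles:
        c, s = math.cos(p.y_rotation), math.sin(p.y_rotation)
        px, py, pz = p.position
        for x, y, z in COMPONENTS[p.component]:
            # rotate about Y, then translate to the particle's anchor point
            out.append((c * x + s * z + px, y + py, -s * x + c * z + pz))
    return out

# One "ring" of the Labyrinth: the same small door replicated around a circle,
# each copy turned to face the centre.
ring = [Particle((10 * math.cos(a), 0.0, 10 * math.sin(a)), -a, "door_small")
        for a in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
points = instance(ring)
print(len(points))  # 3 points per door x 4 doors = 12
```

The payoff of this layout is that the particle file stays tiny however detailed the components become, which is what lets a renderer like Arnold expand the full geometry only at render time.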
Framestore chose the Arnold renderer as, at the time, it had instancing built-in (although the studio still used RenderMan for its work on the Cyclopes creatures in the film). “But we could mimic our RenderMan pipeline – the way we build our models and do textures could completely switch into Arnold – and we could run the two renderers together,” says Wilson.
Click here to read fxguide’s Art of Rendering article, which discusses the Arnold and RenderMan renderers in detail.
Parts of the Labyrinth whizz past and interact with the characters – who also had to peer out at its vastness – which Fawkner wanted to make as convincing as possible since those scenes would be filmed against greenscreen. “The thing that worries me about filming against greenscreen,” he says, “is that a DP and gaffer can just chuck lights out there and say, ‘Well it needs to be a bit soft out there…’ and then in the end it’s up to us to conceive of how it should look.”
“This time ’round,” continues Fawkner, “because I knew we were raytracing and using a physically based shading model, I had them body track all of the performances so we had digital versions of the actors. Then we would light the set, and only when the digital models – dummies with no features, just gray shaded humans – only when they looked like they were lit correctly in the plate, then we knew we had a fighting chance of making them sit in. It was a very satisfying experience to then drop the plate over it. Our digital dummies bounced light on the walls, cast shadows on the walls, they really interacted with their environment. When you’ve got a CG environment interacting with the plate, you’ve got a more viable result.”
– Framestore’s visual effects for the Labyrinth are featured in this Wrath of the Titans featurette on the Minotaur character.
To help light the scenes, Framestore invited the film’s gaffer to the studio to explain how the live action had been shot. “He looked at our shots, and we asked him how he would light a football-field-sized area,” says Fawkner. “He said you would light things across the surfaces to emphasize them, you’d wet them down. He would blast some surfaces and use blast lights – all physically based ways of lighting surfaces, which we could then achieve.”
In one particular scene, the camera tracks around Perseus as the Labyrinth changes around him. “We had Sam Worthington standing on a green turntable and the camera is on a crane that pulls up and back and Perseus rotates,” explains Fawkner. “So what you get is a wide rotating pull-back once you’ve keyed Sam, onto which we had to put the entire environment. The trick to that shot was that the very first frame needs to look like the same set he has just been in. We had to light it and texture and shade it such that it was indistinguishable from two shots earlier. Thanks to Arnold’s high quality atmospheric volumetric lighting, it exactly follows the cone of the lights you’re putting in.”
“That’s the first frame – and then the whole of the environment has to break apart and re-jig itself – the idea is that this Labyrinth constantly moves. It reveals a vast internal cavern that’s four miles wide and loads of little bits of Labyrinth are all moving around and changing. Columns are falling over and crumbling. Another thing we did was pass some light over the actor, and because you’ve got the gray dummy as a proxy it allows the compositor to animate light over the top – it helps tie the two together.”
A late addition to the Labyrinth sequence required Framestore to construct an elaborate door leading into the structure. Fawkner recalls how the shots came together:
“It was decided they needed a bit more drama at the top,” recalls Fawkner, “so the door itself was made more complicated. It’s now like a Rubik’s cube where the stones need to be moved up and across to open it.
So what happened was we all showed up at Longcross Studios in Surrey and they had a great big greenscreen with mechanical green blobs on it you can move around – we replaced the greenscreen with this door, all in close proximity and contact with the actors. We used photogrammetry and scanned the set, took highly detailed reference, silver balls – now that we are in a physically based world – which were essential. We body-tracked Bill Nighy’s character and when you pressed render it really paid off to get all of the detail from off camera to influence what was in-camera.
When we put it in front of the director, we’ve got a massive Labyrinth, three giant Cyclopes – but his favorite effect is still the bloody door! His point is – the other things are visual effects – but no one wanted the door to look like a visual effect, it had to be seamless and it was.”
The armillary sphere
For a key scene in which the fallen god Hephaestus explains to Perseus how to enter the Underworld and save his father Zeus, Nvizible created an entirely CG depiction of the ‘armillary sphere’. Overall, the studio (a sister company to previs outfit Nvizage, which also worked on the film) crafted 69 visual effects shots, jointly supervised by Martin Chamney and Matt Kasmir. The shots included greenscreen comps for scenes of characters entering Tartarus and the augmentation of weapons with incandescent molten metal treatment, dripping metal and sparks in the final battle. Below, Martin Chamney breaks the sphere sequence down for fxguide:

“The sequence begins with a wide prelude shot where a huge armillary sphere, containing a mechanised structure of the earth, has been set in to motion by Hephaestus. Perseus, Andromeda and Hephaestus leap out of the way as the sphere breaks open like segments of an orange, in a cloud of dust.
The hero shot, which extends the live-action arcing camera move around the sphere, is further synthesized in CG from the tracked plate as we journey towards the pyramidal model of Tartarus at the centre of the mechanism. The inner support arms of the sphere spring open and the field of view reduces to a macro level of detail as the camera translates inside the structure along a maze, symbolizing the Labyrinth and journey to the underworld. The original previs by Nvizage had a duration of 20 seconds, but a decision to continue the camera move further inside Tartarus extended the timing of the shot to nearly a minute.
Prior to the shoot, Nvizible embarked on a comprehensive texture photo shoot of both the prop and the whole surrounding set of the forge. For the actual shot, the original full-size, 6ft-diameter prop of the globe was removed, we acquired the standard HDR images, and various tracking markers were placed where needed.
The physical prop of the sphere was fully reconstructed as a CG element, to facilitate all the animation possibilities, which would have been difficult to achieve with practical fx combined with the impossible camera move. Andreas Graichen took on the challenge of building the model in Maya, based on lidar data. The prop was highly detailed with over 200 continental parts, all organized in a plan layout, before being deformed back to the correct position. The inner workings of the mechanism were designed and created from early construction plans, as no practical model existed. For the interior of Tartarus, Oliver Cubbage simplified the rocky structure to look like a hand crafted miniature, with a tiny rocky bridge that leads the camera through to the encased Kronos and the sacrificial pillars. Rigging for animation was devised to control the opening segments of the surface continents and the weighty pendulum style of motion.
CG supervisor Stefan Gersthemier worked in Mari, creating 20 x 4K texture tiles for the colour, while ptex vector displacements were built in Mudbox. This combination generated sufficient image resolution to survive the definition required for the camera’s motion. Shader look development and image-based lighting from HDR photos provided the foundations for photorealistic rendering in RenderMan.
The shot was planned as fully native stereo, with the intention to control the dimensionalization of the live-action part in conjunction with the stereo settings for the CG part. During post-production we experimented with convergence and inter-axial at various waypoints in the camera move. We wanted to craft the best stereo aesthetics in conjunction with changing shallow depth of field, to ensure the experience of moving through a tiny space felt realistic.
In Nuke, Adam Rowland reconstructed the whole set, simplifying the environment from the lidar data, and textured all of the background and set dressing from our photo reference using a combination of an automated camera projection solver in Nuke and plate re-projection techniques. The data was then exported to Mari for further patching and painting, before being exported back to Nuke as a final set of UV tiles for final projection. The actors were rotoscoped and projected back onto deformed geometry meshes and augmented at the correct depth in the scene. The final composite combined all of these elements in Nuke, with all the rendered passes from CG, for both camera eyes. The finishing touches included volumetric shards of light and fine dust elements generated with Maya fluids and particle fx work.”
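The plate re-projection Chamney describes rests on a simple idea: push a point on the reconstructed 3D set through the tracked shot camera into plate pixel coordinates, and sample the plate colour there to texture the geometry. The following is a minimal sketch of that projection step only – a toy pinhole camera and a 4x4 “plate”, not Nuke’s actual solver or API.

```python
# Illustrative sketch: project a camera-space 3D point into plate pixels,
# then sample the plate colour at that pixel to texture reconstructed geometry.

def project(point, focal, width, height):
    """Pinhole projection of a camera-space point to integer pixel coordinates."""
    x, y, z = point
    if z <= 0:
        return None                      # behind the camera: nothing to sample
    u = width / 2 + focal * x / z        # perspective divide onto the image plane
    v = height / 2 - focal * y / z       # image y runs downward
    if 0 <= u < width and 0 <= v < height:
        return int(u), int(v)
    return None                          # projects outside the plate

# A toy 4x4 "plate": each pixel holds a single grey value.
plate = [[r * 4 + c for c in range(4)] for r in range(4)]

def sample(point, focal=2.0, width=4, height=4):
    """Colour the plate projection assigns to a 3D point, or None if unseen."""
    uv = project(point, focal, width, height)
    return None if uv is None else plate[uv[1]][uv[0]]

print(sample((0.0, 0.0, 1.0)))   # dead ahead -> centre pixel -> 10
print(sample((0.0, 0.0, -1.0)))  # behind the camera -> None
```

Points the shot camera never saw come back `None`, which is exactly why the projected textures then went to Mari “for further patching and painting” before the final projection.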
All images copyright 2012 Warner Bros. Pictures. All rights reserved.