Immortals: Q&A with VFX sup. Raymond Gieringer

For Immortals, director Tarsem Singh called on visual effects supervisor Raymond Gieringer to oversee a wide range of environments, battles, natural phenomena and ‘blood and guts’ action. In this fxinsider interview, Gieringer delves into the planning stages, and the on-set virtual set and live-viz systems used to help create the visual effects.

fxg: A lot of this film takes place in stylized environments. What approach did you have to planning the effects work, both in terms of say previs, and also working out what could be achieved with practical sets and effects, and what would need to be achieved digitally?

Gieringer: I realized as soon as I was brought on that I had a certain mandate to achieve, because we had so many greenscreen environments to build out. There were a few principles we had to follow in order to accomplish it. The main one, and my recommendation to production, was: if possible, let’s build out as much of a practical set as we can, so that to a certain extent we can shoot into that set and not have any visual effects whatsoever. The idea was we would establish environments with wide shots – extreme wides. Then we would shoot off the sets to establish environs as well. But a good part of the film was going to be covered by shooting on the built sets with no visual effects at all. That was the rule and mandate to proceed with – ‘containing the action’ was what we termed it.

Watch a visual effects featurette on ‘Immortals’.

fxg: Can you talk about the design phase for the visual effects?

Gieringer: In pre-production or prep, we wanted to know as much about the environments as possible before we got to the stage of shooting them. So to that end, during prep and for about a month into the shoot, I assembled an in-house team that consisted of previs, matte painting and concept design artists. That team worked hand-in-hand with the art department, and we basically fleshed out their initial designs and illustrations to have not only a sense of what each environment would look like but also how they would tie together in the larger context of the world. In my mind that was an essential step before previs, because we were doing hand-in-hand design work and taking it to the next level.

Original plate.
Final shot.

fxg: How did each facility’s contributions break down?

Gieringer: Image Engine created three large environments, including the cliff location, and a lot of other shots, including blood and gore and some of the more complicated roto and matte work on weaponry.

Scanline was responsible for the implementation and design of four environments, plus the interior and exterior destruction at the end of the film, as well as the tsunami sequence – a lot of water, destruction and environment work.

Tippett Studio worked on the design and finished look for the God and Titans battle at the finale of the film.

Rodeo FX were initially involved in some early design and concepts, then ultimately responsible for six or seven keystone environments throughout the film. Of all the vendors, Rodeo had the lion’s share of the environments.

BarXseven were responsible for Ares’ rampage – the blood sims and the gore. There were five or six shots in that sequence, and they did a lot of other smaller shots as well.

Modus FX created Phaedra’s main vision, a fairly complicated set of three shots in a row. They also did a bunch of digital armies and workers in conjunction with Rodeo.

Christov Effects were part of the team I assembled to help flesh out the designs for the environments.

Prime Focus completed a transformation from old Zeus to young Zeus, which was a complicated set of imagery that we needed to piece together, as well as some muscle and armor enhancements to Zeus in the finale, plus other shots.

fxg: What approach did you take to filming – what kind of greenscreen work, virtual sets and on-set compositing was involved?

Gieringer: We had two main tools. During prep we had a system called InterSense, which was basically a live viewer that let the director walk the virtual set and see the virtual environments around it. How it works is this: we would get a model from the art department for one of the sets they were going to build. We would take that set and build out the environment around it – an exploration that combined the design phase and previs. Say we had a set for the village: we built that out with other sets around it, whether they were 500 yards or a mile off, and we’d do some basic preliminary texturing.

That virtual model would then be converted and brought into InterSense. The system was based on MotionBuilder and Maya – really a combination of motion capture tracking and MotionBuilder. It allowed you, with a handheld viewer like a Wacom tablet, to walk around not only the set that the art department would ultimately build, but also see the environment around it. We programmed a set of lenses specific to Tarsem’s preferences – what he usually uses – and we would spend one-to-three-hour sessions walking the virtual sets well before they were ever built. It gave Tarsem the ability to move around each environment in real time and pick his angles. He could do his set walks – what he would normally do with a camera or a lens on a stick – before the set was even built, so he got a real sense of what would be built. That gave us a library of keystone shots per set, which might change on the day, but he came to set knowing what he was going to see. We also developed a snapshot system that captured the different views, POVs and lenses Tarsem liked, and we would have that on the day on set.
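As a rough illustration of the kind of bookkeeping such a session produces, here is a minimal Python sketch of a lens-preset and snapshot library. Every name, value and data layout below is an assumption for illustration only – the actual InterSense toolset was built on MotionBuilder and Maya, not anything like this.

```python
import math
from dataclasses import dataclass

@dataclass
class LensPreset:
    """A virtual lens standing in for one of the director's preferred primes."""
    focal_mm: float               # focal length in millimetres
    filmback_mm: float = 24.89    # horizontal film-back width (illustrative)

    def horizontal_fov_deg(self) -> float:
        # Standard pinhole relation: FOV = 2 * atan(filmback / (2 * focal))
        return math.degrees(2 * math.atan(self.filmback_mm / (2 * self.focal_mm)))

@dataclass
class Snapshot:
    """One bookmarked view from a virtual set walk."""
    set_name: str
    position: tuple        # (x, y, z) camera position in set units
    rotation: tuple        # (pan, tilt, roll) in degrees
    lens: LensPreset
    note: str = ""

library: dict[str, list[Snapshot]] = {}   # keystone views, keyed per set

def bookmark(shot: Snapshot) -> None:
    """File a snapshot under its set so it can be pulled up on the shoot day."""
    library.setdefault(shot.set_name, []).append(shot)

# Hypothetical example: a wide bookmarked during a walk of the unbuilt village set.
bookmark(Snapshot("village", (12.0, 1.7, -40.0), (35.0, -2.0, 0.0),
                  LensPreset(focal_mm=27), note="wide establishing toward cliffs"))
print(library["village"][0].lens.horizontal_fov_deg())
```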

Then during the shoot, we had a live-viz greenscreen system that we developed literally a month before we went into filming. Basically it gave us the capacity, in a very rudimentary but very useful way, to see exactly where the real camera was versus the virtual camera in our model. Tarsem could look at the live-action feed and see in the background an oasis of villages, or the King’s army encampment and the horizon, et cetera. So we got around the greenscreen issue on set by knowing what we were going to see, and then we could look out onto the environments with very little slow-down to the production process. We could only move things around in a down-and-dirty way, but it was very useful.

fxg: How did the live-viz system work?

Gieringer: The way that worked on set was that all the camera lenses were charted and graphed, including the zooms. We would go into a set a few days before we shot there. The main methodology was to find a zero point on that set and the same relevant zero point on our virtual set. Once we had those two pieces of information, we basically tracked the movement of the dolly, crane or camera relative to that zero point, since the two matched. The software did the interpolation and figured out the XYZ of where we were in space. All the sets existed in virtual form – very clean and very detailed. The real sets were also LIDAR’d and photographically surveyed so they could be rebuilt as necessary. Then we had the typical documentation in terms of cameras, and we also shot HDRIs. So it was a combination of all the things we would normally use, plus a virtual survey of each set. Ultimately we knew where each set would have to live in space for our establishing shots.
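The zero-point idea can be sketched as a simple change of coordinate frames: the camera position tracked relative to the physical zero point is re-expressed relative to the matching zero point in the virtual set. The Python sketch below is an assumption about the math involved, not the production software; the helper function, names and numbers are all hypothetical.

```python
import numpy as np

def zero_point_frame(origin, yaw_deg):
    """4x4 transform for a surveyed zero point: a position plus a heading
    (rotation about the vertical Y axis)."""
    y = np.radians(yaw_deg)
    c, s = np.cos(y), np.sin(y)
    return np.array([[c, 0, s, origin[0]],
                     [0, 1, 0, origin[1]],
                     [-s, 0, c, origin[2]],
                     [0, 0, 0, 1.0]])

def real_to_virtual(cam_pos, zero_real, zero_virtual):
    """Re-express a tracked stage-space camera position in virtual-set space
    by going through the shared zero point: stage -> local -> virtual."""
    local = np.linalg.inv(zero_real) @ np.append(cam_pos, 1.0)
    return (zero_virtual @ local)[:3]

# Hypothetical survey: the same zero point as measured on the stage
# and as placed in the virtual set model.
zero_real = zero_point_frame(origin=(3.2, 0.0, -1.5), yaw_deg=12.0)
zero_virtual = zero_point_frame(origin=(150.0, 8.0, 40.0), yaw_deg=90.0)

# A dolly position reported by the tracker, pushed into virtual-set space.
print(real_to_virtual(np.array([5.0, 1.8, -3.0]), zero_real, zero_virtual))
```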

fxg: There’s one particular scene of Poseidon jumping from Mt Olympus and creating a tidal wave. Can you talk about designing that shot?

Gieringer: Beyond the live-viz system we also had a designated compositor who would do a lot of on-set comps, and he and I would do a lot of on-set previs’ing to try and figure out the ins and outs. Tarsem liked the jump down to the drilling rig in Star Trek, and we used that as inspiration. One of the most difficult things about skydiving sequences is that you don’t have anything juxtaposed in relative terms – you need something nearby to show that the character is moving fast. So we wanted to add texture around the actor or double to give it a sense of scale and speed.

Watch the ‘Poseidon Jumps’ clip from ‘Immortals’.

So we previs’d the sequence. From the previs we realized we needed two or three close-ups of our actor, which would be done with rigging, greenscreen and fans. Then the actual shot of Poseidon jumping off Mt Olympus was filmed on set, and went into our first fall following him into the clouds – a CG take-over handled by Scanline. Then we were down below him going through the clouds, and we see the rock around him to help sell the sense of speed. Then we cut to the live-action close-ups of the actor. We tried to put in as many chromatic aberrations as we could, in terms of lensing, along with atmospheric hits. There was also a medium-wide shot of him falling, a digi-double, in between the close-up cuts.
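The ‘nothing juxtaposed’ problem comes down to simple parallax arithmetic: passing scenery sweeps across the view at an angular rate of roughly speed divided by distance, so only close texture reads as fast motion. A back-of-the-envelope sketch, with purely illustrative numbers:

```python
import math

def apparent_deg_per_sec(fall_speed_mps: float, distance_m: float) -> float:
    """Approximate angular rate at which passing scenery crosses the view:
    omega = v / d (small-angle approximation), converted to degrees/sec."""
    return math.degrees(fall_speed_mps / distance_m)

fall_speed = 55.0  # roughly terminal velocity, m/s (illustrative)
for d in (5.0, 50.0, 500.0):
    print(f"scenery {d:>5.0f} m away sweeps past at "
          f"{apparent_deg_per_sec(fall_speed, d):7.1f} deg/s")
# Rock 5 m away streaks across the frame; a cliff face 500 m away barely
# moves -- which is why texture near the falling figure sells the speed.
```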

Once he hits the water, we developed with Scanline the look of the wake – the initial wave that would become the tsunami. The water was also not just water in the traditional sense – it was supposed to be an oily, viscous substance. That made it more interesting, but it also meant extra development by Scanline, since it differed from any water they had done before.
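Why higher viscosity reads as ‘oily’ shows up in the viscosity term of the fluid equations, which diffuses velocity and damps small splashes. The toy grid step below only illustrates that behaviour; Scanline’s actual solver is proprietary and far more sophisticated.

```python
import numpy as np

def diffuse_velocity(u, viscosity, dt=0.04, dx=0.1):
    """One explicit step of the Navier-Stokes viscosity term,
    du/dt = nu * laplacian(u), on one velocity component of a periodic grid."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    return u + dt * viscosity * lap

rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))              # noisy, splashy velocity field
water_like = diffuse_velocity(u, viscosity=1e-6)
oil_like = diffuse_velocity(u, viscosity=1e-2)
# Higher viscosity smooths out fine detail faster, suppressing small droplets
# and giving the thick, coherent sheets of an oily liquid.
print(np.std(water_like), np.std(oil_like))
```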

fxg: With the stylized look, can you talk about how the vfx fit into the colorspace of the Immortals world?

Gieringer: The grading was an evolution over the course of the show, but we definitely had an early look to match to. The mandate was to really live within our Renaissance inspiration – the lights and darks of Renaissance painting, Caravaggio in particular. A lot of the newer generation of films take comic books and extrapolate from those, but ours was to take classical painting – an era and a look – as the point of departure for the evolution of our environments. And our environments were always meant to be hyper-real; we were never going for a photo-real look. That dictated where we went with colorspace and look over the course of the film.
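As a purely illustrative aside, that chiaroscuro feel – crushed blacks, darkened midtones, protected highlights – is the kind of look a simple lift/gamma/gain adjustment gestures at. Real DI grading runs through calibrated LUTs on a managed pipeline, not a formula like this; every number below is made up.

```python
import numpy as np

def painterly_grade(rgb, lift=-0.05, gain=1.05, gamma=1.3):
    """Crude look sketch on linear RGB in [0,1]: negative lift deepens blacks,
    out**gamma with gamma > 1 darkens midtones, slight gain holds highlights."""
    out = np.clip(rgb * gain + lift, 0.0, 1.0)
    return out ** gamma

plate = np.random.default_rng(1).random((4, 4, 3))   # stand-in for a plate
graded = painterly_grade(plate)
print(plate.mean(), graded.mean())   # midtones pushed darker overall
```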

fxg: How did the stereo nature of the film impact on the vfx work?

Gieringer: We shot a few scenes with a 3D camera rig, but for the most part it was a 2D-to-3D conversion. In that way it didn’t impact the visual effects too greatly, because we were always trying to make the most beautiful images possible. Having said that, we certainly had many conversations over the course of the show with the conversion team about what we could do to make things better. For instance, simple things: where Theseus and Hyperion are relatively close and pulling back a bow, with an arrow appearing, sometimes we had that arrow breaking the frame. We would go back and move the arrow into frame so that in 3D it would come out into the audience rather than breaking the frame, to get the most 3D impact. We also hooked up the conversion team with all the vendors to map out what elements they needed to provide to make the stereo as beautiful as possible.
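The arrow note describes what stereographers call a stereo-window (edge) violation: an element with negative parallax – floating out in front of the screen plane – that also touches the frame edge, which breaks the out-of-screen illusion. A hedged sketch of such a check follows; the function, coordinate convention and example values are assumptions, not the conversion team’s tools.

```python
def violates_stereo_window(left_x: float, right_x: float,
                           frame_left: float = 0.0,
                           frame_right: float = 1.0) -> bool:
    """left_x / right_x: the element's horizontal position (normalized 0-1)
    in the left-eye and right-eye images. For an element in front of the
    screen (negative parallax), the left-eye image sits to the RIGHT of the
    right-eye image (crossed disparity)."""
    negative_parallax = left_x > right_x          # element floats off the screen
    touches_edge = (min(left_x, right_x) <= frame_left or
                    max(left_x, right_x) >= frame_right)
    return negative_parallax and touches_edge

# Hypothetical arrow tip poking out of the screen AND breaking frame left:
# flagged, so the shot would be revisited and the arrow moved into frame.
print(violates_stereo_window(left_x=0.02, right_x=0.0))   # True -> reframe
```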

All images and clips copyright © 2011 Relativity Media.