For Martin Campbell’s Green Lantern, Rising Sun Pictures created the visual effects for the film’s dogfight sequence between two F-35 jets and unmanned combat air vehicles (UCAVs). We talk to Rising Sun’s VFX supe Dennis Jones about the work.
fxg: Can you describe the work on the jet dogfight?
Jones: We did about 129 shots for the dogfight. It came to us as a Pixel Liberation Front (PLF) previs, and was already flowing pretty nicely as an action sequence. We originally had to sprint to get some shots out for the trailer. Once we got into full production we had to take on the broad beats of the action sequence and ground it in realism, which included fully understanding the flight mechanics of the two different aircraft.
– Watch a clip from the dogfight sequence in ‘Green Lantern’.
Essentially the story point we have to establish is that Hal is a rogue pilot who will do anything he can to win and get the upper hand, to the point where he sacrifices his wingman. So you have these unmanned drones, or UCAVs, and then there are the two test pilots, Hal and Carol (Blake Lively). It starts off as a training exercise and then eventually Hal loses control and it’s a fight for him to regain control of his plane.
As the show progressed, flashbacks to the demise of his father, who had also been a test pilot and had crashed, were introduced. Story-wise, it took on a bigger emotional arc and you started seeing the vulnerability of Hal. Once the narrative of his father’s flashbacks came into it, the sequence got more rounded, with good action supported by a better understanding of Hal’s character and past.
fxg: How was the sequence shot in terms of live action?
Jones: About 33 shots occur inside a cockpit, which was filmed with an F-35 nose mock-up against bluescreen. Hal and Carol, wearing their test pilot costumes, had helmets on but no visors, and the cockpit mock-up was built on set without a canopy or glass, which would be added digitally. On set there was a gimbal motion rig which created some very dynamic moves and allowed the actors to react accordingly.
For backgrounds, about 40 plates were shot from a helicopter in the Mojave Desert by Kent Houston, our VFX supe on the studio side. Using the previs as a guide, Kent filmed as many scenarios as possible so we had good coverage for the sequence. We had to re-time and re-project the plates, because obviously a helicopter is a lot slower than an F-35. In addition to this we also had to create full-CG environments for certain shots.
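To give a rough sense of the re-timing involved, here is a minimal Python sketch. The speeds and frame counts below are assumed for illustration, not figures from the production; the point is simply why a helicopter plate has to be sped up several times over before it reads as a jet fly-by, and how quickly that eats into its usable length.

```python
# Illustrative only: how much a helicopter background plate has to be sped up
# before its ground motion reads like an F-35 fly-by. Speeds are assumed,
# round numbers for the example.

HELICOPTER_SPEED_KTS = 130.0   # typical camera-ship cruise speed (assumed)
JET_SPEED_KTS = 600.0          # rough dogfight airspeed (assumed)

def retime_factor(plate_speed_kts, target_speed_kts):
    """Speed multiplier to apply to the plate so its motion matches the jet."""
    return target_speed_kts / plate_speed_kts

def retimed_length(plate_frames, factor):
    """How many output frames a retimed plate yields before it runs out."""
    return int(plate_frames / factor)

if __name__ == "__main__":
    f = retime_factor(HELICOPTER_SPEED_KTS, JET_SPEED_KTS)
    print(f"Retime factor: {f:.2f}x")                       # roughly a 4-5x speed-up
    print(f"A 1000-frame plate survives ~{retimed_length(1000, f)} frames")
```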
fxg: Can you talk about the F-35s and UCAVs that you had to build?
Jones: These were based on the real thing, so we had pretty good reference available. We consulted with a military pilot about the sequence, which was a great experience and provided many subtleties regarding formation flying and types of combat tactics. We ended up with a bit of a hybrid of the F-35B (vertical take-off) and the F-35A (conventional take-off). We modelled two of those and did decal and dirt variations. I’d read a lot about Ben Snow’s work for ILM on Iron Man 1 and 2, so we went into development to update and refine our hard surface shaders. It was good because for each job we try to refine certain aspects of our toolset, and we hadn’t done hard surface shaders for a little while.
Then we had the UCAVs, which are based on drones that do exist, but for this production they’re very much their own design; very sleek and stealthy. That idea of stealth raised a few issues, because by design they are devoid of geometric detail and the material surface is very dull, so it doesn’t reflect light in the usual manner. Normally with metallic surfaces you want to show all the specular and lovely details on the surface of the models, but we found we were dialing it back and back to get a realistic matte finish.
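As a toy illustration of that “dialing it back” idea, here is a simple Blinn-Phong highlight with assumed values. This is not RSP’s 3Delight shading code; it only shows how lowering the specular weight and broadening the lobe turns a bright metallic ping into a dull, matte read.

```python
# Toy shading sketch (not the production shaders): compare a tight, bright
# highlight against the dialled-back, broad one used for a matte stealth look.

def blinn_phong_highlight(n_dot_h, spec_weight, exponent):
    """Specular contribution for a given N.H; higher exponent = tighter highlight."""
    return spec_weight * (max(n_dot_h, 0.0) ** exponent)

for n_dot_h in (1.0, 0.99, 0.95):
    polished = blinn_phong_highlight(n_dot_h, spec_weight=0.8, exponent=200.0)
    stealth  = blinn_phong_highlight(n_dot_h, spec_weight=0.08, exponent=12.0)
    # Polished metal pings hard right at the reflection angle and dies off fast;
    # the stealth surface stays low and broad everywhere.
    print(f"N.H={n_dot_h}: polished {polished:.3f}  stealth {stealth:.3f}")
```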
fxg: You mentioned that some of the background plates were shot with a helicopter – how else did you create the environments?
Jones: Well there are certain moves that only a helicopter can do in terms of flying and twisting, whereas a jet is more about forward propulsion. There was a fair amount of re-projection and camera clean-up onto geometry, which we topped up with our CG environments. This included a 16K sky dome, middle-distance layers of 2D matte-painted objects on cards, and close-proximity interactive clouds developed in Houdini.
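A minimal sketch of that back-to-front assembly, with hypothetical layer names and a standard premultiplied “over” (not the production comp script):

```python
# Illustrative layering: sky dome at the back, mid-distance matte-painting
# cards next, near interactive clouds in front, combined with premultiplied
# "over" so closer layers correctly hold out what sits behind them.

import numpy as np

def over(fg, bg):
    """Premultiplied A-over-B for H x W x 4 (RGBA) float arrays."""
    alpha = fg[..., 3:4]
    return fg + bg * (1.0 - alpha)

h, w = 540, 960
sky_dome   = np.zeros((h, w, 4), dtype=np.float32)
sky_dome[..., 2] = 0.6   # flat blue stand-in for the 16K dome render
sky_dome[..., 3] = 1.0   # fully opaque background
mid_cards  = np.zeros((h, w, 4), dtype=np.float32)   # distant matte-painted elements
near_cloud = np.zeros((h, w, 4), dtype=np.float32)   # close interactive cloud pass

frame = over(near_cloud, over(mid_cards, sky_dome))  # back-to-front assembly
```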
fxg: Did your use of environments change depending on the type of shot?
Jones: Well, the start of the sequence is pretty much a direct chase through canyons, very low to the ground, which was where the plate material really helped. Then we have the vertical climbing, the idea being that if you can reach the point where the engine runs out of oxygen you’ll gain the upper hand. Kent Houston had gone up pretty high – about 3,000 feet maybe – and got some nice top-down orthographic views of the Mojave Desert. Those plates came to us at about 4K each and we stitched them together to make a collection of 10K desert floors, which covered all the up and down shots.
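As a rough sketch of the stitching bookkeeping only – the tile sizes and grid layout here are assumed and scaled down, and real stitching would also align, blend and colour-match the overlaps:

```python
# Lay several top-down plates out on a grid to build one large desert-floor
# mosaic. Dimensions are scaled down for the example (the real plates were ~4K).

import numpy as np

TILE_H, TILE_W = 540, 1024       # assumed, scaled-down plate resolution
GRID_ROWS, GRID_COLS = 3, 3      # assumed 3x3 layout

def stitch(tiles):
    """Place tiles row-major into one mosaic; no alignment or blending here."""
    mosaic = np.zeros((GRID_ROWS * TILE_H, GRID_COLS * TILE_W, 3), dtype=np.float32)
    for i, tile in enumerate(tiles):
        r, c = divmod(i, GRID_COLS)
        mosaic[r * TILE_H:(r + 1) * TILE_H, c * TILE_W:(c + 1) * TILE_W] = tile
    return mosaic

tiles = [np.random.rand(TILE_H, TILE_W, 3).astype(np.float32)
         for _ in range(GRID_ROWS * GRID_COLS)]
desert_floor = stitch(tiles)
print(desert_floor.shape)        # a single large floor to hold up in vertical shots
```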
And then we had the falling down, which was pretty much the reverse, where everyone peaks at 50,000 feet and the engine flames out, and then it’s basically free-falling back to earth and that’s when he takes out the UCAVs, because they’re losing control. The finale for us is the ejection sequence, where he finally goes into a flat spin and eventually ejects.
So we had quite different environment work for each of these scenarios: as you go higher the sky becomes a richer blue and the ground gets hazier. We had a way to manage our environments so we could introduce different cues as we got higher and higher. Then as we came back down we could re-introduce these, just to give people a different cue as to where we were. Otherwise, blue sky can feel very repetitive very quickly.
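A small, hypothetical example of driving those altitude cues – the values and parameter names are illustrative, not taken from RSP’s setup:

```python
# Drive a few simple environment "cues" from altitude, so the sky gets a richer
# blue and the ground hazier on the way up, and the same curve plays in reverse
# on the way back down. All values are placeholders.

def lerp(a, b, t):
    return a + (b - a) * t

def altitude_cues(altitude_ft, ceiling_ft=50000.0):
    t = max(0.0, min(altitude_ft / ceiling_ft, 1.0))   # 0 at the deck, 1 at the peak
    return {
        "sky_saturation": lerp(0.55, 0.95, t),         # richer blue up high
        "ground_haze":    lerp(0.10, 0.80, t),         # desert floor washes out
        "horizon_falloff": lerp(0.00, 0.15, t),        # subtle extra cue near the peak
    }

for alt in (500, 15000, 50000):
    print(alt, altitude_cues(alt))
```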
fxg: Can you talk about the details you had to add in to these shots – things like highlights, heat haze and reflections?
Jones: To start with we had a lot of reference, and we used Top Gun itself, which had very real, practical dogfights. There’s a lovely shot in Top Gun with a barrel roll where the way the sun hits the silhouetted hill gives you a great red sky. There’s an aircraft that flies through and the heat haze gives you all this lovely break-up and heat-ripple distortion. We identified a list of additional touches, such as wingtip contrail vortices, which give you that lovely tracing of lines in the sky. It just really helps describe the spatial motion. This was also a request from the stereo guys: wherever possible, introduce more dimensional aspects, such as contrails that recede into the distance.
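As a tiny illustration of the contrail idea – not the Houdini setup itself, just the underlying bookkeeping – a wingtip trail is essentially the wingtip’s world position sampled every frame and kept alive for a while, so the ribbon you render traces the aircraft’s path through space.

```python
# Hypothetical contrail emitter: sample a wingtip position each frame and keep
# a fixed-length history of points to draw as a ribbon or curve.

from collections import deque

class ContrailEmitter:
    def __init__(self, max_age_frames=120):
        self.points = deque(maxlen=max_age_frames)   # oldest samples fall off the tail

    def step(self, wingtip_position):
        """Call once per frame with the wingtip's world-space (x, y, z)."""
        self.points.append(wingtip_position)

    def samples(self):
        """Points ordered oldest to newest, ready to render as the trail."""
        return list(self.points)

# Made-up flight path, purely for illustration:
emitter = ContrailEmitter()
for frame in range(240):
    emitter.step((frame * 10.0, 5000.0, frame * 0.5))
print(len(emitter.samples()))    # capped at max_age_frames (120)
```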
For reflections, we were asked originally to keep it relatively light-on because of the diffuse and matte nature of the stealth craft, but we had little moments where say the wing tip would catch a bit of the sun and it plumes up and flares. These were things we wanted to introduce partly for readability as well as for the moments where you get a little schwing on something metallic.
It was the same thing with the skies as well. Something that was important for me was the photographic energy in the skies – the light bouncing around everywhere – so we tried to capture that energy and keep those light values up. Afterburner heat haze was also something we added to just about every shot, and then for some of the faster and more dynamic interiors we would run environment passes – there are middle, near and distant clouds and wispier stuff, which really helped sell the air speed. In the bluescreen cockpit interior shots, they’d filmed with quite a lot of lens flares happening in the live-action frame, so for us there was an established cue that we could actually play with lens flares and light and all these corruptions and photographic properties. By the time we layered up things like canopy scratches that react to the light, and dirt and de-focus, some of the canopy shots are fairly complex in terms of front and back layers, visors and the backgrounds.
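A small sketch of the heat-haze warp in isolation – illustrative only, using a coarse random offset field where a real pass would use smooth, animated noise driven by the exhaust:

```python
# Warp pixel lookups behind the exhaust by a low-frequency offset field, which
# is essentially what a heat-ripple distortion pass does in comp.

import numpy as np

def heat_haze(img, strength_px=4.0, seed=0):
    """Displace lookups in an H x W x 3 float image by a blocky random field."""
    h, w = img.shape[:2]
    rng = np.random.default_rng(seed)
    # Coarse random grid scaled up to image size (blocky here; a production
    # version would use smooth noise and animate it over time).
    small = rng.standard_normal((h // 32 + 1, w // 32 + 1, 2))
    offsets = np.kron(small, np.ones((32, 32, 1)))[:h, :w] * strength_px
    ys, xs = np.mgrid[0:h, 0:w]
    ys = np.clip((ys + offsets[..., 0]).astype(int), 0, h - 1)
    xs = np.clip((xs + offsets[..., 1]).astype(int), 0, w - 1)
    return img[ys, xs]

plate = np.random.rand(270, 480, 3).astype(np.float32)
distorted = heat_haze(plate)
```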
fxg: How did you deal with the reflections in particular?
Jones: Well firstly we were working primarily in Maya for all the animation of the planes – then rendered in 3Delight, with additional things like contrails based in Houdini. The environments are a mixture of paint, Houdini and Maya. What we did was build a replica of that environment structure inside Nuke’s 3D system using spheres and cards, and then we were actually rendering out spherical camera projections from that. Nuke has a camera node that lets you output spherical projections, which means you can render animated HDRIs straight out of Nuke.
We didn’t do it per shot, but certainly flying straight and flying up and down worked well. We ended up rendering fairly generic cycloramas, almost as animated HDRIs, that would all sync up with the same data. For the more dynamic shots like a barrel roll, you’d end up with moving horizons and all this detail tied together. Initially we were doing them with more arbitrary maps, and there was just a little bit of a disconnect between the horizon on the visor, the horizon on the canopy and what was actually happening out the window. So we put our heads together and came up with this uniform solution – export the anim from Maya, bring it into Nuke, set up the environments, set the altitude, then pretty much hit ‘export HDRI’ and away you go. The next morning you plug that into your render and everything comes out with the right kind of view.
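A hedged sketch of that kind of Nuke setup, written against Nuke’s Python API from memory – node class and knob names can vary between versions, and the paths, frame range and layer names are placeholders rather than anything from RSP’s pipeline:

```python
# Sketch only: a sky sphere and matte-painting cards assembled in Nuke's 3D
# system, rendered through a camera set to spherical projection so the
# ScanlineRender output can be written out as an animated lat-long HDRI.

import nuke

sky   = nuke.nodes.Sphere()           # the 16K sky dome texture would feed this
cards = nuke.nodes.Card2()            # mid-distance matte-painted elements
scene = nuke.nodes.Scene()
scene.setInput(0, sky)
scene.setInput(1, cards)

cam = nuke.nodes.Camera2()
cam['projection_mode'].setValue('spherical')   # lat-long output instead of perspective
# The plane/camera animation exported from Maya would drive cam's transform here.

render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)             # obj/scn input
render.setInput(2, cam)               # camera input

write = nuke.nodes.Write()
write['file'].setValue('/tmp/env_hdri.%04d.exr')   # placeholder path
write['file_type'].setValue('exr')
write.setInput(0, render)

nuke.execute(write, 1001, 1100)       # bake the animated HDRI over the shot range
```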
fxg: Were your lens flares being replicated photographically or do you have a digital way of inserting them?
Jones: For this production, we went and acquired some real lens flares on a 4K RED. It can go both ways. Personally, I haven’t seen a digital answer that’s as random as the real optical thing. It’s been a passion of mine – I started as a comper – to keep it as real as possible, just for some of the weirdness you get and the aberrations and the caustic nature of it. You can do it really well digitally, and the principles behind it are very understandable, but there’s just something about the randomness of each lens and the way things travel that makes it look real.
fxg: How did you approach the helmet visors?
Jones: They shot without them just for the ease of performance and to control how visible and covered-up their faces were. We developed that fairly early on and Kent and Martin Campbell were very pleased with the initial result. So something we thought could be quite fiddly, because you’re obscuring the actors’ faces, ended up being quite straightforward. We would match-move, file and repeat. There were about 45 shots done that way. At one point we had some head-up displays – I think the real F-35 has a very complex system for pilot targeting awareness that’s fed back into the helmet based on where the pilot turns his head. So we had some initial floating graphics in the face plate, but in the end thought it was a little distracting from the performance, so we ended up keeping the visors a lot cleaner than they potentially should be.
fxg: I heard that in animating Hal Jordan as Green Lantern, they referenced the way he flies the jet as a test pilot.
Jones: Yeah, there’s actually a neat term I learnt on this show called ‘jinking’, which is when they’re trying to shake off the bad guy and they do the quick left or right shimmy. So apparently they were after our jinks, because we had pretty good ones.