Scenes of Monument Valley, New York City, the English countryside, a distant space station and a vast icescape were brought to life as a continuous shot by MPC in a recent stereo piece for Sky Movies. We chat to MPC’s 3D supervisor Duncan McWilliam about shooting the 360 degree pan with a stereo rig, creating worlds and then stitching it all together.
fxg: There’s so much going on in this spot and it seems like such a fresh approach. What was your brief from Sky?
Duncan McWilliam: Sky was both the creative and the production team. Their idea was to encapsulate epic movies – big landscape photography and large scales. They wanted to not only promote their movie channel but also their 3D TV movies. So the brief was based around the idea of the cyclorama, something that was invented at the turn of the century as a way of seeing a vista photographed and stitched together. So it was like a 19th century QuickTime VR that you stood inside and looked around at this printed wall of photographs. We used that idea to present all their different movie genres. Stitching that together was obviously a big challenge – moving from one idea to the next in a 60 second spot while still spending long enough in each section for it to feel epic, without jumping abruptly between them. Plus we only had eight weeks to do it in.
fxg: How involved was MPC in the pre-production process?
McWilliam: We were very involved in pre-pro. We got on Google Maps and looked at Monument Valley and New York and all the different terrains that we’d be shooting, and laid out cubes a kilometre or six kilometres from camera for the mountains, and 20 metres from camera for our nearest point.
We worked out early that if you do a pan across something, you’re very sensitive to a strobing effect. So that was another big thing – you can’t move too quickly or you start strobing, so we did a lot of previs to help get the creative working at a technical level. And not only did we have the strobing issue to combat, we also had the stereo depth aspect. If you need an epic view, you can’t have objects very close to camera and mountains far away and have a good 3D effect on all of that. You’ve got to choose where you want your focus to be, and set your depth budget there for all your shots. If it wasn’t for previs, I think we would have been winging it on set, making decisions about inter-ocular without really knowing what we were doing.
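The depth-budget reasoning McWilliam describes can be sketched with the standard converged-rig parallax approximation. This is purely illustrative – the function names, the model and the example numbers are assumptions, not MPC's actual tooling:

```python
# Illustrative depth-budget check for a converged stereo rig (a sketch,
# not MPC's pipeline code). With interocular io and convergence distance c,
# an object at distance z lands at roughly this horizontal screen parallax:
#   parallax = io * (1 - c / z)
# Objects nearer than c get negative (off-screen) parallax; objects
# farther away get positive parallax that approaches io at infinity.

def screen_parallax(io_m: float, converge_m: float, dist_m: float) -> float:
    """Approximate horizontal parallax (metres at the screen plane)."""
    return io_m * (1.0 - converge_m / dist_m)

def within_budget(io_m: float, converge_m: float,
                  near_m: float, far_m: float, budget_m: float) -> bool:
    """True if the nearest and farthest objects both stay inside the budget."""
    return all(abs(screen_parallax(io_m, converge_m, d)) <= budget_m
               for d in (near_m, far_m))

# Example: 65 mm interocular, converged at 20 m, scene spanning 20 m to 6 km
# (the 20 m foreground and multi-kilometre mountains echo the previs layout).
print(within_budget(0.065, 20.0, 20.0, 6000.0, 0.065))
```

Pushing the near point much closer to camera (say half a metre) blows the same budget immediately, which is exactly the "epic view versus close foreground" trade-off described above.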
fxg: So how far did you take the previs process for this spot?
McWilliam: The previs was extremely accurate dimensionally speaking, which meant that we made the decision on where to shoot partially based on that. The previs slowly became the final product as we went along. So we recced all the locations. We measured them out, we built them – not photorealistically, but diagrammatically correct – and we chose whether each environment was better done in CG or cheaper to shoot. For example, with the Arctic scene we were going to find some vast icescape, but we realised that was going to be insanely expensive. We would have had to go to the Southern Hemisphere because of the time of year, which wasn’t an option, so we took that one on as CG. Instead, we did go to Monument Valley, which was affordable.
fxg: What approach did you then take to acquire the plates? What was the motion control and stereo rig that you used?
McWilliam: I had met a guy in LA when I was setting up our studio there, Simon Wakley, who had done a lot of motion control. He showed me this great little machine that he’d invented and built, a strong little motor that you clip onto the back of your dolly, and it runs off a Flair system, which is what runs our normal mo-co. It’s incredibly accurate – basically just a rubber wheel on a track driving your dolly around. I knew for our 360 pan we needed to keep the camera speed consistent. The camera speed could not deviate from shot to shot no matter what. The only way to do that was motion control, but you can’t take a Milo out onto Monument Valley, or onto a rooftop scaffold on the lower east side of New York. So we used what we called this poor-man’s mo-co, which was just that little motor no bigger than a briefcase. It’s accurate across a 20 foot track to about half an inch. That was a lifesaver, because it meant our camera moves would match up every time and we wouldn’t get dubious stretching.
fxg: Was that something you also plugged into the previs?
McWilliam: Yes, we spec’d up the track in the UK, so we were working to a known radius of track that we could have. We checked all our speeds, timings and accelerations by sending Maya files to the motion control guys so that they could map it out and tell us if we needed to change anything.
fxg: Can you talk about the stereo rig you shot with?
McWilliam: We had two REDs mounted on a mirror Tango rig. We worked with the German company supplying that and our own stereographer here Chris Vincze and they really kept on top of all that stuff. We then mounted the cameras on our dolly rig with the mo-co. The rig was a mass of wires – you almost couldn’t see a camera! We shot in 4K, which was great because when we came to align the stereo at the end we were not losing picture resolution.
fxg: How did you avoid the anticipated strobing effect?
McWilliam: Well, there is nothing in the world you can do other than pan more slowly. You’ve got to be more careful with CMOS chips or any electronic capture device than with film because of rolling shutter issues, but the RED doesn’t really suffer from that. What we normally say is if the object being filmed takes at least five and a half seconds to cross the frame, you should be alright. We were running strobing tests in CG, but there’s always a discrepancy between your previs and your actual camera and the way it produces frames. So we did find, when we brought our initial shooting tests back from Blenheim Palace, that there was a strobing issue. To counter that we time-stretched the footage to double length and then interlaced the time-stretched frames, and that took the edge off the high contrast strobing errors, which worked fantastically. Contrary to popular belief, stereo actually fixes strobing brilliantly – the way you’re overlaying those two frames with the slight off-set, the convergence just takes away the high contrast pixels turning on and off, which takes away the strobing.
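The five-and-a-half-second rule of thumb reduces to a one-line check on pan speed versus lens field of view. Only the threshold comes from the interview; the function and the 90-degree FOV figure for an 18 mm super-wide are assumptions for illustration:

```python
# A sketch of the "at least 5.5 seconds to cross frame" strobing rule of
# thumb described above. Illustrative only, not MPC pipeline code.

FRAME_CROSS_MIN_SECONDS = 5.5  # slower than this per frame-width = safe

def pan_is_safe(pan_degrees_per_sec: float, horiz_fov_degrees: float) -> bool:
    """A static object 'crosses frame' in fov/speed seconds; require >= 5.5 s."""
    seconds_to_cross = horiz_fov_degrees / pan_degrees_per_sec
    return seconds_to_cross >= FRAME_CROSS_MIN_SECONDS

# Assuming roughly a 90-degree horizontal FOV for a super-wide lens:
print(pan_is_safe(10.0, 90.0))   # 9 s to cross the frame -> safe
print(pan_is_safe(20.0, 90.0))   # 4.5 s to cross the frame -> risks strobing
```

The same arithmetic is what a previs scene lets you validate before committing to a track radius and dolly speed on location.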
fxg: What kind of other tracking and stereo corrections did you need to do?
McWilliam: We used Ocular to match grade the two lenses because there’s always a colour difference, plus employed some stabilisation techniques and camera tracking. Actually, in terms of tracking, if I ever do this again, I would LIDAR everything, because without the physical 3D-geo, the tracking software has a real problem understanding that the left and right eye exist in the real world so that you get a cohesive track, especially as our image had almost no parallax. We were shooting 18mm super wide and panning across a very big space. It wasn’t quite nodal because we were travelling around that track, but it was nodal enough for the tracking software to think it was nodal and give us a really bad track every time. Luckily, our 2D supe Matthew Unwin and the guys did a really great job with the tracking.
fxg: It would be great to go through each of the main landscapes, not necessarily in order but maybe starting with Blenheim Palace. How was that shot and what kind of things were you doing on set to help you down the track?
McWilliam: Blenheim was the first place we shot and probably the most tricky in terms of survey data, because we had a big open stretch of water and rolling hills, and we mostly had to be looking at the track behind the monitors doing technical work. In the end, the only real stuff left in there is the near-ground grass where the kids and the couples are, and the road with the car on it. And a couple of trees. The bridge isn’t real, but the sheep on the other side are. Everything else got matte painted, re-projected in Nuke or tracked back in.
The other thing is we were on a hill and had to make sure the track was 100 per cent level. If you have a level change as you pan from right to left, you have to stitch that into your next one. You can imagine across all six – if one was going uphill and one downhill, the horizontal tilt would be all over the place. My goal was to keep the horizon in the middle of the frame throughout the entire piece.
fxg: That Blenheim landscape starts with the transition from New York. How was that scene accomplished?
McWilliam: We wanted to shoot on the lower east side of New York, but it can be hard to shoot there. In the end, we had to get this quarter tonne dolly rig as high as we could. The idea was the lake in the countryside scene would almost come up to a Venice-style canal system that we were going to bolt onto the side of New York, but as it turned out we could only get the camera up on a deck that some builders had built out of scaffolding for a house they were re-building. So we could only get 16 feet up which meant as we were panning off New York into your invisible world, at that point, the lake can only be 16 feet down. So we had to come up with a way of making New York feel like it was way higher up than the lake with a big pier that drops down.
When we were shooting, another complication of using that scaffold rig was that as the dolly moves around the track, it creaked and groaned. It was just scaffolding and not concrete and not really strong enough. It was safe but it creaked and sagged, so we had to stabilise it and fix level changes for each pass that we did. The other thing we wanted was a beautiful sunset coming down that street, but you can’t shoot into the sun. Any optical effects you get in your stereo camera are different for each eye. So we couldn’t shoot into the sun, but the sun was trying to bounce into the camera all the time.
fxg: What kind of augmentations did you make to the New York plate?
McWilliam: Well, the backgrounds are all Photoshop. We took a lot of photos in New York on our down days. Our matte painters painted that up, and that goes into Nuke. We ended up using Nuke’s tracker for that stuff, and lots of 3D camera projections in Nuke. We could check our stereo output directly from Nuke in anaglyph mode back up on the HD monitors, so there was a really nice tight workflow for set re-building in stereo, with editability, without going anywhere near Maya or laying out 3D geometry in real world space based on a survey we did.
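Checking a stereo comp in anaglyph, as described here, amounts to a per-pixel channel merge: the left eye supplies red, the right eye supplies green and blue. A minimal plain-Python sketch of that merge (Nuke's own anaglyph viewer does the equivalent internally; nothing below is MPC code):

```python
# Red/cyan anaglyph composite of the kind used to eyeball a stereo comp
# on an ordinary monitor. Left eye -> red channel; right eye -> green + blue.

def anaglyph(left_frame, right_frame):
    """Frames are rows of (r, g, b) tuples with values in 0..1."""
    return [
        [(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left_frame, right_frame)
    ]

# Tiny usage example with 1x2 solid-colour "frames":
left = [[(1.0, 0.2, 0.2), (1.0, 0.2, 0.2)]]
right = [[(0.2, 1.0, 1.0), (0.2, 1.0, 1.0)]]
print(anaglyph(left, right)[0][0])   # -> (1.0, 1.0, 1.0)
```

Viewed through red/cyan glasses, the horizontal offset between matching features in the two channels is exactly the screen parallax, which is why this is a quick way to sanity-check depth straight out of the comp.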
fxg: Let’s jump back to the Monument Valley pan. What were some of the considerations in shooting that?
McWilliam: The idea was to get the sun just coming over for that orange red, dusty, very Ayers Rock colour and wonderful light. As you can imagine, you’ve got a dolly that’s running on very specific, computer controlled timing, but you’ve got to get 20 horses to run into frame on that. So shooting it was very difficult – we ended up getting only one shot that truly worked.
Then we had the canyon to the left and the snow to the right – the beauty of that being they’re both CG so we could sandwich Monument Valley in between. But the horizon, the backdrops, the buttes, as they’re called, they’re all what was shot. We may have cut them out and shuffled them right or left a little, but it was all as shot. There were no stills. That was the most epic of all our shots and I think when the horses run through and kick up all that dust, it’s fantastic!
What is interesting is that we had to shift the horses around a little bit. They are all roto’d, and re-comped back in. The client wanted the timing from the horses as shot, but a slightly different timing for the backplate. So you’ll see the horses run in with the cowboys – they were shot in the afternoon but the backplate was shot in the morning. The light in the morning was exceptional. The grader had to do a lot of light wrapping and grading to get the two to match together. The problem was there were always little bumps in the camera track and the horses were running up and down dips in the ground – so if you off-set that timing, you’ve then got to track your horses up and down to match it. That was all done in Nuke, apart from the CG canyon on the left, and the matte painting for the snow was projected in Nuke as we came in from the other side.
fxg: This then transforms into the futurescape.
McWilliam: Yes, and the entire shot there is 100 per cent CG. I laid down a load of brightly coloured cones in Monument Valley where we believed the canyon would go. So our previs was accurate enough on set to look at the shot and say ‘I think the cones should go from A to B down there – let’s send the horses towards the cliff which doesn’t exist yet’. And that’s where you put your heart in your mouth a little bit, hoping you’ve put the cones in the right place. As it turns out, they get just close enough to the cliff and no more. Then we added our CG cliff, which was modelled in ZBrush with some fairly basic geo and some nice textures.
Then the entire spacescape was concepted by Kenichiro Tomiyasu, who is a great matte painter I work with every now and then. All the matte painting was done in Japan for that shot and we would be on the phone – he doesn’t speak English so we had a Japanese translator. The spaceships and everything else were designed in-house. There was so much detail, like tiny little mining diggers about four pixels wide churning up the ground down there. We’ve got Maya fluids for the waterfalls to bring the matte painting to life, lots of little spacecrafts and flashing lights that we did in Nuke.
The thing the shot was lacking, right before we delivered, was any foreground parallax and we found that the stereoscopic just wasn’t quite working. It looked like we were panning across just a painting. So at the last minute we decided to put a cliff and some dust in. As the spaceships fly over camera they kick up some dust. The cliff went in over a day and so did the dust – just a quick Maya fluids solution. I also designed the trails from the spaceships to go in because I wanted to create a more lasting stereoscopic effect. Those ships go through so quickly that we needed to leave something behind but not obliterate the background, so we came up with this weird sort of energy trail. When you watch it in a cinema, they look really nice and really make the piece.
fxg: Then there’s the icescape which begins and ends the spot. How did that come together?
McWilliam: Originally, the icescape was a kind of magical and mysterious Sleepy Hollow world. And then it became much more light-hearted to represent the family film genres. We wanted to shoot the glacier, but it was going to involve a lot of work and expense to go somewhere, so we decided to do the whole thing in CG and have full control. For the mountains way in the background we had a freelancer concept those out in Terragen. The fairies we designed purely in-house. They were the only thing designed to come into negative screen space. Usually you want to keep everything in positive screen space – keep it all inside the frame and don’t break the frame. We animated them in such a way that while they were in the frame, not at the edge of it, they would come out from the trees and pop off the screen at you and then go back in again. It was very subtle for TV because Sky is cautious about the 3D effect on television. In the cinema, it was a different convergence and they come right out at you. There was quite a lot of character development for the fairies, a little cloth sim in Maya nCloth and some sub-surface scattering.
Then we’ve got the boat, a galleon model we built up with cloth sims on the sails. We had our own in-house shattering system that we used for the ice around the boat. At the last minute, we animated some penguins in there. The whole tree line down there started off as some dressed trees at Blenheim which we shot for real against bluescreen. We wanted to have at least some sort of natural transition, but in the end we replaced all of it with CG trees, just because the style wasn’t there with the brought-in trees.
We also just wanted that lovely bleak low-lying sun. A significant consideration across the whole piece was: where is the sun at any given moment? We really wanted it in one place, and every scene needed that golden hour feel, but we couldn’t see the sun six times, so light direction became a really interesting problem throughout. The icescape is the only scene where you see the sun and get a flare, which was all created in post because of the stereo issues. We were so pleased that we hit 55 seconds exactly – an absolutely flawless continuous loop. I think to do that in stereo with live action and CG was the biggest technical achievement and I’m really pleased we did it.
fxg: How much did you design each of the blends or transitions?
McWilliam: The blends were really designed early. We knew we had things like level changes and starkly different environments. The idea was that each transition crept in onto the horizon first and was then a slow reveal, because with stereo you’ve got to allow the viewer’s eyes a second or two to pull from the background up into the foreground and then back off again. So each transition is almost like a wedge from camera – it kind of goes out and away from you before the next scene comes up and towards camera again. The only exception to that was New York which was the only vertical wipe in the whole thing. There were lots of transitions mostly on the horizontal planes, which were much more interesting to look at.
fxg: What would you say was the biggest challenge of this spot?
McWilliam: The single hardest thing on the entire job was stitching one photographed plate to another photographed plate – the New York City scene into the Blenheim Palace countryscape. That was a nightmare, but wherever we could put a CG scene between two shot plates, we had a lot more flexibility to line things up and deal with level changes.
Also, something really big that came out of this: in normal working practices, your roto and your physical heights and dimensions can be covered up. We can cheat it. We can use all sorts of things to trick it. But when you’re doing stereoscopic, if a level changes from one shot to the next and you’re trying to tell the viewer it’s the same, that’s tricky. So for example, the lake was very low down when we shot it in the countryside, and we come across from New York, which was only shot 20 feet high, with the lake about 60 feet below us. You can’t cover up that 40 foot discrepancy in stereo – you feel it as you go from one to the next. So we had to build that big wall at the side of the New York-scape to deal with that problem. That meant that New York ended up being 80 per cent CG, even though it was a shot plate, which wasn’t something we necessarily set out to do. But you kind of live and learn, don’t you?
Client: Sky Network Marketing / Sky Movies
Agency: Sky Creative
Agency Producer: Sharon Kersley
Executive Creative Director: Clare McDonald
Creative Directors: Esther Wallace, Nick Tarte and Craig Marsh
DOP: Magnus Auggustenn
Post Producers: Justin Brukman, Gen McMahon, Michael Stanish, Vittorio Giannini
VFX Supervisors: Matthew Unwin and Duncan McWilliam
VFX Team: Chrys Aldred, James Bailey, Andrew Brooks, Jason Brown, Remi Cauzid, Maurizio De Angelis, Lacopo diLuigi, Michael Diprose, Dominic Edwards, Adam Elkins, Darren Fereday, Ahmed Garraph, Andreas Graichen, Michael Gregory, Liam Griffin, Alex Harding, Joey Harris, Richard Hopkins, Nicholas Illingworth, Spiros Kalomiris, Carsten Keller, Adam Leary, Duncan McWilliam, Jorge Montiel Meurer, Prashant Nair, Maru Ocantos, Vicky Osborn, Mikael Pettersson, Christophe Plouvier, Fiona Russell, Jim Spratling, Janak Thacker, Charlotte Tyson, Matthew Unwin, Fabio Zaveti