In crafting a digital forest for an elaborate bee and bird chase in Journey 2: The Mysterious Island, Rising Sun Pictures had to significantly expand upon its existing effects pipeline. The Adelaide-based studio not only had to create digital assets with fur, feathers and digi-doubles, but also had to populate a vast foliage environment to fill out the chase’s backgrounds – and all in stereo. We talk to Rising Sun digital effects supervisor Mark Wendell.
Read our story about the rest of the visual effects in Journey 2 here.
– Above: watch the bee chase sequence.
fxg: What were the initial challenges of the forest environment that you had to solve?
Wendell: One of the first challenges was that we had never built an environment like that before with that many pieces built in CG. So before we even worried about what the trees and flowers would look like, we had to come up with new tools for doing those massive environments. That involved pretty much re-building the layout pipeline from scratch. We gave the layout artists the ability to hand place a lot of the key objects – the key trees, the ground and the characters flying between the trees. That gave animation in turn something to respond to. So if the bees needed to fly around a particular tree, we placed those trees by hand. That was the first pass at building up our forest, and we’d get sign-off from the client on that for layout and approvals.
Then we would hand that over to the Houdini guys – they would do a massive instancing of the rest of the foliage, filling in extra trees, background trees, thousands of flowers, ground cover, leaves on the ground. They also instanced all the leaves on the trees. They added secondary animation on the leaves so as the birds went by they moved things. And then the final rendering happened in Mantra.
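The fill pass Wendell describes – hero trees hand-placed, filler foliage scattered procedurally around them – can be sketched in plain Python. This is an illustrative sketch, not RSP's actual tooling; the function names, prototype names and parameters are all assumptions:

```python
import math
import random

def scatter_instances(hero_positions, count, extent, min_dist, seed=0):
    """Scatter filler-foliage transforms across a square ground plane,
    keeping a clearance radius around hand-placed hero trees so the
    instanced fill never crowds the trees animation flies around.
    (Hypothetical sketch -- not RSP's real instancing pipeline.)"""
    rng = random.Random(seed)
    instances = []
    while len(instances) < count:
        x = rng.uniform(-extent, extent)
        z = rng.uniform(-extent, extent)
        # Reject candidates too close to a hand-placed hero tree.
        if any(math.hypot(x - hx, z - hz) < min_dist for hx, hz in hero_positions):
            continue
        instances.append({
            "pos": (x, 0.0, z),
            "rot_y": rng.uniform(0.0, 360.0),  # random spin hides repetition
            "scale": rng.uniform(0.6, 1.4),    # scale variety fills the mid-range
            "proto": rng.choice(["sapling_01", "sapling_02", "fern", "flower"]),
        })
    return instances

heroes = [(0.0, 0.0), (25.0, -10.0)]
filler = scatter_instances(heroes, count=500, extent=100.0, min_dist=5.0)
```

Seeding the generator makes the scatter repeatable, so the same "random" forest can be regenerated downstream for lighting and rendering.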
fxg: In terms of hand-placing the trees – what tool did you do that in?
Wendell: That’s all done in Maya. Our character rigging, animation, layout and camera work all take place in Maya. Then we switched over to Houdini when we needed the advantages of a procedural approach to things like instancing the forest, as well as dynamics for leaves, and so forth. In addition to the layout for massive amounts of geometry, we also expanded the layout/camera department to be able to handle stereo cameras.
fxg: How did stereo impact your work here?
Wendell: One of the areas of the stereo that was particularly complicated was having to put live-action photography of actors on top of CG bees into either live-action backgrounds or CG backgrounds. So the actors were shot on big blue bucks on a greenscreen stage. Tim Crosbie, the 2D supervisor, and I went to North Carolina to help supervise that shoot. We had stereo footage of the actors on these stand-ins. Then there was footage shot in Hawaii of background forests, with less mindfulness of matching up with the foreground photography. So it was up to us to come up with ways to marry the independently photographed backgrounds and foregrounds and make it all work in stereo. It would have been a much easier job in mono because there’s a lot of stuff you can cheat and get away with, but there’s no hiding in 3D stereo. That was almost a conceptual stretch for the artists in some ways, rather than just being about the tools.
fxg: So once you understood the process what tools helped with that stereo work?
Wendell: We expanded our set of tools in Maya/MEL/Python, adding to the scripts and utilities we had. We had tools to save a whole forest asset and pass it to the next department. Or tools to quickly switch between left and right eyes to check the character is properly sitting on the bee.
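The geometry of a converged stereo pair like the ones those eye-switching tools checked can be sketched in a few lines. This is a generic illustration of stereo camera math (interaxial separation plus toe-in toward a convergence plane), not RSP's in-house scripts; the function name and conventions are assumptions:

```python
import math

def stereo_rig(center, interaxial, convergence_dist):
    """Return left/right eye positions and toe-in angles (degrees) for a
    converged stereo pair looking down -Z from `center`. Each eye sits
    half the interaxial distance off-center and rotates inward so both
    optical axes cross at the convergence plane.
    (Illustrative sketch of standard stereo-rig geometry.)"""
    cx, cy, cz = center
    half = interaxial / 2.0
    toe_in = math.degrees(math.atan2(half, convergence_dist))
    left = {"pos": (cx - half, cy, cz), "rot_y": -toe_in}
    right = {"pos": (cx + half, cy, cz), "rot_y": toe_in}
    return left, right
```

With a roughly eye-width 6.5 cm interaxial converging two metres away, the toe-in is under a degree – tiny offsets, which is exactly why errors like a character floating off a CG bee only become visible when you flip between the two eyes.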
fxg: Going back to the forest, how was the overall look developed?
Wendell: The first thing was that we had to be able to cut back and forth with the live action photography that was shot in Hawaii. It was shot on REDs on the ground and on an SI-2K stereo rig mounted on a mini-helicopter flying through the canopy. The initial brief was that we might have to do all-digital forests, but for the most part they anticipated we would augment the existing footage. As it turned out, the client liked our approach to building, lighting and rendering the fully digital forest enough that they expanded the number of shots with all-digital backgrounds.
Since we had that initial brief to make it look like the real forest, they took a number of Lidar scans of the forest for us – it was a massive amount of data. We used that to build our basic tree shapes in Maya, along with the reference photography. In addition we just took a lot of the plates and built some of the assets by hand. We looked at some foliage libraries, but our needs were too specific.
fxg: How do you go about bringing those assets into Houdini once they’d been modeled?
Wendell: That was part of what we expanded our toolset to do. We typically use 3Delight with our Maya models, but with Houdini and Mantra – because of Mantra’s physically based rendering (PBR) mode – we were having a lot of luck with some test renders. So we decided to keep the rendering in Houdini. In terms of exchanging data, we have a custom in-house geometry storage format called Meshcache. It’s something like our own in-house Alembic and allows us to exchange data between applications. We had to establish transforms for where all the trees sit, as hand-placed by layout, and then that file would go over to Houdini. The guys would replicate the layout and add all the bits and pieces.
fxg: There are so many different types of foliage in that scene – how were the different material properties instanced?
Wendell: It’s funny, we looked at the footage and said, ‘That’s a pretty plant, let’s make one of those…’ but we sent an email out asking the production what some of the actual plants were. A lot of them we would just call ‘sapling number 1’ and ‘sapling number 2’. We realized early on that we had to come up with a broad scale range for the forest. The real forest foliage was really rich, and there was almost no bare ground visible in a lot of the areas you saw. There were so many plants at different levels, from the ground cover to the treetops. If we just created small ferns, a few flowers and then big trees, we had this big gap of space that still needed filling in. So all the scales were important.
As far as shading and lighting goes, we did have the advantage of Houdini’s PBR mode which does a beautiful job of ambient occlusion and bounce lighting and translucency. The shading that we developed for the trees and foliage was fairly simple. There was a little bit of translucency. The texture artists painted color maps and bump maps and in some cases translucency maps. Because there was so much geometric variety and because we were flying through a lot of this stuff really fast, it turns out we didn’t have to do too much with complex shading models, since PBR’s lighting and shading looked so good. In a lot of the shots we got away with basically one skylight and an ambient environment light as well. So there were essentially two lights, lots of geometry and a simple shading model turned out to work for a complex environment that you’re moving fast through.
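A shading setup that simple – one skylight, a flat ambient term, and a little translucency for back-lit leaves – can be sketched as a scalar toy. This is a generic illustration of the kind of model described, not RSP's actual Mantra shader:

```python
def shade_leaf(normal, light_dir, albedo, translucency, ambient=0.2):
    """Minimal leaf shading: one skylight plus a flat ambient fill.
    Front-lit leaves get Lambert diffuse; back-lit leaves get a
    translucency term approximating light passing through thin foliage.
    Vectors are assumed normalized. (Illustrative sketch only.)"""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    diffuse = max(n_dot_l, 0.0)                   # skylight on the front face
    backlit = max(-n_dot_l, 0.0) * translucency   # light leaking through
    return tuple(c * (ambient + diffuse + backlit) for c in albedo)
```

With geometry this dense and cameras moving this fast, a two-term model like this reads as convincing foliage; the physically based renderer supplies the occlusion and bounce that sell it.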

fxg: How did you add movement to the actual foliage?
Wendell: We added it where needed. All the background trees didn’t need movement, so they were static – now you know our secret! – but for all the shots where the characters fly close to the foliage, that’s where our efforts went into blowing things around. That was a secondary simulation pass run in Houdini on top of the initial instancing. The Houdini animators would just take the path that the characters were flying through and add a lot of wind and noise to the foliage in their vicinity. Your eyes don’t need to see a lot of motion to convince your brain that motion’s there.
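The localized-wind trick – animate only the foliage near the characters' flight path and leave everything else static – can be sketched like this. The falloff shape, amplitude and phase trick are illustrative choices, not RSP's actual Houdini setup:

```python
import math

def wind_weight(leaf_pos, path_points, falloff_radius):
    """Weight in [0, 1] for how much wind a leaf receives, falling off
    linearly with distance to the nearest flight-path sample point."""
    d = min(math.dist(leaf_pos, p) for p in path_points)
    return max(0.0, 1.0 - d / falloff_radius)

def displace(leaf_pos, time, path_points, falloff_radius, amplitude=0.3):
    """Apply a cheap sinusoidal 'wind' only near the flight path;
    distant background foliage stays perfectly static.
    (Hypothetical sketch of a proximity-weighted secondary pass.)"""
    w = wind_weight(leaf_pos, path_points, falloff_radius)
    x, y, z = leaf_pos
    # A per-leaf phase derived from position keeps neighbours from
    # swaying in lockstep.
    phase = x * 12.9898 + z * 78.233
    return (x + w * amplitude * math.sin(time * 6.0 + phase), y, z)
```

Because the weight hits zero beyond the falloff radius, the background trees cost nothing per frame, which is exactly the secret Wendell gives away.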
fxg: What kind of motion blur worked for that scene?
Wendell: I don’t think we messed around with motion blur too much, but the render times were getting relatively high, so we did play around in some cases with the sampling rate and accepted a slightly higher amount of grain. The reason we could get away with that was that luckily a lot of the live action footage we were matching to – particularly the mini-helicopter footage – was all shot in relatively low light. So we could leave a bit of sampling noise in our Houdini renders and it matched pretty well. We did a little bit of de-noising of the live action footage and allowed a little noise in the CG renders, and they met in the middle.
fxg: How did the film help RSP expand its character pipeline, say for the birds’ feathers?
Wendell: The feathers were mostly done in Maya and rendered in 3Delight – and then Houdini and Mantra for the crashing scene. There was a little bit of work done in integrating the two renders for things like motion blur, hold-outs and shadows and making them match.

fxg: What about the digi-doubles?
Wendell: That was a big learning experience for us, especially learning where to do pick-ups between live action and CG and how to do those blends. For the digi-doubles, we got cyberscans of the actors’ bodies at various resolutions, plus color maps, and also Boyd partnered with the group at USC’s Institute for Creative Technologies to provide us with specular and diffuse normal maps for all of our human characters. It was an extension of the Light Stage work that has been done with multiple scans and multiple frequencies in order to get very accurate reproduction of skin. All that data was eventually provided to us for all the actors in the sequence. It took a bit of shader development to switch over from the skin shader model we had to the data they provided.
fxg: Were there one or two shots you wanted to single out?
Wendell: There was a particular series of shots where Josh and Vanessa are chased up a massive tree by the birds, and they go flying through the branches and up past a bunch of ferns that slap them in the face. That entire tree was modeled and textured by one artist, who spent a huge amount of time on it – his name was Ray Leung. The challenge of that one was that the tree ends up being thousands of feet tall, because we kept on extending the animation, climbing and climbing and climbing. So every time Ray thought he had made the tree tall enough, we’d look at the cut and realize it had to be longer. He’d roll his eyes and add more length and more branches, but it was an admirable challenge for him.
We played around a lot with ground shaders for the forest floor, doing things like texture bombing – taking leaf and dirt and stick textures and tiling and repeating them on a surface. We had built a small library of dead leaves that we knew we would be using for certain shots where the leaves would fly around a character. One of the Houdini guys tried instancing a lot of these leaves all over the ground and it was such an immediate good look that everybody just did that on all the shots. So the CG ground in every shot is covered with thousands of these instanced CG leaves. It’s not a single surface – it’s a complex, layered and three-dimensional build-up of dead leaves at various angles. It gave a lot of organic richness to the environment.
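Texture bombing as described – stamping leaf, dirt and stick textures across a surface with per-placement variation instead of a visibly tiled repeat – can be sketched as a per-cell lookup. The cell scheme and fields below are illustrative, not the actual ground shader:

```python
import random

def bomb_texture(u, v, tiles, cell_size=1.0, seed=0):
    """Texture bombing sketch: pick a tile, rotation and jitter offset per
    grid cell, seeded by the cell coordinates so every lookup in the same
    cell gets the same answer, frame after frame.
    (Hypothetical illustration of the technique, not RSP's shader.)"""
    ci, cj = int(u // cell_size), int(v // cell_size)
    rng = random.Random((ci, cj, seed).__hash__())
    return {
        "tile": rng.choice(tiles),               # e.g. a leaf, dirt or stick map
        "rotation": rng.uniform(0.0, 360.0),     # random spin breaks up repeats
        "offset": (rng.random(), rng.random()),  # jitter within the cell
    }
```

Deterministic per-cell seeding is the key property: the randomness must be stable across frames and across render buckets, or the ground would shimmer.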
Images and clips copyright (c) 2012 Warner Bros. Pictures.