Jon Favreau’s The Jungle Book uses the latest in on-set approaches, virtual cinematography and rendering technologies to bring the world of the famous Rudyard Kipling story to life. But Oscar-winning visual effects supervisor Rob Legato still approached the film’s digital creatures and environments with some caution.
“My belief system is that all creativity is really based in analogue thought and behavior,” Legato told fxguide. “Sketching, scribbling something down, erasing it and scribbling again. That’s the way it goes. Computers are very hard and fast, they have an instruction set that’s inflexible, so you need an input device that allows you to do that. It’s the same thing with a movie – you can’t just record a bunch of voices in different parts of the world and put it all together. It’s not going to work without this give and take.”
That meant that, during the making of The Jungle Book, Legato and director Jon Favreau adopted the latest tech but were not beholden to it. They readily welcomed ‘happy accidents’ during voice recordings and on-set performances. So even though the film was shot almost entirely against a bluescreen set in Los Angeles, the use of a virtual camera and captured performances allowed for multiple takes – just like live action – that could then be edited in various ways to form the scene.
So just how was this film made? How was that bluescreen photography and virtual camera approach turned into the striking jungle imagery realized by MPC, Weta Digital and other vendors that you see in the final film?
Shooting itself began with a motion capture pass of the film, but only to inform the shot design and blocking. The animals are all keyframe animated in the final film, but Legato points out that the team completely blurred the line between pre-shoot and post. For example, each shot would be done several times over as it was being ‘found’, and this continued in a structured way until the end of the film. There was no strict previs, shoot, postvis and VFX pipeline. This is not to imply the film was random; MPC’s Adam Valdez stressed how key their planning and ‘creative intent’ tracking was during production. The way this team made The Jungle Book was with a fluid, iterative virtual production pipeline, and it was all done this way so as to produce a film that feels like it was shot live action.
The man-cub Mowgli (Neel Sethi) donned a mocap suit to create a starting performance. The resulting data then aided in establishing a virtual camera – simulcam – setup on the bluescreen stage orchestrated by Glenn Derry’s team at Technoprops, aided by motion capture cameras from Animatrik and a virtual camera system controlled by Digital Domain.
When principal photography was done on the stage, the team used a live simulcam feed into the ALEXA camera – shooting native stereo – for a live composite of the previous mocap and previs. The process did involve previs, and yet this was not traditional previs but virtual production. For example, The Third Floor were on the film for six weeks, but working in the Art Department. Brian Pace headed The Third Floor team; his work was a mix of technical previs – so sets could be built very accurately – and colour palettes for those sets, but his team of five artists did no lensing or blocking of the shots. They would work out, for example, how the King Louie chase sequence would work with the columns, but not animate or previs King Louie itself.
The simulcam approach was vital in framing and orchestrating elaborate shots. Says Legato: “We had a very specific plan so when we are judging the work – we are seeing a composite that is telling us what it is really going to look like. You have to have the presence of mind to accept that and imagine in your head it finished so you can make tangible decisions of how you want to improve it. The image on the screen for an outsider would not necessarily give you the same sense that we get.”
On set, Sethi performed scenes with the aid of bluescreened performers standing in for the animals. Oftentimes these were trained Jim Henson Company puppeteers who helped the young actor imagine the creatures he would be interacting with. “We thought,” says Legato, “what if you get a Henson puppeteer who’s used to engaging children and trying to keep their attention and be spontaneous and improvise the next take and the next take and keep it fresh so you always get a fresh performance from a non-professional actor? And in many cases what you saw [in the final film] was really spontaneous laughing and all that stuff – what would have been considered outtakes – surprises. The goal was to keep him continually engaged take after take.”
Other times during filming, moments of interaction would be captured with specially built bluescreened rigs and bucks. For the scene of Mowgli riding Baloo down the river, Sethi sat on a carpeted rotisserie rig dubbed the ‘Favreator’ made by Legacy Effects that sat in a bluescreened swimming pool built above ground outside the LA stage. “Instead of doing just the traditional solid buck that moves,” explains Legato, “we had pre-programmed the movement where it has all the movement that Baloo would have. We pre-animated what the animal would really do, then created a motion control rig device as if you were to take this top end of the animal off and put it on a hydraulic rig and you would move the pistons so it imitates the same roll and pitch of that.”
Not only were the animals of The Jungle Book designed to be photorealistic, they also had to exist in photorealistic environments. MPC had a key role in fleshing out concept designs and previs for the jungle, for example, into concrete tech vis. “We would work out where the terrain was sloping, where the branches might be, say when Mowgli was walking up the branches,” explains MPC visual effects supervisor Adam Valdez. “Then we would export those pieces of terrain back to the art department and they would build pieces of terrain that would later marry with the digital sets.”
MPC generated 1,984 terabytes of data and ran up 240,000,000 renderfarm hours, creating 54 animal species on over 284 unique sets, with some 500 different plants based on the hundreds of thousands of reference photos the team took in India. Over 800 artists worked on the film from MPC alone.
MPC also utilised its team in Bangalore, India to undertake comprehensive photo surveys of over 40 locations across the country, generating hundreds of thousands of photos in the process. “This was the core of our asset creation for the world,” says Valdez. “Trees, rocks, plants, individual leaves and twigs for all the detritus, debris all over the ground. Then there are key sets in the movie based off real locations – like whole walls of rocks which we used as textures or built from photogrammetry. Our set lead in London would use initial set sketches, flesh them out, add hero modelling, like a hero tree, and piece them together bit by bit, using new scatter tech to distribute plants and items and pebbles and rocks and dead leaves. Then there was a scene by scene design process where set and production and lighting would come together for pre-comps – a few key images per set. We’d show them to Jon to see if he liked them and then continue to work on them. Then there was a whole team of people who took that, did render optimisation to it and got it through the render farm.”
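MPC’s scatter tooling is proprietary, but the core idea Valdez describes – distributing instanced assets such as plants, pebbles and dead leaves over a ground plane with per-type density weights – can be sketched in a few lines. This is a minimal illustration, not MPC’s pipeline; all names and weights here are made up:

```python
import random

# Illustrative asset types and relative scatter densities.
ASSETS = {"plant": 0.6, "pebble": 0.3, "dead_leaf": 0.1}

def scatter(n_points, width, depth, seed=42):
    """Return n_points (x, z, asset) placements on a width x depth ground plane."""
    rng = random.Random(seed)  # seeded so a set re-renders identically
    names = list(ASSETS)
    weights = list(ASSETS.values())
    return [
        (rng.uniform(0, width), rng.uniform(0, depth),
         rng.choices(names, weights)[0])
        for _ in range(n_points)
    ]

placements = scatter(1000, width=50.0, depth=50.0)
print(len(placements))  # 1000 instanced placements, ready to hand to a renderer
```

A production system would of course scatter onto sloped terrain, avoid intersections and respect painted density maps, but the instancing-by-weighted-choice idea is the same.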
While Sethi would be almost the only live action element in the plate (depending on the shot), he was at times a digital double. The actor was scanned via USC ICT’s Light Stage and built by MPC.
“The likeness is uncanny,” remarks Legato, who was particularly happy with this work. “The skin and the quality of the skin – it’s hard to detect. On the lazy river shots – when he’s sitting on the bear, the top half from his red diaper up is him and the bottom half is digital. Parts of his arms are digital double too.”
Stunt shots were commonly digital double work, but even very subtle shots made use of a CG Mowgli. “There’s a shot where a squirrel flies into the tree and you’re looking over the squirrel’s shoulder and you see the boy wrapping up vines for the build of his thing for the honey cliff,” outlines Legato. “I shot that with a motion capture double, and even people over here didn’t realize I did that. We just made it up because we needed it for the scene.”
“There’s also a straight over the shoulder looking down part of his body, looking at Bagheera because he kind of gets pissed off at Baloo,” continues Legato. “We just didn’t have that in live action. I told the colorist that that was a digital double and he said, ‘Well you’re full of it because he’s breathing!’ I said that’s what we captured. I did X number of takes and we picked the one where he was jostling around enough to make you believe that he’s real, without it being too obvious that it’s acting. Just enough so you believe it’s the body there and the character in the scene. But the colorist said, ‘Look, he’s breathing and the way he moves his foot…’. That’s the reason we captured it, because we don’t think to do that. We do it as a human being pretty naturally and that’s why you shoot take after take until you get something. And you believe it’s real.”
Indeed, this is a central aspect to the way Legato always approaches his visual effects work – he is aiming for as realistic as possible, but it’s actually a movie kind of realism. “Movie acting isn’t real,” he notes. “It’s acting for a camera! The difference of opinion I have with a lot of CG movies is we’re not creating real life – we’re creating movie life. In movie life, the lighting doesn’t necessarily match from shot to shot. It doesn’t really – if you front light something and do a reverse back light, you pick the light that makes the scene work the best.”
Lighting The Jungle Book
That approach was also reflected in the way Legato and the visual effects teams pressed for imperfections to be included in the film in terms of focus, framing, lighting and other aspects of the scenes. The visual effects supervisor says he was aiming to re-create what a jungle movie might look like based on “our collective memories of what a movie looks like, which is photographed, as opposed to perfected. The perfection of it I think starts to make it look much more artificial even though what you’re looking at is real objects. Photography is photography, and if you want to expose for someone under a tree that’s in perfect shadow and then also keep the sky, you have to do a trick with it – you have to bring lights in. In the 40s they would bring lights outside and you could tell it was a studio lit thing and it’s a little artificial. And then when we got better at it and photographers like Gordon Willis came in and said no, the window will blow out, that’s fine, that’s normal. In fact if you want to shoot all this live action stuff on a stage and want to make it look like sunlight’s pouring in, you overexpose the hell out of the backdrop and the light pouring in through the window. You can control it, but as soon as you control it, it feels like it’s a set.”
That meant that Legato would ensure shots of Mowgli, say, under the canopy of a tree were exposed correctly for that kind of challenging lighting condition. “If you want to expose for the kid under a tree, well then you expose a stop under and set the backgrounds 2 or 3 stops over and live with that, because you’ve seen it before, in a real movie – and that’s suggesting it’s a real movie. Not real life, a real movie life, that tells you you’re looking at our version of movie life. It’s so different than computer life, where you have to put in these imperfections. And why would you do that? Because you don’t want it to look like you’ve controlled everything, you want it to feel like it’s a real film. When everything’s done really well – the focus, the lighting, the camera – all those disappear into the fabric of the movie and you just watch the movie. And that was the drive of every portion of the movie: how to light it, and how to re-create those same conditions in a computer.”
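The stop arithmetic Legato describes maps directly onto powers of two in linear exposure: each stop doubles (or halves) the light. A small sketch of the ratios implied by “a stop under” for the subject and “2 or 3 stops over” for the background (the function name is illustrative):

```python
def stops_to_ratio(stops):
    """Each stop of exposure doubles (positive) or halves (negative) the light."""
    return 2.0 ** stops

subject = stops_to_ratio(-1)   # one stop under: 0.5x nominal exposure
bg_low  = stops_to_ratio(2)    # two stops over: 4x nominal
bg_high = stops_to_ratio(3)    # three stops over: 8x nominal

# The background ends up 8x to 16x brighter than the subject in linear light,
# which is why the sky "blows out" while the kid under the tree reads correctly.
print(bg_low / subject, bg_high / subject)  # 8.0 16.0
```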
What helped here to a large degree was a major effort from MPC to orchestrate the approach to physically accurate lighting early – prior to the shoot – during a pre-lighting phase with DP Bill Pope. “We had a team in LA working alongside everybody in physical production,” explains MPC visual effects supervisor Adam Valdez. “Our lighting lead from London was there using PRMan 19 with the raytracer. He would take every scene once it had come out of previs, along with the environments that had been generated in previs, and build on them. Then we would sit with Bill and pre-light several key angles in the set, in this case using domes and soft boxes as key lights to represent the sky, plus reflection cards and the sun. They would establish in a precise way the time of day, the angles, the ratios of all the lighting.”
The effect was a pre-light approach to every scene in the film, which was then turned into spherical renders of the digital sets. “It was a lot like Google StreetView,” says Valdez. “We had early 360 VR bubbles and you could look in any direction. We had two or three points of view rendered per set with all this advanced lighting work – we put that on an iPad that they could refer back to when they were lighting on set. They could stand on set with a $5 iPad app called iPano and load in these kinds of panoramas and spherical photographs. You could look in every direction and see up above there are trees but there’s dapple coming from the left, then over at the right there’s a big rock wall. Then there’s open sky. They could see in the digital set where the light was coming from and what they had to consider while filming.”
Legato commends, in particular, the use of the new RenderMan RIS path tracer for replicating real world (or movie world) lighting, as used by MPC. “You put a light where the light is and it’s going to do all the various things, and you can put bounce and fill cards and all you want as a real cameraman would, and you get the same kind of effect. It’s what you would do for real and you do exactly like that in the computer and do not try to embellish it too much, not too fancy, and it looks real.”
Moving completely to raytrace rendering was a major move for MPC, although the facility has been a RenderMan user for some time. “One of the biggest issues was replacing our entire shader set,” says Valdez. “We switched over to more of a mono shader. We did have multiple shaders for skin, hair and hard surfaces but we switched over to a singular shader model. That’s really a strong model because when you have a ray tracer that knows how to transport light, you don’t really want lots of other variables.”
“Also,” adds Valdez, “the lighting department had to learn different disciplines that come with ray tracing – although it’s not alien to us at all. MPC has been doing so much work to emulate ray tracing using REYES and building so many components to get you close to that ray-traced look anyway. But what raytracing does is – the more accurate your light transport model, the more robust your assets need to be and your lighting setups and lookdev setups need to be.”
When it came time to deliver the 3D character animation, one of Legato’s initial concerns was delivering animals whose performances would be voiced by well-known actors. “Part of the gestalt of the movie was movie reality – if you stylize the animals, you would never suspend disbelief. They would always be an animated character and you would always sense the actor playing them because it’s a larger than life thing. If you make the initial leap that if a leopard can speak, he would enunciate every word but he could only use the jaw God gave him and no more. If you give him a sense of intelligence and only let him speak as if such a thing is possible, you would eventually get sucked into the performance, just like you accept Marlon Brando is the Godfather.”
Adds Legato: “We did tests where the animators animate every syllable, but we thought that looks too weird, it looks too funky and if you look at Ben Kingsley talking sometimes his lips don’t move as he’s saying the next word. We don’t do that. So we thought, let’s not do that with the animals either.”
Another issue to solve was a slight contrast in the characters themselves as the movie progresses. In the first half of the film, the animals are completely photoreal in look and performance (albeit with speaking roles). Then as we meet Baloo and later King Louie, these creatures become much more larger than life. “At the beginning, you’re in the zone,” suggests Legato. “With Shere Khan, for example, there’s something about Idris Elba’s performance capturing the calibre of that animal and the majesty of it, and the point of view it had, and it’s all really strong. You’re into the realism of it, forgetting it’s an actor in a booth and a rendering of 1s and 0s.”
“But when you get to Bill Murray,” says Legato, “he is a larger than life funny man. The bear is particularly difficult to make look believable when it’s doing something un-bear-like. In our case, a friendly bear is not a real thing. You have to hope that the audience is with this movie to take a leap with this guy, and now you hopefully just enjoy the camaraderie and the friendship and the story point that’s being made. The same thing with King Louie – he’s a larger than life character and so is Christopher Walken’s voice. It was a little Apocalypse Now – we stole as a reference the first introduction to Marlon Brando in that movie, who is also playing a larger than life character.”
Legato heavily praises the work of MPC and Weta Digital in animation, especially the tiger Shere Khan created by MPC and the performances of the monkeys in the King Louie scene by Weta Digital. “Next time you view it, stare at the background creatures – they all have different peculiar personalities, and every time we would look at it, you forget that, peripherally, some animator somewhere is putting the personality into the backgrounds.”
MPC, taking on the lion’s share of the digital creatures, particularly with Baloo, Shere Khan and Bagheera, launched into a major reference gathering process – mainly sourcing online clips and some photography at wildlife parks. “The important use of the reference was in animation,” notes Valdez. “We worked really closely with animation supervisor Andy Jones on working out, well, how does a wolf say it’s happy or sad or tense? You realise you can do a lot of this as reverse engineering. You can look at lots of examples and you find a clip that suggests something to you – moving in a lackadaisical way. I don’t need to know why, but I’ll copy it, and if I copy it well enough the audience won’t care – you don’t want to ‘speak’ wolf to the audience, you want to speak human and hope the audience gets it.”
Under the hood, MPC also carried out extensive R&D on its muscle, skin and fur systems. “We made a choice to go very hi-res with the models,” states Valdez. “I was concerned with previous approaches to representing rippling and subtle skin deformations which we’d done with texture map techniques. I thought it was too hacky for today’s standards and really didn’t let me see skin moving over ribs or muscle definition. I wanted to see fine movement in the skin and wrinkling in the geo.”
Weta’s Cold Lairs
Weta Digital’s King Louie sequence, in which Mowgli is taken to the Gigantopithecus’ lair in an abandoned temple, involved several design stages. The first was for the jungle and temple environments. “We sent our VFX photographer out to India for a few weeks and he shot tens of thousands of reference photos across several temple sites,” outlines Weta Digital visual effects supervisor Keith Miller.
“For the jungle – we did not have instancing as part of our Manuka renderer when we started the show, so we used the film as a test bed for it. It meant we could throw a large number of trees at it. We also upgraded our Lumberjack foliage system, which let us add more complex motion and gave the animators the ability to drive primary interactions between the apes and the branches and then pass that stuff to FX.”
Character design was also a large part of Weta Digital’s mandate; here it was for King Louie and his monkey minions. “Most of the animals we’ve worked on previously are based on real life reference,” notes Miller. “You can go and take photos. That said, to a degree King Louie is more or less a giant orangutan. You study what you can about the Gigantopithecus – you look at the fossil records, you read about the research on them.”
Ultimately King Louie, voiced by Christopher Walken, was realized as an 11 foot tall animal. Walken’s sound recording sessions were videotaped and that footage was used for reference in the performance, but not to the point of any facial scan or performance capture. Instead, director Jon Favreau himself wore a capture suit in a volume – although even this served only as reference and was not used as mocap data directly.
Weta Digital has created a photorealistic orangutan before – Maurice – on the Apes films. Here, the studio did capitalize on that build, but because of King Louie’s scale they needed to take certain aspects much further, such as the wattle – the air-filled sac below the neck.
“Whenever we were looking at Maurice in Apes he was never right up in camera,” says Miller. “Because of the scale of him, you’re looking right into Louie and the wattle because of the viewpoint of Mowgli. It wasn’t quite holding up, so we implemented a system of doing dynamic displacements that really allows you to see the compression and expansion of the higher frequency structure and detail in the wattle.”
An extra detail added to King Louie was the mass of hair and fur – groomed in Barbershop and rendered in Manuka – and the several monkeys mingling in it. “Jon actually referenced Davy Jones’ tentacles from the Pirates movies here,” explains Miller. “One thing that stuck in his mind was the quality of those tentacles – he wanted to play on that idea with Louie by giving him really thick, dread-like, disgusting and dirty layers of fur. So you see those parasitic tiny monkeys climbing in and amongst all that stuff – that’s going to stick in your head well beyond.”
The song and dance sequence involving King Louie was originally intended to be part of the closing credits, but at some point during production was scrapped – and then re-imagined to feature in the main film. That meant Weta Digital had to turn around animation fairly quickly for that section, sometimes without enough live action plates or mocap reference. The solution was to rely on a digital double of Mowgli for portions of the song and dance.
Legato says during the King Louie scene, look closely at the shadows on the wall – one looks strikingly similar to a famous Disney character. He’s also fond of the moment Bagheera tries to attract the attention of Mowgli so they can escape: “They asked me how would we do that, and I thought he’d just flick his head. If you’re trying to get someone’s attention you just move your head in that direction and it creates the illusion of ‘let’s go’ – they did exactly that and it worked.”
The film’s stereo presentation also helped present the characters, although Legato says this was not an ‘in your face’ stereo film. “I only wanted to use it judiciously and make it feel perfectly natural, so that you could melt into the world and not pay attention to it. However, the one character that is continually off the screen is Shere Khan. He is continually in your lap as part of his personality of intimidation. The way to make that subtlety work without being hit over the head is that the other characters sit comfortably at the screen plane or back from it, but not so obviously so. It’s literally like someone who’s in your face – they tend to walk a little closer to your comfort zone than somebody else does.”
A fun end to the movie is included as the end credits roll – here an old-style book flips through its pages to reveal many of the animals from the film, which literally pop out of the book in accentuated stereo. Legato shot the live action portions for the credits himself as a tabletop – and used the ACTUAL book from the original Disney animated Jungle Book film from 1967, which was also augmented with a CG version (as the pages inside had also been used for Robin Hood in 1973).
“Jon wanted it like a tabletop,” recalls Legato, “so how do we make it look like a tabletop? You put it on a table with a piece of velvet around it, and you can’t help but feel like it’s a toy – a little miniature creature. Only one person is allowed to touch it which is the archivist so it was like the first bible. So he had white gloves on. We set up the first shot. We had holes below the surface of where the velvet was and we said well we want to poke the book to make it move, what do you say about that? The only way to sell it was to say, we are nervous about it so we want you to do it. And he went, OK! Our archivist went under the table and we gave him a musical beat to hit and we shot it.”
Weta Digital had a hand in the end credits sequence too, in which King Louie wreaks some havoc in a temple environment. “Jon wanted us to play with scale and keep the whimsical aspect to it,” says Keith Miller. “We had to start looking at it in terms of not only the animation and performance, but the simulation too. Jon said it should feel like a stage or a set piece. So we didn’t have to concern ourselves with real-world physics. It was the same with the sim of the fur – is he a smaller plush toy or a scaled-down version of Louie? We did keep the physics of the fur the same, but we took the scale of the environment and brought it up. We made a gigantic version of the environment all around him.”
Just as fun are the film’s opening titles, which reveal a different Disney logo that merges into the jungle. These were designed as a throwback to the way animation and animated titles might have been made in the 2D cel era – i.e. with a multiplane effect, a technique Disney had pioneered.
“Coming out through the clouds, seeing the stars – it’s all cel animated fireworks, with a Disney animator animating it and my son shooting it one frame at a time with my still camera, a 7D, instead of a Mitchell. We were looking at material which was the big innovation for Disney – the multiplane camera – and we wanted to have a nod to the fact that we appreciate the past, or where we came from, and not wipe everything clean and do this fancy CG thing. We shot it in Technicolor. I had him shoot it with a red, green, blue filter for every frame, so it would have this patina of how they had to do it back in the day. We basically created this multiplane shot, which is the old version of the multiplane, blending into the CG version of the multiplane which is the opening of the movie – going from the old to the new. We shot in separate cels and we put it together in NUKE. First I tested it in After Effects and made a camera literally with multiplanes, mimicking what that camera did. Then MPC finessed it a little bit to get the 3D right and blended it into their own version.”
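Shooting each frame three times through red, green and blue filters is the three-strip Technicolor idea: three monochrome exposures that are later recombined into a single colour frame. A minimal sketch of that recombination step, using tiny made-up 2x2 frames of 0-255 intensity values:

```python
# Hypothetical monochrome exposures shot through red, green and blue
# filters – one tiny 2x2 frame per filter, values 0-255.
red   = [[200, 180], [160, 140]]
green = [[ 90, 100], [110, 120]]
blue  = [[ 30,  40], [ 50,  60]]

def combine_separations(r, g, b):
    """Merge three single-channel frames into one RGB frame of (r, g, b) pixels."""
    return [
        [(r[y][x], g[y][x], b[y][x]) for x in range(len(r[0]))]
        for y in range(len(r))
    ]

frame = combine_separations(red, green, blue)
print(frame[0][0])  # (200, 90, 30): a warm, reddish pixel
```

Any misregistration or fading between the three passes shows up as colour fringing and shifts – exactly the period “patina” Legato was after.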
And as is often the case with the films he works on, Legato was involved as a second unit director. “It’s like shooting a dog, quite frankly,” he says. “You’re trying to get a kid to act and he can’t act, so you just try to get him to react. The main thing that got his performance was an iPhone. He was looking up and reaching for the face of Bagheera, except what he was reaching for was an iPhone, and we kept pulling it away from him and he kept on reaching more and he really wanted that thing, and it creates the illusion he was reacting to Bagheera. His mom was dressed in a little blue suit and she’s holding his favorite toy, which is the iPhone, and you pull it away too long and he starts to cry, and that’s a different moment we start to use, and when you give it to him and he looks happy…you’re just stealing reactions.”