Marmaduke

Director Tom Dey called on several FX shops for Marmaduke, the story of a loveable Great Dane who moves to Orange County with his unsuspecting family. Under overall VFX supe Craig Lyn, artists brought to life several talking animals mostly via CG face replacements, although some key scenes featured an all-digital title character. We take a look at the work by Cinesite, CIS Vancouver and Rhythm & Hues.

Making Marmaduke talk

The majority of the real talking animals in Marmaduke were handled by Cinesite in London, which delivered 650 shots of different dog and cat breeds. Cinesite visual effects supervisor Matt Johnson spent three months on set in Vancouver during the shoot and drew upon previous work for Beverly Hills Chihuahua. “We developed a projection-based hybrid technique where you start with the real dog’s performance,” said Johnson. “Then you build and model the head and track the live action, then take the geometry and re-project the live action performance back over the animated geometry. Sometimes you have to go with a fully-CG head when the dog is making some extreme movements, but that’s the basic technique.”
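In broad strokes, the approach Johnson describes can be sketched in a few lines: push the animated head’s vertices through the matchmoved camera back into the plate, so each point picks up the live-action pixels it sits over. The Python below is a minimal illustration with an idealised pinhole camera and invented names – not Cinesite’s pipeline, which would also handle occlusion, filtering and the fully-CG fallback for extreme poses.

import numpy as np

def project_to_plate(points_world, world_to_cam, intrinsics):
    """Project 3D points (N,3) into plate pixel coordinates (N,2)."""
    # World -> camera space (world_to_cam is a 4x4 matrix from the track).
    homo = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = (world_to_cam @ homo.T).T[:, :3]
    # Perspective divide, then focal length and principal point.
    fx, fy, cx, cy = intrinsics
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

def reproject_performance(plate, animated_verts, world_to_cam, intrinsics):
    """Fetch, for each animated vertex, the plate colour it 'sees' through
    the tracked camera - the live performance re-projected onto new shapes."""
    uv = project_to_plate(animated_verts, world_to_cam, intrinsics)
    h, w = plate.shape[:2]
    # Nearest-neighbour sampling, clamped to the plate bounds; a real
    # pipeline would filter and cull points the camera cannot see.
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    return plate[v, u]

# Toy usage: a random 'plate' and two vertices in front of the camera.
plate = np.random.rand(1080, 1920, 3)
verts = np.array([[0.0, 0.0, 2.0], [0.1, -0.05, 2.1]])
colours = reproject_performance(plate, verts, np.eye(4), (1500, 1500, 960, 540))
print(colours.shape)  # (2, 3)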

To replicate Marmaduke, voiced by Owen Wilson, and the other dogs, the animals were laser scanned, often with varying results because of hair and other issues. A more useful source of reference came from tracking markers that could be placed on the bony points of the face. “Dogs don’t stand still for LIDAR scans so well, but we were able to stick these little dots on and take a series of photographs from every conceivable angle,” explained Johnson. “Because the markers are fixed, you get a good sense of the correlation between the different angles that you’re viewing. We also had some little cards made up with standard sizes on them which we could hold up against the dog’s muzzle or forehead to get a true sense of the scale of the animal.”
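The way fixed markers tie the different photographs together can be illustrated with a toy two-view triangulation: intersect the viewing rays that sight the same dot, then use a card of known size to pin down absolute scale. The numbers and setup below are invented; Cinesite’s actual reconstruction would combine many views.

import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Least-squares midpoint of two (possibly skew) camera rays."""
    da = dir_a / np.linalg.norm(dir_a)
    db = dir_b / np.linalg.norm(dir_b)
    # Solve for ray parameters t, s minimising |(oa + t*da) - (ob + s*db)|.
    A = np.array([[da @ da, -da @ db],
                  [da @ db, -db @ db]])
    b = np.array([(origin_b - origin_a) @ da,
                  (origin_b - origin_a) @ db])
    t, s = np.linalg.solve(A, b)
    return 0.5 * ((origin_a + t * da) + (origin_b + s * db))

# Two cameras a known distance apart, both sighting the same marker dot.
marker = triangulate(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
                     np.array([1.0, 0.0, 0.0]), np.array([-0.4, 0.0, 1.0]))
print(marker)  # -> [0.2, 0.0, 2.0]

# A reference card fixes scale: if a 50 mm card spans 0.025 reconstruction
# units, every measurement converts at 2000 mm per unit.
scale_mm_per_unit = 50.0 / 0.025
print(marker * scale_mm_per_unit)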


On set, animal trainers from Birds & Animals Unlimited relied on food at the end of sticks to guide the performances. “They’ve got it down to a fine art now – wherever the stick goes with the piece of chicken that’s where the dogs look,” said Johnson. “Plus, Marmaduke isn’t just one Great Dane, there were two, one called George and the other called Spirit. We had to adapt our Marmaduke model depending on which dog we were doing visual effects shots for.” Also on set, Johnson and other Cinesite personnel shot chrome and grey ball references anytime a dog was involved, relying on HDRI setups only occasionally, as the dogs would get too tired between takes.


Cinesite’s 3D dog heads were modeled and animated in Maya and rendered in RenderMan, built from the dog scans as well as photographic reference and books of dog skeletal structures and musculature. For animation, artists looked to video of Owen Wilson’s line readings to follow his distinctive style. “I’d supervised an Owen Wilson movie before and had a good idea of his expressions and the way he talked,” noted Johnson. “A Great Dane has these big floppy jowls, which keep moving when it’s talking, so we did a lot of asymmetrical jaw movement and bringing the jowls up to create offsets in the face. That seemed to suit Owen’s comic style of talking.”
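The asymmetric jaw and jowl offsets Johnson mentions are, at bottom, weighted blendshapes with the two sides of the mouth driven separately. A minimal sketch, with an invented four-vertex ‘mesh’ standing in for a full Maya facial rig:

import numpy as np

base = np.zeros((4, 3))  # placeholder vertices for a toy face mesh
deltas = {
    "jaw_open_L": np.array([[0, -1, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], float),
    "jaw_open_R": np.array([[0, 0, 0], [0, -1, 0], [0, 0, 0], [0, 0, 0]], float),
    "jowl_up":    np.array([[0, 0, 0], [0, 0, 0], [0, 0.5, 0], [0, 0.5, 0]], float),
}

def evaluate(base, deltas, weights):
    """Classic additive blendshapes: base + sum of weighted shape deltas."""
    out = base.copy()
    for name, w in weights.items():
        out += w * deltas[name]
    return out

# Unequal left/right weights give the lopsided jaw motion, with the
# jowls riding up on top of that as a separate offset.
frame = evaluate(base, deltas, {"jaw_open_L": 1.0, "jaw_open_R": 0.4, "jowl_up": 0.7})
print(frame)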


An in-house team at Cinesite continued previous development of compositing tools in Shake that allowed painting of frames, removal of seams, matching of (often wet) fur and the re-projection of images, rather than going back into 3D to revise shots.

Performance-wise, artists aimed for something in between cartoony and absolutely believable. “It wasn’t just a case of making the mouth move in sync with what the actors were saying,” said Johnson. “We had to do all of that but also make it cartoony in a sense so that it was doing something that a dog couldn’t do but make sure it was believable. It needed to feel as if Owen Wilson had turned into a real dog. But the idiosyncrasies of the dog’s performances were the most important – the eye-twitches, the blinks and all that stuff. The minute you take that out it becomes like a stuffed animal.”

After spending 11 months on the show, much of that in post-production in Los Angeles where he reviewed Cinesite dailies being delivered from London, Johnson came away with a new-found appreciation for man’s best friend: “What you have to bear in mind when you’re shooting a film like this is that you have the actual dogs doing great performances, but we have to make sure anything we do in visual effects will cut back to back with the real footage.”

Chupadogra and friends

Making of clip 1 from CIS Vancouver

Making of clip 2 from CIS Vancouver

CIS Vancouver contributed 30 talking animal shots to Marmaduke, including for Chupadogra (Sam Elliott) and Bosco (Kiefer Sutherland). CIS visual effects supervisor Nicholas Boughen approached the shots by drawing on talking animal work from eight or nine previous films. “One of the early things we talked about, in particular when we’re creating our animation shapes, is that a dog doesn’t have the facial musculature that lets it talk,” said Boughen. “Well, dogs don’t talk! So we have to anthropomorphise these animals – make adjustments to their physical reality in order to make them behave in human-like ways. For example, a dog doesn’t have the same facial muscle groups that humans have, so you have to add the humanistic muscle groups, because facial expression is about 50% eye shape.”


Using scans of the dogs and photographic reference, Boughen’s team matchmoved the performances with a proprietary 3D talking animal head system. “For Good Boy! in about 2003,” explained Boughen, “my boss called me up and said, ‘We have this show with 350 talking animal heads – can we matchmove them?’ and I said, ‘Well, ‘course we can!’ and then went away and tried to work it all out! We worked through quite a few iterations. We started by trying to stabilise a plate around a head so you wouldn’t have to matchmove it so much, and then went through a bunch of iterations before coming up with the current system. Whereas in the past it took us seven days to matchmove a talking animal head for a four or five second shot, we now have it down to the half a day to one day range. A lot of it is automated, although nothing can totally replace human judgement.”
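The plate-stabilisation trick Boughen describes can be caricatured in a few lines: translate each frame so a tracked point on the head lands on a fixed anchor, do the detailed matchmove against the much steadier plate, and invert the shift afterwards. This integer-pixel, single-point version with invented names is only a sketch; a production system would use sub-pixel, multi-point transforms.

import numpy as np

def stabilise(frame, track_xy, anchor_xy):
    """Translate the frame so the tracked point lands on the anchor.
    Assumes the shift is smaller than the frame itself."""
    dx = int(round(anchor_xy[0] - track_xy[0]))
    dy = int(round(anchor_xy[1] - track_xy[1]))
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    # Source and destination windows for the clipped integer shift.
    sy0, sy1 = max(0, -dy), min(h, h - dy)
    sx0, sx1 = max(0, -dx), min(w, w - dx)
    out[max(0, dy):max(0, dy) + (sy1 - sy0),
        max(0, dx):max(0, dx) + (sx1 - sx0)] = frame[sy0:sy1, sx0:sx1]
    return out

# Per frame: stabilise using the 2D track of a bony point on the head.
frame = np.random.rand(540, 960, 3)
steady = stabilise(frame, track_xy=(480.2, 300.7), anchor_xy=(480.0, 270.0))
print(steady.shape)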


Artists at CIS used Maya and Softimage for modeling 3D versions of the dog heads, relying on Maya for blend shapes and animation rigs, as well as for cloth sims and texturing. Rendering was handled out of LightWave and Mantra, and compositing was shared between Shake and Nuke. “Mostly, we did full face replacement – above the eyes, between the ears and down to the neck,” said Boughen. “For one particular shot when Marmaduke meets Chupadogra, they have an exchange and the shots range from wide angle to extreme close-up. When you have full face replacement at film resolution on a full size theatre screen and that entire screen is filled up with your work, essentially that work has to be photo-real, organic, with jiggly jowls, fur – everything! We all looked at that with excitement and we each took it as a personal challenge.”


Marmaduke surfs and steps out

Making of Surfing from Rhythm & Hues

Fully CG versions of Marmaduke, seen as the Great Dane rides a surfboard, rescues a pal from a sinkhole and busts some dance moves, were handled by Rhythm & Hues. “It was different from what we usually do,” commented Rhythm’s visual effects supervisor Mike O’Neal, “because usually we’ll have a CG character that’s CG for the entire movie and that means we’ll have a lot of artistic control to make the animal do whatever we want. The challenge here was that we had to make it look like the real animal as much as possible and also match the facial animation that the other shops were doing.”


To model the digital Marmaduke, artists relied on rough scans and photographs taken from many different angles. Using Maya, Rhythm came up with a hybrid model that looked like both Marmaduke dog actors, passing this through the studio’s proprietary Voodoo system for rigging, animation, grooming and lighting. Compositing was carried out in another proprietary package called Rampage that also runs out of Voodoo.

For the surfing sequence, one of the biggest challenges was achieving the right scale. “That’s always a game between how fast things should move,” said O’Neal. “Everybody usually wants a fast-paced sequence, but almost all of the surfing reference we had was shot in slow motion. It’s a cascading effect because once you start the speed of one thing, that drives the speed of everything else. We ended up having all kinds of challenges – how fast could it go to the camera and still have it feel realistic, how much camera shake can you give it without making it feel like a miniature? How explosive can the white water be to make it look really powerful without making it look small because it’s going too fast? A lot of it was timing and re-running simulations and checking things.”
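There is a classical rule of thumb behind the speed-versus-scale game O’Neal describes: for gravity-driven motion like water, time scales with the square root of size (Froude scaling), which is why big-wave action played too fast reads as a miniature. A tiny worked example, offered as background rather than Rhythm & Hues’ actual numbers:

import math

def slow_down_factor(size_ratio):
    """How much slower events should run for water meant to read
    size_ratio times larger than the reference footage."""
    return math.sqrt(size_ratio)

# A wave meant to read 4x bigger than the filmed reference should
# evolve about 2x slower - hence all that slow-motion surfing footage.
print(slow_down_factor(4.0))  # 2.0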


Rhythm & Hues approached the surfing shots by breaking the wave down into separate components. “We started with hard surface water,” explained FX TD Walt Jones. “The shape was a little bit different but the surface properties are similar. Then the spray was fine particulate, and that had its own challenges. The third category was the transitional piece, somewhere between water being solid and completely vaporised. It has to move from one to the other, and that was ultimately the biggest challenge. It’s something that happens fairly effortlessly in the real world, but we had to do it without showing obvious breaks.”


For Rhythm, it was important to be able to control or art direct the wave simulations as well, so that the photorealistic nature of the shot could blend with the relatively crazy sight of a dog riding a surfboard.

Explains Jones: “We effectively ran a fairly straight CFD fluid simulation using the contact point between the wave coming over the top and the flat surface of the water to get this very rough approximation of the motion vectors that would be created by this kind of event. We had this big blobby skin that looks like a fluid but is very low resolution, just to get the gross movement of things. Then we would use that to drive a particle system in Houdini. We get all of these particles out of it – we can run them through the vector field and we can have them timed out with the event. Then when we actually start putting stuff into our renderer, we’re going in and we’re stamping individual volumetrics, not spheres. We have a fair amount of control as to how the points are rendered, so we can have a fairly sparse particle system but end up with a fair amount of volumetric density, because we can stamp different shapes of volumes onto each of these points.”
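A toy version of the two stages Jones outlines: advect a sparse particle set through a vector field, then stamp a small volumetric kernel around each point into a density grid, so that relatively few particles still produce dense-looking volume. The field, counts and resolution below are all invented for illustration; the production version ran through Houdini and Rhythm’s own renderer.

import numpy as np

res = 32
density = np.zeros((res, res, res))

def velocity(p):
    """Invented stand-in for the CFD-derived vector field: a gentle
    swirl about the grid centre plus a steady updraft."""
    x, y, _ = p - np.array([16.0, 16.0, 0.0])
    return np.array([-y, x, 20.0]) * 0.01

# Sparse particles seeded near the bottom of the grid (voxel units).
rng = np.random.default_rng(1)
particles = rng.uniform(8, 24, size=(200, 3)) * np.array([1.0, 1.0, 0.25])

for _ in range(20):  # advect with simple forward-Euler steps
    particles += np.array([velocity(p) for p in particles])

# Stamp a small Gaussian volume (not a hard sphere) around each point.
radius = 2
offs = np.arange(-radius, radius + 1)
kx, ky, kz = np.meshgrid(offs, offs, offs, indexing="ij")
kernel = np.exp(-(kx**2 + ky**2 + kz**2) / float(radius**2))

for p in particles:
    i, j, k = np.clip(p.astype(int), radius, res - radius - 1)
    density[i-radius:i+radius+1, j-radius:j+radius+1, k-radius:k+radius+1] += kernel

print(f"non-empty voxels: {(density > 0.05).sum()} of {density.size}")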


At the lighting stage, artists started with an existing shader for the look of the wave, with two separate lighting passes. “One was a fairly straightforward subsurface scattering pass,” said Jones. “We’d make point clouds, a radiance cache and then by doing an elementary algorithm we’d figure out what the subsurface contribution was going to be based on different points and scattering distances. That became the basis for the murky look. The concept was we were going to use that to approximate the look of light being reflected and scattered amongst little bits of particulate. We had a lot of code in the lighting pipeline to try and push more of that into thinner areas of the wave and push it into the deeper parts. We tried to think about it from a physical perspective – where does this churn exist and where do air bubbles show up? We built some fairly simplistic expressions that would drive the influence of that subsurface scattering.”
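In the same spirit as the point-cloud pass Jones describes, the sketch below caches irradiance at sample points and gathers each shade point’s subsurface contribution with an exponential falloff over a mean scattering distance. The data, falloff and names are invented placeholders for the studio’s radiance cache and shader code.

import numpy as np

rng = np.random.default_rng(0)
cloud_pos = rng.uniform(0.0, 1.0, size=(500, 3))    # cached sample positions
cloud_irradiance = rng.uniform(0.0, 1.0, size=500)  # light arriving at each

def subsurface(shade_point, scatter_dist=0.15):
    """Gather scattered light: nearby cached samples dominate, with
    contributions falling off as exp(-distance / scatter_dist)."""
    d = np.linalg.norm(cloud_pos - shade_point, axis=1)
    weights = np.exp(-d / scatter_dist)
    return (weights * cloud_irradiance).sum() / weights.sum()

# One shade point deep in the 'wave'; a larger scatter_dist pushes the
# murky contribution further into thick regions, as described above.
print(subsurface(np.array([0.5, 0.5, 0.5])))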


A beauty pass made up of almost 100 per cent reflection and refraction was then run over the top. For backgrounds, Rhythm matte painted a sky onto a dome and placed a murky deep sea out to the horizon. “It ended up being a conglomeration of a number of different styles of waves, and something that required a fair amount of directability,” noted Jones. “So we went away from something that was much more based in true simulation into something that was much more controllable by hand.”
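As background to a beauty pass that is ‘almost 100 per cent reflection and refraction’: the split between the two on a water surface is governed by the Fresnel equations, commonly approximated with Schlick’s formula, and grazing angles push nearly all the light into reflection. A generic illustration, not Rhythm & Hues’ shader:

import math

def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    """Schlick's approximation for reflectance at an air-water boundary."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

for angle_deg in (0, 45, 80):
    r = schlick_fresnel(math.cos(math.radians(angle_deg)))
    print(f"{angle_deg:>2} deg: {r:.1%} reflected, {1 - r:.1%} refracted")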

A subsequent shot of Marmaduke falling into the water, seen from below, relied more heavily on simulation. “Things don’t really look that interesting from underwater when you look straight up, so we ended up having to augment and exaggerate the influence of things,” said Jones. “We ended up doing several dozen simulations and picking the ones that most closely resembled what we thought would work. Actually, a lot of us are avid surfers so it was really cool to go out in the morning and come back and analyse the waves in dailies.”

The digital Marmaduke was also necessary for shots of the Great Dane rescuing Mazie (Emma Stone) from a sinkhole and other shots of him rocking out to a video game machine. “The trick with that one was to make it feel like it’s the real dog,” explained O’Neal, “but with very specific choreography. He had to do things that were almost ridiculous for a real dog to do, but we had to try and transition that from completely real to the audience buying it the whole time. And we had to also replace the dance floor, because the reflections wouldn’t work.”

Near the end of the film, Marmaduke partakes in a major dance sequence with 40 other dogs, all realised as completely CG animals by Rhythm & Hues. “Normally when they shoot dog scenes like that they will film 10 or 11 groups of dogs and then comp them all together,” noted O’Neal. “But for a dance number with a moving camera there’s no way you can get the dogs to do the right things at the right timings during the right part of the camera move to be able to comp them together.

“The cameras were also doing some crazy moves, a lot of which they weren’t able to do on set. We took the plates they shot and projected them onto geometry and created much more dynamic camera moves to showcase the dance number. So we had to create environments as well, which was easier in terms of controlling everything but harder because there was a lot more to worry about. I think in these and in all of our shots we were aiming to suck the audience in to a point where we can then go a little over the top, but always have it sneak up on you in a way that you don’t realise you’re seeing an effect.”
