Merlin: VFX Arnold TV pipeline

Merlin is a BBC UK drama about the life of the young wizard in the era of the Round Table and Camelot. Much has been written about how the show is a medieval version of Smallville. Its fifth and final season is now airing on Syfy and features visual effects headed by Michael Illingworth of boutique facility Vine. We take a look at how Vine crafted the effects of Merlin with a new Arnold pipeline, and how several other companies assisted with scanning.

How Vine began

A veteran digital artist and VFX supervisor who had handled the matte painting for the previous series of Merlin, Illingworth created Vine in 2007. He was looking for a fresh challenge after 12 years working in large facilities such as The Mill, Mill Film and Cinesite. After a period of freelance work, Illingworth was invited by Tim Burke to join the Harry Potter VFX team at Leavesden.

Illingworth worked closely with Burke and the editorial department from pre-production through to the end of post. One major sequence he contributed to was the Occlumency sequence, in which Harry enters Voldemort’s memories. Towards the end of his time on Harry Potter, Illingworth was approached by the producers of Merlin, who were looking to create their own in-house matte painting department for series 4 of the show.

Illingworth approached post production supervisor Claire McGrane, and together they recruited 3D supervisors Ivor Middleton and Sally Goldberg. Middleton and Goldberg had both worked on several award-winning films over the years and had experience from major VFX facilities in London and overseas, as Illingworth explained in a recent 3D World magazine interview. From matte painting in series 4, the team expanded to tackle complex 3D characters and dragons in series 5.

On the set of Merlin – scanning

The team started with on-set scanning and HDR capture, but inside the constraints of episodic television. “Working in TV is very different,” says Illingworth. “When I was on set on Harry Potter, you get two or three different people capturing all those different HDRIs for every single location. Unfortunately, when you work on a broadcast TV series, the people on Merlin did not necessarily want a VFX supervisor on set, so we weren’t there to capture everything. So sometimes we got really good HDRIs where we got our full 360 degrees with the fisheye. Other times we were struggling. Often we had to have our guys build little (digital) sets to light our CG elements for, say, a 3D sword.” The HDRIs that were captured were shot with an 8mm fisheye lens on a Canon 5D, bracketed and stitched into an HDR light probe in PTGui.
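PTGui handled the stitching on the show, but the bracketed-exposure merge at the heart of such a light probe can be sketched with the open-source OpenCV library. This is a minimal illustration under assumed file names and EV spacing, not Vine’s actual tooling:

import cv2
import numpy as np

# Hypothetical fisheye brackets shot on the 5D: 7 stops at 2 EV spacing.
evs = range(-6, 7, 2)
images = [cv2.imread("probe_%+d.jpg" % ev) for ev in evs]
times = np.array([2.0 ** ev for ev in evs], dtype=np.float32)  # relative exposures

# Recover the camera response curve, then merge to a linear HDR radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

# Write a Radiance .hdr file; a stitcher like PTGui then combines several
# such views into the full 360 degree probe.
cv2.imwrite("probe_view.hdr", hdr)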

At some locations the team did get LIDAR scans, and these would be used to build a digital environment, with plate photography projected back over it to build more accurate 3D CG sets/environments. “Then we use that to scatter light back onto, say, the dragon. A lot of image based lighting was used, but when we didn’t have HDRIs we’d have to use more good old-fashioned techniques, and even just ‘bog standard’ (off the shelf/standard sky map) HDRIs,” notes Illingworth. This happened especially at some of the exterior locations later in series 5.
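The projection step itself is standard in any 3D package. As a rough picture of the idea – not Vine’s actual setup – here is a minimal Maya Python sketch that pushes a plate through a match-moved camera onto imported LIDAR geometry (all node and file names are hypothetical):

import maya.cmds as cmds

# File texture carrying the plate frame.
plate = cmds.shadingNode("file", asTexture=True, name="cavePlate")
cmds.setAttr(plate + ".fileTextureName", "cave_plate.0101.exr", type="string")

# Perspective projection driven by the tracked shot camera.
proj = cmds.shadingNode("projection", asUtility=True, name="plateProjection")
cmds.setAttr(proj + ".projType", 8)  # 8 = perspective
cmds.connectAttr(plate + ".outColor", proj + ".image")
cmds.connectAttr("shotCamShape.message", proj + ".linkedCamera")

# A surfaceShader ignores scene lighting, so the LIDAR mesh simply
# displays the projected plate.
shader = cmds.shadingNode("surfaceShader", asShader=True, name="lidarProjShader")
cmds.connectAttr(proj + ".outColor", shader + ".outColor")
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name="lidarProjSG")
cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader")
cmds.sets("lidarMesh", edit=True, forceElement=sg)

Once the geometry carries the plate, CG elements placed in the scene pick up bounce from it – the ‘scatter light back onto the dragon’ idea Illingworth describes.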


Original plate.
Final shot.

For example, the Clearwell Caves location near Coleford, in the Royal Forest of Dean, Gloucestershire (near the Welsh border), was recorded with LIDAR. Duncan Lees of 4Dmax was in charge of scanning the 2,000-year-old iron mines. “For the Merlin gig we used a Leica C10 time of flight laser scanner, processing in Leica Cyclone and Geomagic software to create highly accurate irregular triangular mesh models of the caves used as locations in Merlin,” he says.

As we are not experts in LIDAR, fxguide suggested that the C10 was perhaps a lot like the FARO Focus LIDAR scanner which we have recently covered being used in such films as Beautiful Creatures. Lees amusingly replied that “the C10 is in the same broad area of technology as the Faro Focus 3D, but is only a comparable type of kit in the way an instamatic camera is the same as a top of the range Canon 5D DSLR. The Focus is a cheap toy that anyone can use (badly) whereas the C10 is a precision survey instrument. Too many people in VFX think you can point and shoot any laser scanner and get great data. Definitely not true. It requires training, expertise, experience and good survey methodology to collect precise and complete LIDAR data.”

LIDAR can be very data heavy, admits Illingworth. “Frankly it was overkill for some shots,” he says, noting that some of the effects shots were lock-offs, and as such detailed geometry was not pivotal. But there were shots where the LIDAR was critical. “We scanned three different sets inside Clearwell that we knew they were going to film in, but there was a shot where we did not quite get the plates that we needed, so we ended up using the LIDAR. We textured them ourselves and made a brand new shot using the LIDAR. Yes, it was a lot of data, but it came in really handy.”

The C10 has an in-built camera that can color the point cloud, but while it is accurate, its imagery is not as good as the master plate photography. So Dante Harbridge Robinson, one of the key matte painters on the team, took the LIDAR scans for a scene in episode 2 and textured ‘little cards’ which were positioned correctly in 3D space. This allowed a complex crane move to be done and ‘create a brand new shot out of pretty much nothing,’ explains Illingworth. “It was great: we could take the photography and project it over the LIDAR geometry and it would do most of the lighting for us. Dante projected six matte paintings to cover the camera move. He created a lower resolution model of the LIDAR using 3D-Coat. Having the LIDAR was great because of the detail of the rock facets, which shows up with parallax. He also modelled some medieval scaffolding. We had footage of miners working in Clearwell Caves, and he placed these on cards, 50 in total, all in 3D space on the LIDAR. It was a case of finding a place for all the miners where their lighting matched what was going on around them. Unfortunately none of the miners had been shot with a greenscreen, so he had to rotoscope them all! That was tedious.”
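In the spirit of that card setup – and only as a hedged sketch, since Vine’s actual scene files are not public – the miner cards could be laid out in Maya along these lines, with every position and file path hypothetical:

import maya.cmds as cmds

# (position, image sequence) pairs surveyed against the LIDAR geometry.
miners = [
    ((2.1, 0.0, -14.5), "miner_01.####.png"),
    ((5.7, 0.3, -18.2), "miner_02.####.png"),
    # ... up to the 50 cards used in the shot
]

for i, (pos, seq) in enumerate(miners):
    card, _ = cmds.polyPlane(width=1.0, height=2.0, sx=1, sy=1,
                             name="minerCard_%02d" % (i + 1))
    cmds.xform(card, translation=pos)

    # Roto'd footage (with alpha) mapped through a surfaceShader, so the
    # card ignores scene lighting and just shows the plate element.
    tex = cmds.shadingNode("file", asTexture=True)
    cmds.setAttr(tex + ".fileTextureName", seq, type="string")
    cmds.setAttr(tex + ".useFrameExtension", 1)  # step through the sequence
    shader = cmds.shadingNode("surfaceShader", asShader=True)
    cmds.connectAttr(tex + ".outColor", shader + ".outColor")
    cmds.connectAttr(tex + ".outTransparency", shader + ".outTransparency")
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True)
    cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader")
    cmds.sets(card, edit=True, forceElement=sg)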

One technique that Illingworth found very productive was using Agisoft’s PhotoScan Pro for image modeling. “About halfway through the season we jumped on Agisoft,” he says. “We had this one creature, the Gean Canach, which is a slug about two foot long. We got a model from set, photographed it, did a full 360 turntable and we were able to model that and get the textures quite quickly.”
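PhotoScan Pro ships with a Python API, and the turntable-to-textured-mesh process follows a well-known chain of steps. The sketch below uses the 1.x API names and hypothetical paths; exact signatures vary between PhotoScan versions, so treat it as illustrative rather than Vine’s script:

import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("/shots/gean_canach/turntable/*.JPG")))

chunk.matchPhotos()        # feature detection and matching
chunk.alignCameras()       # solve camera positions -> sparse cloud
chunk.buildDenseCloud()    # dense reconstruction
chunk.buildModel()         # triangulate the mesh
chunk.buildTexture()       # bake the photo texture

doc.save("/shots/gean_canach/gean_canach.psz")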

The Euchdag / Diamair: Dimensional Imaging + Ten24
Original plate.

The Euchdag / Diamair, or the key, lives in the caves. This character was fully digital, and was meant to be a 400-year-old semi-translucent humanoid. She was voiced by Josette Simon (Blake’s 7), who also provided the body reference and the facial performance that was captured and scanned. As Diamair emits conspicuous amounts of bioluminescence, the team based their designs on deep sea creatures such as jellyfish, and deep sea fish like lanternfish that live below the photic zone of the ocean.

Final shot.

To capture her performance the team looked at traditional motion capture and video motion capture, and decided to use a 4D facial capture service and imaging technique provided by Dimensional Imaging (DI3D), which captures the whole face rather than relying on traditional point-based marker systems. “Even if we could have gotten a point system to work,” explains Illingworth, “it would have meant a lot of work on our behalf with blend shapes, to get the facial capture to work on our CG creature, whereas with the Dimensional Imaging technique we got every single movement of Josette’s face.”

The team painted the actress’ face with a special whitish pattern and then filmed her with an array of nine cameras around her head. These fed into a solution not unlike image modeling, where “we get every single movement on her face – which worked really well for us,” says Illingworth. The team then edited the sections of her performance that they needed and sent these off to DI3D, and the company sent back the processed data that Illingworth’s team needed to animate the face.

Ten24 scanning a different actor with the DSLR rig.

Simon’s face was also scanned by Ten24 with a 19-camera stills array to provide a master dense model with a very high resolution base texture (extracted from the stitched stills). The DI3D / Ten24 team then retopologized the face into an animation-friendly model with extra detail around the eyes and mouth, and combined it with the captured animation exported from DI3D. “We sent it up to them and it worked really well, which is great, as it started at 10 shots and ended up at closer to 50 shots,” adds Illingworth. The final export from DI3D was then returned to Vine, who combined the head animation with their body animation. His team was also free to adjust the animation, add extra eye blinks and so on, all the while keeping perfectly lip-synced, detailed facial animation. Ten24 and DI3D have worked together many times; both companies are in Glasgow, Scotland.

Raw video of facial capture.

While Ten24’s scanning produces great high resolution meshes, the output still needs to be retopologized. Often a model is created with an emphasis on form and detail, but its topology, or edge flow, is not ideal, or the mesh is very dense and thus inefficient. It needs to be adjusted and often baked onto low-poly, efficient subdivision surfaces. ZBrush or several other hi-res sculpting/painting applications can be used if the production wants blend shapes, or morph targets, from the scan.
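The baking step reduces to a simple geometric query: for each point on the low-poly surface, find the closest point on the dense scan and record the offset along the normal. Here is a minimal sketch using the open-source trimesh library (not a tool named by the production), with hypothetical file names:

import numpy as np
import trimesh

scan = trimesh.load("head_scan_dense.obj")   # raw dense scan mesh
lowpoly = trimesh.load("head_retopo.obj")    # animation-friendly retopo

# Closest point on the dense scan for every low-poly vertex.
closest, distance, _ = trimesh.proximity.closest_point(scan, lowpoly.vertices)

# Signed scalar displacement along each low-poly vertex normal; this is
# the value a displacement map would store per vertex/texel.
offsets = closest - lowpoly.vertices
disp = np.einsum("ij,ij->i", offsets, lowpoly.vertex_normals)

np.save("head_displacement.npy", disp)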

The Ten24 rig uses an array of Canon 550D cameras firing, importantly, at 1/10,000 sec for an in-sync capture. The rig only uses standard lenses and is quite portable – there is no special calibration. The company was started about five years ago, and at the core of its current implementation is a version of the Agisoft image modeling tool. In addition to service work, Ten24 sells hi-res still models online. fxguide spoke to founder James Busby about the company and working closely with others such as DI3D.

DI3D is actually considerably older than Ten24 – over twice as old – having grown from research done at Glasgow and Edinburgh Universities. The company did key medical work in its early days, but then got involved with games, including a significant sale to EA Sports: in addition to service work, DI3D sells complete systems, and sold a very early one to do 3D facial capture for the EA Sports FIFA title.

DI3D has two approaches – its original static head model 3D system, and a newer 4D system (DI4D), a moving capture set-up that allows a full performance to be recorded. It was first shown at SIGGRAPH San Diego. The 4D system uses two higher-than-HD cameras shooting sideways. Importantly, it provides performance capture with a single deforming mesh, computed from the dense optical flow between frames. DI3D then applies a template mesh (a low-res base mesh fitted to the high-res scans) and checks that all the edge loops are fine. This single mesh is output with a vertex cache (which deforms the mesh), and the system can also output other things like per-frame texture maps/normal maps. This means that there are “no blend shapes in Merlin, for example – the mesh just animates in, say, Maya,” explains company founder Colin Urquhart.
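That distinction matters for the pipeline. A purely conceptual sketch of what a vertex cache amounts to – absolute positions per frame for one fixed-topology mesh, versus a weighted sum of sculpted targets – with made-up array sizes:

import numpy as np

n_verts, n_frames = 5000, 240

# A DI4D-style export is conceptually this array: one full set of vertex
# positions per frame, for one fixed-topology template mesh.
cache = np.zeros((n_frames, n_verts, 3), dtype=np.float32)

def apply_frame(mesh_points, frame):
    """Deform the template mesh by copying the cached frame in."""
    mesh_points[:] = cache[frame]

# Contrast with blend shapes, where each frame is a weighted sum of
# sculpted target offsets and the rig must solve for the weights:
#   points = neutral + sum_k weight_k[frame] * (target_k - neutral)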

3D render from facial capture.

For Diamair, the process then moved to the look, and to matching the on-set lighting provided by the light suit that was worn to provide contact lighting and, of course, actor eye lines. “She looked like one of the characters from Tron,” joked Illingworth. “But this meant she cast her light on the walls of the cave and on the characters, which was really good for providing correct interactive lighting.” When the team came to create the digital creature, the artists had to take into account not only this exterior light but also her internal organs, which created light of their own that was in turn reflected and refracted by her skin. “So the balance there was complex,” says Illingworth. “We had two different lighting setups: the skin render pass – a traditional SSS render – and then the internals, which were a combination of different approaches. But in the end it was almost better to just have a simple internal CG light that was then refracted and treated with blurs. The combination of the two worked well. The skin pass integrated her with her environment, and the internal organs also allowed us to cast extra light on objects.”
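As a rough illustration of the skin half of that setup, an SSS-heavy surface can be declared through the Arnold Python API as below. Merlin shipped on an earlier Arnold release, so the standard_surface shader and every value here are illustrative assumptions, not the production shader:

from arnold import *

AiBegin()

skin = AiNode("standard_surface")
AiNodeSetStr(skin, "name", "diamair_skin")
AiNodeSetFlt(skin, "base", 0.2)
AiNodeSetFlt(skin, "subsurface", 0.9)                    # mostly SSS-driven
AiNodeSetRGB(skin, "subsurface_color", 0.35, 0.45, 0.55)
AiNodeSetRGB(skin, "subsurface_radius", 0.4, 0.6, 0.8)   # per-channel scatter depth
AiNodeSetFlt(skin, "transmission", 0.1)                  # a hint of translucency

# Dump to .ass; the internal-organ pass would be a separate setup
# composited over this one, as described above.
AiASSWrite("diamair_skin.ass", AI_NODE_ALL, False)
AiEnd()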

The layers of CG were combined in Nuke by Vine’s team of compositors, led by Sandro Henriques, with the internal lighting pass given a soft glow to match the filmed reactive lighting on the cave. The Vine animation team for Diamair included Rachel Ward and Benn Garnish. One of the difficulties in posing the character was that she was a naked ancient woman, and although she was dark and transparent there were certain angles that were simply not flattering. Old and glowing, she was still an agile creature, moving silently around the cave and evading detection by Morgana’s miners. Illingworth believed that, in terms of backstory, to evade detection Diamair must have been able to moderate her glow or light output – something the team took advantage of in realizing the final shots.
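A minimal Nuke Python sketch of that two-pass combine – blur and grade the internals pass into a glow, then merge it additively over the skin render – with file names and values that are hypothetical, not Vine’s scripts:

import nuke

skin = nuke.nodes.Read(file="diamair_skin.####.exr")
internals = nuke.nodes.Read(file="diamair_internals.####.exr")

# Soften and lift the internal-organ pass so it reads as bioluminescence.
soft = nuke.nodes.Blur(size=25, inputs=[internals])
glow = nuke.nodes.Grade(multiply=1.6, inputs=[soft])

# Additive merge: the glow sits on top of the skin pass.
combined = nuke.nodes.Merge2(operation="plus", inputs=[skin, glow])
nuke.nodes.Write(file="diamair_comp.####.exr", inputs=[combined])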

Watch a DI3D demo.

Dragons

The dragons in Merlin were all rendered in Solid Angle’s Arnold. There are two main dragons: a new dragon (Aithusa), which is small, and the much larger Great Dragon, who has been in the show since the first series.

Aithusa

Aithusa was lit from HDR reference captured on set, using Arnold’s image-based lighting toolset. “We were given Harry Potter and the Deathly Hallows as a reference for the dragon,” recalls Illingworth. “The design for Aithusa was by Martin Rezard. She was based on a ferret, as she was battered and furtive. She had been in a lot of fights and had a hard life. We tried to take her away from the traditional fantasy dragon type to a more realistic creature-based body and movement.” Aithusa was designed in ZBrush and retopologized in 3D-Coat for animation purposes. The textures were painted in Photoshop and Mari, with some use of Mudbox to create displacements for the dragon’s scales and to add detail.
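Wiring an on-set HDR into Arnold’s image-based lighting comes down to a skydome light reading the probe. A hedged sketch via the Arnold Python API, with a hypothetical file name (and, again, the show used an earlier Arnold release, so take this as illustrative):

from arnold import *

AiBegin()

probe = AiNode("image")
AiNodeSetStr(probe, "name", "set_probe")
AiNodeSetStr(probe, "filename", "clearwell_probe.exr")

sky = AiNode("skydome_light")
AiNodeSetStr(sky, "name", "ibl")
AiNodeLink(probe, "color", sky)    # the HDR drives the light colour
AiNodeSetFlt(sky, "intensity", 1.0)
AiNodeSetInt(sky, "samples", 2)

AiASSWrite("ibl_setup.ass", AI_NODE_ALL, False)
AiEnd()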

The rigging was done by Sally Goldberg, who used proprietary tools to build a winged quadruped rig. “We used bats as inspiration for how the wings worked, and built in a cloth simulation to give them a natural billowing motion,” says Illingworth. “This was set up as part of the main rig, so you could toggle between the dynamics rig and the non-dynamic rig quickly to see how the dynamics were working. We built a facial rig, since while Aithusa is mute, she tries to talk to Merlin in the same way that the Great Dragon can, so she needed all the facial animation control to make an expressive face attempt to talk.”
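That kind of toggle is typically just an animator-facing attribute wired into the solver. Since Vine’s rig was proprietary, the following Maya sketch is only an illustration of the pattern, with hypothetical node names – a boolean on the main control drives an nCloth wing membrane on and off:

import maya.cmds as cmds

ctrl = "aithusa_main_ctrl"
wing_cloth = "wingMembrane_nClothShape"

# Expose the switch on the animator-facing control...
if not cmds.attributeQuery("useDynamics", node=ctrl, exists=True):
    cmds.addAttr(ctrl, longName="useDynamics", attributeType="bool",
                 keyable=True, defaultValue=True)

# ...and wire it straight into the cloth solve, so animators can flip
# between the dynamic and non-dynamic wing instantly.
cmds.connectAttr(ctrl + ".useDynamics", wing_cloth + ".isDynamic", force=True)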

Virgil Manning led the dragon animation team. He based the character’s walk on the way a Great Dane moves. “Her limp was the trickiest aspect to get right and maintain across all the shots,” adds Illingworth. “When she was flying we used the flight of the bald eagle as inspiration.”

The Vine lighting team included John Roberts-Cox, Alex Galan and Jonathan Moulin. In one particular sequence they also had to light the dragon in an exterior snow setting. Using only a generic HDR, the shots were hard work to pull off, and Illingworth was particularly complimentary of the compositing team, who he felt had to work that much harder when there was no accurate on-set lighting reference and it fell to the Nuke crew to eye-match the final image.

The Great Dragon

This model was initially created by The Mill and passed to Merlin’s in-house VFX team, headed by Illingworth. It had been lit in mental ray, and so the team re-lit it in Arnold. “The great thing about Arnold is that you can get really nice renders right off,” says Ivor Middleton, Vine’s head of 3D. “It’s all about the bounced lighting. It’s not like old-fashioned ambient or diffuse occlusion.” Because it is a physically accurate raytrace renderer, the Vine team could put a single light into an environment and get all their indirect lighting effectively for free.

Middleton had also considered using RenderMan, 3Delight, mental ray, V-Ray and Houdini’s Mantra renderer for the work, but Arnold was chosen for its simple workflow. “With Arnold, we’d typically only have around three lights per scene,” he says, “a key, fill and edge lighting. If you were lighting a similar shot in RenderMan, you would typically end up with quite a few area lights to get the set looking nice, but with Arnold, an awful lot comes from the environment itself.”
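A hedged sketch of that three-light pattern through the Arnold Python API – key, fill and edge as quad lights on top of the environment light, with intensities that are pure placeholders (transforms are omitted; in practice each light’s matrix parameter places it in the set):

from arnold import *

AiBegin()

def quad_light(name, intensity, exposure=0.0):
    # One rectangular area light; placement via the matrix parameter
    # is left out here for brevity.
    light = AiNode("quad_light")
    AiNodeSetStr(light, "name", name)
    AiNodeSetFlt(light, "intensity", intensity)
    AiNodeSetFlt(light, "exposure", exposure)
    AiNodeSetRGB(light, "color", 1.0, 1.0, 1.0)
    return light

key = quad_light("key", 800.0)
fill = quad_light("fill", 150.0)   # soft, low-level fill
edge = quad_light("edge", 400.0)   # rim/edge light

AiASSWrite("dragon_lights.ass", AI_NODE_ALL, False)
AiEnd()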

And because Arnold does not need to pre-calculate secondary data like point clouds, shadow maps or irradiance caches before commencing a render, the calculation process itself is memory-efficient, placing minimal load on servers and network infrastructure, and requires little human intervention. “If you’re generating point caches, you have to go through and debug them. There’s more TD-ing to do,” says Middleton. “Because Arnold is handling all of that for you, it’s a lot simpler.”

Fire is key to any dragon. When a dragon was required to breathe fire, a large contact light was used on set. The actual flames are live action flames – often filmed with dummy objects such as boards to allow the compositors to sell magical barriers or shields on flame attacks. Not only does the flame look very good when shot and composited by the Nuke team, but it is also very quick to shoot good elements on set and then manipulate them, rather than having to develop a particle or fluid sim solution.

More magic in Merlin
Original plate.
Final shot.

The visual effects were not limited to character work in the show – they also included:

  • matte painting
  • set extension
  • extensive greenscreen work
  • sword and prop replacement
  • wire and rig removal
  • Merlin magic – from eyes to fireballs

At the end of the day, Vine’s team needed to produce a lot of effects, on an episodic budget and timetable. “If we have done our homework on choosing the right technique,” says Illingworth, “and using companies like Dimensional and Ten24 for work like the facial capture, we could say to them, ‘Look we need another 10 shots,’ and as long as we have the lighting setup for that, then 10 extra shots can be accommodated. It is not going to break us.”

Clearly, Illingworth loves not only the work but the challenges of solving effects of this style on budget and timetable. From his Harry Potter days Illingworth knows what it is like to work on features where schedules can be very different. “I had one shot on Harry Potter that went for nine months!” he laughs. By comparison, he and his team need to be generalists and extremely adaptive, something this seasoned supervisor seems to really enjoy. “Some people seem to get a bit stressed – but I love it.”

Vine credits

VFX Supervisor – Michael Illingworth
Head of 3D – Ivor Middleton
Head of 2D – Sandro Henriques
Producer – Claire McGrane
Coordinator – Annabel Wright
CG development lead – Sally Goldberg
Animators – Virgil Manning, Benn Garnish and Rachel Ward
Lighting TDs – Jonathan Moulin, Alex Galan, John Roberts-Cox
Matte painters – Lizzie Bentley, Dante Harbridge Robinson, Clara Parati and Florence Durante
Compositors – John Hardwick, Emelie Nilsson, Caroline Pires, Elysia Greening, David Rouxel, Naomi Butler, Antonio Rodriguez Diaz and Matt Plummer
System Administrator – Chris Hyman