The Remarkable Lucky 13

Love, Death & Robots is a collection of animated short stories executive produced by David Fincher, Tim Miller, Jennifer Miller and Josh Donen. The series brings together world-class animation creators and captivating stories in the first anthology of animated shorts guaranteed to deliver a unique and visceral viewing experience. One of the most outstanding episodes is the 13th, entitled LUCKY 13. Directed by Jerome Chen, Senior VFX Supervisor at Sony Pictures Imageworks (SPI), it is a remarkable, fully CGI cinematic short.

After losing its entire crew on two separate occasions, the dropship is known among the pilots as the unluckiest ship in the hangar. Unluckiest, that is, until rookie pilot Colby steps forward and proves everyone wrong.

SPI provided pre-production, asset design, creative look development, animation, lighting, compositing, creative editorial and final renders, which were mostly completed in Imageworks' proprietary version of the Arnold renderer. We sat down with the team to discuss how they approached the project.


The Creative Process

Jerome Chen had a great relationship with Miller, which helped make the project a truly collaborative endeavour and a pleasure for the whole team to work on. The small team at Imageworks had complete creative freedom and was led by VFX Supervisor Chris Waegner, Executive Producer Mandy Tankenson and VFX Producer Julie Groll.

Production started in mid-May 2017, with Chen working closely with Imageworks VFX Art Director Daniel Cox on the initial design for the dropship, then completing eight keyframe paintings that would guide the look of the show. These were put together as an extensive “lookbook” that Chen presented to Miller and Netflix, and SPI was given the green light to move into production.

“Lucky 13 is basically a love story between a pilot and her dropship. It’s at once a love story, a war story and a tragedy. To tell this story, the Imageworks team built new tools, refined techniques and explored new ideas – it was a great opportunity to do something different.”
-Director, Jerome Chen

Pushing the CGI Aesthetic for Lieutenant Colby

“Jerome wanted the team to push the photoreal CGI aesthetic. We had to cross the Uncanny Valley. I think this project gave all of us on the team the visual freedom to explore our animation medium in a unique way – differentiating this short from previous animated projects here at Imageworks.”
-VFX Supervisor, Chris Waegner

The motion capture (MoCap) shoot involved Samira Wiley (Lieutenant Colby) in a head-mounted camera (HMC) rig, with full-body performance capture in a large studio capture volume. Wiley is known for Orange Is the New Black (2013), The Handmaid’s Tale (2017) and The Sitter (2011).

The animation was achieved with a combination of keyframe and motion capture. Performance capture was used for the facial animation: the recorded facial performance was used to solve (or recreate) the actor's facial movements on Imageworks' proprietary facial rig. Imageworks collaborated with SIE (Sony Interactive Entertainment), which was a great experience and led to the development of many new tools.

FXG: How was Samira’s face scanned?
SPI: We did a complete scanning session with Samira, acquiring a mixture of high resolution markered and unmarkered base meshes for the face, in addition to highly detailed scans for her body and hands in a variety of poses.

FXG: Was it a Lightstage scan, or just a sculpt based on the scans?
SPI: At the base of everything, we acquired a comprehensive set of high resolution FACS scans and neutral face scans via Sony Interactive's light stage. Our modelling team fit our standard human facial topology to the scans, and while there was some sculpting to account for inconsistencies in the scans or subtle differences between the day of scanning and the day(s) of shooting, we were determined to adhere as closely as possible to the ground truth scans in our base model.

FXG: I assume this is a FACS pipeline?
SPI: Yes. Physical anatomy is the backbone of our approach to VFX facial rigging and we use a very literal interpretation of every muscle action, accounting for all of the potential combinations of FACS action units.
We begin with the ground truth scan data and then, of course, do extensive sculpting, detailing and corrective work to ensure that the resulting rig can achieve the full range of motion and any idiosyncrasies we perceive in our performer. In the end we had around 1,000 asymmetrical muscle actions and combinations of muscle actions for Colby (Samira).
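The way FACS action units and their combinations drive a facial rig can be sketched in a few lines. Everything below is a toy illustration with invented data (the AU names follow the generic FACS convention for lip-corner pullers); Imageworks' actual rig is proprietary:

```python
import numpy as np

# Toy FACS-style blendshape rig. Each action unit (AU) stores a vertex delta
# from the neutral mesh; a combination corrective fires in proportion to the
# product of two AU weights, fixing the region where the AUs interact.

neutral = np.zeros((4, 3))                      # toy mesh: 4 vertices
au_deltas = {
    "AU12_L": np.array([[0.0, 0.1, 0.0]] * 4),  # left lip-corner puller
    "AU12_R": np.array([[0.0, 0.1, 0.0]] * 4),  # right lip-corner puller
}
correctives = {
    ("AU12_L", "AU12_R"): np.array([[0.0, -0.02, 0.01]] * 4),
}

def evaluate(weights):
    """Return neutral + weighted AU deltas + weighted combination correctives."""
    mesh = neutral.copy()
    for au, w in weights.items():
        mesh += w * au_deltas[au]
    for (a, b), delta in correctives.items():
        mesh += weights.get(a, 0.0) * weights.get(b, 0.0) * delta
    return mesh

pose = evaluate({"AU12_L": 1.0, "AU12_R": 1.0})
```

A production rig scales this same idea to the roughly 1,000 asymmetrical actions and combinations mentioned above, with sculpted rather than synthetic shapes.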

FXG: …and then the HMC was used to drive this FACS rig?
SPI: Correct. We used a fairly traditional marker HMC to acquire the reference footage, leveraging the MoCap stages and HMC at Sony Interactive just a few miles from our Culver City offices. The marker footage was then solved to the FACS rigs we built for Samira and the other actors. Upon completing the initial solves, the CG Supervisor, Animation Supervisor and Rigging HOD would together review each solve and determine what (if anything) needed to be done to achieve the most accurate performance. While we'd occasionally re-solve, make minor adjustments by hand in animation, finesse the rig, or hand-detail in the shot, the solve results were often quite good and didn't require any further adjustment. A big bonus with a small team on a tight schedule!
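A per-frame marker-to-rig solve of this general kind can be posed as a constrained least-squares fit: find bounded AU weights whose predicted marker deltas best match the tracked markers. The sketch below uses synthetic data and a generic projected-gradient solver; it is not SPI's actual solver:

```python
import numpy as np

# Generic per-frame facial solve sketch (synthetic data). B maps AU weights
# to stacked 3D marker deltas; we recover weights from observed deltas d by
# projected gradient descent on ||B w - d||^2 with w constrained to [0, 1].

rng = np.random.default_rng(0)
n_markers, n_aus = 48, 10                     # 48 dots, as on this shoot
B = rng.normal(size=(n_markers * 3, n_aus))   # marker delta per unit AU weight
w_true = np.clip(rng.normal(0.3, 0.3, n_aus), 0.0, 1.0)
d = B @ w_true                                # observed deltas (noise-free toy)

w = np.zeros(n_aus)
step = 1.0 / np.linalg.norm(B, 2) ** 2        # 1 / Lipschitz constant
for _ in range(2000):
    w = np.clip(w - step * (B.T @ (B @ w - d)), 0.0, 1.0)
```

With clean synthetic data the recovered weights match the ground truth; real solves face noise, occlusion and rig mismatch, which is why the review-and-refine pass described above still matters.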

FXG: It looks like Wiley wore a dual computer-vision-camera HMC, with dots on the actress's face. But it looks like it was using ambient lighting, as I cannot see any special HMC lighting?
SPI: The HMC employed for this shoot had just a single camera, and the performers' faces were marked with a 48-dot pattern. There was a small, relatively inconspicuous light mounted to the left of the camera, facing the actor.


The Animation Style

Jerome Chen wanted to create the most photo-realistic CG character possible, while establishing an animation style that looked as if it had been shot on film. The Imageworks team researched actual cockpit photography, combat footage and aircraft hangar reference to help create what they hoped would be accurate cinematography on a distant planet.


A photoreal treatment was decided on fairly early in the project, and it became such a collaborative effort that artists in all departments had a huge part in designing every asset. Many departments worked in parallel, which became a necessity as the story evolved and benefited from Imageworks' in-house editorial team.

These departments included the animation team led by Rich Smith, CG Supervisors Stirling Duguid and Jim McLean, Layout Supervisor Simon Dunsdon, CG Modelling Supervisor Marvin Kim, Texture Paint Lead Jeremy Sikorski, DMP Lead Alyssa Zarate and Compositing Supervisor Trevor Strand.

The Environments

Using the painted keyframes as a starting point, the main landscape forms of Proxima B were built by the modelling team using World Machine. The artists took those models and ran them through a Houdini erosion sim to generate a range of maps simulating erosion, water flow and deposition. These maps were provided to the look-dev artists – led by Brian Kloc, Joosten Kuypers and Kieran Tether – to produce more varied materials across the landscape.
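The erosion pass can be illustrated with a toy thermal-erosion step on a heightfield (purely illustrative Python; the production work used World Machine terrain and a Houdini sim). Material on slopes steeper than a talus threshold slides to its neighbour, and the moved amount doubles as the kind of deposition mask look-dev can use to vary materials:

```python
import numpy as np

# Toy thermal-erosion step on a periodic heightfield. Slopes steeper than
# `talus` shed a fraction of the excess to each downhill neighbour; the
# received material is also accumulated as a deposition mask.

def erode_step(height, talus=0.05, rate=0.5):
    out = height.copy()
    deposited = np.zeros_like(height)
    for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        # height difference between each cell and its neighbour in `shift`
        diff = height - np.roll(height, shift, axis=(0, 1))
        move = rate * np.clip(diff - talus, 0.0, None) / 4.0
        out -= move                                         # cell sheds material
        received = np.roll(move, (-shift[0], -shift[1]), axis=(0, 1))
        out += received                                     # neighbour receives it
        deposited += received
    return out, deposited
```

Because every unit of height removed from one cell is added to another, total terrain volume is conserved, which keeps repeated steps stable.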


On the texturing side, the artists created a series of tileable textures using Substance Designer – sand, rocks and so forth. The team used these textures inside a proprietary texturing tool called FearowPaint, which allows artists to layer up multiple tiled textures and blend between them using height data to create detailed, natural-looking transitions. The whole environment was textured using only tileable PBR (Physically Based Rendering) textures.

On top of the textured landscape, the team used Imageworks' scattering tool Sprout to litter the landscape with millions of rocks and boulders. Sprout is a proprietary Maya plug-in that artists use to hand-paint instanced geometry into a scene. It's very direct and intuitive, and every single instance can be tweaked by hand simply by clicking on it and manipulating it.
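The data model behind a paint-based scatter like this is simple enough to sketch (a toy stand-in, not Sprout itself): a brush stroke drops randomized instances referencing a prototype mesh, and because each instance is a plain record, any one of them can be edited individually afterwards.

```python
import random
from dataclasses import dataclass

# Toy paint-based instance scattering. Each instance references a prototype
# mesh by name and carries its own editable transform.

@dataclass
class Instance:
    proto: str        # which rock/boulder mesh to reference
    position: tuple
    rotation: float   # degrees
    scale: float

def paint_stroke(protos, center, radius, count, seed=0):
    """Drop `count` randomized instances inside a square brush region."""
    rng = random.Random(seed)
    out = []
    for _ in range(count):
        x = center[0] + rng.uniform(-radius, radius)
        y = center[1] + rng.uniform(-radius, radius)
        out.append(Instance(rng.choice(protos), (x, y),
                            rng.uniform(0.0, 360.0), rng.uniform(0.5, 2.0)))
    return out

rocks = paint_stroke(["rock_a", "rock_b"], (0.0, 0.0), 5.0, 100)
rocks[0].scale = 1.0  # any single instance can be tweaked by hand
```

Renderers then instance the prototype geometry at each transform, which is what makes millions of scattered rocks affordable at render time.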

The DMP work was a mix of 2.5D projections and full 3D projections. The team created three master 360 skydomes for different scenes, as well as the HUDs and planet exteriors.