Day three of Siggraph 2016 and, as usual, it was tough to decide where to go. The crowds in Anaheim have traditionally seemed smaller than when the show is in downtown Los Angeles, but this year we are not so sure: some sessions were at capacity, with overflow lines outside. The exhibit floor also seemed to have steady traffic, with lots of VR vendors as well as the big visual effects players, the job fair and, of course, the emerging tech displays. One thing we noted on the show floor was the presence of players like Google VR, Amazon and Microsoft.

Kubo and the Two Strings: One Giant Skeleton, One Colossal Undertaking

The team from LAIKA: Mitch Prater, Eric Wachtman, Steve Switaj, Steve Emerson (L to R)

Panel

Steve Emerson, VFX Supervisor
Steve Switaj, Engineer, Camera and Motion Control
Mitch Prater, Principal Software Engineer
Eric Wachtman, CG Look Development Lead

 

LAIKA's next feature, Kubo and the Two Strings, does not premiere in the United States until August 19th, but attendees got a sneak peek of the film during a production session. The session focused primarily on the Giant Skeleton character.

The Giant Skeleton was shot in two scales. The full-scale puppet, which was the focus of the session, is the largest stop-motion puppet ever built: it weighs 400 lbs. and stands 16 feet from head to toe with a 23-foot wingspan. The lower half and upper half had to be filmed separately, with most shots capturing the upper body and torso. For some scenes, a one-sixth-scale Skeleton was used; standing just over 2.5 feet tall, it is still among the largest stop-motion puppets ever made.

At full scale, a traditional armature is very hard to build, and because the Skeleton is an "open" puppet, it is difficult (if not impossible) to hide rigging inside it; any internal armature that showed through would simply require too much cleanup in post. Also, due to the scale, LAIKA's traditional rapid prototyping could not be used.

During the design of the puppet, the preferred software of the various departments created a bit of a hitch in the workflow. The art department uses ZBrush and the rigging department uses Inventor. To go back and forth between the two departments (which is a natural part of the process), they actually had to run the files through an older version of Maya to translate from one application to the other.
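
The talk didn't go into the details of that translation pass, but a headless (batch) Maya conversion of the kind described might look roughly like the sketch below. The file formats (OBJ in, FBX out), plug-in names and paths are assumptions for illustration only, not LAIKA's actual setup.

```python
# Rough sketch of a headless Maya translation pass between departments.
# Formats (OBJ in, FBX out), plug-ins and paths are assumptions for
# illustration; LAIKA's actual formats and Maya version weren't specified.
import maya.standalone
import maya.cmds as cmds

def translate_mesh(src_path, dst_path):
    maya.standalone.initialize(name="python")        # run Maya with no UI
    cmds.loadPlugin("objExport", quiet=True)          # OBJ translator
    cmds.loadPlugin("fbxmaya", quiet=True)            # FBX translator
    cmds.file(new=True, force=True)                   # start from an empty scene
    cmds.file(src_path, i=True, type="OBJ")           # bring in the sculpt
    # ...any clean-up (freeze transforms, delete history) would go here...
    cmds.file(dst_path, exportAll=True, force=True, type="FBX export")
    maya.standalone.uninitialize()

if __name__ == "__main__":
    translate_mesh("skeleton_sculpt.obj", "skeleton_for_rigging.fbx")
```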

Once the design was done, the files were sent out to have the parts cut from industrial high-density foam. These foam pieces were then painstakingly covered and painted to achieve the final look.

A smaller-scale version of the Giant Skeleton.

In order to animate the puppet on the X, Y and Z axes, LAIKA's Steve Switaj said they first went in search of a “giant-ass robot” to move the Skeleton. But the controllers on the robots ended up being the primary reason they couldn’t be used: they couldn’t interact with LAIKA's pipeline. There were also other considerations, such as the requirement to install a concrete pad to anchor the robot...and that wasn't going to happen on the stage.

The next solution explored was a hexapod, a mid-sized motion-control platform. It uses three pairs of hydraulic actuators that allow the platform to move in X, Y and Z as well as pitch, roll and yaw. This served their purposes, so they built one over a span of 27 days.
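
The session didn't cover the control math, but a hexapod of this type (a Stewart platform) is usually driven by simple inverse kinematics: given the desired position and orientation of the top plate, each actuator's length is just the distance between its base anchor and its transformed platform anchor. A minimal sketch, with made-up anchor geometry rather than the rig's real dimensions:

```python
# Minimal Stewart-platform inverse kinematics sketch (not LAIKA's actual
# control code). Anchor coordinates below are made up for illustration.
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Build a rotation matrix from roll/pitch/yaw (radians), Z-Y-X order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def actuator_lengths(base_anchors, platform_anchors, translation, roll, pitch, yaw):
    """For each of the six actuators, return the length needed to put the
    platform at the requested pose (translation in X, Y, Z plus pitch/roll/yaw)."""
    R = rotation_matrix(roll, pitch, yaw)
    world_platform = (R @ platform_anchors.T).T + translation
    return np.linalg.norm(world_platform - base_anchors, axis=1)

# Example: six anchors arranged in a rough circle (units arbitrary).
angles = np.radians([0, 60, 120, 180, 240, 300])
base = np.stack([np.cos(angles), np.sin(angles), np.zeros(6)], axis=1)
plat = np.stack([np.cos(angles + 0.3), np.sin(angles + 0.3), np.zeros(6)], axis=1) * 0.7

lengths = actuator_lengths(base, plat, translation=np.array([0.0, 0.0, 0.8]),
                           roll=0.05, pitch=0.0, yaw=0.1)
print(lengths)  # each frame of animation becomes a new set of six lengths
```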

The arms of the Giant Skeleton were incrementally driven via independent rigs, suspended from the ceiling and counter-weighted like an enormous marionette. Each of his hands weighed six pounds. The elbow locks of the Skeleton, made from automobile brake pads, had to be replaced three times over the course of the shoot.

Instead of the traditional ball joints used for stop-motion puppets, the arms and head were secured with clusters of magnets to provide maximum range of motion. Seventy unique swords were made for the skull, and more than 1,000 bones were used to make the armor for the torso.

Animating the Giant Skeleton was a different beast from LAIKA's normal puppets. Normally, they are able to record three seconds of animation per week, per puppet. For this model, due to its size and the tedious nature of animating, they were able to record about one second of animation per week.

All in all, it took six months to build the Skeleton, and 49 seconds and five frames of footage featuring it made it into the final film.

Beetle, Kubo, and Monkey.

A couple of interesting points were made during the Q&A at the end of the presentation.

LAIKA's rendering pipeline uses RenderMan's REYES architecture, but, prompted by a question from the audience, the team said they are in the process of moving to RIS. According to Eric Wachtman, they were very active in the RenderMan 21 beta program, and the short answer is "yes": they will be moving to RIS. There is no big rush, as REYES will be supported for at least the next two or three years. Emerson joked that they had an informal meeting "under the escalator at the Hilton this morning arguing about it." He pointed out that they actually used RIS on a very limited basis on Kubo and, according to the team, the results were "really, really promising" and the move is inevitable.

LAIKA has shared its "LAIKA Plausible Shading System Toolkit" on the RenderMan community web site. The system is designed to be physically plausible, but it also allows artists to build non-physically based materials and effects.

 

Perception of Shapes and People

This session had several presentations and we were struck by two in particular.

Perceptual Effect of Shoulder Motions on Crowd Animations

Crowd simulators typically do not account for influences between characters at the animation level. This paper presents a set of experiments demonstrating that secondary shoulder motions are beneficial to prevent spectators from perceiving slight residual collisions and to globally increase the perceived level of animation naturalness.

Ludovic Hoyet, Inria
Anne-Hélène Olivier, Université Rennes 2
Richard Kulpa, Université Rennes 2
Julien Pettré, Inria

Ludovic Hoyet gave the presentation. Interactions between agents are complicated, and handling them often requires two steps: first simulating the agents, then following up with artist intervention. Shoulder motions are the natural way we squeeze through tight spots in crowds. There are many previous papers on this problem, but they added undesired computational overhead.

We presented to participants videos of two characters walking past each other at distances ranging from 0.2 to 0.7m. Either none of the characters, one character or both characters were displaying shoulder motions.

This approach evaluates introducing shoulder motions at the animation step to improve the visual quality of crowd animations. Does one character turn, both, or neither? What is the interpersonal distance? Would they move at all? Are there collisions? Where there are, they calculate the volume of the intersections.

Working with a group of participants, they showed clips and gathered feedback on whether viewers perceived a collision or not. They then created a rule to animate shoulder motion based on this research.
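
The exact rule wasn't spelled out in the session, but the idea is easy to sketch: when the predicted clearance between two characters drops below a perceptual threshold, rotate the shoulders toward an edge-on pose, scaled by how tight the pass is. The thresholds and angles below are illustrative placeholders, not the published values.

```python
# Illustrative shoulder-motion rule for crowd characters (not the authors' code).
# Thresholds and angles are placeholders; the study tested interpersonal
# distances in the 0.2-0.7 m range.
import numpy as np

TRIGGER_DISTANCE = 0.7   # start turning the shoulders below this clearance (m)
MAX_YAW_DEG = 40.0       # maximum shoulder rotation at the closest approach

def shoulder_yaw(my_pos, other_pos, my_dir):
    """Return a shoulder yaw angle (degrees) for one character, based on how
    close the other character is and which side it passes on."""
    offset = other_pos - my_pos
    distance = np.linalg.norm(offset)
    if distance >= TRIGGER_DISTANCE:
        return 0.0                                   # far enough apart: no turn
    # Blend from 0 at the trigger distance up to the maximum yaw at contact.
    amount = 1.0 - distance / TRIGGER_DISTANCE
    # Turn toward whichever side the other character passes on (sign of the
    # 2D cross product between my facing direction and the offset).
    side = np.sign(my_dir[0] * offset[1] - my_dir[1] * offset[0])
    return side * amount * MAX_YAW_DEG

# Example: two characters passing roughly 0.35 m apart.
print(shoulder_yaw(np.array([0.0, 0.0]), np.array([0.2, 0.3]),
                   my_dir=np.array([0.0, 1.0])))
```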

In short, this was a two-step study on the perceptual effect of shoulder motions on the visual quality of animations: adding naturalness and understanding viewers' perception of it. Shoulder motions add variety to the animation, and other behaviors could also be studied to add further variety. The approach looked very natural in the demo videos shown.

The paper: Perceptual effect of shoulder motions on crowd animations

 

Body Talk: Crowd Shaping Realistic 3D Avatars With Words

This paper estimates perceptually and metrically accurate 3D human avatars from crowd-sourced ratings of images using linguistic shape descriptions. Words convey significant metric information, suggesting that humans share a shape representation mediating language and vision. Crowd shaping uses the "perception of crowds" to create body shapes without a scanner and enable novel applications.

Stephan Streuber, Max-Planck-Institut für Intelligente Systeme
M. Alejandra Quiros-Ramirez, Max-Planck-Institut für Intelligente Systeme
Matthew Q. Hill, University of Texas at Dallas
Carina A. Hahn, University of Texas at Dallas
Silvia Zuffi, Istituto per le tecnologie della costruzione
Alice O'Toole, University of Texas at Dallas
Michael Black, Max-Planck-Institut für Intelligente Systeme

M. Alejandra Quiros-Ramirez started with a crime scene example, asking a crowd "what did you see?" The next question was: could we ask a crowd to describe a body shape and get accurate results, as opposed to using more expensive body scanning? Could we in fact reconstruct a body from this information with enough detail to buy clothes for the person?

First, they created a dataset of bodies and words. For the bodies, they started with a mean body shape and used shape coefficients to create a set of 127 male and 128 female bodies covering most body types.

Then they defined 30 words describing shape attributes, which were provided to raters to focus them on a proper match.

The 30 words describing shape attributes

Next, the team ran accuracy tests to see how well the approach predicted height, weight and other measurements. They brought in models of all sizes, took their measurements, and had raters describe them using the Body Talk words. The results were very accurate.


They found that having 15 people rate the bodies is the sweet spot for getting accurate data. After evaluation they added 30 more words. They also linked words to eliminate ones that no longer make sense after others have been chosen: "long legs", for example, would eliminate "short", "stocky" and so on.
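
The machinery behind this kind of crowd shaping can be sketched as a simple linear model: average the raters' scores for each word, learn a mapping from word scores to the body model's shape coefficients, and reconstruct a mesh from the predicted coefficients. The sketch below is illustrative only; the array sizes and the least-squares fit are assumptions, not the authors' implementation.

```python
# Illustrative sketch of "crowd shaping": map averaged word ratings to body-model
# shape coefficients with a linear fit. Not the authors' code; sizes and the
# fitting method are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

# Training data (stand-ins for the real dataset):
#   ratings: one row per training body, one column per word (e.g. 30 words),
#            each entry the average score from ~15 raters.
#   coeffs:  the known shape coefficients of those same training bodies.
n_bodies, n_words, n_coeffs = 128, 30, 10
ratings = rng.normal(size=(n_bodies, n_words))
coeffs = rng.normal(size=(n_bodies, n_coeffs))

# Fit a linear map from word ratings to shape coefficients (least squares).
W, *_ = np.linalg.lstsq(ratings, coeffs, rcond=None)

def shape_from_words(word_scores, mean_body, shape_basis):
    """Predict shape coefficients from averaged word scores and reconstruct
    vertices as mean body + basis-weighted offsets."""
    predicted = word_scores @ W                      # (n_coeffs,)
    return mean_body + shape_basis @ predicted       # (n_vertices * 3,)

# Example reconstruction with a toy body model.
n_verts = 500
mean_body = rng.normal(size=n_verts * 3)
shape_basis = rng.normal(size=(n_verts * 3, n_coeffs))
new_scores = rng.normal(size=n_words)                # one new crowd-rated body
print(shape_from_words(new_scores, mean_body, shape_basis).shape)
```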

They showed Body Talk being used to construct characters found in books; an effective example was a character from The Maltese Falcon. They also recreated people from the media, and the accuracy was impressive.

The paper:  Body talk: crowdshaping realistic 3D avatars with words

 

Marvel - Captain America: Civil War

In Captain America: Civil War, Steve Rogers leads the newly formed team of Avengers in their continued efforts to safeguard humanity. Marvel Studios, ILM, Rise FX and Method Studios showed attendees how they created some of the movie’s most heart-stopping moments.

Panel

Victoria Alonso, Executive Vice President, Physical Production, Marvel Studios
Dan Deleeuw, VFX Supervisor, Marvel Studios
Greg Steele, VFX Supervisor, Method Studios
Swen Gillberg, VFX Supervisor
Jean Lapointe, Compositing Supervisor, ILM

Deleeuw showed a previs-to-final progression of a sequence with a helicopter on a roof. In early meetings the props lead piped up, offering to build a puppeteered helicopter which they could film on blue screen in Atlanta. They all thought he was crazy, but they ended up choosing the crazy idea because of the realism it would provide. Deleeuw oversaw 3,000 shots and 18 vendors.

Swen Gillberg

Swen Gillberg ran the worldwide plates unit. Plates can add scope and production value, especially as films end up in subsidized areas like Atlanta, where this film was shot. They filmed in Iceland for the Russian tundra, in Puerto Rico for Lagos (off limits at the time because of the Ebola outbreak), and also in New York, Vienna, London, Germany and Brazil. Calling these a shoot is a bit of a simplification, as the unit collects a lot of data, including HDRs, lidar and other information.

Gillberg showed a lot of shots that look practical in the film but are all green screen, using plates from his unit.

The size of the plate unit ranged from one person shooting out of a hotel window to 150 people shooting for two weeks. On the Puerto Rico shoot, for example, they used helicopters, a drone carrying a RED Epic, photographers on the ground and a data collection crew. All of this happened in parallel with the live-action shoot with the actors in Atlanta.

Alonso pointed out that they strive to stretch the 24-hour day, using global teams to meet deadlines. The process starts with boards and then location scouts. Extreme planning then goes into matching locations to the script and creating gear lists. In total, 200,000 still images were shot. Yes, 200,000...that is not a typo. Gillberg pointed out that a plate shoot for a film like this requires complete time-of-day coverage, as the time of day may not be locked or may change. Having to cover every possible outcome seemed to be a theme from several of the speakers.

The first action scene in the movie was done by Rise FX in Germany, with plates from the large Puerto Rico shoot standing in for Lagos.

Jean Lapointe, ILM

ILM did 639 shots, including the main fight sequence, which runs twenty minutes. Actors were completely lifted from the background.

They showed lots of digital-double work; even in close-ups, the suits were often replaced. They redesigned the Spider-Man suit, with lots of back and forth and tests with Marvel. Since the character was all CG, the performance was captured with mocap.

Greg Steele, Method

Method did 430 shots for the final battle.

Previs was complete by the time they got involved. Principal photography was done on two large green screen sets in Atlanta, and a photography unit documented everything on set. Special displacement suits were made to give the actors volume while still allowing movement, since the suits were being replaced anyway. Bucky's arm is always CG.

Data acquisition was extensive. In every setup they shot reference of the suits in the lighting, along with balls, charts and HDRs. Canon C300 witness cameras assisted with tracking. Again, the idea was to gather everything, even if not called out, to protect against unforeseen changes later.

For Iron Man they had to work out damage states for the different scenarios in the script. In comp they were able to swap out pieces right in Shotgun, using any of the 16 approved damage options to match the sequence.
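
Steele didn't go into the mechanics of the swap, but this sort of variant selection could, for example, be driven through Shotgun's Python API by tagging the approved damage variants and querying them per sequence. The custom field names below (sg_damage_level, sg_sequence_code) are hypothetical, purely to illustrate the idea.

```python
# Hypothetical sketch of picking an approved Iron Man damage variant from
# Shotgun for a given sequence. The custom fields (sg_damage_level,
# sg_sequence_code) are made up for illustration; only the basic
# shotgun_api3 calls (Shotgun, find_one) are real API.
import shotgun_api3

sg = shotgun_api3.Shotgun("https://studio.shotgunstudio.com",
                          script_name="comp_tools",
                          api_key="XXXX")

def find_damage_variant(project_id, sequence_code, damage_level):
    """Return the approved published file for one of the 16 damage states."""
    filters = [
        ["project", "is", {"type": "Project", "id": project_id}],
        ["sg_sequence_code", "is", sequence_code],   # hypothetical custom field
        ["sg_damage_level", "is", damage_level],     # hypothetical custom field
        ["sg_status_list", "is", "apr"],             # approved only
    ]
    fields = ["code", "path", "sg_damage_level"]
    return sg.find_one("PublishedFile", filters, fields)

variant = find_damage_variant(project_id=123, sequence_code="SEQ_090",
                              damage_level=7)
print(variant)
```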

Shots of Captain America's shield were replaced to provide a better look, with more reflection and a more metallic feel than the physical shield captured on set. Postvis was done to fill in green screens and add characters.

For the stasis chamber they needed to match the set while retaining the flexibility to adjust as needed or to improve a shot's composition. Often they used only the camera track from the original live-action shoot.

 

