Before SIGGRAPH 2016 began, there was Saturday's DigiPro conference at Disney's Grand Californian Hotel in Anaheim. fxguide will have coverage all week from the SIGGRAPH conference.

DigiPro, now in its fourth year, always sells out. With capacity normally around 250-300 people, the event has become the place for pipeline TDs, researchers and supervisors to get together and discuss production issues. The conference aims to present production-proven, peer-reviewed work to "the right people - together in a focused group", and once again the organizers are to be praised for making the day such a strong event in just a few years.

 

Keynote: ILM's Jason Smith

The keynote speaker was Jason Smith, Visual Effects Supervisor at Industrial Light & Magic, who spoke about Performance Capture on Warcraft and the Role of Optimism in Production.

Smith gave a compelling talk about reaching for the new while judging the risk of failing. He used ILM's recent Warcraft film as a lens through which to ask: when do you take on huge risks?

Smith started by outlining some of his background, which ranges from writing his own finite element ray tracing system at home in his spare time, to designing Rubik's-style twisty puzzles, to a childhood spent doing home-brew special effects makeup.

He went on to reference these personal experiences and relate them to visual effects. For example, while discussing the design of the orcs' faces, he pointed out that in traditional special effects makeup, professional artists go to great lengths to build on the natural wrinkles and creases of the actor's face. This means they can use less makeup and thus not hide the performance. In Warcraft, ILM accordingly made an effort to build its orcs around the basic lines of the real actors who played them.

The facial capture solution.

ILM created eight leading orc characters, and orcs feature in roughly half the film. All of them were designed to look almost human, and thus close to the uncanny valley, but with Blizzard's body proportions. ILM lined up each orc's wrinkles and folds with those of the actual actor, and to demonstrate this Smith showed split screens of the orcs against their matching actors; in this respect the digital work was strongly informed by physical makeup artistry. That way, when the actor smiles or frowns and wrinkles, the system is not fighting the performance but, as ILM animation supervisor Hal Hickel characterised it, can "carry forward their performances like a fragile thing."


"They need to do this in Warcraft as there was so much material that the pipeline needed to be heavily automated without dampen the performance of the captured actors on set. "The way our show was budgeted, we need faithful reproduction of the performances of the actors out of the box," he explained "without extensive cleanup after the fact."

To illustrate the maths he used a personal example: designing the Petaminx, a 12-sided version of the traditional six-sided Rubik's Cube with four slices per face. It has a total of almost 1,000 movable pieces to rearrange, compared to the 20 movable pieces of the Rubik's Cube.

In 2009, Smith helped make the world's first Petaminx. The pieces were cast in resin, but the moulds left small marks and imperfections that needed to be sanded down - seemingly not a major job at less than three minutes per piece. But as Smith pointed out, anything times 1,000 is a lot of work, as he discovered while personally working every night for weeks finishing the small resin pieces. Overall it took 3,000 minutes, or 50 hours, of sanding (and around 10 weeks elapsed).

Smith said this taught him a valuable lesson that he sought to apply to Warcraft: making sure the 750 facial capture performances did not require loads of additional per-shot cleanup.

"Cutting edge works means that we will be trying new things... which often means ahead of time not knowing how we will do something"

Part of the talk centered on risk and the notion that rewarding endeavors do involve risk, but that we are quick to underestimate it or discount it with sunny, optimistic views of the troubles we may face. In contrast, not trying and not pushing for something new is admitting defeat before you start.

With "Warcraft we decided to level up. Even though the technology was not ready yet when they started" he explained, while quoting Ray Bradbury "Jump off the cliff and learn how to make wings on the way down".

Dennis Muren, who set the agenda for ILM's risk profile

Smith said he thought about the risk on Warcraft in terms of the challenge of the original Jurassic Park, so he spoke to Dennis Muren about what it was like when ILM committed to not using stop motion on that film.

He said Muren had considered using CG years earlier, but everything had not lined up. It was vital that things did line up, and Muren offered the example of Photoshop on Terminator: "Without having a Photoshop to handle paint errors in the renders (like 4 frames on the shoulder of the Terminator in the garage scene) they couldn't have finished the film." Apparently, back then the film recorder was failing 9 out of 10 times.

"Look at the tools and people you have - are the tools and the people up to the task at hand?"

- Dennis Muren

Rob Kazinsky as Orgrim

Facial capture, in summary form, was outlined as:

  • track the dots on the actor's face
  • perform dot isolation
  • get detailed outlines of the lips and eyelids, as even small differences in where the lid intersects the iris make a huge difference
  • lock the head (which is actually a couple of steps)
  • solve the head to a version of the actor's face
  • retarget to the head, or retarget as offsets (deltas) - see the sketch after this list - and then
  • adjust the lips and other relevant aspects that are impossible to obtain from a straight transfer.
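The retarget-as-offsets step is conceptually simple: the captured motion is expressed as per-vertex deltas from the actor's neutral pose and re-applied to the creature's neutral. A minimal sketch of that idea in Python, illustrative only and not ILM's pipeline code:

```python
import numpy as np

def retarget_deltas(actor_neutral, actor_frame, orc_neutral, scale=1.0):
    """Transfer a captured facial pose as per-vertex offsets (deltas).

    actor_neutral, actor_frame, orc_neutral: (N, 3) arrays of vertex
    positions in correspondence (same topology or a pre-built mapping).
    Returns the orc mesh posed with the actor's deltas applied.
    """
    deltas = actor_frame - actor_neutral      # actor's motion relative to neutral
    return orc_neutral + scale * deltas       # apply the same motion to the orc

# Illustrative usage with random data standing in for real meshes.
rng = np.random.default_rng(0)
actor_neutral = rng.random((5000, 3))
actor_frame = actor_neutral + 0.01 * rng.standard_normal((5000, 3))
orc_neutral = actor_neutral * 1.2             # pretend the orc is a scaled variant
orc_frame = retarget_deltas(actor_neutral, actor_frame, orc_neutral)
```

The real retarget obviously happens on a rig rather than raw vertices, which is why the lip and eyelid adjustments in the final step above are still needed.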


"Our job is risk management" explained Smith, "we are all doing this on every single project right now."

Smith finished his inspiring talk by saying that he is confident not only that the industry will continue to be pushed, but that "we are going to work it out. Our clients come to us because this stuff is really hard to do, but that is OK. That's why we got into the business in the first place."

Note: fxguide has a major technical piece coming out just after SIGGRAPH covering detailed face pipelines, including ILM's, Weta's and others here at SIGGRAPH. Also see our current Warcraft story for more information.

DigiPro 2016 Session Recaps

Creating an Actor-specific Facial Rig from Performance Capture

Yeongho Seol, Wan-Chun Ma and J.P. Lewis (Weta Digital)


Background

This paper from Weta was one of the key papers fxguide was keen to see at DigiPro, and it dovetailed perfectly with several others, such as the ILM keynote above. Weta is a world leader in facial animation, and their approach aims to capture the actor with the greatest fidelity and translate the highest-frequency detail of that performance to the retargeted CG character, be that an actor's double or an orc.

There are a handful of companies leading this 'arms race' of facial capture, modeling, rigging, FACS solutions and rendering. Some, such as Weta and ILM, are working at the high end of realism without concern for real time (puppeting a face live); others focus on facial capture, such as Medusa (Disney Research Zurich), Mova (DD) and USC ICT (on modeling and lighting response). Others, such as the Max-Planck-Institut für Informatik team and Hao Li's team at USC, are focusing on real-time and mono video sources. Meanwhile, the big companies outside VFX, such as Google, Apple, Oculus and Microsoft, are all investing in teams and researchers in this area.

SIGGRAPH 2016 is a key place to see the latest work in this area, and sessions such as Epic Games' Real Time Live demo on Tuesday night are pivotal demos not to miss. This presentation from Weta leads the technical papers in this area this year, and demonstrated the precise attention to detail that is needed in this corner of CGI and VFX.

Presentation

Yeongho Seol outlined the challenging problem of producing a feature film quality blendshape facial rig, and the solution at Weta Digital. Producing a great rig can involve manual modelling, which is time consuming and requires real skill, but even then the result can end up not being accurate enough.

There are many pipelines that combine manual modelling and performance capture, but the Weta solution aims to use automated tools and highly complex solvers to deliver a rig that can be controlled by an artist and manually refined if needed. Weta's approach focuses on a high end solution that at its core is based on FACS shapes, an approach Weta trail-blazed dating back to Gollum.

Weta will not disclose how many FACS poses an actor is required to do during a multi-camera FACS session, but it is in excess of 40, and that number "depends on the purpose of the rig and the project. The number of blendshapes can be considerably larger than the number of FACS shapes in some cases," explained Seol.

Simply put, the system captures a person in a range of FACS poses, fits a generic rig to that person to provide a 'generic' range of motion, and then refines the result to actor-specific facial motion.
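At the core of fitting a generic rig to captured FACS scans is a least-squares problem: find the blendshape weights (or refined shapes) that best reproduce each scan. The sketch below shows the textbook version of that fit, not Weta's actual solver:

```python
import numpy as np

def fit_blendshape_weights(neutral, blend_deltas, scan, l2=1e-3):
    """Solve min_w || neutral + B w - scan ||^2 + l2 ||w||^2.

    neutral: (N, 3) neutral mesh, blend_deltas: (K, N, 3) per-shape deltas,
    scan: (N, 3) captured mesh in correspondence with the rig topology.
    Returns the K blendshape weights.
    """
    K = blend_deltas.shape[0]
    B = blend_deltas.reshape(K, -1).T          # (3N, K) basis matrix
    b = (scan - neutral).reshape(-1)           # (3N,) target displacement
    A = B.T @ B + l2 * np.eye(K)               # regularised normal equations
    return np.linalg.solve(A, B.T @ b)
```

In a pipeline like the one described, a fit of this kind would typically be run per FACS scan, with the refined result feeding the actor-specific rig.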

Portable Real-Time Character Rigs for Virtual Reality Experiences

Helge Mathee (Fabric Software), Bernhard Haux (Oculus Story Studio)

It was interesting to see this talk from Oculus Story Studio. The group appears to be a healthy cross between Pixar, DreamWorks and a game company: they aim to produce high end narrative pieces inside the constraints of the incredibly demanding real time VR environment.

CG characters are complex, so it is understandable that a studio may want to share assets between a film and a promotional VR experience. But while repurposing these assets seems logical, the level of complexity and the performance requirements are completely different between film production and a real-time VR situation. Lack of deformation control is one of the biggest shortcomings, and there is also a lack of flexibility for retargeting IK systems, among other animation issues.

Simply put, the paper aimed to provide what amounts to a What You Rig and Animate (in the DCC app) Is What You Get (in the VR experience) approach. The implementation uses the Python-based Kraken tool to generate a rig that can run in Autodesk Maya, and also to provide a version that can be executed by Fabric Engine within Unreal Engine 4 (UE4). By essentially running the same full rig in both Maya and UE4, they are able to maintain film-quality characters that keep the same richness and animation control.


To make portable character rigs the team took a step back to look at how a character rig is usually built. Character rigs are essentially large hierarchies or graphs of nodes within a specific DCC such as Maya. Transform hierarchies are constructed manually or by scripts, and are then connected by additional nodes that solve constraints, logic, math, or custom behaviour solvers, which are themselves usually built in C++ for speed.

Most studios build rigs using a high level description and a component system, describing a rig as a collection of high level pieces such as the spine, arms and legs. Animators control only a small portion of the nodes, and the graph’s task is to solve the pose required for the deformation of the character’s geometry.

The resulting rig in the digital content creation (DCC) application is a large collection of nodes, which is very application-specific and cannot be moved into the runtime environment. These rigs are artist-friendly, giving animators a lot of control for precise and expressive animations, but that comes at the expense of complexity and portability. The Kraken (KL) rigging system is an open-source, cross-platform rigging framework built on Fabric Engine. It offers a similar but different approach: rigs are built using a meta layer in Python. Riggers define high level components such as an arm or a leg, and each component in turn describes lower level elements such as controls, joints and constraints.
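A meta-layer rig description reads roughly like the following sketch. The class and method names here are invented for illustration and are not Kraken's actual API; the point is that the rigger writes the high level component and the framework expands it for each target:

```python
class Component:
    """A high level rig piece that expands into controls, joints and constraints."""
    def __init__(self, name):
        self.name = name
        self.controls, self.joints, self.constraints = [], [], []


class ArmComponent(Component):
    """The component, not the rigger, decides on the low level elements."""
    def __init__(self, side):
        super().__init__(f"{side}_arm")
        self.joints = [f"{side}_shoulder", f"{side}_elbow", f"{side}_wrist"]
        self.controls = [f"{side}_arm_ik_ctrl", f"{side}_arm_pole_ctrl"]
        self.constraints = [("ik_chain", self.joints, self.controls[0])]


class PrintBackend:
    """Stand-in for a target-specific builder (Maya nodes, a runtime rig, ...)."""
    def create(self, component):
        print(component.name, component.joints, component.controls)


def build_rig(components, backend):
    """Expand the high level description for a given target."""
    for component in components:
        backend.create(component)


build_rig([ArmComponent("L"), ArmComponent("R")], PrintBackend())
```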

They showed how characters rigged with Fabric can work in a VR experience within Epic Games' UE4. The key to their virtual reality (VR) development is the Fabric for Unreal software and its integration into UE4.

Fabric Engine 2.3 will be showcased at the Fabric Engine user group here at SIGGRAPH.

 


How To Build a Human: Practical Physics-Based Character Animation

James Jacobs, Jernej Barbic, Essex Edwards, Crawford Doran, Andy van Straten (Ziva Dynamics)


James Jacobs and Jernej Barbic (associate professor of computer science at USC) presented state-of-the-art character animation techniques for generating realistic anatomical motion of muscles, fat and skin.

Jacobs won a 2013 Scientific and Engineering Award (Oscar) for the development of the Tissue physically-based character simulation framework. He is now CEO and co-founder of Ziva in Vancouver. Barbic, in addition to his USC position, is the CTO of Ziva. He is an expert in finite element analysis, which is the method they use to get a really nice volume-preserving, solid look-and-feel muscle system.

Every muscle is built as a tet mesh, and every muscle is attached to the other muscles and to the bones. You can paint the fibre field and then activate the fibres any way you want; muscles can be active or passive. The muscles are then solved with an implicit FEM solver. There is also an in-built cloth solver for fine skin detail, along with collision and self-collision resolution, non-linear materials (real tissue is non-linear) and anisotropic materials.

Note: when Ziva talk about anisotropic materials they are discussing the properties of the simulated tissue, not the surface rendering - the materials behave differently in different directions, the way muscles actually do.
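As a toy illustration of what fibre-direction anisotropy means in practice (this is not Ziva's solver, just the general idea), a passive tissue response can be made stiffer along the painted fibre direction than across it:

```python
import numpy as np

def anisotropic_stress(strain, fiber_dir, k_along=50.0, k_across=10.0):
    """Toy linear anisotropic response for a small-strain tensor.

    strain: (3, 3) symmetric small-strain tensor for one tet element.
    fiber_dir: painted fibre direction for that element.
    Stiffness along the fibre differs from stiffness across it.
    """
    f = fiber_dir / np.linalg.norm(fiber_dir)
    along = np.outer(f, f)                     # projector onto the fibre direction
    across = np.eye(3) - along                 # projector onto the cross-fibre plane
    strain_along = along @ strain @ along
    strain_across = strain - strain_along
    return k_along * strain_along + k_across * strain_across

stress = anisotropic_stress(np.diag([0.02, -0.01, -0.01]), np.array([1.0, 0.0, 0.0]))
```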

The system has a full simulation cache, so if something breaks, everything up to that point is saved; the cache also makes the system interactive when scrubbing the timeline.

James Jacobs and Jernej Barbic

The input is the skeletal motion, and the software works out the muscle, fat and skin movement, including skin sliding and high level detail. The pipeline assumes you build the character from the inside out, and then the skeleton's movement will produce realistic motion.

Over the past year they have developed a photo-realistic human character named Adrienne. They gave an overview of the workflow used to create Adrienne, from modeling (including fMRI scans) an actress into an anatomically correct body, to their simulation via the finite element method.

This physics-based character animation system uses computational resources in lieu of exhaustive artist effort to produce physically realistic images and animations. This principle has already seen widespread adoption in rendering, fluids, and cloth simulation. Ziva believes that the savings in time and improved realism of results provided by a physics and anatomy based approach to characters cannot be matched by other techniques.

Selective and Dynamic Cloth Fold Smoothing With Collision Resolution

Arunachalam Somasundaram (DreamWorks Animation)

Somasundaram showed techniques to selectively and dynamically detect and smooth folds in a cloth mesh after simulation. The aim is to give artists controls in cloth sims to emphasize or de-emphasize certain folds, clean up simulation errors that can cause crumpled cloth, and resolve cloth-body interpenetrations that can occur during smoothing. These new techniques appear simple and fast, and help an artist to direct, clean up and enrich the look of simulated cloth.
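The selective part of such a technique can be illustrated with masked Laplacian smoothing over the simulated mesh; this is a generic sketch rather than DreamWorks' implementation, and it omits the collision resolution step:

```python
import numpy as np

def masked_laplacian_smooth(verts, neighbors, mask, iterations=5, strength=0.5):
    """Smooth only the vertices selected by `mask`.

    verts: (N, 3) cloth vertex positions after simulation.
    neighbors: list of index lists, neighbors[i] are vertices adjacent to i.
    mask: (N,) values in [0, 1]; 1 = fully smoothed (fold to remove), 0 = untouched.
    """
    v = verts.copy()
    for _ in range(iterations):
        avg = np.array([v[nbrs].mean(axis=0) if nbrs else v[i]
                        for i, nbrs in enumerate(neighbors)])
        v += (strength * mask)[:, None] * (avg - v)   # pull masked verts toward neighbor average
    return v
```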

A Retrospective on the Adoption of the ACES Technology at Framestore

Kevin Wheatley (Framestore)


Kevin Wheatley, Head of Imaging at Framestore, outlined the history of Framestore's adoption of ACES, from initial efforts that were attempted and rejected to the final stage of using ACEScg successfully on their feature film projects. The talk was exceptionally useful for its ability to set the theoretical benefits of ACES and ACEScg in the context of actual implementation and the real world issues and problems that need to be addressed.

Today at Framestore, all material delivered from an ACES camera workflow is converted into ACEScg, and the team can run different shows in different colour spaces. They can convert RAW files into ACES, non-ACES shows are converted, and even reference stills can be placed in a standardised pipeline. All shows at Framestore work internally in ACEScg. While one can argue that the ACEScg gamut is slightly smaller than full ACES, Wheatley pointed out that practically speaking the difference has no impact on the pipeline.


There are some issues, such as suppliers (camera manufacturers) that are not fully compatible. A camera manufacturer can aim to hide noise with a 'negative black value' from an offset to the transfer curve - of course, there is no such thing as negative light. But on Tarzan, for example, it was key to match the texture of the apes' fur in the blacks of the dark forest jungle scenes: the gorillas are dark, their fur is dark, and they are in a dark live action scene. ACES floating point does not clip; such values simply go negative (mathematically), so tricks like this upstream of Framestore can cause issues. In this example the negative values had to be preserved so the process could be inverted on output.
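The practical point is that the ACES2065-1 (AP0) to ACEScg (AP1) conversion is just a 3x3 matrix on linear float data, and nothing in that chain should clamp. A sketch using the matrix as published in the ACES reference transforms (verify the coefficients against your own OCIO config before relying on them):

```python
import numpy as np

# AP0 (ACES2065-1) to AP1 (ACEScg) matrix, as published in the ACES reference
# CTL; verify against your own OCIO config before relying on it.
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

def aces2065_to_acescg(rgb):
    """Convert linear ACES2065-1 pixels to ACEScg. No clamping: negative
    values (e.g. a camera's 'negative black' noise offset) must survive so
    the transform can be inverted on output."""
    return np.asarray(rgb) @ AP0_TO_AP1.T

pixel = np.array([-0.001, 0.02, 0.015])      # slightly negative black from upstream
print(aces2065_to_acescg(pixel))             # still negative; do not clip
```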

Another issue is that most matte painting is not done in float, but in log in Photoshop (Adobe does not have an ACES Photoshop pipeline at all). Also, versions of NUKE prior to version 10 did not fully support ACES in all modes, and finally there is no official support for ACEScg in OpenEXR.

 

Gaffer: An Open-Source Application Framework for VFX

John Haddon, Andrew Kaufman, David Minor, Daniel Dresser, Ivan Imanishi, Paulo Nogueira (Image Engine Design Inc)

Gaffer is an open source application framework from Image Engine, built on top of many other open source initiatives. A primary aim of Gaffer is to bring together best-in-class open source technologies for VFX and expose their underlying functionality via a familiar node based user experience. Gaffer makes use of many industry standard libraries such as OpenEXR, OpenColorIO, OpenImageIO, Open Shading Language, Cortex and Alembic. It also provides interfaces to several rendering technologies, including the open source Appleseed renderer and the licensed renderers 3delight and Arnold, as well as the render farm management software Tractor.

Gaffer was used on Jurassic World (rendered with 3delight)

Gaffer includes a multithreaded, node based computation framework and a Qt based UI framework for editing and viewing node graphs. The Gaffer frameworks were initiated independently by John Haddon in 2007, and have been used and extended in production at Image Engine since they were open-sourced in 2011. They have become vital to nearly the entire Image Engine pipeline, forming the basis of any node-based system Image Engine chooses to develop.
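The following is a self-contained toy, not Gaffer's actual API, but it illustrates the pull-based plug-and-node computation model that such frameworks expose to TDs:

```python
class Plug:
    """A value socket; either holds a value or is driven by an upstream plug."""
    def __init__(self, value=None):
        self.value, self.input = value, None

    def set_input(self, upstream):
        self.input = upstream

    def get(self):
        return self.input.get() if self.input is not None else self.value


class Node:
    """Pull-based compute node: the out plug evaluates from the input plugs on demand."""
    def __init__(self, compute, **inputs):
        self.inputs = {name: Plug(value) for name, value in inputs.items()}
        node = self

        class _Out(Plug):
            def get(self):
                return compute({n: p.get() for n, p in node.inputs.items()})

        self.out = _Out()


# Wire a tiny two-node graph: a constant node driving one input of a multiply node.
constant = Node(lambda ins: 3.0)
multiply = Node(lambda ins: ins["a"] * ins["b"], a=None, b=2.0)
multiply.inputs["a"].set_input(constant.out)
print(multiply.out.get())   # 6.0
```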

Haddon is still involved today and works as a consultant to Image Engine. The team at Image Engine is looking to expand Gaffer further and is investigating OpenSubdiv, OpenVDB, USD (Pixar's Universal Scene Description) and possibly Bullet, along with support for more renderers such as V-Ray. There is even work being done to explore distributed computing and cloud services such as Amazon or Google.

Internally, Gaffer works with Image Engine's in-house standalone lighting tool, Caribou. Caribou currently sits in Maya, with future support planned for Houdini and Nuke. While it is not intended as a full compositing tool, in addition to lighting it handles automated pipeline comping tasks for dailies, slates and/or slap comps. It is also key to managing task submission to the render farm, for example Gaffer feeding Tractor (Pixar's farm management tool).

The team also touched on their in-house asset management system Jabuka, which handles Image Engine's quality control (Bundle Render Profiles) and workflow templates.


 

Large Scale VFX Pipelines

Andy Wright, Matt Chambers, Justin Israel, Nick Shore (Weta Digital)

Andy Wright of Weta Digital outlined Plow, their custom queuing, scheduling, job description and submission system.

For many years, Weta used Pixar's Alfred for scheduling and dispatching, and over this time they successfully adapted and extended it. A render farm expansion then increased resources significantly, making it difficult to keep the machines fully utilized under this system, and this coincided with Alfred's effective end of life. While resource utilization was a primary concern, the suite of tools Weta had built around Alfred for queue management was significant, so any new system would require a robust management layer as well as a complete API for inspection and modification.

Weta looked at open source options but found them "seriously lacking in terms of scheduler features, user friendly APIs and user interfaces". Ultimately, rather than utilize or customize an off-the-shelf product, Weta chose to develop a custom solution from the ground up. The resulting system, Plow, is built with an open API and provides rich statistical information that can be leveraged to rapidly build tools and graphs that help Weta drive their render farm effectively.

Weta already had Kenobi, a job description framework. Kenobi is a node-based scripting and execution framework for building pipelines. Its primary goal is to abstract away the details of launching and executing large scale distributed jobs, allowing pipeline developers to focus on their core business logic rather than having to solve common problems. It tries to do this while providing a low barrier to entry and imposing minimal requirements on the developer. The team built Plow on the back of Kenobi.
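As a flavour of what a node-based job description framework provides (a hypothetical sketch, not Kenobi's actual API), a job is typically expressed as a DAG of tasks whose dependencies the framework resolves before dispatch:

```python
from collections import deque

class Task:
    """A unit of distributed work with upstream dependencies."""
    def __init__(self, name, command, depends_on=()):
        self.name, self.command, self.depends_on = name, command, list(depends_on)

def dispatch_order(tasks):
    """Topologically sort tasks so every task runs after its dependencies."""
    remaining = {t.name: set(d.name for d in t.depends_on) for t in tasks}
    by_name = {t.name: t for t in tasks}
    ready = deque(n for n, deps in remaining.items() if not deps)
    order = []
    while ready:
        name = ready.popleft()
        order.append(by_name[name])
        for other, deps in remaining.items():
            if name in deps:
                deps.remove(name)
                if not deps:
                    ready.append(other)
    return order

# A tiny per-shot job: simulate, then render two layers, then comp.
sim = Task("sim", "run_sim shot010")
fg = Task("render_fg", "render shot010 --layer fg", depends_on=[sim])
bg = Task("render_bg", "render shot010 --layer bg", depends_on=[sim])
comp = Task("comp", "nuke_render shot010_comp.nk", depends_on=[fg, bg])
print([t.name for t in dispatch_order([comp, fg, bg, sim])])
```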


To ensure peak utilization of hardware resources, as well as to handle the increasingly dynamic demands placed on its render farm infrastructure, Weta Digital developed custom queuing, scheduling, job description and submission systems. Today the system works well at maximizing the available cores across a large range of non-uniform task types.

The render farm is one of the most important, high traffic components of Weta's VFX pipeline. Beyond the hardware itself, any render farm requires careful management and maintenance to ensure it is operating at peak efficiency. In Weta's case this hardware consists of a mix of over 80,000 CPU cores and Weta's GPU resources, and as this has grown it has introduced scalability challenges.

The Plow solution ended up with a UI on the desktop that gives artists more control: the ability to opt in or out of participating in the render farm; an auto mode that only accepts tasks if key conditions are met, coupled with restrictions to project-specific tasks; and, perhaps the most popular new feature, "the ability to prefer specific users to be scheduled first over others," joked Wright.

In the end the artists were very happy with the result and even had suggestions for future work:

"Overwhelming at first - the Wrangler UI tool is frickin' Brilliant!"

"The amount of control is really nice, the stats, the overall visibility etc.. the blueprints are totally ace."

"A tree view would be fantastic, or at least a better representation  of the dependencies would be very helpful"

 

Camera Tracking in Visual Effects – An Industry Perspective of Structure From Motion

Alastair Barber (University of Bath and Double Negative Visual Effects), Darren Cosker (University of Bath), Oliver James, Ted Waine, Radhika Patel (Double Negative Visual Effects)


Matchmoving, or camera tracking, is a crucial task and one of the first to be performed in the visual effects pipeline. PhD graduate Alastair Barber has been working with Double Negative (DNeg) to explore the nature of tracking work inside a facility, and what could be learned by mining the projects in DNeg's database.

An accurate solve for camera movement is imperative and has an impact on almost every other part of the pipeline downstream. In this talk he presented a comprehensive analysis of the process at a major visual effects studio, drawing on a large dataset of real shots in DNeg's Shotgun database. This led to him presenting 'guidelines and rules of thumb' for camera tracking scheduling.

They also made available data from their research which shows the amount of time spent on camera tracking and the types of shot that are most common in DNeg's work.

For example, time spent in each department:

Time spent per department

The duration of various visual effects pipeline processes can be seen above. Compositing dominates, but camera tracking, the subject of this talk, is highlighted in red. The data was taken from an aggregate of the total times over the production of six feature length films, with DNeg acting as either the sole or a major visual effects vendor.

This shows that 5 to 10% of all the time spent in vfx was on camera tracking.

Of those camera tracking jobs, the research looked at how long shots took to solve per frame. (The duration was in 'man-hours', i.e. the actual time taken by a single specialised and experienced visual effects artist to complete the task, divided by the number of frames in the shot. These measurements are recorded in DNeg's Shotgun system.)
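The metric itself is straightforward. A small sketch of how the per-frame solve time and the 'half the total time' threshold in the graphs below could be computed from such records (the numbers here are made up):

```python
import numpy as np

# Hypothetical records: (artist minutes spent on the track, frames in the shot).
shots = np.array([
    (240, 48), (600, 120), (90, 30), (1800, 24), (400, 100), (3000, 40),
])
minutes_per_frame = shots[:, 0] / shots[:, 1]

# Find the per-frame cost above which shots account for 50% of all tracking time.
order = np.argsort(minutes_per_frame)[::-1]          # hardest shots first
cum_time = np.cumsum(shots[order, 0])
half_idx = np.searchsorted(cum_time, shots[:, 0].sum() / 2.0)
threshold = minutes_per_frame[order][half_idx]
print(f"50% of tracking time is spent on shots above {threshold:.1f} min/frame")
```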

 

Graph 1: a lot of shots take 5 or 10 minutes per frame to solve.

 

Graph 2: the red line shows that 50% of the time is spent on shots that take over 25 minutes a frame to solve.

The second graph is interesting: not only does more than half the total time come from shots that worked out at more than 25 minutes per frame, but there are quite a few shots at 60+ minutes a frame, and the hardest was a staggering 75.4 minutes per frame to solve.

Which raises the question: what is considered a 'solved' camera track? At DNeg, as with most companies, the answer is subjective, with artists lining up over the top of the shot either:

  • virtual cones or spheres overlaid
  • real LIDAR geometry overlaid
  • proxy geometry, such as a generic person model, placed over the footage and animated to see if it stays approximately the right scale to the end of the shot.

Interestingly, the 2D/3D/2D re-projection error measures common in the general R&D community are not used in VFX.

Barber looked at a series of metrics such as camera speed, occlusions and short feature lifespans, and whether the shot had LIDAR or other survey information. He examined the material in terms of lens type (anamorphic breathing makes seemingly simple shots very complex) and nodal movement such as pans.

After examining the various mean solve times, the results were stunningly unexpected. For example, spherical lens tracking has twice the mean solve time of anamorphic, and shots with LIDAR on average take longer.

Many of his results seemed completely counter-intuitive, but the reason is fairly simple. DNeg, being an experienced, well run company, knows how to do shots: if a shot is hard, they will put their best people on it and make sure they have the needed data, LIDAR scans and other information. And a big anamorphic lens is hard to move, so it is far less likely to be used on a complex handheld shot; hard tracking shots are nearly always shot on lightweight spherical prime lenses.

The lack of obvious correlation reflected the experience of the team running the projects. While this was not Barber's intent, he highlighted just how critical project postmortems are, and how easily one can poorly estimate work without really examining the actual results from previous jobs.

Barber did expand on this to point out that parallax, visual texture, illumination and point occlusion are all key factors in camera tracking, but the assumptions that all staff are of equal skill and that projects don't dynamically adapt to predicted or expected issues can be wrong. Even after accounting for the impact of lens type, camera motion constraints and the level of survey data available, the variance of solve times for shots with similar velocity is large. The work did show that one of the most useful pieces of information for speeding up matchmoving is accurate 3D scene information against which to register 2D features.

 

Searching For The Interesting Stuff In A Multi-dimensional Parameter Space

Andy Lomas

Andy Lomas showed his evolutionary art pieces, which explore evolution and machine learning. These started as a personal project, and he has produced remarkably organic looking evolved forms that look as if they came straight from a micro-scanner.

Lomas was formerly with The Foundry, and his talk gave a refreshingly different point of view: that of someone who has taken the tools of VFX and used them for personal, yet strangely organic, art.

Andy recently had an exhibition in London of his art and you can view his work at www.andylomas.com.

 


The Jungle Book: Art-Directing Procedural Scatters in Rich Environments

Stefano Cieri, Adriano Muraca, Alexander Schwank, Filippo Preti, Tony Micilotta (MPC)


Disney’s The Jungle Book required MPC to build photorealistic, organic and complex jungle environments. MPC developed a geometry distributor with which artists could dress a large number of very diverse CG sets. It was used in over 800 shots to scatter elements, ranging from debris to entire trees.

Per object attributes were configurable and the distribution was driven by procedural shaders and custom maps. This talk described how the system worked and demonstrated the effectiveness of the workflow.

The scattering system was conceived in the early stages of MPC’s work for The Jungle Book, when they realized that the environment artists needed a solution to populate scenes quickly and easily with great detail and varying scale. Before starting the development, a broad scope investigation was conducted, looking both at the research field and at available commercial, open source and out-of-the-box solutions.

The most common strategies involve procedural methods and point-sampling approaches. The team looked at particle solutions such as the one used by Animal Logic on The Lego Movie. A different, but impressive, approach they examined was Wonder Moss by Inigo Quilez at Pixar. That technique was developed for Brave to cover sets with plants, moss and minute details directly at render time, and it is particularly interesting because it works as a layer, wrapping any renderable geometry and seamlessly blending heterogeneous elements at different scales. Despite its effectiveness, the drawback of this approach for MPC was that it requires an experienced programmer to address visual feedback by tweaking the code of the render procedurals. They also looked at established production and out-of-the-box tools such as Houdini, Clarisse and Maya's XGen.

The MPC scattering solution was designed to combine the flexibility of procedural distributions with the granular control required for art direction. Its scattering principles aimed to maintain the highest possible level of abstraction and generalization by following a stochastic approach in which the artist defines the parameters used. Depending on the desired scatter behaviour, artists would translate distribution rules into shaders and maps while controlling the randomization. The main application for environment artists at MPC is Maya, so the team leveraged Maya's established API and integrated the tool into MPC's pipeline to reduce development time.
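The heart of any map-driven stochastic scatter is simple: sample candidate points, keep or reject them against a density map, and randomize per-instance attributes within artist-set ranges. A generic sketch of that loop (not MPC's implementation):

```python
import numpy as np

def scatter(density_map, extent, count, scale_range=(0.5, 1.5), seed=0):
    """Scatter points over a rectangular set using a density map.

    density_map: 2D array in [0, 1]; higher values receive more instances.
    extent: (width, depth) of the set in world units.
    count: number of candidate samples (the kept count depends on the map).
    Returns world-space positions, rotations and scales for the kept instances.
    """
    rng = np.random.default_rng(seed)
    uv = rng.random((count, 2))                               # candidate samples
    h, w = density_map.shape
    d = density_map[(uv[:, 1] * (h - 1)).astype(int),
                    (uv[:, 0] * (w - 1)).astype(int)]
    keep = rng.random(count) < d                              # rejection sampling
    pos = uv[keep] * np.asarray(extent)
    rot = rng.uniform(0.0, 360.0, keep.sum())                 # random yaw per instance
    scl = rng.uniform(*scale_range, keep.sum())
    return pos, rot, scl

# Example: denser scatter toward one side of a 100 x 100 unit set.
density = np.tile(np.linspace(0.1, 1.0, 64), (64, 1))
positions, rotations, scales = scatter(density, (100.0, 100.0), 20000)
```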

The scatter worked very effectively with PRMan renders, using a custom REYES shader to generate a point cloud. The process provided:

  • a seamless result (most team members could not themselves pick it as procedural)
  • visual blending with the environment
  • tool set integration with MPC's current tools and workflow

It was used in over 800 shots and is now being adopted in other shows at MPC.

As a guide:

  • It took one day for simple scenarios
  • It took 1-2 weeks for ordinary sets
  • A typical scatter was a few thousand to a few million elements
  • Importantly, the count does not depend on scale.

Moving forward, MPC wants to use Fabric Engine for visualization in both Maya and a standalone tool.

Volumetric Clouds in the VR Movie, Allumette

Devon Penney (Penrose Studios)


Allumette is an immersive, highly emotional, and visually rich VR movie that takes place in a city floating amongst clouds.

The story unfolds around the viewer, who can experience the clouds in VR from any perspective. This meant the team at Penrose Studios had to build a cloud city that the viewer can move around and view from all angles, as Devon Penney explained in his presentation, Volumetric Clouds in the VR Movie, Allumette.

This type of set is a formidable challenge even for traditional animated films, where a team has huge resources and hours to render each frame, which makes achieving the look and feel of immersive clouds in VR very much uncharted territory. Existing lightweight techniques for real time clouds, such as geometric shells with translucency shaders and sprite-based methods, were judged to have both poor quality and poor performance in VR.

For Allumette, Penney explained that they first modeled clouds by painting cloud shells with a proprietary modeling tool, then used a third party procedural modeling package to create and light the cloud voxel grids. Finally, these grids were exported in a custom file format and rendered using a ray marcher in their game engine. The resulting clouds take 0.6ms per eye to render and immerse the viewer in the cloud city. The imagery Penney showed was both beautiful and extremely effective; he even showed a clip of a controller 'hand' with a torch moving through the clouds, lighting up the inside of the cloud volumes in real time.
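Ray marching a pre-lit voxel grid is conceptually simple, which is part of what makes it cheap enough for VR. A minimal CPU sketch of the accumulation loop (the production version runs per pixel on the GPU, and the grid layout here is an assumption):

```python
import numpy as np

def ray_march(density, lit_color, origin, direction, step=0.5, max_steps=256):
    """Accumulate colour and opacity along a ray through a voxel grid.

    density: 3D array of extinction values.
    lit_color: RGB grid of shape density.shape + (3,), baked with lighting
    so the marcher only has to sample and blend.
    """
    color, transmittance = np.zeros(3), 1.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        idx = tuple(p.astype(int))
        if any(i < 0 or i >= n for i, n in zip(idx, density.shape)):
            break                                   # left the grid
        sigma = density[idx]
        alpha = 1.0 - np.exp(-sigma * step)         # opacity of this segment
        color += transmittance * alpha * lit_color[idx]
        transmittance *= 1.0 - alpha
        if transmittance < 0.01:                    # early out once nearly opaque
            break
        p += d * step
    return color, 1.0 - transmittance
```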

Before and after

Cross Departmental Collaboration for FX-driven Animation in “Alice Through the Looking Glass”

Joseph Pepper, Klaus Seitschek, Ian Farnsworth, Cosku Turhan, Chris Messineo, Lee Kerley (Sony Imageworks)

Alice Through the Looking Glass presented many challenges for the effects animation team, and they shared two major examples: the Ocean of Time and the 'rusting' of collapsing time.

Ocean of Time

The “Ocean Of Time” is a completely computer generated environment that serves as the basis for the time travel in Alice Through the Looking Glass. Alice travels through the Ocean Of Time in a time travel device called the “Chronosphere” when going to various times and places in her quest to save the Hatter and reunite him with his family. It represents the void or limbo between these destination times and places and was conceived simply as “an ocean made from images of the past.”

After a period of concept exploration the design for the Ocean Of Time settled upon a vast, two-tiered ocean with “floor” and “ceiling” landscapes made up of dune-like, undulating waves. Close to the surface of these waves would live what became known as the “memories.”

The memories were essentially images and sequences either from the original Alice in Wonderland movie or from moments seen previously in Alice Through the Looking Glass. As Alice traveled back in time, the ocean would show sequences from progressively earlier periods.

There were two major pipeline hurdles for the Ocean Of Time:

  • Integrating a large array of image sequences of past Alice moments into the ocean in an original and naturalistic way that did not require huge amounts of additional resources in terms of rotomation and layout.
  • Constructing this vast ocean using the tools and technology of the FX department in a way conducive to complex simulation, while still allowing fast feedback and control for the animation director and VFX supervisor when choreographing the action with the client.

The pipeline was a combination of Maya and Houdini, rendered in SPI's own version of Arnold (with OSL shaders). The Houdini Engine was key, and the Ocean of Time sequence used Pattern Create, a pattern-making tool in Katana. This removed a huge amount of manual texture painting.

As the memories had to come from a world that might not even have been shot yet, the key was a very automated, cross-functional approach. Clips in stereo provided motion vectors via The Foundry's stereo tool set. These motion vectors meant that if a character turned their head, they could drive the fluid sim and cause waves. If the Mad Hatter, who seems to be almost projected on the waves, turns, his image disrupts the water and then dissipates into the water and spray.

Clips were processed to gain a motion vector set that would feed the sims and help make (and colour) the waves.

 


A customised FLIP sim meant that not only does the clip of the Mad Hatter looking to the side create waves and form (plus sub-foam), but the clip's colours bleed. There is colour extraction from the memories, as if they were watercolours in the volumes: the memories appear to cause the waves and to colour them. To sell the shots, the entire sequence was ray traced with volumetric and subsurface photoreal rendering effects.

For the stormy ocean itself, the entire ocean surface was simulated as a FLIP simulation in world space, and the domains were used merely to subdivide the vast space the ocean occupied into more manageable chunks. To achieve this world space simulation, a targeting setup was built inside a FLIP DOP network. FLIP particles deep underwater were made "ballistic": by explicitly setting their velocities, these deep-lying particles were targeted at the waves and undulations of the layout geometry, free from being affected by the FLIP solve.

Meanwhile, particles closer to the surface were treated as progressively less and less ballistic, free to be affected by the FLIP solve; in effect they are carried along for the ride by the deep swells beneath. The non-ballistic particles at the surface were also pushed around by the vector fields extracted from the memory tile metadata, allowing wakes, splashes and the birthing of secondary whitewater foam, rendered as volume primitives in Arnold via Imageworks' "vdbSpeck" tools in Houdini.
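The ballistic blend is essentially a depth-weighted mix between a scripted target velocity and the solver's own velocity. A schematic per-particle sketch of that idea (not Imageworks' actual DOP setup):

```python
import numpy as np

def blend_velocities(depth, v_target, v_flip, ballistic_depth=5.0):
    """Blend scripted (ballistic) and simulated (FLIP) velocities by depth.

    depth: (N,) depth of each particle below the ocean surface.
    v_target: (N, 3) velocities aiming particles at the layout wave shapes.
    v_flip: (N, 3) velocities coming out of the FLIP solve.
    Particles deeper than ballistic_depth follow the target entirely;
    particles at the surface follow the solver entirely.
    """
    w = np.clip(depth / ballistic_depth, 0.0, 1.0)[:, None]   # 0 at surface, 1 at depth
    return w * v_target + (1.0 - w) * v_flip
```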

Given the vast sims and motion vector imagery this sequence used over a petabyte of data, the team told the DigiPro audience.


Rust

In the original script there was a single line: "and then everything rusts over."

The 'rust' effect in Alice is what happens to 'Underland' as the unintended consequence of Alice's time travel throughout the movie. The more Alice travels through time, the more the world around the characters decays as the fabric of time becomes more and more disrupted. This is represented visually as a sentient, fluid substance that engulfs everything from characters, objects and rooms to castles, landscapes and even the sky.

The animation department was responsible for the timing and placement of rust in shots and sequences. For this malevolent force, smothering everything in its wake into frozen husks, there was a clear direction for it to display sentient properties, as if attacking anyone it came close to. Since the physical simulation and procedural geometric aspects of the rust were so extensive, it was decided that the easiest way to represent the rust at the front end was with a customized curve tool in Maya, a direct descendant of the Spider-Man web tool.

Once the curves were generated, they were exported for use by the FX department. In Houdini, customized FLIP fluid simulations were triggered as the curves grew along the paths defined by animation. Rather than behave like a regular fluid, the rust fluid had the ability to climb walls and hang from ceilings. In addition, an internal clock in the solver would freeze the rust simulation particles after a certain lifespan.

For additional rust geometry, SPI leaned heavily on the 'source instancing' technology available in Arnold. Rather than loading multiple copies of identical geometry into the scene build at render time, source instancing allows the specification of only transform data on the specified objects, keeping a single copy of the topology in memory. This meant they were able to render millions upon millions of geometric flakes and scallops with minimal overhead in terms of scene building; all that was needed was a point cloud with the relevant transformation data.
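The memory argument for source instancing is easy to see in toy form: one copy of the flake topology plus a lightweight transform per instance, expanded only when needed. A schematic sketch, not Arnold's API:

```python
import numpy as np

class InstancedGeometry:
    """One shared copy of topology, many lightweight per-instance transforms."""
    def __init__(self, vertices, faces):
        self.vertices = vertices          # (V, 3) stored once
        self.faces = faces                # (F, 3) stored once
        self.transforms = []              # list of (4, 4) matrices, one per instance

    def add_instance(self, matrix):
        self.transforms.append(matrix)

    def instance_vertices(self, i):
        """Expand one instance on demand (e.g. at intersection time)."""
        m = self.transforms[i]
        homogeneous = np.c_[self.vertices, np.ones(len(self.vertices))]
        return (homogeneous @ m.T)[:, :3]

# A point cloud of positions is all the scene build needs to carry.
flake = InstancedGeometry(np.random.rand(200, 3), np.random.randint(0, 200, (400, 3)))
for position in np.random.rand(10_000, 3) * 100.0:   # small stand-in for millions
    m = np.eye(4)
    m[:3, 3] = position
    flake.add_instance(m)
```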

In the end, six million render hours and over two years work went into the rust sequence at SPI.

The final piece in the jigsaw of the rust concept was the "de-rusting." To achieve this unique 'undoing' effect, the team again approached the problem from a multi-departmental workflow perspective. It was crucial for the filmmakers to have a visualization of the timing and progression of the de-rusting of the characters and environments.

To achieve this, the team devised a new workflow to get early approval with quick turn-around by the Layout and Animation department. Layout used colored lights in Maya to visualize the progression and speed of the de-rust. Based on this initial timing, animators took over and added in “hero” curves for either close-up elements, or any area where it was particularly important to define strong timing and placement of the de-rust. The character de-rust on Alice is a great example of this, where the animation department blocked out the de-rust progression and got early buyoff on timing.

The Ocean Of Time and rust effects sequences were only a small part of the VFX work undertaken for Alice Through the Looking Glass. Together with the other work, they accounted for a data throughput more than four times that of the largest show previously undertaken at Imageworks.

 

Pictures from the Conference

 

The conference was sold out - again

 

Sam Assadian, Isotropix CEO and Founder
David Morin (Autodesk) and Xavier Desdoigts (Animal Logic)


 

Bruno Nicoletti
Per Christensen

Tone mapped HDR of the conference hotel for DigiPro 2016

