Finding the Secret SAUCE for Asset Re-use

SAUCE is a three-year EU Research and Innovation project between Universitat Pompeu Fabra, Foundry, DNEG, Brno University of Technology, Animationsinstitut at Filmakademie Baden-Württemberg, Saarland University, Trinity College Dublin and Disney Research Studios to create a step-change in allowing creative industry companies to re-use existing digital assets for future productions.

The goal of SAUCE is to produce, pilot, and demonstrate a set of professional tools and techniques that reduce the cost of producing enhanced digital content for the creative industries, both by increasing the potential for re-purposing and re-use of content and by providing significantly improved technologies for digital content production and management.

The SAUCE project has its roots in DreamSpace, an earlier EU joint virtual production project led at that time by Jon Stark at Foundry. That project looked at editing and working with virtual production in a user-friendly way. It was built on the assumption that production was no longer linear, and it involved a team of companies from around Europe including Filmakademie. As Prof. Volker Helzle recounts, “That work package was about editing objects, lights, and animations in virtual production scenarios in a direct and user-friendly fashion”.

The new SAUCE project is also a multi-company EU project. It has developed a range of tools, including a tablet controller for Filmakademie’s “VPET” – Virtual Production Editing Tools – which has grown to encompass a central production system with Katana, using Epic’s Unreal Engine and synchronizing virtual production environments across AR, Google Tango, Apple, and Android devices, so as to make virtual production a more stable and workable tool for enhanced digital content production and management in real-world productions.

The team presented SAUCE at DigiPro this year and outlined the project. While some parts of the broader SAUCE project focus on areas such as lightfields and crowd animation, the team at DigiPro focused on asset pipelines. SAUCE stands for Smart-Asset ReUse in Creative Environments, and so in terms of asset pipelines SAUCE breaks down into three key research questions:

  • Smart Assets: What are ‘smart assets’, what makes them smart, and how can they be defined?
  • ReUse: How can assets be made re-usable not only in a virtual production environment but also in post?
  • Creative Environments: Why does this matter to creatives and productions?


As most people in production know, an enormous amount of effort is wasted through missed opportunities to re-use assets and through the way asset libraries are made and maintained. It is not uncommon for an asset to be remade because remaking it is deemed easier than finding, converting, or modifying an older asset from a previous project. Shared assets become even more problematic when one considers sharing assets between facilities or across siloed boundaries such as VFX, games, or licensing/marketing.

Partner DNEG focused on searching and retrieving data, Foundry looked at storing data, and Filmakademie focused on use cases in production.

Data Storage:

Foundry set itself a lofty goal: the actual file formats in the library should not be front of mind for the user. Dan Ring of Foundry gives the example of an iPhone user not caring whether a song is an MP3 or an AAC file – they just think of their ‘data’ as a song, and iTunes handles the issues around file formats, conversions, or transcoding. As long as it can be versioned and is unique, the asset can be useful long term, on-premises or in the cloud, serving both large companies and small one- or two-person outfits.
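
As a rough sketch of the idea (the record layout and hashing scheme here are illustrative assumptions, not Foundry’s published schema), a format-agnostic store needs little more than a stable identity, a version number, and a content hash for uniqueness:

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class AssetVersion:
    """One immutable version of an asset; the payload format is incidental."""
    asset_id: str      # stable identity, shared by every version
    version: int       # monotonically increasing per asset
    payload_uri: str   # on-premises path or cloud URI; the store doesn't care
    content_hash: str  # uniqueness guarantee, independent of file format


def make_version(asset_id: str, version: int, payload_uri: str,
                 payload: bytes) -> AssetVersion:
    # Hash the bytes, not the filename, so an MP3 and an AAC of the
    # "same" song still get distinct, verifiable identities.
    return AssetVersion(asset_id, version, payload_uri,
                        hashlib.sha256(payload).hexdigest())
```

Because identity and versioning live outside the payload, conversions and transcodes become just more versions of the same asset rather than new library entries.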

Searching and Retrieving:

DNEG has millions of assets, so the curation of DNEGs assets is a vast challenge. Traditional information retrieval systems operate using text. The indexing of text is naturally well understood. However, the demands of the VFX and animation industries are far greater. As Will Greenly of DNEG explained, the visual nature of our industry means that these systems need to move beyond just sorting on text or specialist curated metadata. DNEG was keen to standardize how things could be classified. They also employed machine learning with this standardization approach to produce a system with greater search options and better search results. The retrieval system also has a graphical not text-based user interface with some new artist-friendly UI tools that are genuinely user-friendly.
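
As a toy illustration of the general technique rather than DNEG’s actual system, such retrieval typically pairs standardized labels with learned feature embeddings and ranks assets by visual similarity:

```python
import numpy as np

# Toy index: asset id -> (standardized labels, ML feature embedding).
# A real system would use a trained vision model and a vector database.
index = {
    "chair_042": ({"furniture", "wood"}, np.array([0.9, 0.1, 0.3])),
    "lamp_007":  ({"lighting", "metal"}, np.array([0.2, 0.8, 0.5])),
}

def search(query_labels: set, query_vec: np.ndarray, top_k: int = 5):
    """Filter by standardized labels, then rank by cosine similarity."""
    scored = []
    for asset_id, (labels, vec) in index.items():
        if query_labels and not query_labels & labels:
            continue  # no label overlap: skip this asset
        sim = vec @ query_vec / (np.linalg.norm(vec) * np.linalg.norm(query_vec))
        scored.append((sim, asset_id))
    return [asset_id for _, asset_id in sorted(scored, reverse=True)[:top_k]]

print(search({"furniture"}, np.array([1.0, 0.0, 0.2])))  # -> ['chair_042']
```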

Use Cases:

At DigiPro, Jonas Trottnow of Filmakademie outlined the use cases, and especially the overhead of having to prepare assets upfront before the shoot. Ideally these assets flow from on-set use through to final use, so they often require different levels of detail (LODs) and rely heavily on Foundry’s work in precise versioning. As with the other areas, Filmakademie wanted to support both big and small use cases. To help with this workflow for production research, Filmakademie brought in its “VPET” – Virtual Production Editing Tools – a system previously started by the R&D team at Filmakademie Baden-Württemberg.

What makes VPET so interesting is its ease of use on set. It is an open-source editing tool that allows live on-set editing of environments and set elements from a tablet. It integrates seamlessly into the production pipeline and even supports AR features, so teams on set can collaborate and interactively adjust the virtual world that extends the on-set footage, helping to build a more integrated world. Helzle doesn’t know of any other open-source solution that addresses these issues for virtual production: “There is Omniverse from NVIDIA that sort of does something similar, but it is not the same and our system is a completely open structure… and it does not extend to our work in procedural character animation”.

Since the VPET tablet client is used as a remote control for the 3D scene, visual quality on the tablet is not a key target. VPET allows different versions of the same asset to be used on a VPET tablet and on the LED wall of a virtual production stage. This means a user can interact with a lightweight real-time asset on the tablet while all updates are applied to the high-quality version on the set; the tablet simply provides a working preview of the final look. This requires multiple LODs, and the system is designed to deliver them: a USD scene of several assets at various LODs can be automatically generated, composed, and delivered.
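
A minimal sketch of how per-asset LODs can be packaged with a USD variant set, so that the tablet and the LED wall select different resolutions of the same asset (the file names and variant-set name are assumptions; the talk does not publish SAUCE’s exact layer layout):

```python
from pxr import Usd

stage = Usd.Stage.CreateNew("asset_lods.usda")
asset = stage.DefinePrim("/Asset", "Xform")

# One variant per level of detail; each variant references a payload layer.
lods = {"high": "asset_high.usd", "preview": "asset_preview.usd"}
vset = asset.GetVariantSets().AddVariantSet("LOD")
for name, layer in lods.items():
    vset.AddVariant(name)
    vset.SetVariantSelection(name)
    with vset.GetVariantEditContext():
        stage.DefinePrim("/Asset/Geom").GetReferences().AddReference(layer)

# The tablet client would select "preview"; the LED wall selects "high".
vset.SetVariantSelection("preview")
stage.GetRootLayer().Save()
```

Because both clients reference the same prim and only the variant selection differs, an edit made on the tablet addresses the same asset identity that the wall is rendering at full quality.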

The system works with metadata and labels from the asset library to simplify set dressing, and even allows editing them on set via the tablet. Since the system is integrated, as the world is updated the scene-aware procedural animation automatically updates. Additional animation direction can also be input via the same tablet. The team also added ‘mood regions’: via machine learning, the procedural animation blends to different behaviors as autonomous agents drift into, say, a ‘scary’ building or region. “If a character enters such a region the animation engine can then automatically adjust the animation – which can be used for on-set virtual production but also for pre-viz”, explained Trottnow. The ML network learned how humans move in different styles (e.g. running, walking, sad, happy…). “This means a character could react if he walks by a TV, he would look at the TV if it is on since the behavior is triggered by the prop asset, ’cause that is what people do”, explains Helzle. This can then be combined with a spline walking path defined in VPET, allowing an arbitrary human character in a crowd simulation to be animated using the learned animation. The training data for the ML algorithms was also released publicly as high-quality optical motion capture.

VPET has an open character streaming protocol that streams entire characters (including weights, skeleton, etc.) at run time. An arbitrary external animation solving engine can animate a character through streamed bone animations. On set, high-level commands such as ’Go there’ or ’Run’ can be issued from the VPET tablet to drive a character.
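
A sketch of the kind of messages such a run-time protocol might carry (the message layout and field names are assumptions for illustration, not VPET’s actual wire format):

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class BonePose:
    bone: str
    rotation: tuple     # quaternion (x, y, z, w) from the external solver
    translation: tuple  # bone translation in character space


@dataclass
class CharacterCommand:
    character_id: str
    command: str        # high-level directive: "GoTo", "Run", ...
    target: tuple       # e.g. a position picked on the VPET tablet


def encode(message) -> bytes:
    """Serialize a pose or command for streaming to the scene host."""
    return json.dumps(asdict(message)).encode("utf-8")


# The tablet sends high-level commands; the solver streams back bone poses.
print(encode(CharacterCommand("crowd_agent_12", "GoTo", (3.5, 0.0, -2.0))))
```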

For virtual production on an LED stage, the SAUCE system provides a database-coherent way of feeding the LED scenes while the director modifies the environment and animation in real time, all as part of an end-to-end asset storage and management system. SAUCE has an extensible framework for both classifying and enriching production assets, and Filmakademie has proven it to be extremely effective in a range of productions.

Helzle points out that SAUCE as a project extends beyond the points covered in the DigiPro presentation, specifically the work exploring “crowds, Lightfield compression, camera calibration,.. and SAUCE is ongoing until the end of the year, there is more coming up.”

The team’s vision dates back to SIGGRAPH Asia 2016. The majority of virtual production pipelines are proprietary and mostly apply to large-scale productions; through SAUCE and tools such as VPET, it is now feasible to realize virtual-production-like scenarios at much lower cost. For instance, Foundry’s Katana can be used to define the look of a shot and, by using VPET, a collaborative session can be established where all the creative teams interactively edit the content.

The SAUCE team seeks to establish new production processes that enhance creativity by creating collaborative environments with reduced post-production cycles and strong scope for the reuse and intelligent management of assets, thanks to the work of companies such as DNEG. VPET is released under an open-source license, allowing any team to take advantage of collaboration in virtual production environments.