Disguising Virtual Production Rendering

As we reported in 2020, disguise is a platform for creatives and technologists to imagine, create and deliver spectacular visual experiences. The company has expanded its focus from primarily live events, such as large-scale concerts, to complex on-set projection projects on films such as Solo: A Star Wars Story, and now to LED soundstage volumes.

As LED stages have grown in size, the technology has had to develop with them. In its latest advance, disguise is building on Epic Games' Unreal Engine nDisplay work to offer cluster rendering. The company is also actively facilitating the latest trend towards multi-volume display setups, such as those fxguide reported ILM has separately been using on season 2 of The Mandalorian.

Cluster renders

disguise’s cluster rendering decouples rendering from the physical screen connectivity of LED volumes, and it works in concert with Epic’s nDisplay. In fact, cluster rendering was partly developed thanks to an Epic Games MegaGrant. The system allows the real-time engine configuration, synchronization, content distribution, and server management for the LED displays to be offloaded to disguise software. This allows LED stages to scale, as the scene on the screen can now be split several ways over multiple render systems.

Cluster rendering puts a separation layer between rendering and technical delivery. It allows teams to cut their LED stage content up into multiple slices, distributing the workload across multiple machines. All slices are then delivered to the LED screens as one coherent piece, without introducing awkward latency issues.

Clustered rendering can be split over multiple render systems by cutting the canvas, by object, or by plates. Clustering by object renders background objects on one node and foreground objects on a different node. The new disguise xR-only solution allows teams to divide the virtual world into any number of plates, which are then each rendered on a different node.
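As a rough illustration of the canvas-slicing idea (not disguise's or nDisplay's actual API), the sketch below divides an LED canvas into vertical strips and assigns each strip to a render node. The node names, the CanvasSlice structure, and the 7680-pixel canvas width are all hypothetical.

```python
# Minimal sketch of slicing an LED canvas into vertical strips, one per render node.
# RenderStream / nDisplay handle the real distribution; the names here are made up.
from dataclasses import dataclass

@dataclass
class CanvasSlice:
    node: str      # hypothetical render-node name assigned to this slice
    x_start: int   # first pixel column of the slice (inclusive)
    x_end: int     # last pixel column of the slice (exclusive)

def slice_canvas(canvas_width: int, nodes: list[str]) -> list[CanvasSlice]:
    """Divide the canvas into near-equal vertical strips, one per node."""
    per_node = canvas_width // len(nodes)
    slices = []
    for i, node in enumerate(nodes):
        x_start = i * per_node
        # the last node absorbs any remainder so the strips cover the full canvas
        x_end = canvas_width if i == len(nodes) - 1 else x_start + per_node
        slices.append(CanvasSlice(node, x_start, x_end))
    return slices

if __name__ == "__main__":
    for s in slice_canvas(7680, ["rx-01", "rx-02", "rx-03"]):
        print(f"{s.node}: columns {s.x_start}-{s.x_end - 1}")
```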

The system provides near-linear scaling with render complexity, while still mapping content correctly to the actual studio space.

disguise aims to separate the rendering of content engines such as UE4 from the technical delivery of final pixels to the LED volume. By separating these two parts of the system in virtual production, disguise can scale them independently. Given the complexity and growing size of LED volumes, it is becoming important to scale up without exploding cost or complexity, or introducing sync and render problems by adding additional render points to the LED soundstage.

disguise does this by conceptualizing the two roles of creative content and LED render as independent parts of the system. “Our rx nodes perform the rendering for ‘channels’ of content, which can be a hero viewing frustum of a camera, an out-of-frustum render, or any additional passes needed for AR, lighting, etc. Those channels are rendered on the nodes, running UE4, and streamed over our RenderStream protocol into the disguise software,” explains disguise’s Peter Kirkup, who heads up Global Technical Solutions. The disguise rx system is the company’s dedicated system for hosting third-party render engines such as UE4.

As with its original live concert work, at the heart of the LED workflow is the company’s disguise Designer software, which manages how the whole LED stage is visually mapped, including any overhead LED or floor panels. Once a project arrives in disguise Designer and the tracking system has been calibrated, content can be mapped to all the video surfaces using the production team’s specific content mappings. The content mappings can reproject content from the camera’s frustum, but other techniques are also supported: spherical reprojection, pixel-accurate feed mapping, or direct mapping into the UVs of a UE4 object. These channels are then composited with any other content (such as plate playback) and the final LED pixels are calculated for the screens.

All the channels can be fed through the ACES pipeline and can have independent input (and output) transforms – so the UE4 render running in ACEScg could be composited with a PQ (Perceptual Quantization) plate shot on-site, and output as a DCI-P3 signal to the LED processing. As the system does the composite in linear space, users can set the input and output transforms per content layer.
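To make the per-layer transform idea concrete, here is a minimal sketch of compositing in linear light with an independent input transform per layer and a single output transform at the end. The transforms are simplified placeholders (a rough PQ decode and a plain 1/2.6 gamma standing in for a DCI-P3 output), not disguise's pipeline or the ACES reference implementations.

```python
# Illustrative only: each content layer carries its own input transform into a linear
# working space, the composite happens in linear light, and a single output transform
# prepares the signal for the LED processing. The transforms below are simplified
# placeholders, not disguise's pipeline or the ACES reference implementations.
import numpy as np

def pq_to_linear(img):
    """Rough SMPTE ST 2084 (PQ) decode, normalised so 100 nits maps to 1.0."""
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    p = np.power(np.clip(img, 0.0, 1.0), 1.0 / m2)
    nits = np.power(np.maximum(p - c1, 0.0) / (c2 - c3 * p), 1.0 / m1) * 10000.0
    return nits / 100.0

def already_linear(img):
    """The UE4 render is assumed to arrive already in the linear working space."""
    return img

def linear_to_output(img):
    """Placeholder output transform: a plain 1/2.6 gamma as a stand-in for DCI-P3."""
    return np.power(np.clip(img, 0.0, None), 1.0 / 2.6)

def over(fg_rgba, bg_rgb):
    """Standard 'over' composite, performed in linear light."""
    alpha = fg_rgba[..., 3:4]
    return fg_rgba[..., :3] * alpha + bg_rgb * (1.0 - alpha)

# Each layer is paired with its own input transform.
ue4_render = np.random.rand(4, 4, 4).astype(np.float32)   # RGBA, linear working space
pq_plate   = np.random.rand(4, 4, 3).astype(np.float32)   # RGB, PQ-encoded plate

led_signal = linear_to_output(over(already_linear(ue4_render), pq_to_linear(pq_plate)))
print(led_signal.shape)  # (4, 4, 3) final pixels headed for the LED processing
```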

The system can then drive the physical LED panels through a powerful multi-server output mapping system – meaning that segments of the LED screen can be connected to different disguise ‘actor’ systems, with all the synchronization taken care of by the company’s internal server-to-server communications (d3net) together with an external sync source. The wall then runs in sync with both the camera and the volumetric tracking system that maps the position of the camera.
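The sketch below shows the general shape of such an arrangement under a hypothetical three-machine layout: segments of the LED canvas are assigned to different output machines, and each derives the frame to present from the same shared sync pulse. The segment table, names, and frame_for_pulse function are illustrative and are not the d3net protocol.

```python
# Hypothetical three-machine output mapping: each 'actor' drives a segment of the LED
# canvas, and all of them derive the frame index from the same external sync pulse,
# so the wall presents as one coherent surface. None of this is the d3net protocol.
SEGMENTS = {
    "actor-01": (0, 2560),      # pixel columns driven by this machine
    "actor-02": (2560, 5120),
    "actor-03": (5120, 7680),
}

def frame_for_pulse(pulse_count: int, genlock_hz: float, content_fps: float) -> int:
    """Map a shared sync-pulse count to a content frame index (illustrative)."""
    return int(pulse_count * content_fps / genlock_hz)

pulse = 1200  # every actor sees the same pulse count via the external sync source
for actor, (x0, x1) in SEGMENTS.items():
    print(f"{actor} drives columns {x0}-{x1 - 1}, presenting frame "
          f"{frame_for_pulse(pulse, 24.0, 24.0)}")
```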


Overall, what this means is that disguise can run any size of LED volume using any number of render nodes, and the two can scale independently. Channels can be sliced up and rendered across a group of nodes using the UE4 nDisplay architecture. For example, the hero frustum of a camera could be allocated to three nodes to provide enough render power to hit the required frame rates and image quality, “enabling the creatives to ‘turn it up to 11’ with no need to scale back on content because there’s not enough power in a single GPU,” explains Kirkup. “If the creative team hits a limit on the GPU rendering, adding another render node to the system provides near-linear scaling of render power – whilst not affecting the technical delivery of pixels to the LED screens.”

The team at Framestore LA deployed the system on their Blink test project. (Credit: Framestore, XR Stage, AOIN, disguise, FourTwelve Films)

Multi-volume displays

As stages evolve, there is a new move towards multi-volume display stages. This allows actors to move between volumes rather than being encased inside a single circular set of LED screens. But as both complexity and the sheer number of screens expand, solutions such as disguise render clusters become vital to provide the required scale of performance.

It is relatively easy to get video output from the Unreal engine and push it to a set of LED panels. “When you start wrapping the LED volume around something, you expect the lighting, the reflections, and all the objects to correctly appear spatially in the right place,” explains Kirkup. “You can’t just do that so much anymore, because as you move, you’re not moving the camera in a plane and re-projecting onto complex surfaces. You’re now moving within volumes of volumetric space. disguise thinks of the studio space as a 3D volume and projects content into that volume according to the various frustums needed by production.” The system starts in a normal LED space with a pixel map, which represents the production needs. Camera frustums are then defined to represent the hero cameras, and additional render passes can be added to cover environmental lighting and out-of-frustum LED pixels. Without this, as the real camera moves it would not get the right light play and surface reflections across the physical elements in the scene.
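As a simplified illustration of the frustum idea, the sketch below tests whether the 3D position of an LED pixel in the studio volume falls inside a hero camera's frustum – the kind of decision that determines whether a pixel shows the camera's reprojected view or is fed by the environment/out-of-frustum pass. This is a generic symmetric frustum test, not disguise's mapping code.

```python
# Generic symmetric-frustum test, purely illustrative: does the world-space position of
# an LED pixel fall inside the tracked hero camera's frustum? Pixels inside would show
# the camera's reprojected view; pixels outside would be fed by the environment pass.
import numpy as np

def in_frustum(pixel_pos, cam_pos, cam_forward, h_fov_deg, v_fov_deg):
    """Return True if a world-space point lies inside a simple symmetric frustum.
    Assumes world 'up' is +Z and the camera is not pointing straight up or down."""
    fwd = np.asarray(cam_forward, dtype=float)
    fwd /= np.linalg.norm(fwd)
    to_pixel = np.asarray(pixel_pos, dtype=float) - np.asarray(cam_pos, dtype=float)
    depth = float(np.dot(to_pixel, fwd))
    if depth <= 0.0:
        return False  # behind the camera
    right = np.cross(fwd, [0.0, 0.0, 1.0])
    right /= np.linalg.norm(right)
    up = np.cross(right, fwd)
    x, y = float(np.dot(to_pixel, right)), float(np.dot(to_pixel, up))
    return (abs(x) <= depth * np.tan(np.radians(h_fov_deg / 2.0)) and
            abs(y) <= depth * np.tan(np.radians(v_fov_deg / 2.0)))

# An LED pixel 4 m in front of a camera at lens height, slightly off to one side:
print(in_frustum([4.0, 0.5, 1.8], [0.0, 0.0, 1.8], [1.0, 0.0, 0.0], 40.0, 22.5))  # True
```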

LED panels are not the only light sources used in an LED volume. The disguise system also incorporates DMX control of the non-LED studio lighting. This, combined with correct spatial reconstruction inside disguise’s spatial mapping system, is key to LED soundstage volumes becoming a truly immersive space. “We’re not talking about the technical implementation too widely, just yet,” comments Ed Plowman, CTO at disguise. “But we are trying to merge the lighting into one. A volumetric representation is not a straight mapping of the UE planes to the panels. A volumetric mapping uses environmental mapping techniques, and that’s the precursor to being fully able to track the light from the virtual environment onto the stage and match physical to virtual lighting and virtual to physical lighting.” From its extensive concert work, disguise has done a lot with DMX and lighting control systems. “We understand how to control the lights, where they are, what they’re doing, what the settings are, as we have done so much work previously on lighting visualization,” he adds. “This allows us to address how to compensate for physical lights in the virtual space.”

Multi-volume display setups in a studio fall naturally out of this advanced model. “From disguise’s perspective, we don’t treat it as one volume or some other type of volume, they are all just screen surfaces that are being mapped based on whatever the camera sees,” comments Kirkup. While the multiple volumes need to be accurate in volumetric lighting, multiple screens viewed from different angles are at the core of disguise’s company DNA. “And I think that’s why this has become really exciting for disguise,” Kirkup adds. “We feel that we’ve really got something very unique in our understanding of how these things can be manipulated and interact.” Tom Rockhill, CSO at disguise, adds that not only does the company understand the stage from a mathematical perspective, it also knows from its live concert work “the need to get something right. Because you’ve got talent and everything set at that moment, and you don’t have time to go start messing around with the video files. That’s why we again think that migrating this workflow more into film production is a natural and valuable step for the company.”

Not all LED screens are created equal

Naturally, there are many screens currently being installed, and some are effectively just rough live or dynamic backgrounds that may or may not be accurate. For example, an LED wall behind a car for a ‘through the window’ shot. In these cases, if it looks OK it probably is, but disguise offers a far greater level of accuracy and colorimetry confidence, giving a DOP a precise reproduction tool. To do this, however, there are a host of issues to be addressed.

LED panels are actually well suited to handling HDR, comments Plowman, but he also points out that getting LED capture volumes to display correctly requires careful calibration of the whole pipeline. “Basically, I’ve got three separate color representation systems that I have to tie together to get a stable output,” he explains:

  1. the colorspace of the render,
  2. the camera’s colorimetry (everything including the dyes in the camera’s CMOS chip) and
  3. the LED processors, which control and modify the LEDs on the back of the panels themselves.

This last aspect has often been overlooked, points out Plowman, but it is why there has been a rise of companies producing specialist LED processors such as Brompton Technologies and Megapixel VR (Helios). “The biggest problem we have at the moment is not getting LED panels that were designed for this purpose. You’re getting LEDs that were originally designed to be an exotic giant TV set. The original mainstay for these things was building giant jumbotrons.” LED panels may be said to be exact in their color reproduction, but this assumes averaging. In other words, some LEDs might be up in value and some down, but the final aggregate is to spec. This may not work well in cases where only part of a screen is visible. It also means there can be real spikes in the spectral response curve over the range of visible light frequencies. These spikes, again, are assumed to cancel each other out and average to the manufacturer’s specification.

If that were not enough, LEDs were also designed to be viewed at a fair distance, and primarily from directly in front of the screen. Many current calibration systems for LED stages also align their color spaces by pointing the actual camera directly at a section of the screen and then resolving a uniform color representation. What disguise is doing for its volumetric calibration is calibrating and adjusting for the angle of the camera to the screens. This is far more important than it might first sound. If the LEDs in a panel are not all perfectly flat, if say the red LED sits slightly higher in the plastic mold, then even with the LED technically displaying flat, uniform white, as the camera moves to a glancing angle on the screens the color will tint and look pinkish. disguise solves for the actual observation point of the camera relative to the beam coming out of the light source – the LED itself. The term Volumetric Calibration is much more than a marketing term; it accurately reflects a calibration for where the camera is and its precise angle to the screens, even in complex and multi-volume LED stages.
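A toy example of what angle-dependent correction could look like: compute the angle between the camera's line of sight and the panel normal, then interpolate a per-primary gain measured at a few off-axis angles and invert it. The gain curves, positions, and functions below are invented purely for illustration and are not disguise's calibration data or method.

```python
# Invented example of an angle-dependent correction: interpolate per-primary gains
# measured at a few off-axis angles, then invert them so a panel viewed at a glancing
# angle no longer tints. The gain numbers are fabricated purely for illustration.
import numpy as np

ANGLES = np.array([0.0, 30.0, 60.0, 80.0])        # degrees off the panel normal
GAINS = {
    "r": np.array([1.00, 0.99, 0.95, 0.88]),       # e.g. red emitters sit slightly proud
    "g": np.array([1.00, 1.00, 0.98, 0.93]),
    "b": np.array([1.00, 0.99, 0.97, 0.91]),
}

def viewing_angle_deg(cam_pos, panel_pos, panel_normal):
    """Angle between the panel normal and the direction from the panel to the camera."""
    to_cam = np.asarray(cam_pos, dtype=float) - np.asarray(panel_pos, dtype=float)
    n = np.asarray(panel_normal, dtype=float)
    cos_a = np.dot(to_cam, n) / (np.linalg.norm(to_cam) * np.linalg.norm(n))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

def angle_correction(angle_deg):
    """Per-primary multipliers that flatten the panel's off-axis tint."""
    return np.array([1.0 / np.interp(angle_deg, ANGLES, GAINS[c]) for c in "rgb"])

angle = viewing_angle_deg([3.0, 2.0, 1.8], [0.0, 0.0, 2.0], [1.0, 0.0, 0.0])
print(f"viewing angle: {angle:.1f} deg, RGB correction: {angle_correction(angle)}")
```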

Even pitch varies

The most common question regarding any new LED stage is the pitch of the LED screens. A smaller pitch means the LEDs are denser, there is less chance of moiré, and the screens are generally more expensive. Ed Plowman points out that even with this most universal of metrics for LED stages, things are not always straightforward.

The pitch of an LED screen reflects the density of the LEDs, as it indicates the separation between them. While this may seem unambiguous, Plowman explains that “density pitch is actually measured as the pitch of the packaged devices on the sub-assembly, not how the actual LEDs are arranged.” The quoted pitch figure is measured between the outsides of the LED packages, not the LEDs themselves. “If you’ve got a sparse field of LEDs in packaging, you could have a ‘two-millimeter’ pitch LED wall but get a drastically different output.” In other words, a production may think it is getting a standard 2.84mm pixel pitch LED wall, but if the panel maker is using inferior LED assemblies, it will look more like a 3mm+ pitch screen in reality. “When people think of an LED, they think of a thing in a plastic bubble on the end of two wire connectors; you apply a current and it lights up. The LEDs in virtual production are embedded inside a package. They are sold as a sub-assembly, which creates the panel. Pitch density is actually measuring the density of the LED package with a lens in front of it, not the density of the LEDs inside and their arrangement.”
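A back-of-envelope sketch of why the quoted pitch can mislead: pitch fixes how many pixel packages fit per meter, but says nothing about how the emitters fill each package. The fill factor and 'effective pitch' heuristic below are entirely hypothetical and not a standard metric; they exist purely to illustrate the point.

```python
# Back-of-envelope only: pitch fixes how many pixel packages fit per meter, not how
# the emitters fill each package. The fill factor and 'effective pitch' heuristic
# below are entirely hypothetical, just to show why two '2 mm' walls can differ.
def pixels_per_meter(pitch_mm: float) -> float:
    return 1000.0 / pitch_mm

for pitch in (2.0, 2.84, 3.0):
    print(f"{pitch:.2f} mm pitch -> {pixels_per_meter(pitch):.0f} pixels per meter")

fill_factor = 0.44                            # hypothetical emitter fill per package
effective_pitch = 2.0 / (fill_factor ** 0.5)  # crude heuristic, not a standard metric
print(f"a nominal 2.0 mm wall at {fill_factor:.0%} fill reads closer to {effective_pitch:.1f} mm")
```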

Tom Rockhill believes there is a massive disparity between various LED stages. “There’s definitely a big sliding scale of expertise – and just ambition, quite frankly,” he comments. “Some people are just happy to use LED panels to capture reflections on a car and are just not building precise calibrated systems.” One issue with those simpler stages is a possible lack of consistency. There can be odd color fall-off, as there is no detailed color management, and this can make it very hard for VFX crews to match and reproduce elements exactly. In some respects, parallels can be drawn with green screens: while VFX post teams will normally find a way to finish a shot from a poor green screen, taking more care on set can dramatically affect schedules and the ease of achieving the creative vision of the DOP and Director. disguise aims to provide reliability and incredibly high confidence about what is being shot when working on real-time LED stages.

 

Update:

The Carlyle Group & Epic Games take major interests in disguise.

 

disguise announced last week that it has new backing from The Carlyle Group, which will allow the company to enter a new phase of growth. The company statement said that “disguise has experienced tremendous growth over the past five years. It has led the drive for immersive and spectacular concerts and live events, become a market leader in xR (Extended Reality) and a key player in Virtual Production for Film and Television. In the past year, over 150 disguise xR stages have been built in more than 35 countries and disguise xR has powered more than 200 events.”

disguise will now be supported by The Carlyle Group, one of the world’s most diversified global investment firms with $246 billion of assets under management, which will take a majority stake in the company, and by Epic Games, which takes a minority stake. In the press release, the company confirmed that “led by CEO Fernando Kufer, disguise employees will also retain a significant minority share, with Founder Ash Nehru still the largest shareholder amongst them. Existing Private Equity partner Livingbridge will also reinvest in the transaction.”

“The virtual production market is forecast to grow substantially, and we believe that disguise is uniquely placed to benefit from the accelerated demand for LED-based visual experiences and capture further market share. In partnering with the disguise management team, we will look to leverage our significant expertise in scaling media technology companies as we support the company to become a global leader,” said Michael Wand, Managing Director and Co-Head of the Carlyle Europe Technology advisory team.

Epic Games will take a minority stake in disguise, enhancing an already strong partnership between the two companies. Production companies and broadcasters are embracing the benefits of virtual production techniques, built on photorealistic real-time graphics engines such as Epic’s Unreal Engine, and LED infrastructure for immersing presenters and performers in virtual environments.