The State of Rendering – Part 2

In Part 1 of The State of Rendering, we looked at the latest trends in the visual effects industry including the move to physically plausible shading and lighting. Part 2 explores the major players in the current VFX and animation rendering markets and also looks at the future of rendering tech.

There is more about rendering at www.fxphd.com this term.

There are many renderers, of course, but we have focused below on the primary renderers that have come up during the last 18 months of writing fxguide stories. It is not an exact science, but fxguide has a ringside seat on the industry and the list below covers the majority of key visual effects and non-exclusive in-house animation renderers. We have excluded gaming engines and many fine non-VFX applications.

The order is not in terms of market share – in reality the 3ds Max default renderer or Mental Ray would swamp many others due to the market share of Autodesk with Max, Maya and XSI. But the order does indicate a subjective rough grouping based on feedback from major studios and artists around the world.


2. Major players

2.1 RenderMan – Pixar
2.2 Arnold – Solid Angle
2.3 V-Ray – Chaos Group
2.4 Maxwell Render – Next Limit
2.5 Mantra – Side Effects Software
2.6 CINEMA 4D – Maxon
2.7 Modo – The Foundry
2.8 LightWave – NewTek
2.9 Mental Ray – Nvidia
2.10 3Delight – DNA Research
2.11 FinalRender – Cebas
2.12 Octane – Otoy
2.13 Clarisse iFX
2.14 Lagoa


2.1 RenderMan – Pixar

fxguide will soon be publishing a piece on the 25th anniversary of RenderMan. In that story, we look back with contributions and interviews from Ed Catmull, Loren Carpenter and Rob Cook. Each is now a senior manager or research fellow at Disney/Pixar, but they were also founding members of the team that developed RenderMan and defined a specification so far-reaching that Pixar's PRMan implementation can fairly be called the greatest renderer in the brief history of computer graphics. No other renderer has contributed so much, been used so widely or for so long, or been responsible for so much creative success, as seen in the near endless stream of VFX Oscar-winning films that have used it.

Image courtesy of Dylan Sisson, Pixar. The left image uses image based lighting, with geometric area lights used for the right image. The same shaders are used for both images. Environment lights are used in both images, with one bounce color bleeding. On the right, the neon tubes are emissive geometry.

In those interviews and podcasts you can hear first hand about the evolution of the product and spec, but you will also hear about the leadership of Dana Batali. While RenderMan has many contributors and excellent researchers, Ed Catmull, President of Disney and Pixar, points out that one thing that has always been true behind the scenes and screens of RenderMan has been the lack of committee thinking. Of the early days, Catmull recalls: "we had Pat Hanrahan as the lead architect on the design of RenderMan, and Pat is a remarkable person. I set up the structure so Pat made all the final calls, at the same time we involved as many companies as we could, 19 if I recall…and of those 6 or 7 were really heavy participants, but that being said, we gave the complete authority to make the final choice to a single person. And I think that was part of the success – that it has the integrity of the architecture that comes from a single person, while listening to everyone else."

Today there is also one man responsible for guiding the product: Dana Batali, VP of RenderMan Products at Pixar. Ed Catmull explains: "The way it has developed is that we have given Dana a free hand in how the product develops, it isn't as if he comes to me and says is it OK for us to put the following features in – he never asks. The charter is that he is meant to respond to what is needed. We set it up so they make changes to what is needed, they never ask me what should go in – they just do what the right thing is, and we have been doing that for many years." In this respect today there is still one person with a single vision of what should be developed for RenderMan's worldwide clients, including Disney/Pixar. "Yes, that is the set up and one I believe strongly in," reinforces Catmull.

Dana Batali in turn sees his role as simply focusing the intense collaboration of the incredible team of scientists and researchers inside Pixar's RenderMan development team based in Seattle. There is no doubt that team is exceptional, something easily judged by the volume of papers and published articles that have flowed from the team since its inception, much of it published at SIGGRAPH, as they will do again next week.

fxguide has recently covered the advances in RenderMan’s use in Monsters University and the move to ray tracing with physically based shading and lighting, so for this article we decided to get very technical on the implications and implementations of the ray tracing framework in the current release and the upcoming new RPS18 release with Dana Batali.

Art from Monsters University.
Background: traditional approach

Some background: as RenderMan's own notes point out, CGI rendering software approximates the rendering equation. This equation models the interaction of shape, materials, atmosphere, and light, and, like many physics-based formulations, takes the form of a complex multidimensional integral. The form of the equation is such that it can only be practically approximated. This is accomplished by applying generic numerical integration techniques to produce a solution. The goal of rendering algorithm R&D is to produce alternate formulations of the equation that offer computational or creative advantages over previous formulations.
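For reference, in its common hemispherical form (a standard textbook statement of the equation, not quoted from the RenderMan notes), the rendering equation is:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i

The outgoing radiance at a surface point x is its emitted radiance plus an integral, over the hemisphere of incoming directions, of the material response f_r times the incoming radiance. It is this integral that the various integrators discussed below approximate numerically.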

At the heart of the rendering equation is the description of how light is scattered when it arrives at a surface. The fundamental question is: what light is scattered to the eye at this point in the world? The portion of a rendering program focused on solving this problem can be called the surface shader, or the material integrator. In RenderMan, the canonical or idealized surface shader has traditionally been decomposed into a summation of the results of several independent integrators. In 1990, they represented a "theoretically perfect" material as:

Ci = Ka*ambient() + Kd*diffuse(N) + Ks*specular(I, N, roughness)

Additional terms were later added to simulate incandescence and translucence, but, fundamentally, the simplicity and approximate nature of this formulation was driven by the computational resources available at the time.

In 1987 the RenderMan Shading Language (RSL) was introduced. Over the next few years, until about 2002, RSL evolved to include new techniques: deep shadows, "magic lights", more elaborate proceduralism.

From 2002 to 2005 there were great advances, but this second stage was very much a series of complex new approaches from different areas rather than one unified trend. For example, during this period ray tracing was added: gather(), transmission() and indirectdiffuse() all extended the collection of "built-in integrators" (e.g. diffuse(), specular()). Point-based subsurface scattering was also added, which was a huge advance. From 2002 until the present, new custom shaders have implemented things such as area lights with ray traced shadows, much of this riding on the back of two key aspects:

  1. Techniques such as ray tracing became more affordable due to Moore's Law.
  2. Memory continued to grow, making the Reyes scene-memory approach less vital for complex scenes.

By 2005, armed with significantly more computational resources, the RenderMan team could afford ever more accurate approximations to the physical laws. Rather than start afresh, new terms were added, evolving into a morass of interoperating illuminance loops, gather blocks, light categories, and indirectdiffuse calls. Moreover, many of these additions were point-based solutions that relied on pre-baked data, which in turn made rendering pipelines more complex and therefore difficult to maintain and comprehend.

The third phase of "illumination-related technology" is the move toward much purer ray tracing solutions: for example, in 2011 pure ray traced subsurface scattering was added, along with the introduction of the "raytrace hider".

Today, with faster multi-threaded computers, the new approach documented above is expanding daily. There is a growing school of proponents of the idea that physics-based (ray-traced) rendering is now the most efficient way to produce CGI. The argument is that it is cheaper for a computer to produce physically plausible images automatically than it is for lighting artists to mimic the physical effects with cheaper (i.e. non ray-traced) integrators. With RPS 18 (being shown at SIGGRAPH 2013), there is support for separation of the integrator from the material, and a streamlined, fast, pure C++ shading environment augmented by built-in advanced GI integration technology (bidirectional path tracing with vertex merging).

In this latest phase, Pixar felt that the time had arrived to embrace geometric sources of illumination (i.e. area lights) and to jettison the venerable, but entirely non-physical, point light source. Once the new, more affordable area lights enter the picture, things change; prior to this, using area lights and other complex new lights was expensive. This is largely due to the fact that the shadows cast by area lights are expensive to compute. Add to that HDR IBL (high dynamic range, image-based lighting) and the previous generation of RSL shaders had been pushed past a limit.

What was needed was new integration support from the renderer.


New ray tracing and physically based methods

The new RenderMan approach is not replacing the old but offering an alternative. The new beguilingly simple characterization of a material integrator is now:

public void lighting(output color Ci)
{
  Ci = directlighting(material, lights) +
       indirectdiffuse(material) +
       indirectspecular(material);
}

The surface shader’s lighting method is where Pixar integrates the various lighting effects present in the scene. To accomplish physically plausible shading under area lights, Pixar has combined the diffuse and specular integrators into the new directlighting integrator. Like the earlier work, the new integrator is only concerned with light that can directly impinge upon the surface. Unlike that earlier work, the combined integrator offers computational advantages since certain computations can be shared. And to support the physical notion that the specular response associated with indirect light transport paths should match the response to direct lighting, Pixar introduced a new indirectspecular integrator. By moving the area light integration logic into the renderer they made it possible for RSL developers to eliminate many lines of illuminance loops and gather blocks.

All of this was seen most recently when key members of the Pixar team such as Christophe Hery and others such as Jean-Claude Kalache implemented physically based lighting and shading inside RenderMan using ray tracing. (Read our article on MU).

(* select notes from RPS18 reproduced with permission).


Dana Batali, VP of RenderMan products at Pixar

fxg: As a hybrid renderer, how do the older Reyes/RSL/illuminance loops play alongside the new ray tracing/physically based rendering and GI?

DB: First: the term hybrid primarily refers to the combination of Reyes with ray tracing. GI is usually used to refer to “color-bleeding” or more advanced light transport effects and so it might be wise to keep these notions separate. Reyes has excellent characteristics for motion-blur and displacement because it can compute these effects very efficiently, reusing results across many pixels on the screen. Reyes also offers significant advantages in memory efficiency since it can bring-to-life and send-to-death objects on an as-needed basis. Our hybrid architecture allows a site to choose those objects that are known to the ray tracing subsystem and therefore can allow the rendering of more complexity than a pure-ray tracing solution could in the same memory footprint. This memory advantage diminishes in proportion to the percentage of objects that need to be ray traced. And certainly we’re seeing a trend to higher percentages of ray-traced objects. But ray-traced hair & fur were beyond the memory budget for MU and RenderMan’s hybrid architecture was crucial in their ability to produce the film. So again, our hybrid renderer allows a site to select the “sweet-spot” that best matches their production requirements.

Now to the question of illuminance loops. All renderers break the solution of the rendering equation into direct-lighting and indirect-lighting. (indirect-lighting refers to reflections, color-bleeding, subsurface scattering, etc). The term "illuminance loop" refers to the traditional manner in which RSL represented the delivery of direct-lighting (aka "Local Illumination" or "LI"). And thus, it has little to do with GI. But what it does have to do with is plausibility. In the real world, all sources of direct illumination (aka luminaires, emissive objects, etc.) have non-zero area. A long-standing corner that CGI has cut constrains luminaires to perfect, mathematical point emitters. This corner-cutting is no longer tenable since it results in pictures that aren't sufficiently realistic (by 2013 standards). To address this issue we extended RSL to broaden the communication channel between light shaders and surface shaders. Moreover, we extended the built-in integration capabilities of the renderer with the introduction of the directlighting function. Christophe Hery's shaders (referring to the work done in Monsters University) heavily rely on the inter-shading communication capabilities of "RSL 2.0" but predate the maturation of RenderMan's built-in directlighting capability. Both systems rely on MIS (multiple-importance sampling) to reduce the grainy noise associated with luminaire sampling. For a fixed number of direct-lighting samples, it's a fundamental property that the noise increases with the size of the luminaires. The primary source of noise is the complex assortment of objects that reside between the source of illumination and a receiving surface. The efficient computation of shadows has been a central focus of CG research since its inception and area lights are *much* more expensive to compute shadows for.

RenderMan supports two means of computing area light shadows and both reside behind the RSL function areashadow. The simplest solution is to ray trace the shadows and this is the preferred solution as long as the shadow casting geometry can fit in memory. But with hundreds of hairy creatures in a Monsters University (MU) shot, all the shadow-casting geometry can't fit in memory. Our hybrid solution allows us to produce a "deep shadow map" in a Reyes-only (memory efficient) pre-pass that can be used by the areashadow function to produce realistic shadows during the beauty pass. RenderMan can combine the tracing of rays against real geometry with the evaluation of area shadow maps (which might only contain shadow information for hair) to produce a hybrid shadowing solution.

Finally, getting back to the topic of GI: with the widespread adoption of physically plausible (area) lights, it became feasible, even necessary, to consolidate the code and the parameters that control the integration of direct illumination with controls for indirect illumination. In practice this simply means: there should really be no difference between a reflection and a “specular highlight”. A photon arriving at a surface directly from a luminaire doesn’t behave any differently than a photon that arrived by a more indirect path. Prior to the consolidation, shaders would have two shader parameters to express the specular color and reflection color. While certainly offering lots of artistic control, this isn’t physically plausible. Only a single specular color should be needed. But the idea of taking away artistic controls can be very contentious and part of the significance of the success of “the GI efforts” on MU was the fact that a lot of lookdev and lighting artists had to be convinced that the benefits of physical plausibility outweigh the potential for artistic control that these traditional parameters represented. Christophe is a driving force of this message at Pixar.


Above: watch a Monsters University progression reel.


fxg: It can be confusing when talking about ray tracing as Pixar has used forms of ray tracing for many years, but the new workflow is towards a more unbiased pure form of ray tracing, is that true?

DB: Yes, there has been a "raytrace hider" in RenderMan since RPS 16. There are still benefits delivered by the hybrid architecture in the form of the radiosity cache where various partial integration results can be reused. But there are controls to completely disable these features and cause RenderMan to operate as a simple, pure ray tracing engine. In RPS 18, we extended the ray trace hider to support path tracing since it offers a better interactive experience (during relighting) than its cousin, "distribution ray tracing". We've found that path tracing has other complexity management advantages over distribution ray tracing insofar as it's easier to understand and manage a ray count budget. In another application of the term "hybrid", we actually commonly run RenderMan in a combination of distribution and path-tracing modes, favoring the former for indirect diffuse integration and the latter for indirect-specular integration.

fxg: GI is not new to Pixar, but the techniques in MU were?

DB: GI has been present in PRMan for many years. Part of the message here is that computers are now fast enough that it’s becoming tractable to broadly deploy GI. And certainly advancements like the radiosity cache make GI even more feasible. But the new thing is the trend to unify the indirect and direct integration frameworks and this has been a substantial effort that will continue for the foreseeable future. On the production side, HDRI, tonemapping, exposure, AOVs etc are all components that must agree. In a larger production studio, there are numerous plumbing challenges. At Pixar, many new Slim templates needed to be developed around the core technology. New features to Slim were added to support the interplay between co-shaders and lights in Christophe’s shading system. New hotspots needed to be optimized since the production was pushing on things in new ways. RenderMan bugs needed fixing. And several new features were added to RenderMan to facilitate some of the plumbing changes.

fxg: When does geometry get diced, and how long does it stay in memory? How do you exploit coherence?

DB: Since 2002, RenderMan's ray tracing subsystem has supported a multiresolution tessellation cache. The idea is that the ray has a cross section (the ray differential) that is implied by the light transport path. As rays bounce around in a scene, ray differentials typically grow and we exploit that observation by caching the curved-surface tessellation at different levels of detail. GI is fundamentally incoherent and this is bad news for memory access coherence. The good news is that the broader ray differentials allow us to fudge the intersection and significantly reduce the memory thrash between diffuse and specular ray hits. Another parameter that can improve coherence is "maxdist". Each ray carries with it a maximum distance that it can be traced. In old-school occlusion renders, setting a reasonable maxdist value on rays would ensure that rays launched from one side of a scene couldn't cause hits on the other side. Coupled with RenderMan's geometry unloading feature, this was a valuable tool to exploit coherence. This approach is less viable in a more plausibly-lit setting since there's often no reasonable maxdist you can choose due to the variability of light locations and intensities.
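To illustrate the idea of a level of detail keyed off the ray differential – purely a conceptual sketch, not PRMan's actual cache logic, and the level mapping is our own invention – picking a tessellation level from the ray footprint might look like this:

import math

def tessellation_level(ray_footprint, finest_edge_length, max_level):
    """Pick a tessellation level of detail for a surface patch.

    Illustrative only: a wide ray differential (large footprint) can safely
    use a coarser tessellation, which is what lets incoherent diffuse bounce
    rays share cache entries instead of thrashing memory.
    """
    if ray_footprint <= finest_edge_length:
        return max_level  # camera rays and sharp reflections get full detail
    # assume each coarser level roughly doubles the edge length it represents
    drop = int(math.log2(ray_footprint / finest_edge_length))
    return max(0, max_level - drop)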

fxg: Can you talk more about the MIS? And the balance between the peaks in the BRDF vs the hot spots in the scene? How do you go about approaching that? Is that something deep inside PRman, or is it exposed for shader writers, like if the shader writer wanted to do say Metropolis sampling?

DB: MIS is nothing more than an unbiased means to weigh samples from different distributions. As a developer of a shader, the most common distributions to sample are the BSDF and the lighting. Metropolis sampling isn't relevant in the context of a RenderMan shader since it operates in path space, and the space of paths is a broader notion than anything a single shader is responsible for.
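For reference, the standard MIS balance heuristic (a textbook formulation, not anything specific to PRMan's implementation) weights a sample x drawn from sampling technique i, out of several techniques that each take n_k samples from a pdf p_k, as:

w_i(x) = \frac{n_i \, p_i(x)}{\sum_k n_k \, p_k(x)}

Weighting each technique's contribution this way keeps the combined estimator unbiased while suppressing the variance spikes that occur when one technique alone (say, light sampling on a very glossy surface) has a poor pdf for the integrand.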

fxg: Some strengths of the Reyes rendering approach are displacement and motion blur, especially together. A few years ago, it felt like as soon as you turned on the ray tracing, you pretty much lost these. How does the new renderer approach these?

DB: Generally it is the case that displacement and motion blur are more expensive to solve with ray tracing than with Reyes. As more rays become affordable, the importance of the performance of these subsystems increases and with that goes more development investment.

Monsters University.

fxg: How do you do soft shadow motion blurred fur without your renderfarm exploding? What tools are available to keep the memory under 20 Gig?

DB: It’s my understanding that that was accomplished with RenderMan’s area shadow maps.

fxg: We got an additional comment from Christophe Hery to expand on the area shadow maps

Christophe Hery: We did use area shadow maps on hair and on “some” heavy scenes (crowds). We also used tricks, such as different hair densities in shadows than in camera contexts. But most (non-hair) shadows were actually ray traced. So the 20Gb memory limit was managed by people in rendering, optimizing by hand some of the knobs on the lights, for instance the number of samples.

fxg: What are the limits of progressive rendering? It sounds like they might not be able to do subsurface, if they're still baking point clouds. But what other limits are there?

DB: RenderMan 17 offered support for progressive rendering of all indirect illumination effects including subsurface. This is great for interactivity but may be substantially slower and less controllable than point clouds. As usual, tradeoffs abound and what's right for one show or studio may not be right for another.

fxg: Thank you for your time.


 

Photon beams for caustics – these images courtesy Per H. Christensen, Pixar.
Photon beams for caustics – final render in RenderMan.

A great example of the flexibility of RenderMan is the way it is used by Weta Digital for extremely large renders, splitting the rendering problem with PantaRay, a GPU pre-render that computes and stores spherical harmonics ahead of the final RenderMan pass. This robust pre-render is very different from the point cloud pre-renders outlined elsewhere in this document and you can read about it here in our fxguide story.

The new approach offered by RenderMan still needs to address the same issue all ray tracers do, which is memory limitations, but for most production shots (perhaps not Weta level) the one-pass approach offers greater simplicity of lighting control coupled with incredible realism. Moving from very complex shaders to having the renderer understand things such as BxDFs and geometric area lights results in a much better and cleaner rendering model. To some extent the older rendering model has been viable, with some ray tracing doing a 'what's the value at this hit' style of integration. But that is not powerful enough moving forward into worlds where much more complex integration techniques are at play. The shader programming models were not geared towards bidirectional or other integration techniques. Nor did they let the renderer help with complex problems (such as sampling geometric lights).

RenderMan still very much supports both models, but recently the team has worked hard to service the trend of heavier and heavier fully ray traced shots. By redoing the shaders and making the system easier to implement as a full energy-conserving, physically based lighting system, the RenderMan team under Dana Batali's leadership is hoping to secure a strong place in the next 25 years of computer graphics.


2.2 Arnold – Solid Angle

Much of the history of Solid Angle and the development of Arnold by its founder Marcos Fajardo was covered in our previous Art of Rendering piece.

Arnold is a path tracer that tries to solve ray tracing for film and media production as efficiently as possible, with as few tricks, hacks and workarounds from the end user as possible. "We are just trying to solve the radiance equation, on the fly, without doing any type of pre-computation or pre-passes," explains Fajardo. "So we just trace a lot of rays around and hope to get an accurate answer. The challenge is to design a system that is optimized so that it traces a relatively small number of rays for a given quality, and also the ray tracing needs to be very fast. That's what we do every day: we try and optimize the renderer with both mathematical processes to optimize the Monte Carlo equations and also to make the code very fast – so those two things – the speed of the rays and the number of the rays – that is what we work on every day."

Pacific Rim. This image rendered in Arnold by ILM. Courtesy Warner Bros. Pictures.

The number of rays greatly affects everything from image quality to render speed. In the video below we take a simple scene and demonstrate how the various adjustments affect the render. It is worth noting again that to halve the noise one needs to quadruple the number of ray samples.
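That square-root relationship is easy to verify with a toy Monte Carlo estimator (a self-contained sketch, nothing to do with Arnold's code): the standard deviation of the estimate falls by roughly half each time the sample count is quadrupled.

import random
import statistics

def estimate(num_samples):
    """Monte Carlo estimate of pi by sampling points in the unit square."""
    hits = sum(1 for _ in range(num_samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / num_samples

def noise(num_samples, trials=200):
    """Standard deviation of the estimate over many independent trials."""
    return statistics.stdev(estimate(num_samples) for _ in range(trials))

for n in (256, 1024, 4096):
    print(n, round(noise(n), 4))
# Each 4x increase in samples roughly halves the measured noise.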

Latest advances

Solid Angle has achieved remarkable success in producing incredibly powerful ray tracing that balances render time, memory management and image quality. It continues to grow and expand around the world.

Arnold remains an incredibly important product. Not only is it very fashionable and on most studios' render roster (or being evaluated for inclusion), but the company has a strong commitment to R&D and, like Pixar before them, is committed to sharing and publishing its work. As such the company is held in very high regard, and there is no doubt their focus on advances inside a production framework is yielding spectacular results that are still obtainable inside the budget constraints (time and money) of the real world.

Solid Angle's Arnold has several key advances; we highlight four here:

• SSS
• Major advances in MIS
• Multi-threading performance
• New volumetric rendering

Above: watch Arnold’s 2013 reel.

2.2.1 SSS

The new Arnold renderer does raytraced SSS, which does away with point clouds. This is a remarkable advance, and the early results are incredibly realistic.

The new Arnold SSS. Image courtesy of Digic Pictures / Ubisoft Entertainment.

Note: a material definition or BRDF becomes a BSSRDF (bidirectional scattering-surface reflectance distribution function) when considering SSS.

Some background first on the state of the art of SSS. Starting with a landmark paper by Jensen et al. (SIGGRAPH 2001: A Practical Model for Subsurface Light Transport), most subsurface scattering approaches have been approximations, normally based on dipoles – a method that approximates the scattering beneath the surface using points, and the dipole maths of having a function above and below the surface that gives control over the amount and distribution of the scattering.

Jensen provided single scattering together with a dipole point source diffusion approximation for multiple scattering. The name dipole refers to the pair of positive and negative virtual point sources placed above and below the surface. This original dipole method was a breakthrough in allowing scattering beneath a surface, which really is the science of treating, say, skin or similar surface materials as a dispersing/scattering transmissive material. If ray tracing is complex, then scattering the light beneath the skin (BSSRDF) with different amounts of scatter depending on the wavelength of light is incredibly complex and vastly more computationally difficult.
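For the curious, the multiple-scattering part of the Jensen et al. dipole model gives the diffuse reflectance at a distance r from the point of incidence in roughly this form (our paraphrase of the published formula, with parameter definitions as in the 2001 paper):

R_d(r) = \frac{\alpha'}{4\pi} \left[ z_r (1 + \sigma_{tr} d_r) \frac{e^{-\sigma_{tr} d_r}}{d_r^3} + z_v (1 + \sigma_{tr} d_v) \frac{e^{-\sigma_{tr} d_v}}{d_v^3} \right]

where d_r and d_v are the distances from the surface point to the real and virtual (mirrored) point sources, and \sigma_{tr} is the effective transport coefficient. Everything that followed – quantized diffusion, photon beam diffusion – is essentially a better-behaved replacement for this profile.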

Disney Research recently presented a new paper on SSS, called Photon Beam Diffusion: A Hybrid Monte Carlo Method for Subsurface Scattering. It advances the art and improves on the Quantized Diffusion (QD) method that, say, Weta had used on the Engineer characters in Prometheus. The QD method was an approximate sum-of-Gaussians approach to the BSSRDF. The new Disney method is even more advanced in that it moves from a point approach to a beam approach, allowing for a set of samples along these sub-surface beams. The result is even greater realism, and it continues this trend of rapidly advancing SSS approaches, each built to work in a production environment (Photon Beam should match QD for performance) and not as a full brute force solution, as that would cripple any real world production.

Rendered using Arnold’s new SSS, from the remarkable Digic Pictures trailer for Watch Dogs.

Arnold can now do brute-force ray traced sub-surface scattering without point clouds. This is a major improvement because of the memory/speed savings, improved interactivity and easier workflow compared to earlier point cloud methods. The problem with point methods, according to Fajardo, is that "it can be a bottleneck like any cache method."

The way Fajardo explains it, SSS research at the moment falls into one of two camps. You can "change the diffusion profiles, and make them slightly better looking and that is what the Disney guys have done and that is what the Weta guys did with QD. It does not change the workflow, it makes the images look a bit better. You get more sharpness in the pores of the skin – it is hard to see – but when you see it, it is good. That is one thing, one axis, but there is another axis. It is to make the whole process more efficient and that is what we have done, and we are really proud of this new system and it changes the way you think about SSS – it just makes it a lot easier."


The Milk spot above (now missing from YouTube) uses the new SSS in Arnold, which is no longer point cloud based. Gael Honorez, lead renderer at Nozon, explained that this would be very difficult to do with point-based methods as you would need a very dense point cloud, which would use a lot of memory and take a long time to precompute. The memory issue is really key; in fact, the memory constraints for this job would have made the project impossible to complete – without a major rethink – if it were not for the new approach.

"The way people still do sub surface," says Fajardo, "is still by doing point clouds which is a really inefficient workflow approach; the work that Solid Angle has been doing (and is being presented at SIGGRAPH 2013) is for us a game changer. We co-developed that with Sony Pictures Imageworks, the results are really good in terms of performance compared to point clouds." Solid Angle does not pre-process and store information in a point cloud; it just fires more rays in an intelligent way. This makes it much better for scaling, works much better with multi-threading, and has lower memory requirements. Users do not need to worry about the density of point clouds or tweaking parameters. "You just press a button, you don't have to worry about precomputing or adjusting values. This ends up being very much more efficient especially when you have many more characters in a scene – such as with crowds," adds Fajardo.

Assassin’s Creed: Black Flag trailer image rendered with SSS by Digic Pictures.

Szabolcs Horvátth is lead TD at Digic Pictures and a driving force behind that studio’s transition to physically-based rendering with Arnold and the new SSS. He is incredibly excited about the creative windows this opens and the scalability, especially the notion of being able to render entire Massive crowds with ray traced SSS.

Longtime fxguide readers may recall that SPI flagged some time ago that they had moved to fully ray traced SSS on the last Spider-Man film.

“Sony solved this problem (of good SSS) by jumping entirely to a full Monte Carlo path tracing technique. This is a remarkable commitment to image quality as almost the entire industry has stopped short of a full Monte Carlo solution for large scale production….SPI used an Arnold renderer for Spider-Man”.

But this was different from the new implementation. The Spider-Man technique was single scattering; Fajardo explains the difference. "When you are doing SSS you can work at two levels. You can just use SSS to simulate what we call the first bounce – under the surface – that is single scattering. It is an easier and well defined problem. That is what they did on Spider-Man. We along with Sony helped develop single bounce scattering more efficiently with GI. Now we are talking about multiple scattering, this is what gives you the softness and bleeding of light. That is a lot more difficult, and that is only possible now that people are starting to do this with ray tracing. Up to now you really needed to use point clouds and it was painful. This year at SIGGRAPH we are presenting a way to totally do away with point clouds. I am so happy that we are putting the final nail in the coffin of point clouds. I can't even tell you! For many years that has been the last place you needed point clouds. A few people have been trying to do multiple scattering with ray tracing and we touch on this in our talk, but it was not very efficient; we use a new importance sampling technique for sub surface scattering, what we call BSSRDF Importance Sampling."
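As a purely illustrative sketch of the underlying idea – and emphatically not the BSSRDF Importance Sampling algorithm presented by Solid Angle and SPI – importance sampling a scatter radius from a simple exponential falloff via inverse-transform sampling looks like this:

import math
import random

def sample_scatter_radius(sigma_tr):
    """Draw a radius r with density p(r) = sigma_tr * exp(-sigma_tr * r).

    A 1D stand-in for a real BSSRDF profile, which is 2D, wavelength
    dependent and multi-lobed; the principle is the same - place samples
    where the profile carries the most energy rather than uniformly.
    """
    u = random.random()
    return -math.log(1.0 - u) / sigma_tr

def pdf(r, sigma_tr):
    """Matching pdf, needed to weight the sample in the Monte Carlo estimator."""
    return sigma_tr * math.exp(-sigma_tr * r)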

While this is a SIGGRAPH paper it is also being used today in production at key Solid Angle customers such as Digic Pictures and at effects houses such as Solid Angle’s original partner Sony Pictures Imageworks.

Sony Pictures Imageworks breakdown reel from Oz – showing volumetric rendering done with Arnold.

2.2.2 MIS

Arnold was one of the first renderers to deploy MIS, back when Fajardo worked at Sony Pictures. Today the implementation at Solid Angle is quite advanced, going well beyond using it for just the BRDF and lights. It is "applied to many many places in the renderer, virtually any place in the renderer where there is an integral you can apply IS – and there are many integrals in a renderer," says Fajardo. "Most of the time people find just one sampler or method, but if you are smart enough you can find multiple samplers for the same task and then combine them." It is, for example, used for SSS in Arnold.

“The control is hidden from the user,” says Fajardo. “The user should never know, they don’t need to know. The user should never never know as it is unrelated to the art of using a renderer.”

While the user may never need to know directly, MIS is incredibly important to image quality and render speed. IS is being used in the new SSS example above and also with area lights. Area lights are not only great tools for producing very attractive lighting, as any DOP knows, but they are also key to using IBL with HDR lights in the scene and many other areas of modern production. Another great example of Arnold's research into IS was published at the Eurographics Symposium on Rendering last month (2013). The paper was called An Area-Preserving Parametrization for Spherical Rectangles. This rather dry sounding paper explains how much more sensibly the lights can be sampled given the spherical projection nature of working in computer graphics.

In the diagram below (left) it can be seen that a rectangular light can appear bowed – in much the same way a real light such as a Kino Flo appears bowed when shot with an 8mm fisheye lens (right). Note: the apparent shape of the light as seen from the point being computed defines its solid angle (from which the company takes its name).

This shows the bent mapping – that actually happens as the light turns – relative to its solid angle.
A frame from an HDR – note the bowed Kino Flo light, a real world area light.

The company Solid Angle takes the mathematical solid angle into account with its sampling.

If you look at the 'random' samples below on the left, you see a seemingly sensible distribution across a square patch which represents an area light. The problem is that when the area light is seen from the computer's point of view via the solid angle maths – that is, the area light as 'projected' along the sight line onto the computational 'dome' used in computer graphics – it is easy to see just how much the samples collect along the edges. This bias is much stronger than one might imagine – it is worth checking any square for yourself (count in from the left and bottom and you can see that both of the shapes in (a), marked Area sampling, contain exactly the same samples). What is needed is to start from a different distribution. If one starts with the scattering on the left in (b), Spherical Rectangle sampling, when the 'projection' is taken into account the samples are now much more evenly spread. This directed sampling is a refinement that falls under improved importance sampling.

Click to enlarge (and count the boxes and their samples yourself to see the mapping is accurate).

How much difference does this one clever IS improved PDF (probability distribution function) make?

It is most noticeable closer to the lights. Exclusively, we can show an animation rendered with the normal area sampling and then – with no other change than the new IS – the spherical sampling version. The reduction in noise is dramatic. (Note: some banding in these videos is from compression, not rendering.)

Area vs spherical sampling.
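To make the change of measure concrete, here is a minimal sketch (our own illustration, not Arnold's implementation, and it only shows the naive side of the comparison): uniformly sampling a point on a rectangular light in area measure and converting its pdf to solid-angle measure. The dist²/cosθ factor is exactly what makes uniform-in-area samples bunch up near the edges when viewed as directions – the spherical rectangle parametrization avoids this by sampling uniformly in solid angle in the first place.

import math
import random

def sample_rect_light_area(corner, edge_u, edge_v, shading_point):
    """Uniformly sample a rectangular area light in area measure, then
    convert the pdf to solid-angle measure at the shading point."""
    su, sv = random.random(), random.random()
    light_point = [corner[i] + su * edge_u[i] + sv * edge_v[i] for i in range(3)]

    # vector from the shading point to the light sample
    d = [light_point[i] - shading_point[i] for i in range(3)]
    dist2 = sum(c * c for c in d)
    dist = math.sqrt(dist2)

    # light normal = normalized cross product of the two edges
    n = [edge_u[1] * edge_v[2] - edge_u[2] * edge_v[1],
         edge_u[2] * edge_v[0] - edge_u[0] * edge_v[2],
         edge_u[0] * edge_v[1] - edge_u[1] * edge_v[0]]
    area = math.sqrt(sum(c * c for c in n))
    n = [c / area for c in n]

    cos_theta = abs(sum(n[i] * d[i] for i in range(3))) / dist
    pdf_area = 1.0 / area                          # uniform over the rectangle
    pdf_solid_angle = pdf_area * dist2 / max(cos_theta, 1e-8)
    return light_point, pdf_solid_angle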

2.2.3 Multi-threading performance

It can be argued that with ray tracing there are three primary concerns:

  1. render speed
  2. noise
  3. memory limitations

But Fajardo says he would add a fourth: threading scalability. Today machines can have 32 threads and this is only going to increase. Scalability “is going to be more and more important as Intel and others come out with processors with more and more threads on them,” says Fajardo.

Arnold has incredible multi-threading performance. “I feel like we have done a tremendous amount of work to make Arnold scale optimally in many-core machines. It’s easy to run fast on one thread, but running on 64 threads is a different story, you typically run into all kinds of performance bottlenecks that you have to analyze individually and solve with careful, low level programming or sometimes with better mathematical models.”

Fajardo argues that things one might take for granted, like texture mapping, can become threading bottlenecks unless the renderer and development teams can benchmark, analyze and optimize on a machine with many cores. At the time we spoke to Fajardo, Solid Angle was evaluating machines graciously donated by Intel with 32 physical cores / 64 threads.

This SPI breakdown video shows one of the first films to use Arnold to deal with complex volumetrics: Men in Black 3 (MIB3).

In the case of texture mapping, the problem is that you need a texture cache to hold the hundreds of GB of texture data required to render a complex VFX shot. “And texture caches require some sort of locking mechanism so that multiple threads can write and read from the cache in parallel without corruption,” says Fajardo. “We worked hard with ILM during PacRim (Pacific Rim) to solve that problem and as a result we probably have the most efficient (in terms of threading) texture engine in the industry. It’s funny to watch other renderers die at such scenes, renderers that have traditionally had awful threading scalability (like Pixar’s PRman), where people have gotten used to such bad scaling that to compensate run such renderers on a small number of threads per job, e.g. run four 2-threaded jobs on a machine with 8 threads, therefore limiting the amount of memory available to each job.”

With Arnold, one can be sure "you are making full use of all of those 16, 24 or even 32 cores in your machine while using all of the available memory," argues Fajardo. This becomes increasingly important, of course, as artists do lighting work on increasingly complex scenes, on powerful workstations with an ever increasing number of CPU cores.

“You would be surprised”, explained Fajardo, “even Disney’s almighty Ptex library, which caused so many ripples in the industry, is not threaded well and destroys the performance of your renders. Which is probably OK for Disney as they use PRman therefore running it on very few threads. But run it on all the threads of a powerful machine, as we did, on a simple scene with a single Ptex-textured polygon, and the results are abysmal.” Here are the results Solid Angle provided to support this claim:

threads   pixel rendering time   speedup
1         18.94s                 1x
2         11.91s                 1.6x
4         7.23s                  2.6x
8         9.44s                  2.0x
16        12.37s                 1.5x
32        13.39s                 1.4x
64        14.65s                 1.3x

In this test case, instead of being 32x faster with 32 threads, it's only 1.3x faster. "Which means that 30 of the cores are idle and you are wasting your money," he adds. "I could give you more examples. Katana has never been thread-safe and therefore forced single-threaded loading of geometry (though I imagine they will fix this eventually). Most hair-generation pipelines are ancient and therefore not ready for multi-threading. All of which are reasons why big studios don't fully take advantage of threading and would run multiple single-threaded jobs on the same machine. It's an embarrassing fact that most studios hide, and if you ask them they'll give you all kinds of hand-wavy explanations as to why running single-threaded jobs is more 'efficient'," Fajardo points out passionately.
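The arithmetic behind that "30 of the cores are idle" point is easy to reproduce from the timings in the table above (a trivial sketch; the helper and its output format are ours):

def scaling_report(timings):
    """Turn wall-clock timings (threads -> seconds) into speedup and efficiency."""
    baseline = timings[1]  # single-threaded time
    for threads, seconds in sorted(timings.items()):
        speedup = baseline / seconds
        efficiency = speedup / threads           # 1.0 would be perfect scaling
        idle = threads * (1.0 - efficiency)      # roughly, cores doing no useful work
        print(f"{threads:3d} threads: {speedup:4.1f}x speedup, "
              f"{efficiency:6.1%} efficiency, ~{idle:4.1f} cores idle")

scaling_report({1: 18.94, 2: 11.91, 4: 7.23, 8: 9.44,
                16: 12.37, 32: 13.39, 64: 14.65})
# At 32 threads: ~1.4x speedup, ~4% efficiency - i.e. roughly 30 of 32 cores wasted.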

Many companies talk about multi-threading, but for Solid Angle this often does not address the overall production problem, as they believe "it's easy to multi-thread well when you don't have hundreds of GB of texture data, or complex SSS, displacement, etc – all used together in the same shot," says Fajardo. "Just like it took years for people to catch up with global illumination and ray tracing, it's still taking people years to catch up with efficient multi-threaded programming. Unless the company is hell-bent on systems performance on modern machines, like Solid Angle is, multi-threading scalability is the Achilles heel of production renderers."

Pacific Rim. This image rendered in Arnold by ILM. Courtesy Warner Bros. Pictures.

2.2.4 Volumetric rendering and volumetric lighting

Arnold also has a code base inside Sony Pictures Imageworks due to the historical development of the product (see our original rendering article). We asked Rob Bredow, Imageworks CTO, about the innovations happening inside Sony with their version of Arnold. "I think the biggest new innovations are full volumetric rendering in-camera with global illumination," he says. "It's enabled new looks like you've seen on Oz and will see in some of our future work as well. It's really changed the way we can work."

The cloud work in Oz and the Apollo rocket launch in Men in Black 3 are both excellent examples of the impressive new volumetric innovations. These innovations, as has happened historically with all such advances, have been shared between SPI and Solid Angle.

From the EGSR 2012 paper – joint research by SPI and Solid Angle.

This work builds on the research that Christopher Kulla at SPI has been doing and publishing in conjunction with Solid Angle, and with Fajardo in particular – "the sampling work we have done for volumetric lighting over the past couple of years, and which we showed at SIGGRAPH 2011 and EGSR 2012," commented Fajardo.

Importance Sampling Techniques for Path Tracing in Participating Media. Rendered ~ 5 mins a frame on a 2 core laptop.

The volumetric lights have proven very popular with clients. "One of the nicest compliments we get is that our volumetric lights are quite beautiful and very easy to use," says Fajardo. There are two aspects to volumetric lighting: homogeneous or uniform media (a spot light in an even fog with a beautiful cone of light) and non-uniform, heterogeneous media – the latter of course being much more difficult.

As mentioned above, OpenVDB is important for non-uniform media storage, and in addition to supporting OpenVDB, Solid Angle is also working with FumeFX, Luma Pictures and Digic Pictures to implement their volumetric effects in Arnold.


2.3 V-Ray – Chaos Group

V-Ray from Chaos Group is one of the most successful third party renderers, with wide adoption. Key V-Ray studio users include Digital Domain (Oblivion, Ender's Game) and Pixomondo (Star Trek Into Darkness, Oblivion). ILM also used V-Ray heavily for environments on G.I. Joe: Retaliation, Star Trek Into Darkness, The Lone Ranger and Pacific Rim. And Scanline VFX is another V-Ray heavy lifter. "In fact I think everything ever rendered on their (Scanline's) showreel is out of V-Ray and they have done tight integration with their Flowline fluid simulations," says Lon Grohs, business development manager of Chaos Group. "This includes work on Avengers, Battleship, Iron Man 3, all kinds of stuff."

Stuart White, head of 3D at Fin Design, a boutique high end commercials animation, design and effects company in Sydney, uses V-Ray and finds it a perfect fit, providing high end ray traced accurate results without the pipeline and artist overhead of non-raytraced solutions. “Rendering-wise, we are all about V-Ray here. It makes consistently beautiful images whilst being easy to use, affordable and pretty bullet proof even in the face of some seriously heavy scenes.”

Fin Design + Effects, Sydney, use V-Ray for high end TV spots like this Cadbury one.

As seen above, V-Ray produces excellent images, with particularly good fur and SSS, and is used around the world by large facilities but especially by mid-sized companies producing high end work. It is also now available on several popular cloud services and was used that way by Atomic Fiction for Flight.

There are various versions of V-Ray supporting different products, such as Max, Maya, Rhino, SketchUp and more, but for the purposes of this article we can assume they are the same from a rendering point of view.

V-Ray is basically a ray tracer and it does brute force ray tracing very well, but the team at Chaos Group have added all types of optimizations for architectural visualization and other areas, so the product does have radiance caches and a bunch of other things which would be classed as biased, but it can work very much as an unbiased renderer. It has had physically based materials and lights from the start of the product – "that is what we are from the start," says V-Ray creator and Chaos Group co-founder Vlado Koylazov.

V-Ray’s workflow is very clean and the artist can work well with data from on set such as HDR image probes and IBL lighting etc. “We hear people like being 90% there and just matched to a plate with just the things they have documented from on set. From there – there is always the artistry. In fact I have only had one client ever come and ask for non-physically based rendering,” jokes Lon Grohs.

A scene from Oblivion. VFX by Pixomondo.

The product has used MIS since the start. V-Ray is very much a modern renderer: sampling is often handled for the artist, keeping the interface very clean, using adaptive sampling driven by a noise threshold. The renderer checks neighboring pixels and keeps applying more samples until the noise threshold is reached.
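In spirit (a simplified, generic sketch of noise-threshold adaptive sampling, not Chaos Group's actual implementation), the per-pixel loop looks something like this:

import statistics

def adaptive_pixel(sample_fn, noise_threshold, min_samples=16, max_samples=1024):
    """Keep sampling a pixel until its estimated noise drops below a threshold.

    sample_fn() returns one radiance sample for the pixel; the 'noise' here is
    the standard error of the running mean, a common stand-in for visible grain.
    """
    samples = [sample_fn() for _ in range(min_samples)]
    while len(samples) < max_samples:
        std_error = statistics.stdev(samples) / (len(samples) ** 0.5)
        if std_error < noise_threshold:
            break
        samples.extend(sample_fn() for _ in range(16))  # add a batch and re-check
    return sum(samples) / len(samples)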

In the early days of the product the company had to deal with efficient memory use to allow for the scenes to be rendered in what was then very small amounts of RAM. The team deployed a proxy system which was very successful and is still used today. It avoids having to load all the geometry at once.

V-Ray’s SSS:

SSS in V-Ray. Dan Roarty (2011).

V-Ray uses a dipole approximation for the VRayFastSSS2 shader. “Some methods are more precise technically speaking, but we’ve found that the VRayFastSSS2 provides the best balance between quality, speed, and intuitive controls,” says Koylazov. “For V-Ray 3.0, we are considering additional models including a fully ray traced solution. We are also looking to implement a simple skin shader with simple, artist-friendly settings. Some of our customers have written their own SSS shaders for V-Ray including multipole and quantized diffusion.”

It is possible to render a full brute force solution inside V-Ray but it will naturally be slow. As V-Ray is a production renderer most people use the new and popular VRayFastSSS2, but before the new Fast SSS2, V-Ray was already producing strong SSS images as seen in the Blue Project image left.


 

nanadone_s
Dan Roarty (2013).

nana_wire

When Dan Roarty is working he sets up a few area lights behind the head to see how much light passes through the ears. This helps him gauge how thick the SSS should be.

The new ‘Nana’ was:

  • modeled in Maya
  • sculpted/textured in Mudbox
  • hair done in Shave and a Haircut
  • all the spec maps done in the new software Knald
  • rendered in V-Ray

Adam Lewis at adamvfx.com outlined his view on V-Ray’s SSS:

“The beauty of V-Ray’s SSS2 shader is you don’t really need any special techniques to get a great result, the shader behaves like you would expect from a scattering material in real life, so using the SSS2 shader well is mostly a matter of understanding how real world materials like skin behave.

So with that said, there are a couple of specific techniques that some artists might not be aware of. One very useful technique is to use a separate bump map for the specular component. The advantage of this approach is you can introduce an extremely fine bump map that affects only the specular, which is very useful for controlling the microstructure of a surface.

Another useful technique is to use a simple grayscale map to introduce some diffuse shading into the SSS2 shader to simulate dead/dry/flaky skin on top of the scattering surface. A great example of this quality can be seen in something like dry lips, where you have two very distinct materials interacting with each other: the soft, highly scattering skin of the lips as a whole, with the more diffuse, dry skin on top.”

The next version of the software, 3.0, which is entering beta, will be shown at SIGGRAPH 2013, and Chaos Group expects to ship V-Ray 3.0 in the fall. V-Ray RT, the real time product, will be supporting the new SSS, and the team have been working very closely with Nvidia on their CUDA optimizations. Hair rendering should be 10 to 15 times faster in version 3. There will also be a new simplified skin shader in version 3.0 for doing digital double work, one they hope will be a little more user friendly. Also in version 3 will be open source support as mentioned above, with Alembic and OpenEXR 2 support. Viz maps are being introduced – these are material definitions stored as V-Ray maps which can be shared across multiple applications like Max and Maya. And, as mentioned above, version 3.0 will support OSL.

The team are also introducing a new "progressive production rendering" – a one-click path trace render which will continue to refine and eventually converge to production final renders.

G.I. Joe: Retaliation (2013), rendered in V-Ray by ILM.

At last SIGGRAPH the company announced V-Ray for Katana and V-Ray for Nuke. Both are now at the testing stage. The projects would best be described as 'by invitation'. If you are interested in V-Ray for Foundry products, email Chaos Group directly or find them at SIGGRAPH. Both products are real but are unlikely to be shown publicly at the company's SIGGRAPH booth.


2.4 Maxwell Render – Next Limit

Maxwell Render is a standalone unbiased renderer designed to replicate light transport using physically accurate models. Maxwell Render 1.0 was first released by Next Limit in 2006, and from the outset it adopted this ‘physically correct’ approach.

“The main aim of Maxwell is to make the most beautiful images ever,” says Juan Cañada, the Head of Maxwell Render Technology. “That’s the main idea we had in mind when we started the project. Apart from that we wanted to create a very easy to use tool and make it very compatible, so everybody can use it no matter what platform you wanted to use.”

Image from an MTV EMA ident rendered in Maxwell Render by Sehsucht Berlin.

Maxwell Render is unbiased – this means that the render process will always converge to a physically correct result, without the use of tricks. This is very important both in terms of quality but also ease of use. Maxwell really does mirror the way light works without tricks and hacks.

So successful has Maxwell Render been in replicating real world environments that it has become the yardstick by which most other solutions are judged 'correct' or not. It is no accident the renderer is referred to as a 'light simulator'.

The software can fully capture all light interactions between the elements in a scene, and all lighting calculations are performed using spectral information and high dynamic range data. A good example of this is the sharp caustics which can be rendered using the Maxwell bi-directional ray tracer, with some Metropolis Light Transport (MLT) approach as well.

Grass rendering in Maxwell. Image by Hervé Steff, Meindbender.

The algorithms of Maxwell use advanced bi-directional path tracing with a hybrid, special Metropolis implementation that is unique in the industry. Interestingly, in the last few years the whole industry has been moving more towards Maxwell's 'physically based lighting and shading approach', while the Next Limit engineers have been making Maxwell Render faster and better using key technologies such as MIS and multi-core threading to optimize speed in real world production environments.

Maxwell started out 'correctly', according to Cañada, so recent work has mainly been about making Maxwell faster and easier to use, since they have no bias or point cloud legacy. The team is focused on practical issues such as multi-threading. "I agree at the beginning Maxwell was almost an experiment – 'let's try and do the most accurate renderer in the world' – once we were happy with the quality we said – 'OK, let's make an interactive renderer – optimize everything'. We have been very focused on multi-threading, so when you had just one or two cores Maxwell might have been slow, but now people have 8 or 12 cores. It can even be faster than other solutions in certain situations," says Cañada. It is common now to use Maxwell for animation, something that was fairly unrealistic just four or five years ago.


Above: Deadmau5 ‘Professional Griefers’ music video features characters rendered in Maxwell Render by Method Studios.


Normal path tracing is slowed or confounded by optical phenomena such as bright caustics, chromatic aberration, fluorescence or iridescence. MLT works very well on some of these shots, while being very complex to implement. Cañada will be giving an advanced talk on lighting and rendering techniques at this year's SIGGRAPH 2013, which will cover some of the complexity of attempting a successful MLT implementation and why few people have tried it.

Want to know more about Maxwell Render? See fxguide’s new iBook – From Sim to Render: The Next Limit Story available for free to download.

The Next Limit implementation is not a full MLT but a clever hybrid solution. MLT can be very fast on complex shots and yet more expensive to render on others. For example, its approach of nodally mapping paths bi-directionally helps it focus in on the problem of, say, light coming through a keyhole in a door to a darkened room, or of producing very accurate caustics. But a full MLT can be slower than other algorithms when rendering simple scenes. "The power of Metropolis is in exploring difficult occurrences, and its strongest point is sometimes its weakest point when dealing with simple scenes," explains Cañada.

Sometimes with an MLT you cannot use all the same sampling techniques you can use with a path tracing system, at least not everywhere in the code. Cañada points out that “you cannot use quasi-Monte Carlo for example in many places – you can of course use some things in some places.” The Maxwell system is very different: Next Limit’s implementation of Maxwell’s MLT, at its core, does not use MIS. There is MIS in Maxwell (extensively) but not in the MLT part of the code.

While pure MLT does not seem to be favored by any part of the industry, Next Limit believes there is a lot to be learnt from MLT and they are constantly exploring how to improve bi-directional path tracing.

Maxwell Render includes Maxwell FIRE, a fast preview renderer which calculates an image progressively, so renders can be stopped and resumed at any time. If the renderer is left long enough it will simply converge to the correct full final solution. It is very good for preview, but normally once an artist is happy with the look they switch to the production renderer for the final. This approach means that users get faster feedback but also know the results won’t change in the final render.
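That stop-and-resume behaviour falls naturally out of progressive rendering: every pass adds more samples to a running average, and the picture simply converges the longer it is left. The short Python sketch below is a hypothetical illustration of that accumulation loop, not FIRE’s actual code – the per-pixel sample function is a noisy stand-in for tracing a camera ray.

    import random

    def render_sample(px, py, rng):
        """Hypothetical stand-in for tracing one camera ray for a pixel."""
        return 0.5 + rng.uniform(-0.5, 0.5)   # noisy estimate of the true value 0.5

    def progressive_render(width, height, passes, state=None):
        """Accumulate samples; 'state' lets a stopped render be resumed later."""
        if state is None:
            state = {"sum": [[0.0] * width for _ in range(height)], "count": 0}
        rng = random.Random(1234 + state["count"])    # continue the stream on resume
        for _ in range(passes):
            for y in range(height):
                for x in range(width):
                    state["sum"][y][x] += render_sample(x, y, rng)
            state["count"] += 1
        image = [[state["sum"][y][x] / state["count"] for x in range(width)]
                 for y in range(height)]
        return image, state

    if __name__ == "__main__":
        img, st = progressive_render(4, 4, passes=8)               # quick preview
        img, st = progressive_render(4, 4, passes=512, state=st)   # resume and refine
        print(f"pixel (0,0) after {st['count']} passes: {img[0][0]:.3f}")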

“People were used to traditional workflows with old school renderers where they want to render a lot of passes,” adds Cañada. “You just think of Maxwell as a real camera – so you just focus on lighting, focus on materials. You work like a traditional photo developer and you don’t worry too much about the technical details of transport algorithms.”

Scandinavian by Alfonso Perez. Rendered in Maxwell.

One of the most challenging things for an unbiased renderer is SSS. Cañada explains that “it is one of the biggest challenges for Maxwell in terms of trying to make something accurate and at the same time fast enough to be used in real life production.” Most approaches are point based, but “in Maxwell we will not apply biased techniques, as it is important that Maxwell not only be used in effects to create good images but also in a scientific way, producing predictable results to help you with and guide you in making real world design decisions.” They have developed their own system, which is fast enough for most applications, but it is perhaps the main area of current research and development at Next Limit for Maxwell, and Cañada hopes to make “a large contribution soon, perhaps next year.”

Combined with the multi-light feature, advanced ray tracing, massive scene handling, procedural geometry for fur, hair and particles, and a Python SDK for custom tools, Maxwell is a production tool today. In the past the ‘purist’ Maxwell approach could prove too slow for production, but with a combination of Moore’s Law and Next Limit’s engineering efforts the renderer is becoming increasingly fast and more popular.

The Gateway – Maxwell render by Rudolf Herczog.

The next release of the product will support new volumetrics, Alembic and deep compositing, and will see Maxwell integrated much more closely with Next Limit’s RealFlow, with direct Maxwell previewing built into RealFlow. “There will be between 25 and 30 new features from volumetrics to deep compositing, it is a major release, the biggest release in our history,” explains Cañada.

RealFlow has been hugely successful in fluid simulation, so providing good rendering visualisation of simulations is a great bonus; after all, most sim artists are not necessarily lighters, so easy, high quality renders will simply give the sims team more information on what the sims will look like. “It will be a milestone for us – now when you open RealFlow you will just have a Maxwell window inside and when you simulate with RealFlow you can preview with Maxwell inside the application,” says Cañada.


2.5 Mantra – Side Effects Software

Side Effects Software’s Houdini product is incredibly successful. Countless films now seem to include a Houdini component, helping with fluid effects, destruction sequences, smoke, flames or procedural implementations of complex animation.

Andrew Lowell, Senior FX Artist and Houdini trainer at fxphd.com, has used Houdini on films like Jack the Giant Slayer, Ender’s Game, Transformers 3, Thor, Sucker Punch, Invictus, Mummy 3 and Aliens in the Attic. “Like most things in the Houdini universe, Mantra will deliver everything you ask of it and more as long as the user commits to learning the science of what they’re doing,” says Lowell. “It doesn’t hold anything back or make anything easier. Like many people, the first time I fired up a Mantra render I was thoroughly disappointed by the lack of prettiness, a clunky speed, and having to go to a few different places in the software to get in and start adjusting things. But when it came time to get the job done, Mantra has never let me down. It’s enough to make any lighting department struggling with heavy renders envious. What at first seems like a slow render on a sphere manifests itself in production as a highly efficient render of millions of particles with full motion blur. What seems like a lot of work to set up a shader ends up being that life-saving modification at a low level to easily give the compositor the AOVs they need. And what seems like a lack of user interface ease concerning lighting and submission turns into highly automated and dependable systems in the latter stages of production.”


Mantra as a renderer in its own right can also be optimized for almost any render or situation, such as large crowds or very large volume renders, “and it has the flexibility to achieve any look on any project,” adds Lowell. “I remember a bit of render engine snobbery from a vfx supervisor saying he would only accept renders from a certain engine and Mantra was the worst you could get (!). We didn’t have time for the lighting department to do look development on the fx, so I simply took the time and textured/lit the elements myself, and mimicked the properties of the other engine. I submitted my final elements as lighting elements. Everyone was on board thinking how well we had lit the elements except for the compositing department, who wanted to know why the motion blur was of higher quality.”

Of course, Houdini could be used for any 3D animation, but it is known for its effects animation more than anything else today. Mantra is included with Houdini. In 2012 fxguide celebrated the 25th anniversary of the company. In that story we wrote:

According to Nick Van Zutphen, who helped us compile this story, in 1988 a guy in a big wool sweater showed up at the Side Effects office, ‘sheepishly’ looking for a job. That person was Mark Elendt, who at the time was working for an insurance company. The insurance company part didn’t really impress Kim Davidson and Greg Hermanovic, but what they did notice were some photographs Elendt showed taken from an Amiga 1000 screen (with 512 KB of RAM). It displayed renders of a typical late-’80s ray-traced sphere. “He had written a ray-tracer as a hobby,” says Van Zutphen. “This was the prototype of Mantra, which is Houdini’s native renderer.”

Mantra is still to this day the Side Effects Houdini packaged renderer. It is very similar in many ways to Pixar’s RenderMan, a renderer that many Houdini customers also use.

Today Mantra is very much a powerful, solid option for rendering, offering one of the best known in-house renderers from any of the primary 3D vendors. It is very much a tool that could be marketed separately but has always been part of Houdini.


Mantra looks very much like RenderMan:

  • Mantra’s micropolygon rendering is based on the REYES algorithm. It is a divide-and-conquer algorithm, a strategy whereby a difficult problem is divided and sub-divided into smaller and smaller problems until it is decomposed into a large number of simple problems. For micropolygon rendering, this takes the form of refinement (a simplified sketch follows these bullet points).

With raytracing, Mantra does not refine geometry if it knows how to ray trace it natively.

  • The raytracing engine has algorithms to do efficient raytracing of points, circles, spheres, tubes, polygons, and mesh geometry.
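The refinement described in the first bullet can be sketched in a few lines: keep splitting a surface patch until its screen-space footprint drops below a target shading rate, then treat each piece as a micropolygon to be shaded. The Python below is a hypothetical simplification – a patch is just a screen-space rectangle and the one-pixel threshold is illustrative, not Mantra’s internal behaviour.

    def screen_size(patch):
        """Hypothetical measure of a patch's footprint in pixels.
        A patch here is simply (x, y, width, height) in screen space."""
        x, y, w, h = patch
        return max(w, h)

    def split(patch):
        """Split a patch into four sub-patches (divide and conquer)."""
        x, y, w, h = patch
        hw, hh = w / 2.0, h / 2.0
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

    def dice(patch, shading_rate=1.0, micropolys=None):
        """Recursively refine until each piece is roughly pixel sized,
        then keep it as a micropolygon to be shaded."""
        if micropolys is None:
            micropolys = []
        if screen_size(patch) <= shading_rate:
            micropolys.append(patch)          # small enough: shade it
        else:
            for sub in split(patch):          # too big: split and recurse
                dice(sub, shading_rate, micropolys)
        return micropolys

    if __name__ == "__main__":
        grid = dice((0.0, 0.0, 32.0, 32.0))
        print(f"{len(grid)} micropolygons from one 32x32 pixel patch")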

“These days it has shifted very much towards the ray tracing approach. We don’t have too many people using micropolygons anymore, unless they are rendering something that cannot fit in memory, but the amount of memory on machines these days is quite high – you fit a lot of geometry in memory and use the ray tracer for pretty much anything,” explains Side Effects’ Andrew Clinton, 3D graphics programmer. “There are a lot of techniques handled more efficiently with ray tracing than with micropolygons, like instancing: you can keep a single copy of an object in memory and just trace rays with different transforms, whereas with micropolygons you would need to create new shading grids for that object for each instance, which is a lot slower. The other advantage is that if you have polygons smaller than a pixel, you spend a lot of time breaking up objects that are already smaller than a pixel. In ray tracing you just keep the geometry as is and you don’t need to create any additional geometry or data structures, so it is efficient memory wise.”
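Clinton’s instancing point is also easy to see in code: instead of duplicating geometry per instance, each instance stores only a transform, and rays are moved into the shared object’s local space before the intersection test. The sketch below is a hedged, minimal illustration of that idea – one shared sphere, translation-only instances, invented class names, and nothing resembling Mantra’s actual API.

    import math

    class Sphere:
        """One shared copy of the geometry, kept once in memory."""
        def __init__(self, radius):
            self.radius = radius

        def intersect_local(self, origin, direction):
            # Ray/sphere intersection against a sphere centred at the origin
            # of its own local space (direction assumed normalized).
            ox, oy, oz = origin
            dx, dy, dz = direction
            b = 2.0 * (ox * dx + oy * dy + oz * dz)
            c = ox * ox + oy * oy + oz * oz - self.radius ** 2
            disc = b * b - 4.0 * c
            if disc < 0.0:
                return None
            t = (-b - math.sqrt(disc)) / 2.0
            return t if t > 0.0 else None

    class Instance:
        """An instance stores only an offset (a full transform in practice)."""
        def __init__(self, shared_geometry, offset):
            self.geo = shared_geometry
            self.offset = offset

        def intersect(self, origin, direction):
            # Transform the ray into object space instead of copying the geometry.
            local_origin = tuple(o - t for o, t in zip(origin, self.offset))
            return self.geo.intersect_local(local_origin, direction)

    if __name__ == "__main__":
        ball = Sphere(1.0)                    # a single copy in memory
        instances = [Instance(ball, (x * 3.0, 0.0, 5.0)) for x in range(1000)]
        hits = [i for i, inst in enumerate(instances)
                if inst.intersect((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)) is not None]
        print("instances hit by the ray:", hits)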

At the kernel of the renderer Mantra has both the micropolygon renderer and a ray tracing engine, but “there are different renderers built on top of that, we have a pure ray tracer but we also have a physically based rendering system that is built on top of that and it is built using the VEX shading language,” points out Side Effects senior mathematician Mark Elendt.

The core ray tracer could have a biased or unbiased renderer written on top of it thanks to the flexibility of VEX. “Our physically based renderer is pretty much completely unbiased and it is written in that shading language,” adds Clinton.

In the physically based renderer the team uses MIS for the direct lighting and the BRDFs in the scene. Side Effects has seen a lot of interest, but they actually built the system some time ago, before there was much interest at all. It was “a bit like: if we build it – they will come,” says Elendt, referring to their 2008 initial implementation. Today there is much more interest in physically plausible pipelines, something that has validated a lot of the early work Side Effects did in this area.
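For readers new to the term: multiple importance sampling combines samples drawn by two strategies – sampling the light and sampling the BRDF – and weights each one by how likely the other strategy would have been to produce the same direction, so neither strategy’s weak cases dominate the noise. The snippet below is a hedged, one-dimensional illustration of the balance heuristic; the sample values and densities are made up, and this is not Mantra’s code.

    def balance_heuristic(pdf_used, pdf_other):
        """MIS balance heuristic weight for a sample drawn with pdf_used."""
        return pdf_used / (pdf_used + pdf_other)

    def mis_direct_light(light_samples, brdf_samples):
        """Combine two lists of (value, pdf_light, pdf_brdf) tuples.
        Each sample contributes weight * value / pdf_of_the_strategy_used."""
        total = 0.0
        for value, p_light, p_brdf in light_samples:   # drawn by sampling the light
            total += balance_heuristic(p_light, p_brdf) * value / p_light
        for value, p_light, p_brdf in brdf_samples:    # drawn by sampling the BRDF
            total += balance_heuristic(p_brdf, p_light) * value / p_brdf
        return total

    if __name__ == "__main__":
        # One illustrative sample from each strategy (numbers are arbitrary).
        light = [(0.8, 2.0, 0.1)]   # this direction is likely under light sampling
        brdf = [(0.8, 2.0, 0.1)]    # the same direction found via the BRDF
        print(f"MIS estimate: {mis_direct_light(light, brdf):.3f}")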

Mantra and Houdini are known for their volumetric work, having won technical Oscars in this general area of research (micro-voxels). Side Effects was one of the first companies to work with DreamWorks on OpenVDB, partnering with them to help make it open source. OpenVDB allows the volumes to cope with very sparse spaces, which lets Mantra efficiently render huge sparse volumes without huge memory hits. Side Effects really supports open source, also very actively supporting Alembic for example. “One thing we did in 12.5 with Alembic and our own geometry is that we implemented a really efficient polygonal mesh that uses pretty much the minimum amount of memory possible, and this really helped with our big fluid sims such as oceans,” explains Clinton.

They have also done serious work in volumetric lighting, providing, say, fire as a light source – a generalization of their area lights to handle volumes as well as surfaces. “If you have parts, like the center of the fire, that are really bright, then it was really good from a perspective of sampling to be able to focus your ray tracing on those parts of the volume, to be able to direct your samples there – it results in really low noise in the render.”
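Directing samples at the bright part of a volume is, in effect, importance sampling the emissive voxels: build a distribution proportional to each voxel’s emission, draw voxels from it, and divide by the probability in the estimator. The toy Python below sketches that general technique with invented numbers – a one-dimensional ‘fire’ whose central voxels are much brighter – and is not Mantra’s implementation.

    import bisect
    import random

    def build_cdf(emission):
        """Cumulative distribution over voxel emission values."""
        cdf, total = [], 0.0
        for e in emission:
            total += e
            cdf.append(total)
        return cdf, total

    def sample_voxel(cdf, total, rng):
        """Pick a voxel with probability proportional to its emission."""
        u = rng.random() * total
        i = bisect.bisect_left(cdf, u)
        prev = cdf[i - 1] if i > 0 else 0.0
        pdf = (cdf[i] - prev) / total         # probability of choosing voxel i
        return i, pdf

    if __name__ == "__main__":
        rng = random.Random(0)
        # A toy 1D 'fire' volume: the centre voxels are much brighter.
        emission = [0.01, 0.05, 2.0, 8.0, 2.0, 0.05, 0.01]
        cdf, total = build_cdf(emission)
        counts = [0] * len(emission)
        for _ in range(10000):
            i, pdf = sample_voxel(cdf, total, rng)
            counts[i] += 1                    # an estimator would divide by pdf here
        print("samples per voxel:", counts)   # concentrated on the bright core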

The next release will not only have improved Alembic support, but also new lighting tools for Houdini and Mantra interaction. As the next release is not until later in the year, Side Effects may release support for OpenEXR 2.0 deep compositing before then. Mantra has had its own format for deep data for some time, but this would see that output in the new OpenEXR 2.0 deep data standard. “The advantage of OpenEXR 2.0 is that you can bring it into Nuke and do compositing there,” says Clinton.


Above: Watch the Houdini demo reel 2013.


Mantra supports SSS using a point cloud approach with an irradiance cache, based on a Jensen dipole model. There is a ray tracing and path tracing approach in the lab, but mainly to have a ground truth to compare the point cloud against. Research is continuing but there are no immediate plans to change the system or approach.
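For reference, the dipole model that such point-cloud SSS schemes build on expresses diffuse reflectance as a function of the distance r between where light enters and exits the surface. Below is a hedged Python transcription of the widely published Jensen et al. dipole profile; the material parameters in the example are arbitrary values, not Mantra’s defaults.

    import math

    def dipole_rd(r, sigma_a, sigma_s_prime, eta=1.3):
        """Classic dipole diffusion profile R_d(r) (after Jensen et al. 2001).
        r: distance between entry and exit points on the surface
        sigma_a: absorption coefficient, sigma_s_prime: reduced scattering."""
        sigma_t_prime = sigma_a + sigma_s_prime          # reduced extinction
        alpha_prime = sigma_s_prime / sigma_t_prime      # reduced albedo
        sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)

        # Diffuse Fresnel reflectance and the internal reflection parameter A
        f_dr = -1.440 / eta ** 2 + 0.710 / eta + 0.668 + 0.0636 * eta
        a = (1.0 + f_dr) / (1.0 - f_dr)

        z_r = 1.0 / sigma_t_prime                        # depth of the real source
        z_v = z_r * (1.0 + 4.0 / 3.0 * a)                # height of the virtual source
        d_r = math.sqrt(r * r + z_r * z_r)
        d_v = math.sqrt(r * r + z_v * z_v)

        return (alpha_prime / (4.0 * math.pi)) * (
            z_r * (1.0 + sigma_tr * d_r) * math.exp(-sigma_tr * d_r) / d_r ** 3 +
            z_v * (1.0 + sigma_tr * d_v) * math.exp(-sigma_tr * d_v) / d_v ** 3)

    if __name__ == "__main__":
        for r in (0.05, 0.1, 0.2, 0.4, 0.8):
            print(f"R_d({r}) = {dipole_rd(r, sigma_a=0.01, sigma_s_prime=1.0):.4f}")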

Mantra continues to improve its speed, and this is especially true of the ray tracer. Clinton joked that some of the work is new algorithms and some is simply fixing dumb stuff that was broken. In one isolated case a simple fix on opacity made a huge difference to fur rendering – literally one tweak yielded a render several orders of magnitude faster on complex fur for one client. It is not normally this simple, but “we just played with a constant and got huge improvements!” joked the team, quick to point out that was an unusual “edge case”. Like many other companies Side Effects is working hard on moving things from being single threaded to multi-threaded. Here a really wide benefit can be felt by customers, especially those on newer 8 and 12 core machines.

Of course, many Side Effects customers use other third party renderers, and Houdini supports RIB output for PRMan, 3Delight, etc and there are plugins for others like V-Ray.


2.6 CINEMA 4D – Maxon

Oliver Meiseberg, product manager, Maxon Computer GmbH told fxguide: “CINEMA 4D supports other renderers very well, we cover almost any renderer out there in our software. It is up to the user to choose whichever renderer they feel comfortable with and is the best for the project.”


While most renderers are available, Meiseberg estimates the most popular is easily V-Ray, “but a bunch also use Mental Ray and the large houses use RenderMan.” A new version of CINEMA 4D is expected at SIGGRAPH 2013. According to some sources, a third party bridge to Arnold and support for Krakatoa may be previewed at SIGGRAPH. Thinkbox Software’s Krakatoa is a production-proven volumetric particle rendering and manipulation toolkit. A V-Ray update may also be coming. The key area to watch out for with V-Ray is support of light mapping.

Light mapping (also called light caching) is a technique for approximating GI in a scene. This method was developed by Chaos Group and will be in R15, to be announced on July 23rd. It is very similar to photon mapping, but without many of its limitations. The light cache or map is built by tracing many eye paths from the camera. Each of the bounces in the path stores the illumination from the rest of the path into a 3D structure, very similar to the photon map – but in a sense the exact opposite of the photon map, which traces paths from the lights and stores the accumulated energy from the beginning of the path.
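In rough terms the build step looks something like the sketch below: trace eye paths and, at each bounce, store the illumination gathered by the rest of the path into a spatial cache keyed by position. Everything here – the scene stand-ins, the cell hashing, the ‘rest of path’ estimate – is a hypothetical illustration of the general idea rather than Chaos Group’s implementation.

    import random
    from collections import defaultdict

    CELL = 0.25   # size of the 3D cache cells

    def cell_key(p):
        """Quantise a 3D position into a cache cell."""
        return tuple(int(c // CELL) for c in p)

    def trace_rest_of_path(rng):
        """Stand-in for continuing the path and returning the light it gathers."""
        return rng.uniform(0.0, 1.0)

    def next_bounce(p, rng):
        """Stand-in for finding the next hit point along a scattered ray."""
        return tuple(c + rng.uniform(-0.5, 0.5) for c in p)

    def build_light_cache(num_eye_paths=2000, bounces=3, seed=3):
        rng = random.Random(seed)
        cache = defaultdict(lambda: [0.0, 0])   # cell -> [sum of illumination, count]
        for _ in range(num_eye_paths):
            p = (rng.random(), rng.random(), rng.random())   # first hit from the camera
            for _ in range(bounces):
                entry = cache[cell_key(p)]
                entry[0] += trace_rest_of_path(rng)   # light from the rest of the path
                entry[1] += 1
                p = next_bounce(p, rng)
        # average per cell; later GI lookups interpolate from these values
        return {k: s / n for k, (s, n) in cache.items()}

    if __name__ == "__main__":
        lc = build_light_cache()
        print(f"{len(lc)} cache cells filled")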

CINEMA 4D offers two render options: since version 13 there has been a second, physical renderer, and the light mapping, for example, sits in the physical renderer. “Most people love the physical renderer – the feedback has been awesome, but with tight deadlines most people go back to the advanced renderer; if you want physically accurate, use the new renderer.”


The SSS shader was completely rewritten from scratch for version 13, and thus is fairly new. The new SSS, with its varying wavelength adjustments, has proven popular with customers. Like many users elsewhere, C4D users want to move to a simpler lighting model, with no loss in quality but with an easier, more natural lighting setup phase that behaves more like one might expect and involves fewer hacks and tricks.

The product is the leading application for motion graphics but it is increasingly used in visual effects, and while that is not a primary focus for the company, they are happy with the growth the product has experienced in both the entertainment space and the product visualisation community. Maxon has customers in the automotive industry and many other major product design companies. The main goal remains the motion graphics industry. “It is great to see the product entering other markets – even if we don’t target them,” says Meiseberg.

Tim Clapham, Luxx, Discovery Kids ident.

One of the biggest coups of the last six months is the link between Maxon C4D and Adobe’s After Effects. While not a rendering issue directly, it has helped to bring the product to an even wider audience and given the brand vast extra international exposure. You can link from AE to the CINEMA 4D render engine; the render engine is based on the R14 advanced renderer and not the new physical renderer, but this is coming, says Meiseberg. There is also a live or dynamic link from Premiere to AE which allows teams to work more effectively and concurrently in a production. This places C4D renders back into AE and then automatically into Premiere.

“Cinema 4D entered a new era with the introduction of the physical renderer,” says C4D user and fxphd Prof. Tim Clapham. “Allowing us to use real world camera attributes such as aperture and shutter speed in conjunction with true 3D motion blur and depth of field. This combined with a central location to control global samples for blurry effects, area shadows, sub-surface scattering and occlusion shaders results in enhanced workflow with more realistic renders.”

Maxon will be at SIGGRAPH 2013.


2.7 Modo – The Foundry

Modo from Luxology, now part of The Foundry, is expanding on several fronts. Firstly, as part of The Foundry it is more exposed to the high end effects market, but also key supervisors such as John Knoll, senior visual effects supervisor and now chief creative officer at ILM, have been forthcoming in saying how much they like the clean and fresh user experience of Modo and its renderer. For example, inside Modo there is a spherical projection type for the camera item that allows the creation of spherical environment maps, including export of Modo-created panoramic HDRIs. John Knoll rendered 360 degree spherical Pacific Rim set images out to his iPad for the film, and could then interactively look around the real set, seeing in real time where the giant Pac Rim machines, bases, cranes etc would be, thanks to an app that detects tilt and shift and displays the window onto the Modo rendered ‘set’ interactively. This allowed actors to know where to look and anyone to judge what the framing should allow for – in effect it was a virtual set, on set, via Modo and an iPad.

Lois Barros – Arch Pre Viz artist now moving to feature films in Portugal.

John Knoll (an Oscar winner whose films include, but are not limited to, Pacific Rim, Mission: Impossible – Ghost Protocol, Avatar, Pirates of the Caribbean I, II and III, and Star Wars Episodes I, II and III) has used Modo since version 201. ILM uses a variety of renderers and Knoll is no different, but he seems to genuinely like the Modo tools and renderer for certain projects or tasks.

Modo is a hybrid renderer; if one keeps an eye on settings it can be run in a physically plausible, unbiased way. “In that sense I think it is more like V-Ray. When Allen (Hastings) was writing it (in 2002) he was looking at how we can make it have the scalability that something like RenderMan is known for, but also take advantage of some of the new technologies that were coming out around then,” says co-founder Brad Peebler. Through the use of both biased and unbiased approaches Modo’s renderer includes features like caustics, dispersion, stereoscopic rendering, fresnel effects, subsurface scattering, blurry refractions (e.g. frosted glass), volumetric lighting (the smoky bar effect), and Pixar-patented deep shadows.

The renderer is not as mature as some – for example its Environment Importance Sampling (EIS) does not yet provide IS on directional lights, nor full MIS covering materials – but EIS works well for both Monte Carlo and irradiance caching approaches and produces greater realism from HDR light probe captures. Furthermore, the team plans to expand IS throughout the product.
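In general terms, environment importance sampling means building a probability distribution over the HDR map’s texels, proportional to their brightness and weighted by solid angle, and drawing light directions from that distribution rather than uniformly. The Python below is a hedged, minimal lat-long sketch of the general technique; the tiny hand-written ‘HDR map’ is purely illustrative and none of this reflects Modo’s implementation.

    import bisect
    import math
    import random

    def build_env_distribution(env, width, height):
        """Luminance * sin(theta) weights for a lat-long map, as a flat CDF."""
        weights = []
        for y in range(height):
            theta = math.pi * (y + 0.5) / height
            for x in range(width):
                weights.append(env[y][x] * math.sin(theta))
        cdf, total = [], 0.0
        for w in weights:
            total += w
            cdf.append(total)
        return cdf, total

    def sample_env(cdf, total, width, height, rng):
        """Pick a texel proportionally to its weight; return a direction and
        the discrete probability of having chosen that texel."""
        u = rng.random() * total
        i = bisect.bisect_left(cdf, u)
        x, y = i % width, i // width
        theta = math.pi * (y + 0.5) / height
        phi = 2.0 * math.pi * (x + 0.5) / width
        direction = (math.sin(theta) * math.cos(phi),
                     math.cos(theta),
                     math.sin(theta) * math.sin(phi))
        prev = cdf[i - 1] if i > 0 else 0.0
        return direction, (cdf[i] - prev) / total

    if __name__ == "__main__":
        rng = random.Random(2)
        w, h = 8, 4
        env = [[0.05] * w for _ in range(h)]
        env[1][3] = 50.0                      # a small, very bright 'sun' texel
        cdf, total = build_env_distribution(env, w, h)
        on_sun = 0
        for _ in range(10000):
            direction, prob = sample_env(cdf, total, w, h, rng)
            if prob > 0.5:                    # only the sun texel has this much weight
                on_sun += 1
        print(f"samples on the sun texel: {on_sun} / 10000")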

Peebler points out that every renderer makes pretty pictures and can render photorealistic images, but the key now is getting there faster. “There are two ways you can do that, one is making your rendering engine faster and the other is making it so users don’t have to fiddle with so many values and tweak so many settings.”

Visual effects by Light VFX from The Butterfly’s Dream.

Some renderers, he states, take the approach that everything is physically based and “you have to just render with the real world settings regardless, and others tilt the other direction, more human time to set it up but it renders faster. Inside Modo, EIS is one of those things that does both – and there aren’t too many of those (!) – it is something Allen has wanted to do for a long time. Allen was actually inspired into the implementation after a conversation we had with Kim Libreri (senior VFX supervisor), John Knoll (ILM chief creative officer) and Hilmar Koch (head of computer graphics at ILM) about importance sampling.”

EIS is an example of the entertainment industry providing a tool that has been appreciated by Modo’s architectural clients, and it has been a two-way street: the design and architectural clients requested embedded Python, which has been a big boost to many effects and animation customers.

Luxology is one of the companies focused on a variety of markets, pointing out that some of their design clients are doing vfx work while vfx companies like Pixomondo are doing design work to even out production cycles. Peebler believes they can cover multiple markets with the same core product, without the need to bifurcate to address them individually. Nor is this something The Foundry is seeing only with Modo – Apple Inc. owns Nuke licenses, points out Peebler. For Luxology’s R&D team it is key that their render technology cover a range of needs, both photoreal rendering and more stylized solutions, in a range of markets and countries around the world. “I was at a client – a design client who had a real time visualisation, they have a set of screens making up a 15m x 10m LED wall – powered by a 500 cluster render farm – for real time interaction for their car design reviews, it was phenomenal, and from a budget point of view, the design space is vastly larger than the entertainment space,” notes Peebler.

CG work by Creative Conspiracy.

The Modo renderer is provided as both a final render engine and as an optimized preview renderer that updates as you model, paint or change any item property within Modo.

The Modo renderer uses high dynamic range radiance units throughout its calculations for accuracy and quality. The renderer is highly scalable on multi-core systems, delivering nearly linear speedups as more processors/cores are added. The renderer’s performance is a combination of tight code and a “unique front-end that decouples many key computations, allowing for a finely tuned balance between memory requirements, speed, and final image quality,” explains Peebler. “A client sent me an image that was about 6 trillion polygons that rendered in about 10 minutes, now those are of course a combination of multi-resolution sculpted micro-polys and a ton of instancing but the renderer is not the bottleneck.” Modo 701 now scales better than 601 and “Modo continues to expand in this area of scalability.”

One very exciting trend is the possibility of Modo and Nuke working more closely together. Nuke deploys only a scanline renderer as standard. Modo’s renderer does not currently support deep compositing, but, says Peebler, “as a company that has the industry’s leading comp system supporting deep compositing, you can imagine we would be ‘interested’ in getting the Modo renderer to support that as well.”

Another interesting connection is Modo to Mari. Mari is very much a product that is known for its strong Ptex implementation, but Modo is known perhaps much more strongly for its UV work on Avatar and other films since. “We see a lot of benefit in previewing textures with full global illumination. Mari is brilliant at what it does, and you can get good lookdev right there in Mari, but if you want more like sub surface Modo’s renderer is excellent,” suggests Peebler.

Actually, an artist at ILM, Jacobo Barreiro, on his own produced a video called Moma as a proof of concept of Mari and Modo working together. Barreiro internally won ILM’s award for best environment on Star Trek Into Darkness (future San Francisco) and is thus a very serious artist, not just a student or fanboy. Peebler seems very aware of the interest in connecting Modo to the other products and especially rendering through the comp. While The Foundry is render agnostic, the Luxology team and the Nuke developers are perfectly placed to expand complex integration between Modo and Nuke in ways as yet unseen.

Modo supports toolkits which extend the capability of the renderer. These include the NPR (non-photoreal) module, popular in Japan, whose lead programmer is actively developing more in this area, and the Studio Lighting Kit, which has been one of the most popular extensions, especially with the photo retouchers who were early Modo adopters.

Below is the work of artist Rodrigo Gelmi, one of several of his tests suggested by Modo’s Peebler:

Modo is more than just a renderer, and Luxology’s workflow fully supports exporting to third party renderers, allowing Modo’s fast preview renderer to aid in modeling. As Peebler says, “we are not in the selling render license business.” They see “our rendering technology as a great enabler for people, we are not trying to disrupt anyone’s pipeline. We are not trying to displace anyone else’s renderer.” For example Maxwell works well with Modo, and they are keen to see V-Ray working more with Modo. Currently Peebler estimates only 5% of clients render to a third party renderer, and most of those use either Maxwell or V-Ray.

Not to short change the Modo renderer for high end work: it is also possible to use only Modo’s renderer on a big production, as proven by the new Luc Besson fully animated feature film being made at Walking the Dog Studios in Brussels, which is being rendered entirely in Modo.

Exclusive Nuke-Modo Tech Experiment:

What makes Modo’s renderer incredibly interesting is its sister applications at The Foundry. Connecting Modo to Mari and Katana makes obvious 3D sense, but the team is exploring much more than that. The Foundry is doing incredible work in exploring Modo, or more specifically Modo’s renderer, with Nuke. This complex series of possible connections and interfaces could cause Modo’s renderer to be elevated faster and more significantly than anything in recent times. Nuke is the dominant compositor in high end film and effects. Below is a video showing a test running in R&D. In the video Modo is automatically updating 3D renders in a Nuke setup. Of course it is possible to render a number of passes that would allow Nuke to manipulate a render that has been imported, but this is a live connection between the two applications. This was recorded on a laptop running a 2.8GHz i7.

Peebler commented, “This is a “technical sketch” showing what we think workflows might look like in the future. This is not indicative of an intended shipping product. We like to “doodle” a bit with our vast body of combined technologies and we hope that this will spark a conversation with users to help guide us on what users would like to see, and what might be possible.”

A look at a live link between Modo and Nuke

If you are at SIGGRAPH 2013, we recommend you find a senior Foundry/Luxology staff member, offer your thoughts, and bug them to see more. Watch above or download an HD version: TF_Labs.mp4


2.8 Lightwave – Newtek

Lightwave is very strong in a few markets including the television episodic effects market where sci-fi scripts have called for more and more visual effects.

Eugenio Garcia Villareal

Historically, the company has strong user groups in markets outside our scope, but this does influence its approach to rendering. The company has roots in scanline rendering, but since 2006 it has had a fully ray traced solution. NewTek’s Rob Powers (President, NewTek LightWave Group) feels that many smaller companies are keen to use new tools such as those discussed above but find it difficult: “I feel today, with the struggles some studios are having to adopt the same workflows that are highlighted at the very sexy ILM and Weta Digital project level, what I have seen is that a lot of studios are struggling trying to replicate that. We know that a company of 40 or more staff members is the norm.” And Powers feels LightWave is well positioned to help people and companies in that position.

While the renderer today supports ray tracing it also supports some of the other features and more traditional approaches. The software has both a nodal based and layered based shader system. “It is kind of like a Lego block system for people who don’t know how to write shaders, and it ships with a bunch of predefined shaders like the skin shader or car paint shader,” says Powers.

On the whole the ray tracer is a biased approach – a general radiosity solution designed for production, supporting a wide variety of approaches. It is designed to be quick, which is more important than ‘correct’ according to NewTek. You can do a brute force approach but it is not normally used that way. It is a tool aimed at being fast rather than optically accurate to the extent of other high end renderers.

Naoya Kurisu

LW has an interactive preview window in the viewport (VPR). The first VPR was scanline, the next version was a ray tracer (LW 8.6/9), and now there is a new version which is very fast and really a whole third approach. It is backward compatible, but the new VPR is really quite impressive. It no longer renders just polygons but can render lines and other primitives. It also has a new instancing system that is extremely fast.

“This is really the direction we are moving in, not just features but meta-features,” explains Mark Granger, rendering lead at LightWave 3D Group. “For example, when we added the edge rendering we did not just add it as a simple feature, we added it with the nodal shaders so that the nodes can control the thickness and color, even full shading of the edges, and we think this will be really popular with the anime community who can make it look like it was done with different brushes just by changing the shaders.” As the line primitives can have shaders, they can have texture maps, gradients and many other types of power from the LW nodal shader system.

It does not currently have MIS or importance sampling generally, but it is something on the team’s planning horizon. The lack of MIS makes running the LW renderer as a full brute force ray tracer unlikely in a production environment, especially given the fast-paced production environments that LW clients live in. It is wonderful to do high end rendering, but if you have to get an episode out, have three more in the pipe and another two being shot, then speed and deliverable results are the key. Yet episodic television also needs to provide production values often on par with films that have months if not years longer in post, and certainly much bigger budgets.

Still from Defiance.

LW has managed to provide not only fast and professional results, but thanks to aggressive pricing productions can also compete financially, leading more than one client to stop productions moving out of state and keeping the work competitive in, say, California/Hollywood. Productions such as Defiance and visual effects supervisors such as Gary Hutzel (located on the Universal Studios lot in Los Angeles) have had great success with LW. His work on Blood & Chrome was also covered on fxguide.

LW has three types of motion blur: dithered, vector and fully ray traced sampled motion blur. Its hair and fibre shaders have now been used widely on shows such as Terra Nova, CSI and Grimm.

LightWave users can use not only the product’s own renderer but also third party renderers such as Kray. There is also an Octane implementation from Otoy for LW users. Most users do use the LW renderer and the free render-farm licenses that come with it.

DJ Waterman

NewTek is closely monitoring GPU rendering; their VPR is a fast multi-threaded solution. Mark Granger has worked very closely with Nvidia and the original CUDA beta, but he feels that the price of going down the GPU approach may not be worth it. “The costs of supporting GPU rendering, in terms of what we would have to give up in terms of flexibility, are so extreme – I don’t think it is worth it to us. For a third party plugin it might be interesting – but for the main renderer, in a general purpose package like LightWave 3D, we really don’t want to give up all the flexibility – like for example giving up being able to work with third party plugin shaders and all the other things that require say large amounts of memory.”

He also pointed out the rapidly changing nature of GPUs makes it hard to commit, given the ground is moving so fast in this area, and GPU work tends to be very specific. It is not an area they are ignoring, but they feel CPU is their best place to focus.

LW does not support Ptex, as we reported previously, nor does it currently support Alembic, but support for the latter might be coming very soon. SIGGRAPH is within a fortnight and one might expect 11.6 to be there, but the exact roll out is not yet published or known.


2.9 Mental Ray – Nvidia

Mental Ray is a ray tracer with MIS, but as its shaders allow so much C++ it is hard to say whether it is unbiased or biased, since you can use Mental Ray as just a thing that shoots rays with everything else done by the shaders. Being efficient, however, is hard – that is the cost of Mental Ray’s massive flexibility, and there is a lot of legacy code around the code base. It may be that Mental Ray never transitions to a new, efficient hybrid renderer. Today one can run Mental Ray with BRDFs, but only the ones provided by the Mental Ray advanced rendering team. Depending on your point of view, Mental Ray is a great platform for its flexibility or too code focused for a modern pipeline. The problem is really not whether it can render something, but what it takes to set up that render and maintain it as a modern, physically based, energy conserving rendering environment (should you choose to want to do that). Most people setting up modern rendering pipelines for large scale production environments are simply not doing that in Mental Ray.

At the other end of the massive pipeline production render environment are individuals who know how to use it and are keen to just get shots delivered. Lucas Martell is a director and animator who has made some hugely successful short film projects as well as working professionally for many years. “I’ve worked with Mental Ray for years and it’s served our needs very well. A lot of the complaints about Mental Ray come down to its complexity. I feel like it has gotten much simpler in the past few years, but more importantly, those settings do give you some very granular control over the rendering quality/efficiency. Because we already know those settings inside and out, we’ve never run into something that we couldn’t do in MR.”

“Granted we are a small shop, so the scale we deal with doesn’t come close to the big animation studios, but the integration with Softimage is so great that we haven’t hit the tipping point where investing in a lot of 3rd party licenses makes sense. Plus we have a lot of custom code written to optimize shaders, set up our passes, etc. Renderers are just tools. The best one is the tool you know inside and out.”

A frame from the Ocean Maker from Dir. Lucas Martell.

The image above was rendered by fxphd Prof. and director Lucas Martell with one key light and final gather. It took approximately one hour per frame on a laptop, including 3D motion blur.

Håkan “Zap” Andersson, now at Autodesk and formerly of Nvidia/Mental Images, pointed out that newer versions of Max and Maya have new parts of Mental Ray available to them, such as unified lighting and IBL in 3ds Max. These are not so much new Mental Ray features as features now exposed to artists that were previously not accessible in Mental Ray from these key Autodesk products.

As mentioned above, Nvidia is exploring MDL for Iray. Part of that effort is the Mental Images Layering Library (MILA). This reflects the same thinking as OSL and could become the implementation of a new shader system for Mental Ray. It is hard to see yet whether Nvidia or Autodesk, with some unified solution, will lead a move to OSL, to MDL or to nothing at all. Not only does Autodesk need to consider its array of programs – Softimage, Max, Maya etc – but also the fact that so many of its clients use products like V-Ray rather than the standard Mental Ray, and V-Ray is now supporting OSL and would be unlikely to support a hybrid modified Nvidia shader solution.

As Mental Ray comes standard with Autodesk products it is not surprising that indie film makers have worked with Mental Ray and in many respects they are producing some of the best work with it. Below is a great example of indie production by Pedro Conti in this breakdown of ‘One More Beer’, rendered in Mental Ray. Watch the full short here.

Breakdown of ‘One More Beer’.

Conti says, “I started working on the Viking Project in January of 2011. Over the span of 5 months, I did the illustration but I had plans for an animation. From July to December of 2011, the project was on hold until Alan Camilo (animator) came on to join the project. The animation process took about 2 months of free time, and after the animation process I worked on polishing all details of lighting, shading, compositing, and finalization of the short film. We released ‘One More Beer’ on the 1st of October. So, it was 9 months of hard work – overnights and weekends – to complete it.”

Lighting setup.

For the lighting, Conti relied on 10 photometric area lights in Mental Ray with MR exposure control and 1 skylight for the ambient light. “Lots of fakeosity to reach the mood that I was aiming,” he adds. “Final gathering was also used for some extra light bounces. It was rendered in passes. One main beauty pass + additional passes for comp like hair, zdepth, masks, atmospherics. In total it was about 30 passes. For the beauty pass it was about 2 hours/frame, and hair pass about 5 minutes/frame. Additional passes were really fast as it was rendered in Scanline Renderer.”


2.10 3Delight – DNA Research

3Delight is a RenderMan compliant renderer, used by companies such as Image Engine in Canada, a customer since 2007. Image Engine has used it extensively with Cortex (see Part 1) and is one of the renderer’s highest profile customers. Image Engine is always evaluating its rendering options moving forward, but for now it is very happy with the close working relationship it has with DNA Research.

Zero Dark Thirty – VFX by Image Engine.

Like many other companies they are looking at more physically based lighting and shaders with ray tracing. “Coming off the last few shows we have been reviewing our pipeline and thinking about how we might generally be more efficient and one of those things is simplifying the lighting workflow,” says Image Engine CG supervisor Ben Toogood.

The notion of moving to new tools that are more physically accurate but also simpler from an artist point of view is a common desire in the industry. Image Engine does a lot of creature work, which often involves a lot of passes, baked textures and complex pipelines, so a high quality realistic but simpler process to lighting is very attractive. The less work in data management the more iterations and actual lighting the team can do. The team is in the middle of re-writing their shader library right now and re-examining some of those complex shader networks that have gone up in production over time. “For a lot of work and especially for background elements – hard surface props etc – we can move to using ready made shaders that have a lot of the physically plausible shading built in,” says Toogood, “and having that base will hopefully make their behaviors more predictable for the artists. But whether that will be run through ray tracing or not is something we will have to look at. We need to be flexible and quite responsive, we have to be a bit more clever than most in spending our computational budget, we are not a huge mega studio.”

A final shot from the parkade collapse in Fast & Furious 6 (VFX by Image Engine).

As Image Engine does a lot of character animation work it uses the SSS point cloud solution in 3Delight. They have found it very efficient to render – the only downside is the need to pre-bake the point cloud, which, as with all such approaches, makes render-time tweaks hard, since one needs to go back and re-bake the point cloud. “In terms of quality we are quite happy and our artists are very capable in manipulating the tools 3Delight offers to get good results,” says Toogood.

Image Engine has recently tried a hybrid HDR/IBL approach of capturing image probes on set, projecting them onto geometry representing the scene, and then using 3Delight’s point based indirect global illumination to project the light back onto the character, “so if a character is close to a wall they get bounce from the wall, but to get the artistic control our supervisors require we supplement that with normal spot lights to tune the shot,” Toogood explains.

Image Engine works closely with the 3Delight team; they enjoy very good support and a close working relationship. While 3Delight’s web site has, at the time of writing, not been updated for years and its Twitter account is a graveyard, Image Engine gets the latest builds if need be and a very direct response from DNA Research. DNA seems to have a closely held group of users. DNA will be at SIGGRAPH this year, and will soon be releasing new versions of its plug-ins and 3Delight Studio Pro.

The Thing. VFX by Image Engine.

3Delight Studio Pro’s current features include ray tracing, global illumination (including photon mapping, final gathering and high dynamic range lighting and rendering), realistic motion blur, depth of field, complete geometry support (including highly efficient rendering of hair and fur), programmable shaders and antialiased shadow maps. It is available for Windows, Linux and Mac OS X.

The latest version of 3Delight (including both the Softimage and Maya plug-ins) has done a lot of work on ray tracing, with a full path tracing option with MIS and new materials. The next big release should see a lot of these new ray tracing tools available to all other users. 3Delight CTO Aghiles Kheffache told fxguide, in regards to the new version they hope to be showing at SIGGRAPH, that “we have a new multiple importance sampling framework that is easy to use. We have a new environment sampling algorithm that produces less noise than the competition. As an example, we don’t ask our users to blur environment maps in order to get nice sampling. The algorithm also extracts very nice shadows from environment maps. Our plug-ins now have the ability to do ‘IPR’,” he said, adding that in his opinion, “we claim that we have the fastest path tracer around. Especially when multiple bounces are involved.”

Other 3Delight clients include Soho VFX, Rising Sun Pictures, ToonBox Entertainment, NHK, and Polygon Pictures (in Japan).


2.11 finalRender – Cebas (GPU + CPU)

finalRender was the first renderer to practically apply true global illumination rendering to large-scale vfx movie production, with the film 2012. The movie’s bigger scenes used finalRender’s advanced global illumination algorithms to render the vast photoreal disasters. The product is about to completely change with a new approach, and virtually all new code.

An older render pre-4 from Makoto (Pasadena).

There is a new version, finalRender 4 GPU, that will be launched at SIGGRAPH and, as the name implies, it will have GPU support and be a normal upgrade for 3.5 users. “We have been working for a long time now,” says Edwin Braun, CEO at Cebas Visual Technology Inc, “the next step is really a new product, and with the changes in CUDA (5.5) – it will be a CUDA product – we have had to make so many changes – it is really a new renderer – there is not much left from 3.5 – other than the name!”

It is part of a wave of new GPU products, but significantly different as it also uses the CPU. There will now be “no difference between a GPU rendering and a CPU rendering and that is a hard thing to do,” says Braun, “we are getting really close to this goal we have set for ourselves.”

The newest version is finalRender 4 GPU, a hardware accelerated (GPU) rendering approach with a rather unique balance between GPU and CPU. Unlike many other GPU-only renderers, finalRender 4 GPU “will always be faster” with newer hardware, even when upgrading the workstation alone and keeping the same GPU card. It will use all available rendering cores and not only one type of processor.

Living room interior rendered in finalRender by Doni Sudarmawan.

In contrast to other renderers, Cebas uses a hardware acceleration approach that will not favour CPU over GPU or vice versa. In fact, Cebas’ trueHybrid technology will leverage the full potential of existing CPU cores as well as, simultaneously, using all existing GPU cores and memory. Maintaining full access to the features and functionality of the core raytracing system, trueHybrid will not sacrifice quality for speed. Unlimited render elements (layers), volumetric shaders, complex blend materials and layered OpenEXR image file export, along with hundreds of 3rd party plug-ins, are a few of the features made possible by finalRender 4 GPU that were otherwise unattainable with a GPU-only rendering system.

finalRender 4 GPU brings shading and rendering flexibility to GPU rendering. An advanced new material shading core gives finalRender the ability to represent nearly every material effect in the form of a highly optimized native GPU shader. It supports the car shader, the skin shader and many other shaders from 3ds Max. “If you have a Mental Ray scene and you use the Mental Ray architectural materials from Mental Ray, you can just render it with our GPU renderer,” says Braun.

Rendering Core and Integration

finalRender 4 GPU is a fully integrated 3ds Max renderer with the key benefit of being compatible with existing 3ds Max workflows that usually include support of third party plug-ins. The following three rendering methods are all available with finalRender 4 GPU:

  • GPU Only Rendering Mode (full path rendering like Octane or V-Ray RT)
  • CPU Only Rendering Mode (like the old 3.5 used to run)
  • CPU + GPU (trueHybrid) Rendering Mode

The last mode “uses the GPU for your CPU rendering.” What does that mean? If you render with FumeFX, a CPU plugin, hybrid mode will pass off some internal calculations to the GPU – in effect a GPU turbo charger on the CPU, even though the plugin would otherwise be CPU only. In tests this hybrid mode has shown 2x up to 5x speed improvements over CPU alone. This hybrid mode is different from schemes where a CPU may help a GPU; the Cebas model works the other way around, so all CPU plugins will be able to get GPU acceleration. “We can use all 3ds Max plugins as we were able to use them in our software renderer, and the user will have no problem running 3ds Max plugins and using their GPU if it is available,” explains Braun. This will also work well for farm rendering when the farm machines have no GPU cards.

finalRender 4 GPU is aiming for the really high goal of providing a continuous GPU/CPU rendering workflow for 3ds Max users. trueHybrid is a novel approach, developed to allow co-operative hardware rendering by leveraging different types of processors at the same time in one workstation.

finalRender’s memory optimization algorithms enable new physically based microfacet rendering models for rendering various blurry/rough surface effects.


Above: watch a Cebas Making of Reel.


Global Illumination Methods

finalRender offers the benefit of multiple global illumination engines for artists to choose from. The newest GI rendering method offers an unbiased, physically accurate path tracing approach with fast GPU based global illumination.

Other options or methods include:

  • Irradiance Caching
  • Unbiased Rendering
  • Light Cache Rendering

Core Render Qualities (Realtime and Non-Realtime):

  • Newly developed: Content Aware Sampling (CAS)
  • Physically Based Wavelength / Spectral Light Transport
  • Biased & Unbiased Rendering incl. Direct Lighting / Ambient Occlusion support
  • Full physically based IES light support
  • Physically Based material shading model
  • Highly optimized Geometry Instancing for GPU and CPU

One of the issues in GPU renders is noise or grain. Although Braun can’t discuss it, he hints at a new sampling method that will smooth out such renders and produce more even results. He will only hint that they are borrowing from other technologies; hopefully at SIGGRAPH one can find out more about these sampling techniques, which he claims do not involve any MIS or importance sampling.

Maik (Germany)

It is worth noting that Cebas also produces Thinking Particles, one of the industry’s key tools for fire, procedural particles and fx animation work. Unfortunately 3ds Max does not allow Thinking Particles to work more closely with finalRender except by a backdoor. The plugin interface in 3ds Max is very old, and it means Thinking Particles can only do some of the things it could otherwise do with finalRender, as Autodesk’s architecture does not normally allow third party products to work together that closely.


Although Max is the main user base, finalRender is also supported in Maya, used at vendors such as Walt Disney Studios, but the Maya code is different and the new GPU/CPU version 4 will come to Maya later. There is more flexibility there, so that version may end up with more features than the Max version. The C4D version is now pretty much discontinued.


2.12 Octane – OTOY

Octane is one of three new renderers that we have included in the round up. Each approaches rendering from a new point of view and each has the promise of being impactful in its own right. Octane is a powerful GPU render solution that works using both local and cloud based GPUs.

Below is a Sony spot animated in 3ds Max and fully rendered in Octane.

fxguide wrote about the product when it was launched over a year ago, and it was heavily featured at the last Nvidia GPU conference. At that conference it was announced that it would be used by Josh Trank, director of The Fantastic Four, for his in house vfx team. Trank appeared on stage during the keynote speech of Nvidia chief executive Jen-Hsun Huang at the GPU Technology Conference in San José, and touted how his effects team would be able to tap cloud-rendering technology from OTOY to create the movie at a much lower cost.

Octane Render from Lightwave

As we pointed out when it came out of beta, Octane Render is a real-time 3D unbiased rendering application that was started by the New Zealand company Refractive Software. OTOY is now developing the program. It is the first commercially available unbiased renderer to work exclusively on the GPU, and it runs exclusively on Nvidia’s CUDA technology. OTOY sells Octane as a stand alone renderer as well as a plugin to popular 3D applications such as Max and Maya. The company has strong links to cloud computing and graphics research. OTOY also owns LightStage, LLC in Burbank, which did the facial scanning for The Avengers among other films, and Paul Debevec is their “chief scientific consultant”. They also have a special relationship with Autodesk, which is an investor, and Octane can be integrated as a plugin for almost all the major 3D art tools on the market today.

As we said then – “Clearly these guys know what they are doing.”

The base Octane is still very young, but it has such strong partners in Nvidia and Autodesk alone that it demands attention. The primary appeal of Octane is its promise to bridge the divide between GPU rendering and high end production rendering.

God rays using Octane with transmissive fog.

The company also has Brigade, which is not yet a shipping product but aims to deliver GPU ray tracing at game rendering speed. Brigade is a different code base from Octane, though the two products share algorithms and innovations moving forward. It is, however, one of the leading realtime path tracing engines aimed at games-speed rendering, and tests have shown exceptional speed – at the cost of classic ray tracing noise which, while fine in real time, would naturally need longer render times in a production pipeline.

The whole area of real time ray tracing is about to get another major boost from the SIGGRAPH 2013 Real-Time Live! event. Once again this year at SIGGRAPH there is a special session showcasing the latest research, and in particular games development, for realtime rendering.

Real-Time Live! is perhaps the world’s premier showcase for the latest trends and techniques for pushing the boundaries of interactive rendering. As part of the Computer Animation Festival, an international jury selects submissions from a diverse array of industries to create a fast-paced, 90-minute show of cutting-edge, aesthetically stimulating real-time work. Each live presentation lasts less than 10 minutes, and is presented by the artists and engineers who produced the work. Last year was remarkable for its range of sessions, which included realtime SSS and facial lighting. While it features game engine rendering and art pieces, it also very clearly highlights the massive advances in general realtime rendering that are outside the scope of this article.


2.13 Clarisse iFX – Isotropix

Clarisse iFX is included as one of our three new renderers as it seeks not to fit into a pipeline in the traditional sense. The team, led by founder Sam Assadian, wants to merge the product into a pipeline not as an end renderer but starting further back up the pipeline. While solving the render equation quickly is important, it is changing the workflow itself that interests him.

Internal Clarisse demo image (using 75 million unique, non-instanced polygon assets, multiplied into 8 billion in Clarisse).

Clarisse iFX is a new style of high-end 2D/3D animation software. Isotropix is a privately owned French company and has been working on Clarisse iFX for several years. It has been designed to simplify the workflow of professional CG artists, letting them work directly on final images while alleviating the complexity of 3D creation and of rendering out many separate layers and passes. Clarisse iFX is a fusion of compositing software, a 3D rendering engine and an animation package. Its workflow has been designed from scratch to be ‘image-centric’ so that artists can work constantly while visualizing their final image with full effects on. It wants artists to see the final as much and as constantly as possible.

At its core, Clarisse iFX has a renderer that is primed and ready to start final renderings within milliseconds of your finger touching something requiring a re-render. It provides a lot more, but this central mantra means that the program feels remarkably fast – far faster than one would expect given that it renders on CPUs and not GPUs.

Screenshot of Cube Creative’s assets from the Kaeloo french TV show, featuring characters rendered with 3D DOF and motion blur.

The renderer is different from some listed here in that it is very tied to the front end, but unlike native renderers of animation and modelling packages, Clarisse iFX can’t model. It is designed to import and do some animation, although not character animation.

Since launching a year ago it has been developing new versions of the software, but also working extremely closely with several key players to integrate Clarisse into other OEM products. At SIGGRAPH 2013 it will launch the new v1.5 with major improvements, but one senses the real action will be in these OEM deals.

The interest from other companies comes from the lightning fast render times and from data management that focuses not just on fast rendering but on changing the relationship between the renderer and the rest of the modeling and animation software. Company founder Sam Assadian explains this conceptually by drawing a picture of the current industry workflow as having “dinosaur, 20 year old code passing along this tiny wire to modern rendering engine – that just does not work.”

Final shot from Kaeloo.

What this means in practical terms, according to Assadian, is that when production shots are ready to be handed over to rendering, just loading the files can take, say, 45 minutes, and it is even longer before the first renders appear. The pipeline may then render efficiently, but this lack of integration and the legacy code of generalist old software means the artist has no sense of immediate rendering on large scenes.

For Clarisse iFX he claims the same production shots in their pipeline would open almost straight away and then start rendering almost immediately – “we cut the wire”. His approach is therefore to tackle not just fast rendering but to integrate the renderer further up the pipeline, so the traditional divides are gone, and so too are the vast load times and poor interactivity.

The actual renderer is a single path tracing solution, but after the new v1.5 to be launched at SIGGRAPH they may introduce irradiance caching. This might seem like an odd move, but the speed of irradiance caching is just too compelling for Assadian to ignore. He feels that for some jobs, especially still frames of vast complexity, many lighting TDs just want the fastest solution, and he is keen to provide whatever it takes to render vast scenes quickly. Irradiance caching had actually been mentioned a year ago when we wrote about the launch of the product, and at that time it was thought to be in beta. As a company they do not follow a normal release cycle with major versioning. One gathers that at this early stage, custom builds and close integration with their small but important user base do not require a normal major/minor release schedule. While the company has a range of these key OEM customers, there are few customers using the product in day to day production. The French studio Ellipsanimé (also known as Le Studio Ellipse, or Ellipse Programme) is one exception, using the software for episodic television production.

Clarisse screenshot.

The system is still extremely young, and it lacks some features such as caustics and deep color/deep compositing, but its multi-threaded rendering approach is fast. The company is keen to embrace open source, and is especially keen to embrace the new Alembic 1.5, as it finds the current Alembic file format not well suited to its multi-threaded approach and a drag on the iFX system. The current Alembic is supported, but Assadian expects big improvements with the new 1.5 release; he has already seen 20x improvements.

Similarly they are exploring OpenVDB for post-1.5 and seem certain to adopt it for volumetric work. The shader and material definitions do not currently support OSL, but Assadian described Open Shading Language as “very sexy” and again something they are very keen to explore later this year.

The company has attracted a lot of early attention for its outrageously fast rendering pipeline (load, interact, render). Already it has been working with companies such as ILM and Double Negative. The next six months seem certain to be critical: if some of these third party companies integrate the product then it could really shake up the industry; if not, its new approach may fail to gain traction and it may need to rethink its ‘wire cutting’ approach and work more like a traditional renderer. That second option is clearly not of much interest to the team, and to founder Sam Assadian in particular. The product is worth checking out at SIGGRAPH 2013 if you are attending; they will have a booth.


2.14 Lagoa

The last of the new renderers is Lagoa. But unlike the others, Lagoa is not GPU based: it is cloud computing only.

Rendered in Lagoa.

It uses a variety of approaches depending on the material, which the company calls Multi-optics. For example, there is a specific approach optimized for hair, and a different one for sub-surface scattering, a progressive non-point based solution that is again optimized for SSS.

SSS test rendered for fxguide by Thiago Costa.

It is a web-based renderer. For almost any other product, aiming to be a web tool would mean it is anything but a production renderer; most such products fall somewhere between a toy and a lightweight educational tool. What makes Lagoa stand out is that the actual renderer is technically cutting edge, with real R&D innovation feeding very high quality results.

The company aims to produce production quality rendering not only in a render-farm-free pipeline but on a desktop machine with no local rendering at all. With modern internet connections, Lagoa aims to be taken very seriously in the high end render market and, in so doing, to change the way people structure companies.

The Lagoa SSS is brute force (fully ray traced) and includes both single and multiple scattering. “Consequently, the method does not require any precomputation and works for anything ranging from thin volumetric slabs to an army of volumes with highly detailed surface structure. The only assumptions we make so far are a specific BRDF at the interface: glossy diffuse (‘Rough Volumetric’) or perfectly smooth (‘Smooth Volumetric’). Moreover, light sources inside a volume are not supported. We also take advantage of the path space analysis discussed below (which means that, in the end, we are not fully unbiased),” explains co-founder Arno Zinke.
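
As a rough picture of what brute force, volume-based SSS means in practice, here is a minimal Python sketch (our illustration under simple assumptions, not Lagoa’s implementation): a random walk through a homogeneous half-space that samples free-flight distances and isotropic scattering events until the path either escapes back through the surface or is terminated.

```python
import math
import random

def sample_free_flight(sigma_t):
    """Sample a propagation distance with pdf sigma_t * exp(-sigma_t * t)."""
    return -math.log(1.0 - random.random()) / sigma_t

def isotropic_direction():
    """Uniform direction on the unit sphere (a stand-in phase function)."""
    z = 2.0 * random.random() - 1.0
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def random_walk_sss(sigma_s, sigma_a, max_bounces=256):
    """Walk one path through a medium filling z >= 0 and return its throughput
    when it escapes back out (z < 0), or 0.0 if it is terminated."""
    sigma_t = sigma_s + sigma_a
    albedo = sigma_s / sigma_t
    pos = (0.0, 0.0, 0.0)      # enter the medium at the surface
    dirn = (0.0, 0.0, 1.0)     # heading straight in; +z is deeper
    throughput = 1.0
    for _ in range(max_bounces):
        t = sample_free_flight(sigma_t)
        pos = tuple(p + t * d for p, d in zip(pos, dirn))
        if pos[2] < 0.0:       # the walk crossed back out through the surface
            return throughput
        throughput *= albedo   # energy lost to absorption at each scatter event
        dirn = isotropic_direction()
    return 0.0

# Averaging many walks estimates how much light re-emerges from the surface.
walks = [random_walk_sss(sigma_s=1.0, sigma_a=0.05) for _ in range(20000)]
print(sum(walks) / len(walks))
```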

The ray tracer is uni-directional, but the company does not like working inside labels such as ‘biased’ or ‘unbiased’. “Generally, I think the biased vs. unbiased battle is over – consistency is the key,” says Zinke. “We are currently exploring the use of other methods (including fully bidirectional path tracing and a novel progressive consistent method) but the current implementation is uni-directional. On top there is a path space analysis to reduce ‘fireflies’. The method goes beyond standard approaches, like clamping or BRDF smoothing, and is less aggressive (more selective) when dealing with hard-to-sample paths.”
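
The ‘standard approaches’ Zinke refers to are simple to picture. The crudest is clamping: capping what any single path is allowed to contribute, which removes fireflies at the cost of a small systematic energy loss on exactly those hard-to-sample paths. A minimal sketch (illustrative only, not Lagoa’s path space analysis):

```python
def clamp_sample(radiance, max_value=10.0):
    """Crude firefly suppression: cap the contribution of any single path.
    The bias shows up as missing energy on rare, very bright paths."""
    return min(radiance, max_value)

# One outlier path among ordinary ones no longer dominates the pixel estimate.
samples = [0.2, 0.3, 250.0, 0.25]
pixel = sum(clamp_sample(s) for s in samples) / len(samples)
print(pixel)  # 2.6875 instead of 62.6875
```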

Lagoa hair render.

There is extensive use of importance sampling on materials, light sources and IBL. “We use (multiple) importance (re)sampling for lights (also in the spectral domain, when having for example spectrally varying scattering coefficients in the case of SSS), the image reconstruction filter, phase functions and all other materials,” adds Zinke.
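
Multiple importance sampling is the standard machinery behind a quote like that: when the same light path could have been produced by two different sampling strategies, say light sampling and BSDF sampling, each sample is weighted so the combination stays correct while the noisier strategy is automatically down-weighted. A generic sketch of Veach’s balance heuristic (textbook form, not Lagoa-specific code):

```python
def balance_heuristic(pdf_a, pdf_b):
    """MIS weight for a sample drawn from strategy A when strategy B could
    also have generated it: w_A = p_A / (p_A + p_B)."""
    return pdf_a / (pdf_a + pdf_b)

# A direct-lighting sample taken by light sampling, where BSDF sampling was
# the alternative strategy for reaching the same light.
pdf_light, pdf_bsdf = 0.8, 0.1
unweighted = 1.0  # the sample's radiance estimate, already divided by pdf_light
print(balance_heuristic(pdf_light, pdf_bsdf) * unweighted)
```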

The system is expanding with more advanced shaders for plastics and other materials as part of an update to be released at SIGGRAPH. Included in this will be a new texture editing pipeline, and light projectors are also being added.

One of the great additional services the company offers is to have the exact materials (a 5×5 patch) actually scanned and a real BRDF is then used in the renderer. “Besides classical BSDF and volumes we also support the direct rendering of particle scattering functions, BCSDFs (Bidirectional Curve Scattering Distribution Functions) and BTFs (Bidirectional Texture Functions),” says Zinke.

The scanning service means “we have a BRDF per pixel,” says Thiago Costa, co-founder. This concept seemed odd – why is it not a BSDF of the material? We asked Arno Zinke to explain: “So when talking about a BRDF per pixel Thiago was referring to BTF, which can be seen as a ‘texture of BRDFs’. A BTF can be measured (the standard case) and simulated. Contrary to conventional spatially varying BRDFs a BTF may also include parallax, local shadowing and local SSS effects.”
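
A ‘texture of BRDFs’ can be pictured as a table indexed first by position on the surface and then by the incoming and outgoing directions, so every texel carries its own measured reflectance data rather than a single colour. A toy Python sketch of the indexing (our illustration, with a made-up tabulated layout):

```python
def lookup_btf(btf, u, v, wi_bin, wo_bin):
    """Toy BTF lookup: btf[row][col] holds a small tabulated 'BRDF' for that
    texel, indexed by discretized incoming/outgoing direction bins. Because
    each texel is measured separately, effects like parallax, local shadowing
    and local SSS are baked into the data itself."""
    height, width = len(btf), len(btf[0])
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return btf[row][col][wi_bin][wo_bin]

# A 1x1 'texture' whose single texel stores a 2x2 direction table.
tiny_btf = [[[[0.8, 0.2],
              [0.2, 0.6]]]]
print(lookup_btf(tiny_btf, 0.5, 0.5, wi_bin=0, wo_bin=1))  # 0.2
```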

The company’s online presentation may be deceptive: this is not a toy renderer. While the product is streamlined for ease of use, it is also able to produce very complex imagery. “As for polycount, we have tested scenes up to a few hundred million (non-instanced) polygons without any problems so far,” says Zinke. “This said, our scene layout has been optimized for interactivity/dynamic scene updates, not memory efficiency. We have no magic bullet for sampling complicated light paths.”

The aim is to provide very complex tools as part of a radical rethink about the very nature of third party renderers, which includes a very different pricing model.

The company is still young and right now does not support many open source initiatives: there is no support for deep data, OSL, Alembic, OpenVDB or Cortex, but it is currently working to support OpenSubdiv from Pixar. Interestingly, while the company does not support OpenVDB, its SSS could allow that in the future if the product moved in that direction. According to Zinke: “I have been following Ken Museth’s research for many years and find OpenVDB very interesting. However, since our focus is on design, supporting volumetric effects like smoke or fog does not have the highest priority. As our current SSS implementation is essentially based on volume rendering, an extension would be relatively straightforward though.”

The company has its own compressed geometry format, which some clients use if they want to compress data before uploading to the cloud/farm/Lagoa environment. It normally compresses data by a factor of 10, depending on the geometry. Everything that gets loaded into Lagoa is converted into this format anyway, so for large projects it makes sense to compress before upload. Whether SolidWorks files or any of the 20 different file formats the product supports, data can be converted before or after upload, but everything that is rendered uses this internal format. “We use a proprietary format for compression of meshes and similar data that uses several high and low-level techniques for drastically reducing memory footprint. All incoming meshes get transcoded into this internal format,” says Zinke.

The company is also working with other companies to allow them to OEM the Lagoa renderer into a third party application or mobile platform.


3. Future Directions

3.1 Metropolis and Manifold
3.2 A Whole New Approach

3.1 Metropolis and Manifold

3.1.1 Metropolis Light Transport

The central performance bottleneck in path tracing is the complex geometrical calculation of casting a ray. Importance sampling, as mentioned above, allows fewer rays through the scene while still converging correctly to the outgoing luminance at the surface point. Metropolis light transport (MLT) is another approach, laid out in Eric Veach’s 1997 Ph.D thesis. In their 1997 SIGGRAPH paper, Veach and Leonidas J. Guibas described an application of a variant of the Monte Carlo method called the Metropolis-Hastings algorithm.

Wikipedia has a great definition: The procedure constructs paths from the eye to a light source using bidirectional path tracing, then constructs slight modifications to the path. Some careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. In short, the algorithm generates a path and stores the path’s ‘nodes’ in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new ‘nodes’ to add and whether or not these new nodes will actually create a new path.
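
Stripped of the hard parts, the Metropolis step itself is tiny. The difficulty lies in proposing valid path mutations and evaluating a path’s brightness, which are only hinted at here. A heavily simplified Python sketch of the accept/reject loop, with mutate and brightness as hypothetical stand-ins:

```python
import random

def metropolis_paths(initial_path, mutate, brightness, iterations=20000):
    """Explore path space with Metropolis-Hastings: keep a current light path,
    propose a mutated path, and accept it with probability proportional to the
    ratio of path brightnesses (assuming a symmetric mutation for simplicity).
    Bright paths are revisited often, so their neighbourhoods get explored
    thoroughly with the same number of simulated paths."""
    current = initial_path
    current_f = brightness(current)
    samples = []
    for _ in range(iterations):
        proposal = mutate(current)
        proposal_f = brightness(proposal)
        accept = 1.0 if current_f == 0.0 else min(1.0, proposal_f / current_f)
        if random.random() < accept:
            current, current_f = proposal, proposal_f
        samples.append(current)
    return samples

# Toy example: 'paths' are numbers, brightness peaks at 3.0, and the mutation
# nudges the current value; the chain concentrates its samples around the peak.
chain = metropolis_paths(
    initial_path=2.5,
    mutate=lambda p: p + random.uniform(-0.5, 0.5),
    brightness=lambda p: max(0.0, 1.0 - abs(p - 3.0)),
)
print(sum(chain) / len(chain))  # close to 3.0
```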

Arion 2 comparison.

The result of MLT can be an even lower-noise image with fewer samples. The algorithm was created to get faster convergence in scenes in which the light must pass through odd corridors or small holes in order to reach the part of the scene the camera is viewing. The classic example is a bright light outside a door coming through a small slot or keyhole. In path tracing this would be hard to solve efficiently, but the bi-directional MLT approach of nodally mapping the rays solves it well.

It has also shown promise in correctly rendering pathological situations that defeat other renderers such as rendering accurate caustics. Instead of generating random paths, new sampling paths are created as slight mutations of existing ones. In this sense, the algorithm “remembers” the successful paths from light sources to the camera.

MLT is an interesting topic. We polled almost all the companies mentioned in this story about MLT, and the reactions ranged from ‘great but too complex/slow’, to senior researchers who wanted to implement it, to a rare few like Maxwell who already have a partial (hybrid) implementation. It is by no means seen as the natural direction to go in by all, but one gets the impression that if a workable solution could be found, everyone would at least consider exploring it.

One company that does support MLT is RandomControl’s Arion 2.

Arion is a hybrid-accelerated, physically-based production render engine. It takes a parallel GPU+CPU approach: Arion uses all the GPUs and all the CPUs in the system simultaneously. Additionally, Arion can use all the GPUs and CPUs in all the other computers on the network, forming a cluster for massive distribution of animation frames.

Arion 2 can handle most of the rendering effects which are considered standard these days such as displacements, instancing, motion blur, and more. And beyond the current feature list of Arion, it is a Metropolis Light Transport renderer that can run on the GPU.

This is not an MLT lookalike or “simili-MLT amputated from its glory to run on a GPU, it’s the true, fully-featured Metropolis algorithm,” claims the company.

MLT prism render.

Although MLT can be used for any kind of render, it has great use in optical and lighting simulations. The issue, however, remains render time per frame. While very accurate, the render above took a few hours on two GeForce GTX 580s. Note that the blue beam seen in the total internal reflection of the leftmost prism, for example, is a very hard case for most renderers.

As mentioned above there are very few Metropolis ray tracing solutions. Arion 2 is one, Maxwell is a hybrid, but why are there not more?

We asked Marcos Fajardo, the creator of Arnold, who had already told us that Veach’s original PhD is so pivotal he re-reads it every couple of years. So why isn’t Arnold using MLT? Fajardo points out that the theory was based on work done in the 1950s and 60s. “The reason why it is so difficult to implement in a renderer is that the theory itself is really complicated and it also changes the aspect of the noise you get in images. It works best if the renderer already has a bi-directional path tracing algorithm. In the case of Arnold it is a uni-directional path tracer.”

Arnold fires rays from the camera into the scene, not from both the camera and the lights. “Bi-directional path tracing is really tricky to get right and to make it work well in a production environment; for example, programmable shaders don’t work well with MLT.” Shaders in MLT need to preserve the Principle of Equivalence: the shader has to give the same answer whether it is evaluated from one direction or the other, from the light or from the camera.

Maxwell has some MLT but it does not support large programmable production shaders, and so Fajardo feels it will be some time before production renderers could even possibly go this way. Most production shaders do not respect the Principle of Equivalence.
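
The constraint Fajardo is describing is easy to state in code: a physically valid shader must return the same value when the two directions are swapped, something many hand-written production shaders (with their view-dependent tricks) do not guarantee. A tiny illustrative check, with hypothetical example shaders:

```python
def is_reciprocal(bsdf, wi, wo, tolerance=1e-6):
    """A bidirectional or Metropolis sampler may evaluate a shader from either
    end of a path, so f(wi -> wo) must equal f(wo -> wi)."""
    return abs(bsdf(wi, wo) - bsdf(wo, wi)) <= tolerance

# A Lambertian term is trivially reciprocal; an ad-hoc shader that fades with
# the 'view' direction only is not, and breaks light-to-eye evaluation.
lambert = lambda wi, wo: 1.0 / 3.14159265
view_hack = lambda wi, wo: 0.5 * max(0.0, wo[2])

print(is_reciprocal(lambert, (0.0, 0.0, 1.0), (0.6, 0.0, 0.8)))    # True
print(is_reciprocal(view_hack, (0.0, 0.0, 1.0), (0.6, 0.0, 0.8)))  # False
```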

The team at Lagoa have explored MLT. “We considered using MLT. I agree that, in its pure form, the unavoidable impact on rendering speed and the bad stratification of samples etc. are serious shortcomings. However, for certain light paths Metropolis sampling (or similar) is the only way to go. As said, we are actively working on several methods for improving speed and quality. For us the perfect method has to be practical: interactive/progressive, must scale, has to allow for dynamic scene updates and has to deliver consistent results,” explained Lagoa co-founder Arno Zinke.

3.1.2 Manifold Exploration

Beyond MLT is an even more accurate model, and one that requires no special geometry. It is Manifold Exploration Path Tracing (MEPT) and it could be a major advance in super accurate rendering. “Veach’s work (referring to the 1997 thesis) was the last major contribution to path tracing until last year and the Manifold Exploration,” explained Juan Cañada, the Head of Maxwell Render Technology.

It is a long-standing problem in unbiased Monte Carlo methods for rendering that certain difficult types of light transport paths, particularly those involving viewing and illumination along paths containing specular or glossy surfaces such as rough glass, some metals or plastics, cause unusably slow convergence. In their 2012 SIGGRAPH paper on Manifold Exploration, Wenzel Jakob and Steve Marschner from Cornell University proposed a new way of handling specular paths in rendering. It is based on the idea that the sets of paths contributing to the image naturally form ‘manifolds’ in path space, which can be explored locally by a simple equation-solving iteration. The resulting rendering algorithms handle specular, near-specular, glossy and diffuse surface interactions, as well as isotropic or highly anisotropic volume scattering, all using the same fundamental algorithm. They demonstrated their implementation on a range of challenging scenes, using only geometric information that is already generally available in ray tracing renderers.
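
In broad strokes (our paraphrase, with notation simplified from the paper), the ‘equation-solving iteration’ works on a constraint that says each specular vertex must obey the law of reflection or refraction, written with a generalized half-vector:

```latex
% At a specular vertex x_i, with incident direction \omega_i, outgoing
% direction \omega_o and relative index of refraction \eta (\eta = 1 for a
% mirror), the generalized half-vector must align with the surface normal:
h_i = \frac{\omega_i + \eta\,\omega_o}{\lVert \omega_i + \eta\,\omega_o \rVert},
\qquad
c_i(\bar{x}) = T(x_i)^{\mathsf{T}}\, h_i = 0
```

Here T(x_i) holds the two tangent vectors at x_i, so c_i vanishes exactly when h_i is parallel to the normal; Newton-style steps that keep the constraints at zero then walk locally along the manifold of valid specular paths, needing nothing beyond the geometric information a ray tracer already has.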

Certain classes of light paths have traditionally been a source of difficulty in conducting Monte Carlo simulations of light transport. A well-known example is specular-diffuse-specular paths, such as a tabletop seen through a drinking glass sitting on it, a bottle containing shampoo or other translucent liquid, or a shop window viewed and illuminated from outside. Even in scenes where these paths do not cause dramatic lighting effects, their presence can lead to unusably slow convergence in renderers that attempt to account for all transport paths. (SIGGRAPH 2012)

To understand its approach it is good to summarize this article and the advances it has hopefully highlighted:

Simulating light transport has been a major effort in computer graphics for over 25 years, beginning with the introduction of Monte Carlo methods for ray tracing (Cook et al., Pixar, 1984), followed by Kajiya’s formulation of global illumination in terms of the Rendering Equation (Kajiya 1986), which established the field of Monte Carlo global illumination as stated above.
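
For reference, the Rendering Equation that all of these methods estimate can be written (in one common form) as:

```latex
L_o(x, \omega_o) \;=\; L_e(x, \omega_o)
  \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\,
        L_i(x, \omega_i)\,(\omega_i \cdot n)\,\mathrm{d}\omega_i
```

The outgoing radiance at a point x in direction ω_o is the emitted radiance plus all incoming radiance reflected by the surface’s BSDF f_r over the hemisphere Ω around the normal n; path tracing and its descendants are Monte Carlo estimators of this integral.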

Unbiased sampling methods, in which each pixel in the image is a random variable with an expected value exactly equal to the solution of the Rendering Equation, started with Kajiya’s original path tracing method and continued with bidirectional path tracing, in which light transport paths can be constructed partly from the light and partly from the eye, and the seminal Metropolis Light Transport (Veach 1997) algorithm. This is a paper so important that people still regularly return to it and refer to it today. “He provided a very robust mathematical framework that explained the algorithms (bi-directional path tracing and MLT) and explained them very well, but it was theoretical – when you implement your own path tracers you quickly find the evil is indeed in the detail!” comments Cañada.

Various two-pass methods use a particle-tracing pass that sends energy out from light sources in the form of “photons” that are traced through the scene and stored in a spatial data structure. The second pass then renders the image using ray tracing, making use of the stored particles to estimate illumination by density estimation. This two-pass approach is great on noise, but it requires a pre-processing pass; while there is a move away from point based solutions, it remains a valid and widely used option.
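
The density estimation step can be summarized in one textbook formula (generic photon mapping, not any particular renderer): reflected radiance at a shading point is approximated from the k nearest stored photons gathered within a small disc of radius r,

```latex
L_r(x, \omega_o) \;\approx\; \sum_{p=1}^{k}
  f_r(x, \omega_p, \omega_o)\,\frac{\Delta\Phi_p}{\pi r^2}
```

where ΔΦ_p is the flux carried by photon p and πr² is the area of the gathering disc; the smoothness assumption discussed below enters precisely through that radius.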

Photon mapping and other two-pass methods are characterized by storing an approximate representation of some part of the illumination in the scene, which requires assumptions about the smoothness of illumination distributions. “On one hand, this enables rendering of some modes of transport that are difficult for unbiased methods, since the exact paths by which light travels do not need to be found; separate paths from the eye and light that end at nearby points suffice under assumptions of smoothness. However, this smoothness assumption inherently leads to smoothing errors in images: the results are biased, in the Monte Carlo sense.” (Jakob, SIGGRAPH 2012)

Glossy-to-glossy transport, without a sufficiently diffuse surface on which to store photons, is challenging to handle with photon maps, since large numbers of photons must be collected to adequately sample position-direction space. Some photon mapping variants avoid this by treating glossy materials as specular, but this means that the resulting method increasingly resembles path tracing as the number of rough surfaces in the input scene grows.

Manifold exploration is a technique for integrating the contributions of sets of specular or near-specular illumination paths to the rendered image of a scene. The general approach applies to surfaces and volumes, and to ideal and non-ideal (glossy) specular surfaces.

As you can see in the example above, the results are even more accurate and have less noise than MLT. The subtle caustic refractions are captured by MEPT and lost in the MLT render.

3.2 A whole new approach

Wojciech Jarosz is a Research Scientist at Disney Research Zürich heading the rendering group, and an adjunct lecturer at ETH Zürich. The Perils of Evolutionary Rendering Research: Beyond the Point Sample, the keynote by Jarosz at EGSR 2013, argued that the way “we approach many difficult problems in rendering today is fundamentally flawed.” Jarosz put forward the case that “we typically start with an existing, proven solution to a problem (e.g., global illumination on surfaces), and try to extend the solution to handle more complex scenarios (e.g., participating media rendering).”

render teapot
Image from a paper by Derek Nowrouzezahrai, Jared Johnson, Andrew Selle, Dylan Lacewell, Michael Kaschalk, Wojciech Jarosz. (see full credit in footnote below)

While he feels that this “evolutionary approach is often very intuitive,” it “can lead to algorithms that are significantly limited by their evolutionary legacy. To make major progress, we may have to rethink (and perhaps even reverse) this evolutionary approach.” He claimed that “a revolutionary strategy, one that starts with the more difficult, more general, and higher-dimensional problem – though initially more daunting – can lead to significantly better solutions. These case studies all reveal that moving beyond the ubiquitous point sample may be necessary for major progress.”

A good example of this is the point based solutions discussed in this article. Jarosz points out that by taking the original idea of point samples as a basis for research, there has been a progression of improvement, but always based on points. While there is a rival approach in full ray tracing, recently someone stopped and wondered: why points? Why not beams? Since then there has been a series of complete rethinks of many of the point methods, such as volumetric illumination, SSS and others, starting from a whole new place that assumes beams rather than points, in effect replacing the ‘point cloud’ concept with ‘beam clouds’.

Jarosz’s central point is not simply to promote beam approaches; he uses them as an example of coming at a problem from an entirely new direction. In his EGSR talk last month he offered several examples, from motion blur rendering to SSS, of how a complete revolution in approach is often advantageous over iterative evolution. In other words, to get somewhere new, don’t start where we are now; start from a new jumping off point. Certainly the real world examples benefited from this ‘new approach’ thinking, and we will have a more in-depth fxguide article on his talk published here soon.

Jarosz does note, however, that he thinks researchers should rely on the evolutionary approach, but should have revolutions every once in a while to re-examine whether “we are doing things the right way.” But, he says, it’s incredibly hard to simply make a revolutionary step without relying on the hard evolutionary steps taken by others.

One person who heard Wojciech Jarosz’s keynote talk first hand at EGSR was Marcos Fajardo from Solid Angle. “It was a really inspired talk, his talk was amazing. What he is trying to say is we can get stuck in seeing things a certain way and maybe we should sometimes try and take a broader view of things, and then we can see more generalization of techniques. His main point is very valid, but his examples are quite specific to the work he has done at Disney,” referring to the beam based vs point based approaches.

Photon points vs photon beams – an example of a different approach, and not just a refinement to the current approach.

Fajardo also pointed out that he could not (yet) see any ways that Solid Angle could immediately rethink any of their approaches but he commented that, “I like the way he’s thinking – he is forward looking.” Jarosz says the methods he discussed in his talk have been incorporated into Pixar’s PRMan and used by WDAS in the production of feature films. “Specifically, the photon beams method I discussed was added to PRMan 17 last year, and the new SSS method is now in PRMan 18. So, though the talk tries to be forward-looking, we are also definitely concerned with the immediate applicability of our work to improve production rendering.”

Finally, Jarosz adds that he thinks Fajardo’s paper from 2012 with Christopher Kulla “is one of my favorite papers on volume rendering in recent memory. It really shows clearly how to make several challenging aspects of production volume media rendering more practical. Though it was not pitched/presented as such, I do think their paper also incorporates some aspects of this “revolutionary” way of thinking and eliminating the traditional view on point sampling in media.”

Footnote: Lighthouse image from A Comprehensive Theory of Volumetric Radiance Estimation Using Photon Points and Beams, Wojciech Jarosz, Derek Nowrouzezahrai, Iman Sadeghi, Henrik Wann Jensen.

The teapot in a Cornell box image above is from the following publication:
A Programmable System for Artistic Volumetric Lighting, Derek Nowrouzezahrai, Jared Johnson, Andrew Selle, Dylan Lacewell, Michael Kaschalk, Wojciech Jarosz. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2011), August 2011.


Cover image rendered in Maxwell Render. Image courtesy of Hervé Steff – Meindbender.

Special thanks to Ian Failes.

24 thoughts on “The state of rendering – part 2”

  1. You might want to change RPS 19 to RPS 18 since that’s the latest release. Unless they’ve already started the 19 beta, which would be a crazily fast product cycle!

  2. Fantastic write up! It’s a shame not to see Redshift in this list, which is an up-and-coming GPU-based renderer for Autodesk’s flagship DCCs. Although it’s in closed alpha at the moment, a few of us are lucky to be using it on some productions already, and I fully expect it will be in this list next year.

    Great articles!

  3. Unfortunately, you missed one big story in rendering: the development of the Blender rendering engine, Blender Cycles, and also the OSL implementation within Blender.

      1. Maybe Mike, when you have time, you can do a short supplemental article on some of the research oriented and open source renderers (such as PBRT, Mitsuba, POV-Ray, and of course the Blender renderers and LuxRender, and many others) and how they influence / interact with production renderers.

        It will be interesting to see what Imagination Tech’s plan for Caustic is. Their approach to ray tracing acceleration reminds me of those dedicated Bitcoin mining chips.

        1. Gavin Greenwalt

          Putting on my Caustic demo artist hat for a second. The CPU implementation is really fast too! 😉

  4. Thanks – this is a really cool article on the state of rendering! I would comment on a few points, however.

    “The new Disney method is even more advanced in that it moves from a point approach to a beam approach” – this is simply not true. One key contribution of Quantized Diffusion in 2011 was that it was the first method in graphics to analytically handle the diffusion-beam (which we called ‘extended-source’). This was probably the most important factor about the QD method because it was switching away from the dipole point source to the beam source that brought back some missing high-frequency detail that previous methods lacked. Quantizing the reflected energy into different temporal buckets was one way to evaluate the photon beam. The MIS/equi-angular approach of Disney’s new method is simply a different approximate quadrature for the beam integral. It is disappointing that Jarosz has been permitted to put his ‘photon beam’ stamp on this BSSRDF, when the beam was one of QDs key contributions in 2011.

    Also, I do agree with Jarosz that thinking outside the box and trying not to get stuck is an interesting goal to keep in mind when thinking about how to move rendering forward. However, as I noted at the end of my QD talk at SIGGRAPH 2011, we still have much to learn about rendering by looking into OTHER fields that use the same equations, especially neutron transport (used to design reactors). If you take the time to dig through this old literature you’ll find that Section 3.2 “A whole new approach” is actually a really old approach!

    In the 1960s Jerome Spanier and others discovered the significant benefits of switching from point sampling to line sampling for computing multiple scattering. This work goes further than that seen in graphics, in fact. Spanier presents two families of methods along these lines and also considers the regime between point and line samples. A beautiful interpretation of photon beams (and a nice way to prove they converge to the right answer) is to consider them as the limit of adding more and more fictitious particles inside the medium (much like Woodcock tracking for sampling heterogeneous volumes). These fictitious particles always scatter the light/neutrons forward in the same direction they were going and do not absorb energy, but the existence of these ‘pseudo’ sampling events allows more next-event estimations along the path until it finally samples a real scattering particle and changes direction. In the limit that the ratio of real particles to fake particles goes to zero, you arrive at the continuous beam integral. These are called ‘track-length estimators’ in neutron transport. This community also discovered the benefits of equi-angular sampling for evaluating these beam integrals (see Rief et al. 1984 – “Track Length Estimation applied to point detectors”, Nuclear Science and Engineering). This corresponds exactly to the sampling method of Kulla and Fajardo 2012, which removes the 1/r^2 singularity with a change of variables.

    So I still maintain that for at least a few more years rendering could likely benefit more from taking time to evaluate known methods in highly-related fields and (like Grosjean’s 1954 modified diffusion) – look for really old methods first.

  5. Eugene d’Eon, the error is perhaps mine. The point of fxguide is to be a bridge between, say, a peer reviewed SIGGRAPH paper, which is precise, and a more accessible popular piece. We see our job as making difficult concepts accessible.

    You are correct that QD in 2011 was the first method in graphics to analytically handle the diffusion-beam. The simplification in the story should be put at my feet, not Jarosz’s. As I say in the story, this was a short summary of his talk rather than the full article we will soon be posting.

    Leaving aside the QD analytical approach, the principle of moving from thinking in terms of points to beams illustrated the primary point, which was ‘a different approach – rather than a refinement or evolutionary approach’… and that is what Jarosz was suggesting in his European talk. To be clear, it was offered as an example rather than as Jarosz’s main point.

    Your comments here are clear, and both techniques are things we would like to explore more, and we’d love to explore this issue further with you.

    My apologies to Jarosz if my efforts to summarize his primary point did not do either of you justice. But I am sure you would agree it is a complex topic, and I take it as a great compliment that you read the story and took the time to write such a well thought out response.

    Mike Seymour
    fxguide

  6. Great and very thorough article Mike! Shame not to see KeyShot in the list as well. Best integration for 3D formats and lightning fast.

  7. Yes, I am aware of KeyShot – but it just never appears in our vfx / animation space here at fxguide, other than in product renderings and product animations. In talking to people about projects, I can’t recall anyone ever using it in production. Not to say someone hasn’t, but we had to draw the line somewhere, and KeyShot seems like a really great product visualisation tool, not a production renderer, nor does it look like it is aiming at this market… but as I say the product is good and I could be wrong.
