Beam rendering extended at SIGGRAPH 2014

One of the most interesting technical papers sessions at SIGGRAPH 2014 is the Light Transport session (Wednesday, August 13, 10:45 to 12:15 in the Vancouver Convention Centre, East Building, Exhibit Hall A). Among the papers being presented that morning is a new paper titled Unifying Points, Beams, and Paths in Volumetric Light Transport Simulation. It proposes a new rendering algorithm that combines the strengths of point- and beam-based volumetric radiance estimators with the versatility of bidirectional path tracing. The algorithm aims to excel at rendering scenes containing different kinds of media, where previously specialized techniques were required and converged more slowly to an acceptable image.

Below is an image from the paper that shows, for the same amount of render time, how the new method stacks up against bidirectional path tracing, volume photon mapping, beam radiance estimation and, finally, straight photon beams. The first image is the final shot; the test images isolate the volumes, and thus the effects of the new algorithm.

Equal-time comparison of the researchers’ combined algorithm against previous work on a scene with a thin global medium and dense media enclosed in the spheres. The bottom images are close-ups for comparison.

SIGGRAPH papers in general, and this one in particular

Alan Kay, then a Fellow at Apple Computer Inc., once stated that if you want to see the future, look in the labs today. New products and programs don’t just magically appear, he explained; they start in a lab somewhere. The SIGGRAPH technical papers program is just such a ‘lab’ in the metaphorical sense, or at least one could argue it is the place where new research in graphics is most often seen for the first time. If one were to generalize the lifecycle of a great new technical innovation in computer graphics, it would run something like this:

  • A new technical paper is written about some research, submitted, peer-reviewed and presented at SIGGRAPH,
  • which becomes part of a Course two years later,
  • then is featured in a SIGGRAPH Talk as it was used in a new feature film or game,
  • it appears in a book (perhaps based on the Course),
  • and then it becomes part of a product we all can download and use…

Picking which papers to pay attention to is extremely hard. One great tool is the SIGGRAPH Papers Fast Forward, but by the time you are sitting with 1400 others in Vancouver, background reading on a topic is hard to come by, and every attendee suffers from information overload, whether from the Fast Forward or just from scanning the summary program hanging from their lanyard.

Here then is one paper that we thought was pretty interesting, and the background on it…

The paper, Unifying Points, Beams, and Paths in Volumetric Light Transport Simulation, is authored by Jaroslav Křivánek (Charles University in Prague), Iliyan Georgiev (Light Transportation Ltd.), Toshiya Hachisuka (Aarhus University), Petr Vévoda (Charles University in Prague), Martin Šik (Charles University in Prague), Derek Nowrouzezahrai (Université de Montréal), and Wojciech Jarosz (Disney Research Zürich).

If you are a SIGGRAPH regular, then you know the author list is always a prime place to start when gauging a paper’s significance. A university-authored paper combined with key researchers from places such as Disney’s excellent research group in Zurich is a good sign. So is Iliyan Georgiev who, while listed as being from Light Transportation Ltd, is also known to many as a key Solid Angle researcher. In fact, Marcos Fajardo, the founder of Solid Angle, described Georgiev as currently “doing some kick-ass work on motion blur and other practical sampling problems for Arnold 4.2.” Below we talk one on one with Georgiev.

In summary:

Charles University in Prague + Disney Research Zurich + Solid Angle “kick-ass” research = fxguide’s attention

There is one other huge aspect that flags this paper as of interest, and that is the growth and maturity of beam estimation approaches. fxguide flagged this in our story on rendering (State of Rendering, Part 2) last year. Key to that research has been Wojciech Jarosz, a Senior Research Scientist at Disney Research Zurich heading the rendering group and an adjunct lecturer at ETH Zurich (though he is about to move to a faculty position as Assistant Professor of Computer Science at Dartmouth College).

Jarosz has published widely and spoken internationally on a large number of complex technical rendering advancements. He delivered the keynote at EGSR 2013, The Perils of Evolutionary Rendering Research: Beyond the Point Sample, in which he argued that the way “we approach many difficult problems in rendering today is fundamentally flawed.” Jarosz put forward the case that “we typically start with an existing, proven solution to a problem (e.g., global illumination on surfaces), and try to extend the solution to handle more complex scenarios (e.g., participating media rendering).”

Not only is Jarosz in the Disney-Pixar family, but fellow researcher Martin Šik (Charles University in Prague) is currently working on beam rendering “in our Seattle development team right now,” according to Pixar’s Chris Ford. In fact, this highlights the new combined research efforts that Pixar and Disney have recently put in place, as we reported in our RMS19 Pixar story last month.

Pixar adopted beam rendering for volumetrics in RenderMan (RMS 17), added more SSS in RMS 18, and for the upcoming RenderMan RMS 19 “we have rewritten volume integration for RIS, and have deployed beams and points in our new VCM implementation,” says Ford. RenderMan has been able to render interesting volumes for many years using a variety of techniques, such as photon point maps and brick maps, but they “are currently experimenting with joint-importance-sampling and broader frameworks for volume integration. Currently the value of multi-scatter volumes does not yet appear worth the cost. Most productions are currently using single or dual-scatter solutions, but this is obviously subject to change as computing budgets increase,” Ford notes.

Beams rather than points is a very different way of looking at the problem. Jarosz used this as an example of the general tendency in computer graphics to evolve existing techniques rather than re-examine a problem from a fresh perspective, something he called ‘conceptual blindness’. While he feels that this ‘evolutionary’ approach is often very intuitive, “it can lead to algorithms that are significantly limited by their evolutionary legacy. To make major progress, we may have to rethink (and perhaps even reverse) this evolutionary approach,” he claimed.

Perhaps the most important beam paper was co-authored in 2011 by Jarosz: A Comprehensive Theory of Volumetric Radiance Estimation using Photon Points and Beams. It was this paper that introduced the very concept of photon beams to computer graphics. In addition, the paper offered a solution that was more efficient, less noisy, and able to render sharp details like volume caustics using tens of thousands of photon beams instead of billions of photon points.

It should be no surprise that the R&D team at Pixar would consider and then add beam rendering to RenderMan: the key 2011 paper on the subject was co-authored by Henrik Wann Jensen, professor at the University of California, San Diego (UCSD) and head of the computer graphics lab there. Jarosz was a PhD student of Jensen’s, graduating in 2008, and Jensen has also consulted and worked for Pixar. Jensen’s PhD thesis, “Global Illumination using Photon Maps,” addressed the simulation of global illumination by introducing the concept of photon mapping in the first place. He literally wrote the book on photon mapping: “Realistic Image Synthesis using Photon Mapping,” published in 2001. And his pivotal SIGGRAPH 1998 paper with Pixar’s Per Christensen, Efficient simulation of light transport in scenes with participating media using photon maps, is heavily cited in research.

The key feature of Jensen’s research into photon mapping algorithms has been the use of photon tracing and the photon map. The photon map is decoupled from the scene geometry, and it can be used for models with millions of objects and complex materials. “Photon mapping is the first practical method capable of rendering caustics on complex non-Lambertian materials, volume caustics, true subsurface scattering, and motion blurred global illumination effects,” posted Jensen on his public Stanford.edu curriculum vitae.
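To make this concrete, here is a minimal Python sketch of the classic photon-map radiance estimate: photons are stored independently of the scene geometry, and outgoing radiance at a point is estimated from the flux of nearby photons. This is a textbook-style toy, not Jensen’s implementation; the Photon class and the brute-force search (a real renderer would use a kd-tree) are our own simplifications.

```python
import math

class Photon:
    """A stored photon: position, incoming direction, carried flux (hypothetical layout)."""
    def __init__(self, position, direction, flux):
        self.position = position    # (x, y, z)
        self.direction = direction  # unit vector of incidence
        self.flux = flux            # radiant flux carried by this photon

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def radiance_estimate(photons, x, radius, brdf):
    """Classic density estimate: sum the flux of photons within `radius` of
    the shading point x, weight each by the BRDF, and divide by the area of
    the gather disk (pi * r^2)."""
    r2 = radius * radius
    total = 0.0
    for p in photons:
        if dist2(p.position, x) <= r2:
            total += brdf(p.direction) * p.flux
    return total / (math.pi * r2)

# Usage: a diffuse (constant) BRDF with albedo 0.8.
photons = [Photon((0.1, 0.0, 0.0), (0.0, 0.0, -1.0), 0.01),
           Photon((0.0, 0.2, 0.0), (0.0, 0.0, -1.0), 0.02)]
print(radiance_estimate(photons, (0.0, 0.0, 0.0), 0.5, lambda wi: 0.8 / math.pi))
```

Note how nothing in the estimate refers to the scene’s triangles or shaders: that decoupling from geometry is exactly what lets the photon map scale to models with millions of objects.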

Beam rendering. Image from a paper by Derek Nowrouzezahrai, Jared Johnson, Andrew Selle, Dylan Lacewell, Michael Kaschalk and Wojciech Jarosz (see full credit in footnote below).

2011 beam paper

In a vacuum, photons of light travel unobstructed until they interact with a surface. In many respects this was the world of computer graphics for many years. But to add realism and to model such things as fog, liquids and even skin, it became essential to explore participating media, where photons interact with the surrounding medium. At any point, a photon traveling through a medium may be scattered or absorbed, altering its path and reducing its contribution in the original direction.
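For the curious, the standard way this is simulated in a homogeneous medium is to sample a random “free-flight” distance from the exponential distribution defined by the extinction coefficient, then probabilistically classify the event as a scatter or an absorption. A minimal sketch of that textbook Monte Carlo step (our illustration, not code from any of the papers discussed):

```python
import math, random

def sample_interaction(sigma_a, sigma_s):
    """Sample one photon-medium interaction in a homogeneous medium.
    sigma_a: absorption coefficient; sigma_s: scattering coefficient."""
    sigma_t = sigma_a + sigma_s                             # extinction coefficient
    distance = -math.log(1.0 - random.random()) / sigma_t   # exponential free-flight distance
    albedo = sigma_s / sigma_t                              # probability the event is a scatter
    event = "scatter" if random.random() < albedo else "absorb"
    return distance, event

print(sample_interaction(sigma_a=0.1, sigma_s=0.9))
```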

Participating media or volumes are responsible for some of the “most visually compelling effects we see in the world. The appearance of fire, water, smoke, clouds, rainbows, crepuscular “god” rays, and all organic materials is due to the way these media “participate” in light interactions by emitting, absorbing, or scattering photons,” stated Jarosz et al. in the 2011 paper. And they went on to add that “these phenomena are common in the real world but, unfortunately, are incredibly costly to simulate accurately.”

In the 2011 paper, one of the primary motivations was that current photon tracing methods for participating media are “limited by the data representation used to store light paths,” wrote Jarosz et al. “Current methods use a photon particle representation that only retains information about the scattering event locations, discarding all other potentially important information accumulated during photon tracing.” By retaining more information about the light paths during photon tracing, it is possible to obtain much improved rendering results.
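To make “retaining more information” concrete, the toy Python sketch below (our own illustration, not the paper’s implementation) traces a photon through a homogeneous medium and records both the scattering locations (photon points) and the full segments traversed between them (photon beams); energy bookkeeping and a proper phase function are omitted.

```python
import math, random

def trace_photon(origin, direction, sigma_t, bounces=4):
    """Trace one photon through a homogeneous medium of extinction sigma_t,
    storing BOTH the scattering locations (photon points) and the segments
    between them (photon beams). Illustrative only: isotropic scattering,
    no absorption test, no flux bookkeeping."""
    points, beams = [], []
    pos, dir_ = origin, direction
    for _ in range(bounces):
        d = -math.log(1.0 - random.random()) / sigma_t   # free-flight distance
        end = tuple(p + d * w for p, w in zip(pos, dir_))
        beams.append((pos, end))   # a beam retains the whole traversed segment
        points.append(end)         # a point retains only the scattering location
        pos = end
        # pick a new, uniformly random direction (isotropic scattering)
        z = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        s = math.sqrt(max(0.0, 1.0 - z * z))
        dir_ = (s * math.cos(phi), s * math.sin(phi), z)
    return points, beams

points, beams = trace_photon((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sigma_t=1.0)
```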

How do beams work?

The 2011 paper built on earlier work that Jarosz, Zwicker and Jensen had published at Eurographics in 2008.

From the 2008 Eurographics paper by Wojciech Jarosz, Matthias Zwicker and Henrik Wann Jensen.
From the 2011 paper. This image compares the photon beams technique proposed in 2011 with the previous “state of the art” method proposed in the 2008 paper (which still used photon points, but gathered them all at once along camera rays).

In a similar way that a photon map works on surfaces, it is possible to have a point based photon map in a volume. The choice of using points or beams applies to either the camera paths (the query) or the light paths (the stored photons).

“In the 2008 paper we introduced the ‘beam radiance estimate’. We still stored photon points, but showed that you can get dramatically better/faster results if you find all photon points along the length of the camera ray (the ‘beam query’) than with the traditional approach of multiple point estimates. In the 2011 paper we realized that this choice of using points or beams applies not only to the query, but also to how you store photons, and that by storing the segments between photon points (the photon beams) you can again dramatically improve quality. These concepts can in fact be combined (so you can e.g. have a beam query using beam data). In the figure above, we used beam queries in both images to isolate the impact of how photons are stored (points on the left, and beams on the right),” explained Jarosz.
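A loose sketch of the 2008 “beam query over photon points” idea, as we understand it: instead of many separate point lookups along the ray, gather every stored photon within a kernel radius of the camera ray in a single pass. Names, the data layout, and the omitted weighting (kernel profile, phase function, transmittance) are our simplifications, not the paper’s actual estimator.

```python
import math

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def beam_query(ray_origin, ray_dir, ray_length, photons, kernel_radius):
    """Gather every photon point lying within kernel_radius of the camera
    ray in one pass. photons: list of (position, flux) pairs; ray_dir must
    be unit length. Kernel, phase and transmittance weights omitted."""
    r2 = kernel_radius ** 2
    total = 0.0
    for position, flux in photons:
        # parameter of the closest point on the ray segment to this photon
        v = tuple(p - o for p, o in zip(position, ray_origin))
        t = max(0.0, min(ray_length, sum(vi * di for vi, di in zip(v, ray_dir))))
        closest = tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
        if dist2(position, closest) <= r2:
            total += flux
    # normalize by the kernel's cross-sectional area (a 2D disk around the ray)
    return total / (math.pi * r2)

# Usage: two photons near a unit-length camera ray along +z.
photons = [((0.05, 0.0, 0.3), 0.01), ((0.4, 0.0, 0.7), 0.02)]
print(beam_query((0, 0, 0), (0, 0, 1), 1.0, photons, 0.1))
```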

Beams work best for relatively thin, or more “transparent”, media such as fog, and less so for denser media like a wax candle. As the density of the volume grows, the average length of beams (and their benefit) decreases.

Rendering algorithms are not the only area Jarosz works in. His research spans three interconnected areas:

  • light transport simulation
  • intuitive authoring and editing of visual appearance
  • physical realization/fabrication of visual appearance

The first group is basically the work Jarosz does on new, more efficient rendering algorithms. However, as he is keen to point out, “having a fast rendering algorithm isn’t enough if authoring or editing the visual appearance is unintuitive or unpredictable (as can often be the case when modifying physical parameters). A couple years ago, for instance, we teamed up with our friends at Walt Disney Animation Studios to provide a better, artist-controllable way to create volumetric effects used for the movie Tangled.”

Finally, once one has an efficient model for simulating light transport (1st group) and a way to edit the visual appearance (2nd group), then “we can leverage optimization techniques and some means of fabrication to create physical objects that interact with light in novel and carefully prescribed ways,” Jarosz explains.

An example of this is a project Jarosz’s team published at SIGGRAPH Asia last year: an automatic system to fabricate subsurface scattering materials with precise control of both their color and translucency.

2014 SIGGRAPH paper

The new 2014 paper offers a combined solution. By way of comparison, the scene below is re-rendered with the new approach (volumes only) and with each of the other main approaches, including bidirectional path tracing and straight point and straight beam solutions. The second image below (the new unifying points, beams, and paths, or UPBP, approach) shows a less noisy image with better results for the same render time.

The fundamental approach is to build a system that uses points when points would be best, and beams when beams would be best. Of course, knowing when one or the other is best is not trivial.

The complete test image with a range of volumes from the SIGGRAPH 2014 paper.
A render of just the UPBP volumes (the new approach).
The earlier volumetric photon mapping approach.
The previous beam radiance estimate approach.
The previous photon beams only approach.
The normal bidirectional path tracing solution.

To find out more, we spoke to Iliyan Georgiev in London, where he is working at Solid Angle on Arnold. Along with all the contributors, Georgiev will be at SIGGRAPH to present the paper in Vancouver.

FXG: Could you tell me how the group got together on this – from Prague/Disney Research and yourself?

IG: I’ve been collaborating with Jaroslav on research projects since late 2011, and even though we’ve been physically separated most of the time, it’s been working really great. We both spent some time at Disney Research Zurich in the summer of 2012, where we worked with Wojciech Jarosz on the “Joint Importance Sampling…” paper (thanks for mentioning it in one of your recent podcasts by the way), with remote help from Toshiya Hachisuka and Derek Nowrouzezahrai. This team worked so well together that it was only logical to continue the collaboration (sometimes on 3-4 different continents 🙂 ), and the result is the UPBP paper. This project required a lot of non-trivial coding, and we got some great students of Jaroslav’s – Petr and Martin – to help with that.


FXG: The paper focuses primarily on transmissive volumes and the balancing of beams and points. Is it fair to say your work proposes a solution based on a metric or formula that says: for low variance, use beams in thin media and points in thick, high-variance media?

IG: Yes, one of the results of this paper is that in thin media beams work better, whereas in thick media points can be more efficient. Technically, this efficiency is related to the size of the blurring kernel: when the length of a beam (which is inversely proportional to the medium thickness) is shorter than the width of the kernel, then points are better. This is nicely demonstrated by the figure below (from the SIGGRAPH paper) and the summary we provided. The intuition is that in dense media, beams get very short and lose their efficiency. But I think it’s still correct to simplify it in the way you’re suggesting.


From the 2014 paper: a comparison of the results of different volume radiance estimators for different kernel widths w. For small kernels (left column), the beam-based estimators perform better, while for larger kernels (right column) the point-based estimators provide smoother results. For the intermediate kernel width (middle column), all the estimators perform roughly the same. The same number of photon points and beams was used to render these images. The brightness differences among the images are due to the different effects of boundary bias in the various estimators. The noise-level differences among the three Bs-Bs1D images, which are not predicted by the paper’s analysis, are due to the additional variance from random sampling of the input configuration.
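Georgiev’s intuition can be distilled into a toy rule of thumb: the expected beam length in a medium is its mean free path (1 / sigma_t in the sketch below), so when that length falls below the kernel width, points tend to win. This is our simplification only; the paper itself combines the estimators continuously via multiple importance sampling rather than hard-switching between them.

```python
def choose_estimator(sigma_t, kernel_width):
    """Toy heuristic distilled from the intuition above (not the paper's
    actual MIS weighting): the expected beam length in a medium is the
    mean free path 1/sigma_t; when beams are shorter than the blur
    kernel, point-based estimators tend to be more efficient."""
    mean_free_path = 1.0 / sigma_t
    return "beams" if mean_free_path > kernel_width else "points"

print(choose_estimator(sigma_t=0.1, kernel_width=0.5))   # thin medium -> beams
print(choose_estimator(sigma_t=50.0, kernel_width=0.5))  # dense medium -> points
```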


FXG: Could you explain the new extended multiple importance sampling that goes along with this? I believe it is what makes this work well in a bidirectional path tracing environment?

IG: The problem this extended multiple importance sampling (MIS) is trying to solve actually goes back to our vertex connection and merging (VCM) and Toshiya’s unified path space (UPS) papers. These papers aimed to combine bidirectional path tracing and photon mapping by using the original formulation of MIS by Eric Veach which does its magic by comparing the probability densities (PDFs) of the light transport paths sampled by these methods (i.e. by comparing how likely it is for each method to sample a given light transport path). However, light transport paths sampled by photon mapping have one more vertex (the photon) than the ones sampled by bidirectional path tracing. This makes their PDFs “incompatible”, in the sense that they cannot be directly compared (apples-to-oranges), and MIS would not yield a meaningful combination. So VCM and UPS reformulated one of the two methods to enable an apples-to-apples comparison of its PDFs with the other method’s. (VCM reformulated photon mapping, and UPS reformulated bidirectional path tracing.) To put it simply, they had to make sure that the two methods are “compatible” with each other before plugging them into Veach’s MIS.

In this latest UPBP paper we’re trying to combine even more light transport estimators, each of which is even less compatible. We realized that reformulating and enforcing all of these estimators to make their PDFs strictly compatible would be a difficult task. Instead, we extended MIS to be able to combine such incompatible estimators. Note that this formulation, like Veach’s original one, is not limited to bidirectional ray tracing or even to rendering, for that matter. It is simply a mathematical framework to combine different Monte Carlo estimators.

In short: we extended MIS to enable the combination of a larger class of light transport estimators than possible with the original formulation.
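For background, the classic balance heuristic that Veach’s MIS (and hence VCM, UPS and UPBP) builds on can be written in a few lines. The sketch below shows only the original formulation, where the techniques’ PDFs are directly comparable, not the paper’s extended version; the sample layout is hypothetical.

```python
def balance_heuristic_weight(pdfs, i):
    """Veach's balance heuristic: the weight for technique i on a sampled
    light path is its PDF divided by the sum of the PDFs of all combined
    techniques for that same path."""
    return pdfs[i] / sum(pdfs)

def combine(samples):
    """samples: list of (value, all_pdfs, technique_index) tuples, one per
    sample drawn by each technique (hypothetical layout). Each sample is
    weighted by the balance heuristic and divided by its own PDF, as in
    the standard MIS estimator."""
    total = 0.0
    for value, all_pdfs, i in samples:
        weight = balance_heuristic_weight(all_pdfs, i)
        total += weight * value / all_pdfs[i]
    return total

# Usage: one sample from each of two techniques estimating the same integral.
print(combine([(0.8, [0.5, 2.0], 0), (1.1, [0.4, 2.5], 1)]))
```

The incompatibility Georgiev describes is precisely that a photon-mapping path has one extra vertex, so its PDF lives in a different space than a path-tracing PDF, and the simple sum in the denominator above is no longer meaningful without the paper’s extension.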


FXG: Clearly the examples range from fog to liquids – can you discuss it in terms of extending it to possible skin shading? I know photon beam diffusion was being looked at (but perhaps not implemented at Solid Angle last year).

IG: To avoid any confusion, let me start by noting that there are two popular methods for rendering very dense and highly scattering media: (1) the diffusion approximation and (2) accurate but brute-force Monte Carlo simulation. Skin is just one particular kind of medium, and can be rendered with our Monte Carlo method from this paper. (In fact, some (GPU-only?) renderers only support brute-force Monte Carlo for subsurface scattering.) The candle shown in Figure 1 is actually a representative example of a dense medium that is typically rendered using diffusion approximation methods.

Jensen’s dipole and photon beam diffusion can be much more efficient in rendering such media, and that’s why they’re so widely used, but are less accurate and can produce artifacts. These methods compute the surface color at a given point on an object without actually tracing rays under that surface, but instead by approximating how the incoming light at nearby locations would scatter inside the object and eventually contribute to that shading point. Arnold currently uses such a diffusion approximation model, but does not rely on point clouds to store the incoming light at a fixed set of surface points; instead, it randomly samples locations around each shading point and computes the incoming light at these locations on-the-fly. Parts of this method were described in last year’s SIGGRAPH talk BSSRDF Importance Sampling.
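As a rough illustration of the “sample locations around each shading point on the fly” idea, one can draw radial offsets on a disk around the shading point with an exponential falloff and record the matching PDF. This is our sketch of the general diffusion-profile importance-sampling approach, not Arnold’s actual code; sigma_tr is a hypothetical effective transport coefficient controlling the falloff.

```python
import math, random

def sample_radial_offset(sigma_tr):
    """Draw a 2D radial offset around the shading point with exponential
    falloff, a common choice for diffusion-profile importance sampling."""
    r = -math.log(1.0 - random.random()) / sigma_tr   # exponentially distributed radius
    pdf_r = sigma_tr * math.exp(-sigma_tr * r)        # matching 1D PDF in r
    phi = 2.0 * math.pi * random.random()             # uniform angle on the disk
    return (r * math.cos(phi), r * math.sin(phi)), pdf_r

# Usage: draw a few candidate offsets for a dense medium.
offsets = [sample_radial_offset(10.0) for _ in range(4)]
```

A real implementation then projects these 2D offsets back onto the surface (for example by probing with short rays) and evaluates the incoming light and diffusion profile at the found points.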


FXG: Does this work the same for volumetric lights (as opposed to light passing through a volume)?

IG: The only difference between surface and volumetric lights is that their emission is defined in 3D space rather than on a 2D surface. From a Monte Carlo perspective they are pretty much the same, and any path tracing method (including ours) can support volumetric lights by randomly sampling points inside their associated regions in 3D space.
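That “randomly sampling points inside their associated regions” step is straightforward for simple shapes. Below is a generic sketch for a spherical emissive volume (our illustration, not tied to any particular renderer).

```python
import math, random

def sample_point_in_sphere(center, radius):
    """Uniformly sample a point inside a spherical emissive volume, as one
    would for next-event estimation toward a volumetric light. Returns the
    point and the uniform volume PDF."""
    while True:  # rejection sampling inside the unit cube
        p = tuple(2.0 * random.random() - 1.0 for _ in range(3))
        if sum(x * x for x in p) <= 1.0:
            break
    point = tuple(c + radius * x for c, x in zip(center, p))
    pdf = 1.0 / ((4.0 / 3.0) * math.pi * radius ** 3)  # 1 / volume
    return point, pdf

print(sample_point_in_sphere((0.0, 1.0, 0.0), 0.25))
```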


FXG: Are you looking now at non-homogeneous volumes?

IG: All of the volumetric light transport estimators that we consider naturally support heterogeneous media. And so does our method, which combines them in their original form. Any volumetric representation that is supported by current ray-tracing-based renderers can be handled too. The sampling techniques that such renderers use are in fact a subset of the techniques we consider.
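As background on how ray-tracing-based renderers typically sample free-flight distances in heterogeneous media, the standard tool is Woodcock (also called delta) tracking, sketched below. This is a generic technique shown for context; the paper’s estimators can sit on top of any such distance sampler.

```python
import math, random

def woodcock_track(origin, direction, sigma_t_at, sigma_majorant, max_dist):
    """Woodcock ('delta') tracking for heterogeneous media. sigma_t_at(p)
    returns the local extinction at point p; sigma_majorant must bound it
    from above everywhere. Returns a sampled collision distance, or None
    if the photon exits the medium first."""
    t = 0.0
    while True:
        t += -math.log(1.0 - random.random()) / sigma_majorant  # tentative step
        if t >= max_dist:
            return None  # left the medium without a real collision
        p = tuple(o + t * d for o, d in zip(origin, direction))
        # accept with probability sigma_t(p) / sigma_majorant; else a 'null' collision
        if random.random() < sigma_t_at(p) / sigma_majorant:
            return t

# Usage: density increasing along +z, bounded above by a majorant of 2.0.
dist = woodcock_track((0, 0, 0), (0, 0, 1),
                      lambda p: 0.5 + 1.5 * p[2], 2.0, max_dist=1.0)
```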
