Long-time friend of fxguide, Dr Paul Debevec has been at Google for some time now, having moved from being full time at USC-ICT, where his Light Stage work helped usher in a new age of facial realism. We recently got to visit Paul and his team at their (semi-secret, unmarked) Google offices in LA. Debevec is still associated with USC, but he now has a new, extensive lab at Google. The good news is that, far from disappearing into the vast Googleplex never to publish again, Paul and his team have a series of innovative new research projects underway. The bad news is that only one of these is so far public, but it is great to see this new work and the continuing commitment from Google to allow Debevec and his team to publish and contribute to the broader research community. This first public project is centred around light fields, an area Debevec has been publishing in for decades. He worked on sparse light fields as early as 1996 (Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach, SIGGRAPH ’96), while he was still at Berkeley around the time of his PhD.

Light bounces off different surfaces in different ways, and what you see depends on where you see it from. Filming, even in 360, captures only one perspective on how different materials react to light. To help create a more realistic and complex sense of presence in VR, Paul’s Google team has been experimenting with light fields.

Light fields rely on a set of advanced capture, stitching, and rendering algorithms. They are yet to be exploited commercially, but this work shows how light fields captured in the real world can give one an extremely high-quality sense of presence, by producing motion parallax and extremely realistic textures and lighting. To demonstrate the potential of this highly complex way of capturing the world, Google has released “Welcome to Light Fields,” a free app available on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets.

Until recently, only pre-rendered light field sequences were possible in VR. In those pre-rendered worlds, the computational camera maths is much easier: everything is virtual, so working out how to view the light field from any angle is all that is required. In effect it is a two-part process: render a light field out, then look at it from the angle you want, in real time. The new app demonstrates a series of captured volumes around LA, as well as the Smithsonian Institution’s Air and Space Museum exhibit of NASA’s Space Shuttle Discovery.

Debevec explains light fields by starting from a nodal pan as a conceptually perfect 360 capture. If one imagines a stereo pair turning 360 degrees on the nodal pan, there would be almost no parallax. (A single lens panning on its nodal point has no parallax, and that is what people such as Debevec have used for years to make 360 light probes and 360 panoramas.) Conceptually, light fields extend the ‘cameras or eyes’ to be ‘a few feet’ off the nodal centre. Now as you turn or spin around 360 degrees, your eyes move inside a sphere ‘a few feet’ in diameter. Of course, moving off a nodal pan produces vast parallax compared to the nodal case, and this not only provides the illusion of real 3D but also lets you see how light is reacting to surfaces differently. Compare this approach to simply projection mapping a 360 video onto static 3D objects. While that would give the illusion of 3D space, a reflection on a table would never change, as the projected texture is static. It turns out that while parallax does inform our understanding of volume, subtle light changes, shadows and reflections also dramatically inform us about material properties, and this is only possible if one uses light fields.
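To get a feel for the scale of that parallax, a back-of-envelope calculation helps: the angular shift of a point depends on the ratio of the viewpoint’s offset to the point’s distance. The small sketch below is our own illustration of that geometry, not Google’s code.

```python
import math

def parallax_deg(baseline_m: float, distance_m: float) -> float:
    """Angular shift (degrees) of a point at distance_m when the
    viewpoint translates by baseline_m perpendicular to the view ray."""
    return math.degrees(math.atan2(baseline_m, distance_m))

# A pure nodal pan has zero baseline, hence zero parallax:
print(parallax_deg(0.0, 2.0))    # 0.0 degrees

# Leaning across the full ~70 cm capture volume:
print(parallax_deg(0.7, 0.5))    # a nearby table: a very large shift
print(parallax_deg(0.7, 50.0))   # a distant wall: well under a degree
```

The same arithmetic explains why distant objects barely move while nearby ones swing dramatically as you lean around inside the volume.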

The light field capture is possible because a special light field system records all the different rays of light coming into a volume of space. To record them, Debevec and his team modified a set of Hero 4 cameras from a GoPro Odyssey Jump camera rig, placing them into a vertical arc of 16 cameras mounted on a rotating platform. The normal Odyssey rig provides 360 video, but it captures a location as a simple surrounding, flat, 2D-mapped video sphere. The light field rig captures the location as a volume, and as such much more accurately captures a real space and how light falls inside it. To use a crude analogy, it is the difference between being inside a beach ball looking at the ball’s panels, and being inside a snow globe, experiencing the shapes and changing views of the snow as you look around.

A special timing system on each of the 16 GoPros syncs all the cameras to trigger and record exactly on the same line scan (as the cameras have rolling shutters). “Although this is not as significant as you may think,” explained Debevec, “since we compute a virtual camera position anyway, all of this is taken into account in the computation. In reality the best part about the special rig is that I can switch on all the cameras to record with just one button press.”

A motor rotates the rig, sweeping the cameras to film a 360-degree sphere of imagery. This set of data is then the basis for working out the view from any virtual point inside the sphere. When you wear the VR headset, your eye positions are effectively computed as two points inside this sphere, and each eye sees an image reconstructed from what you would have seen had you actually been there. Both eyes get a unique, correct view of the recorded world.
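That first step, placing the two virtual eyes inside the captured volume, is simple to express in code. The sketch below is our own illustration under assumed conventions (the names and the choice of the head matrix’s first column as the ‘right’ axis are ours, not Google’s implementation):

```python
import numpy as np

def eye_positions(head_pos, head_rot, ipd=0.064):
    """Place two virtual eyes inside the captured sphere.

    head_pos : (3,) tracked head position relative to the sphere centre (m)
    head_rot : (3, 3) head orientation matrix; column 0 is assumed to be
               the head's 'right' axis
    ipd      : interpupillary distance in metres (~64 mm average)
    """
    right = head_rot[:, 0]
    left_eye = head_pos - 0.5 * ipd * right
    right_eye = head_pos + 0.5 * ipd * right
    return left_eye, right_eye

# Head at the centre, looking straight ahead:
left, right = eye_positions(np.zeros(3), np.eye(3))
print(left, right)   # eyes 64 mm apart, straddling the sphere centre
```

Each of those two points then gets its own reconstructed image, which is what produces correct stereo.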

To work, the system captures a vast amount of data, but it is presented as a still image, or volume, that one can look around inside. Unlike 360 video, as you move your head to see different things, the light changes. Perhaps light that was blocked by a window frame suddenly catches your eye as you lean an inch to the left, or a reflection on a table moves across the surface as you lean forward towards it. This effect is more than just an illusion of depth. The changing surface properties allow you to understand the surfaces and the environment more completely, just as you do in real life.

With light fields, nearby objects seem close to you: as you move your head, they appear to shift a lot. Far-away objects shift less, and light reflects off objects differently, so you get a strong cue that you are in a 3D space. And when viewed through a VR headset that supports positional tracking, “light fields can enable some truly amazing VR experiences based on footage captured in the real world,” comments Debevec.

It takes about a minute for the camera rig to swing around and record roughly a thousand outward-facing viewpoints on a 70cm sphere. This gives the team a volume of light rays roughly two feet in diameter, which determines the size of the headspace users can lean around in to explore the scenes once they are processed. To render views for the headset, rays of light are sampled from the camera positions on the surface of the sphere to construct novel views, as seen from inside the sphere, matching how the user moves their head. The rays are aligned and compressed into a custom dataset file that is read by special rendering software the team have implemented as a plug-in for the Unity game engine.
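In essence, each eye ray is traced from inside the volume out to the capture sphere, and the recorded viewpoints nearest that crossing point supply the colour. Here is a minimal nearest-neighbour sketch of that idea; it is our own simplification with hypothetical function names, and real light field renderers blend several nearby cameras and reproject with depth rather than picking just one:

```python
import numpy as np

def ray_sphere_exit(origin, direction, radius):
    """Point where a ray starting inside a centred sphere exits its surface."""
    d = direction / np.linalg.norm(direction)
    b = float(np.dot(origin, d))
    c = float(np.dot(origin, origin)) - radius ** 2
    t = -b + np.sqrt(b * b - c)   # positive root, since the origin is inside
    return origin + t * d

def sample_light_field(eye, pixel_dir, cam_positions, cam_images, radius=0.35):
    """Nearest-neighbour lookup into a spherical light field capture.

    eye           : (3,) eye position relative to the sphere centre
    pixel_dir     : (3,) direction of the ray through the current pixel
    cam_positions : (N, 3) recorded camera positions on the 70 cm sphere
    cam_images    : list of N captured images, one per position
    """
    exit_pt = ray_sphere_exit(eye, pixel_dir, radius)
    nearest = int(np.argmin(np.linalg.norm(cam_positions - exit_pt, axis=1)))
    # Sample the chosen camera's image along the ray's direction
    # (the projection into that camera's image plane is omitted here).
    return cam_images[nearest]
```

Because the thousand viewpoints densely cover the sphere, the crossing point is always close to a real recorded camera, which is what makes the reconstruction look continuous.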

String

The standard rotating rig produces a uniform set of samples, spaced one camera height apart. True to form for this innovative group, the team tried to work out a way to also scan between the camera heights. In an innovative old-world solution, they worked out they could do this with a piece of string and a nail (actually, a highly accurate steel pin).

Note the simple string that unwinds (top inset) as the rig rotates. The whole rig is pivoted, or hinged (bottom left).

As the rig rotates, the static pin allows the string to unwind, and thus the hinged rig ‘falls’ by a little, so the cameras sweep a constantly changing vertical path. Without the string, the cameras return after one rotation to exactly where they started.

The rig (long exposure/timelapse), but without the string and pin trick to adjust the scanning height.

With the string and the hinge, the cameras fall half a GoPro in height with each rotation, providing a much denser scan of the world. As anyone with high school maths could guess, to drop half a GoPro in height per revolution, the pin would need a matching circumference. “It turns out it was not quite that simple,” joked Debevec. To be as accurate as the team needed, “we also needed to allow for how thick the string is, and add in half the diameter of the string under tension.” Debevec handed the complex and detailed engineering of the seemingly simple string and pin to Jay Busch, the team’s mechanical and artistic engineer, who followed Debevec over from ICT, where she was key in all the practical Light Stage engineering and scanning. “It was the first thing I did here at Google,” she laughed, “buying exactly milled steel pins and making it work precisely.”
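The arithmetic behind the pin is straightforward once the string correction is included. A quick sketch of the sizing, where the camera height and string diameter are our own assumed round numbers purely for illustration:

```python
import math

camera_height_m = 0.041  # assumed: a GoPro Hero 4 body is roughly 41 mm tall
drop_per_rev_m = camera_height_m / 2   # half a GoPro per revolution
string_diam_m = 0.001                  # assumed 1 mm string under tension

# The string pays out one circumference per revolution, measured at the
# string's centreline: 2 * pi * (pin_radius + string_diameter / 2).
# Solve for the pin radius that gives the desired drop:
pin_radius_m = drop_per_rev_m / (2 * math.pi) - string_diam_m / 2
print(f"required pin diameter ≈ {2 * pin_radius_m * 1000:.2f} mm")
```

With these assumed numbers the pin comes out a few millimetres across, small enough that even a fraction of a millimetre of string thickness matters, which is exactly why the team needed precisely milled pins.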

Location work

Debevec chose a few special places to try out their light field-recording camera rig. “We love the varnished teak and mahogany interiors at the Gamble House in Pasadena, the fragments of glossy ceramic and shiny mirrors adorning the Mosaic Tile House in Venice, and the sun-filled stained glass window at St. Stephen’s Church in Granada Hills.”

The team also captured at Cheri Pann & Gonzalo Duran’s Mosaic Tile House in Venice, CA.


The rig in the very unusual Mosaic Tile House, a private art project and convergence of life and art that is well known in Venice. The artist owners have been decorating their house for years, and its highly reflective surfaces are perfect for understanding the merits of light field capture.


Perhaps the most impressive part of the team’s light field app is the Smithsonian Institution’s Air and Space Museum capture. The 3D Digitization Office gave the team access to NASA’s Space Shuttle Discovery, providing an astronaut’s view inside the orbiter’s flight deck, which has never been open to the public.

Portraits

The demo closes with a variety of light fields recorded of people, experimenting with how eye contact can be made to work in a six-degrees-of-freedom experience. The shot of the team beside the Space Shuttle was never intended to be included, but Debevec decided that, since portraits make up a vast amount of normal photography, it would be good to explore light fields with people. For the Shuttle team shot, everyone was required to remain perfectly still for the duration of the scan. As it was one of the first such portraits, Debevec suggested that everyone focus on the static central pillar of the rig. While this works, it does make the team of a dozen or so people appear to be looking past the viewer, since from any point in the scan they were not looking at the cameras.

For the portraits of Cheri Pann & Gonzalo Duran, Debevec instead had the pair sit still but watch the rotating rig as it moved. The result is starkly different: in this second approach the couple seem to look at you even as you lean around inside the light field volume. In a third portrait the couple kiss, but as they had to hold the pose for over a minute, Gonzalo started to smile (a small amount). When viewing the light field in VR, one can see this by moving left and right in the volume, thus accessing data biased towards the earlier non-smile or the later smile, depending on which way you move. The effect is subtle, but it is a window on how engaging this technology will be when it becomes possible to capture moving sequences, not just static volumes.

The “Welcome to Light Fields” experience is available now on Steam VR. Take the seven-minute Guided Tour to learn more about the technology and the locations, and then take your time exploring the spaces in the Gallery.