Pointcloud9 – a LIDAR case study

LIDAR is an increasingly useful visual effects tool for on-set surveying and for helping with tracking. In this case study, we talk to Florian Gellinger, visual effects supervisor and co-founder at risefx in Germany. risefx is the parent company of Pointcloud9, formed in October 2011 to bring on-set LIDAR scanning and services to the European film community. risefx has itself worked on a wide range of films, including Harry Potter and the Deathly Hallows – Part 1, Captain America, X-Men: First Class and many local German films.

Pointcloud9

Pointcloud9 recently collaborated with fxphd for a LIDAR shoot in Germany, and currently they are working on the multimillion dollar Tom Tykwer-directed Cloud Atlas with Tom Hanks, being produced by Andy and Lana Wachowski. The film is still being shot, but the basic plot involves six stories, each set in a different time and place, that become intricately related to each other.

As more and more productions turn to LIDAR scanning for accurate object and camera tracking in addition to traditional set surveying, we wondered just how easy it now is to include LIDAR scanning as part of a normal shoot, or whether it is still the domain of large-scale productions.

Watch a turntable of a Pointcloud9 LIDAR scan done exclusively for fxguide & fxphd.

LIDAR on set

The answer is that while it is still more often used on large film shoots, almost any size of production can benefit from an experienced LIDAR scan team, and increasingly, libraries of LIDAR locations and objects will be coming online for hire and/or purchase.

We spoke to Florian Gellinger about the complexity and cost of LIDAR scanning, using the fxphd shoot as a guide.

For this shoot we aimed to film with a RED One camera in a car park location and then also LIDAR scan the set for 3D camera tracking. The set was in Germany, but the files were tracked by VFX supervisor and fxphd professor Victor Wolansky in Alexandria, Virginia, USA.

The LIDAR scanner.

The shoot involved several shots, using a wide fixed lens, a long lens and zooms.

The LIDAR equipment takes very little time to set up and uses a series of calibrating spheres as reference targets to register and align the individual scans, much like 2D calibration points when stitching a panorama. An individual scan takes about five minutes of total operator time and causes little disruption to the set these days, provided some planning has been done ahead of time.
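
As a rough illustration of what the spheres make possible (and not a description of Pointcloud9's own software), here is a minimal Python sketch: given the centres of the same calibration spheres measured from two scanner positions, a best-fit rigid transform can be computed to bring the two scans into a common frame. The sphere coordinates below are placeholder values.

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst (Kabsch algorithm).
    src, dst: (N, 3) arrays of matched calibration sphere centres."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)             # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Placeholder centres of the same three spheres seen from two scanner positions.
spheres_scan_a = np.array([[1.0, 0.2, 0.0], [4.5, 1.1, 0.3], [2.2, 5.0, 0.1]])
spheres_scan_b = np.array([[2.1, 0.9, 0.0], [5.4, 2.4, 0.3], [2.0, 5.9, 0.1]])
R, t = rigid_align(spheres_scan_a, spheres_scan_b)  # scan A points map into scan B via R @ p + t
```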

Pointcloud9’s LIDAR system measures close to 1 million points per second while it rotates 360 degrees vertically and horizontally. Once scanning is done, one has the entire environment measured precisely in 3D (horizontal angle + vertical angle + distance) up to a distance of around 120 meters (the system works beyond 120 meters, but as distance increases, inaccuracy naturally increases). This covers not only 120 meters horizontally but also scanning several stories high in a built-up area. As an individual scan works on line of sight, multiple scans are needed to cover an area like a car park and to see the obscured sides of cars and other objects. While there is no hard and fast rule, about four scans per location is a good rule of thumb to cover the hidden sides of most objects in the scene, but of course this is completely scene dependent.
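
In practical terms, each laser return is just two angles and a range. Converting those measurements into the XYZ points that make up the cloud is a single trigonometric step, sketched below in Python with randomly generated placeholder returns (the scanner's real angular sampling pattern will differ).

```python
import numpy as np

def polar_to_xyz(azimuth, elevation, distance):
    """Turn scanner measurements (horizontal angle, vertical angle, range)
    into Cartesian XYZ points. Angles in radians, distance in metres."""
    x = distance * np.cos(elevation) * np.cos(azimuth)
    y = distance * np.cos(elevation) * np.sin(azimuth)
    z = distance * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

# Roughly one second of scanning: a million placeholder returns.
az  = np.random.uniform(0.0, 2.0 * np.pi, 1_000_000)
el  = np.random.uniform(-0.5 * np.pi, 0.5 * np.pi, 1_000_000)
rng = np.random.uniform(0.5, 120.0, 1_000_000)
points = polar_to_xyz(az, el, rng)   # shape (1_000_000, 3)
```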

The LIDAR rig is quick to set up and quick to move from location to location.

From the scans the LIDAR data can be delivered as:

1. raw scans
2. a point cloud
3. a cleaned-up and pre-modeled mesh – this can include low-res versions of models, optionally with normal and bump maps

A clip from fxphd’s class in which the risefx / Pointcloud9 crew explain their LIDAR scanning tools.

These individual measurement points together form the point cloud. The overall area of the scan can be much bigger than 120 meters by taking multiple scans from different positions and combining them into one big point cloud. A set of almost any size and complexity can be scanned. This is, for example, how a road might be scanned in stages and later stitched together. Pointcloud9 has stitched up to 60 scans together for one huge scan, though in that case each scan was lowered in resolution to keep the final file usable.
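
Stitching comes down to expressing every scan in one shared world frame and concatenating the points. A minimal sketch of that step, assuming each scan already has a rotation and translation into that frame (for example, from sphere-based registration as above):

```python
import numpy as np

def merge_scans(scans, transforms):
    """Merge several scans into a single point cloud.
    scans: list of (N_i, 3) arrays, each in its own scanner-centred frame.
    transforms: list of (R, t) pairs mapping each scan into the shared world frame."""
    in_world = [pts @ R.T + t for pts, (R, t) in zip(scans, transforms)]
    return np.vstack(in_world)
```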

Victor Wolansky tracks the shot in PFTrack for fxphd.

In visual effects, LIDAR makes 3D camera tracking much easier and more accurate, as it helps artists create their work at the correct scale and distance. The point cloud produced also makes re-creating a location as a digital asset much easier for live action / CG integration and interaction.

It can also aid previs and camera/lens choices by providing highly detailed previsualization or complex animated storyboards based on the actual locations.


Victor Wolansky, who has been tracking the material at fxphd, comments: “Use of LIDAR scans is very helpful for solving very weird motions, since you have the exact XYZ coordinates for each tracking point you are using; camera positions can be solved even with very little or almost no parallax.” He goes on to add that LIDAR becomes extremely useful “when you have very large sets, as a full size CGI set can be built based on that. Then, if you have multiple shots, each shot and each camera will be perfectly aligned against each other and the CGI model. The survey information can also be used to align regular auto tracked shots.”
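
What Wolansky describes is essentially camera pose estimation from known 3D points: once a handful of tracked 2D features have surveyed XYZ positions in the LIDAR cloud, the camera can be solved per frame even with almost no parallax. Below is a hedged sketch of that idea using OpenCV's solvePnP; the 3D coordinates, image positions and focal length are all placeholder values, and this is not the PFTrack workflow itself.

```python
import numpy as np
import cv2

# Placeholder: surveyed 3D positions (metres, from the LIDAR cloud) of six tracked
# features, and where they appear in one frame of the plate (pixels).
points_3d = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.1], [2.0, 1.5, 0.0],
                      [0.0, 1.5, 0.2], [1.0, 0.7, 1.0], [3.0, 2.0, 0.5]])
points_2d = np.array([[912., 540.], [1410., 560.], [1390., 300.],
                      [905., 310.], [1160., 420.], [1700., 260.]])

# Intrinsics for a hypothetical 1920x1080 plate, focal length given in pixels.
K = np.array([[2200.,    0., 960.],
              [   0., 2200., 540.],
              [   0.,    0.,   1.]])

ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()   # camera centre in the LIDAR/world frame
```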

Pointcloud9’s system is fast to set up. It also captures standard, lower-quality color photos, which are mapped onto the point cloud to color the points and make the scene easier to orient and understand when previewing the cloud.
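
One straightforward way to colour a cloud from photos taken at the scanner position is to project each point back into an equirectangular panorama and sample the pixel it lands on. The sketch below assumes such a panorama exists; Pointcloud9's own colouring step may well work differently.

```python
import numpy as np

def colour_points(points, panorama):
    """Give each 3D point the RGB value it projects onto in an equirectangular
    panorama shot from the scanner position.
    points: (N, 3) array in the scanner frame; panorama: (H, W, 3) image array."""
    h, w, _ = panorama.shape
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    azimuth   = np.arctan2(y, x)                   # -pi .. pi around the scanner
    elevation = np.arctan2(z, np.hypot(x, y))      # -pi/2 .. pi/2 up/down
    u = ((azimuth + np.pi) / (2.0 * np.pi) * (w - 1)).astype(int)
    v = ((0.5 * np.pi - elevation) / np.pi * (h - 1)).astype(int)
    return panorama[v, u]                          # (N, 3) per-point colours
```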

Shooting the live action with the RED One.

Multiple HDRs

On their own projects, risefx has done not only large LIDAR scans but also multiple HDR captures, as their rig is designed to allow an HDR rig to be swapped in place of the LIDAR scanner, producing HDRs that match the LIDAR scan positions. risefx also stitches and matches these HDRs so they can have a dynamic HDR that changes as one moves digitally around the LIDAR set scan. This allows correct reflection mapping on, say, a car driving around a LIDAR-scanned car park.
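
There are several ways such a position-dependent HDR could be implemented; one simple approach is to blend the environment maps captured nearest to the moving object, weighted by inverse distance. The sketch below is an assumption for illustration, not risefx's actual method.

```python
import numpy as np

def blend_probes(obj_pos, probe_positions, probe_maps):
    """Blend HDR environment maps by inverse distance to the object, so
    reflections change as it moves through the scanned set.
    probe_positions: (P, 3) capture positions; probe_maps: (P, H, W, 3) HDR images."""
    d = np.linalg.norm(probe_positions - obj_pos, axis=1)
    w = 1.0 / np.maximum(d, 1e-6)                  # closer probes dominate
    w /= w.sum()
    return np.tensordot(w, probe_maps, axes=1)     # weighted (H, W, 3) environment map
```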


 

Post production

It is rare that the point cloud produced will be lacking in detail – Pointcloud9’s system has an error factor of about 0.3 millimeters (about 0.012 inch) at a distance of 10 meters, which is incredible. A more likely problem is being overwhelmed with more data than one needs. Here the experience of risefx comes into play: Pointcloud9 can supply reduced-resolution but completely workable data sets for sensible high-quality work that is in tune with modern tracking and CG pipeline requirements. As risefx is a primary client of Pointcloud9, they know how to present the data in a meaningful way. The importance of this cannot be overstated – while many LIDAR companies exist to serve the construction and building industries, having the data pre-processed by the Pointcloud9 team can greatly aid quick and smooth integration of LIDAR data into a tight schedule and pipeline.
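
Reducing a cloud to a workable density is typically a decimation step, for example keeping one averaged point per small voxel. A minimal sketch of that idea (not Pointcloud9's actual reduction tools):

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Thin a dense (N, 3) cloud to one averaged point per voxel of the given size (metres)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)               # accumulate points per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]                  # voxel centroids
```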

This shot is from one of Victor's tracking courses at fxphd.com.

Big thanks to Florian Gellinger, visual effects supervisor and co-founder at risefx and Pointcloud9.

Thanks also to Victor Wolansky for his tracking work and his tutorials/classes.

 
