Visual effects artist Scott Metzger demonstrates how he used a special build of The Foundry’s MARI tool to project HDR set-captured environment data onto geometry. The work was done for director Richard LaGravenese’s Beautiful Creatures, coming out in 2013.


37 Responses to fxguidetv #165: Scott Metzger on MARI and HDR

  1. I’m absolutely astonished at how fast this process of turning a point cloud and an HDR panorama into a real 3D environment, with all the textures accurately projected onto the geometry, is, but I’m a little sceptical about the conversion from a raw point cloud to clean-topology 3D geometry without human intervention.

    Posted by Eric Garcia on
    • Creating clean geometry goes pretty fast if you have good measurements of the environment. Moving primitive shapes into place can go a long way. If you want to try the methods without a Faro scanner, you can pick up an Asus Xtion Pro Live and run it with Faro Scenect http://3d-app-center.faro.com/index.php/stand-alone-apps-faro-scenect. I haven’t tried it yet, but it looks helpful. There are also some neat automatic retopology tools in 3D Coat.

      I should also mention that Chaos Group has rewritten some of the ptex caching code for reading ptex in V-Ray. It’s now extremely fast to render using all CPU threads; you can trace rays from large ptex data sets very quickly. Also, if you want full-quality filtering for the color data of the ptex yet a lower mipmap setting for GI and reflections, you can go under the MISC settings in V-Ray and change the GI Filter from 1 to 10 to force a lower-quality mipmap. This allows a crisp color render using the highest texture quality while downgrading the GI texture source and glossy reflections so they render much faster. This is in the latest nightly build of V-Ray for Maya.

      Posted by Scott Metzger on
      • EA Tiburon here in Orlando is my customer with a Focus3D and is doing automatic feature extraction with their own proprietary process. They use both 3ds Max and Maya, but rely on Houdini and custom algorithms they’ve built to fully extract linework and meshes from the scans without human intervention. I’ll be writing a blog post about it soon here: https://www.sparpointgroup.com/blog.aspx?blogid=14536

        So keep an eye out… they are using it to build textured maps, mainly for their sports games like Tiger Woods, NCAA, NBA and others…

        Posted by Ed Oliveras on
        • This is an alternative to Scenect:
          http://www.reconstructme.net/

          I’ve used it in testing for survey data with matchmove, just need to delve into what Scott is doing next :D

          Posted by Rob on
  2. The pointcloud was used as the basis for human-created topology. The modeller used the pointcloud to ensure accuracy.

    The projection tools used by Scott are included in Mari 1.6. No more custom build required.

    Posted by Jack Greasley on
  3. Nice job and nice presentation.

    Does anybody know the exact model of the scanner Scott used, and what the price of this scanner is?

    Posted by nicolas on
    • Nicolas, it’s the FARO Focus3D 120… I personally helped Scott and passed along the info of all the plug-ins and the workflow. I’m happy to see what he’s done with it, he’s a sharp guy.

      If you or anyone else wants to learn more about it: I no longer work directly for FARO, but am their distributor based in Orlando, FL at http://www.go3Dusa.com

      We sell scanners and software, and offer free consulting on all these 3D imaging and post-processing tools.
      Regards,

      Ed Oliveras
      877.496.0497
      http://www.go3Dusa.com

      Posted by Ed Oliveras on
      • Thanks for the answer Ed :-)

        I am in London, so I’ll find somebody distributing FARO there I think.

        Posted by nicolas on
    • Mike and I actually visited the FARO headquarters in Singapore when we were there for SIGGRAPH Asia in December. Was really great to see it firsthand.

      We did an entire class at fxphd.com last term about it as part of the Background Fundamentals series.

      Posted by John Montgomery on
  4. Really cool stuff Scott – we were doing something similar for Transformers, but it was a bit of a pain to paint in the other HDR shots to fill in gaps and occlusions. I was surprised to see that the FARO scanner also recorded colour information. Any idea if the unit can capture HDR imagery? It almost feels like Spheron VR and FARO should talk to each other and offer a full geo/HDR capture solution, then use Mari for cleanup work and PTEX generation. Anyhow, cool stuff.

    Posted by Shervin Shoghian on
    • Hi Shervin.

      The Faro gives a scan position in 3D space, with the option to export an object at that position as a DXF file. You could shoot HDRs from the same position by placing the camera at the same height, though I found that a bit time consuming. I would first scan the set, then shoot the HDRs as fast as possible, shooting more HDRs than Faro scans to get the most texture coverage. I talked with Faro about placing a better CCD in the Faro camera to collect more stops of usable range. Perhaps future Faro models might have something in store for the world of VFX.

      Cheers!

      Posted by Scott Metzger on
        Hey guys, I can say it’s being looked at. There has been some slight improvement of late with Scene v5.1’s color contrast filters. The scanner camera matches the scan resolution, so in essence you can go up to 700 megapixels, but nobody in their right mind will ever scan that high. It takes 84 pictures, and those are just tiled images that then get overlaid onto the scan image (the reflectance image from the laser). There are ways to improve it in post and then reimport and overlay the scans semi-automatically, but I agree it would be nice to see a more advanced camera in the head. Lighting is also a big issue I’m working on for low-light environments like Scott’s example… I have some folks working on that now.

        Posted by Ed Oliveras on
    • If you don’t have a scanner, you can generate a point cloud using Structure From Motion tools like Photosynth, which analyzes a set of images directly to reconstruct the 3D space. Color information can be imported as a particle shape in Maya. I agree the painting of occlusion areas is a real time suck. Mari has definitely made this process easier, and the integration with ptex is crucial. For anyone using Maya to do extensive projection work like this, I wrote a set of tools a while back that I update periodically, and the demo has a fully-functional point cloud (and .ply) import that will reconstruct your camera positions as well: http://www.glyphfx.com/mptk.html

      Posted by Michael Breymann on
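Michael’s note about importing point-cloud color data into Maya as particles starts with parsing the scanner’s ASCII export. A minimal sketch, assuming a simple `x y z [r g b]` per-line layout (an assumption; your export’s columns may differ):

```python
# Parse an ASCII point-cloud export (x y z [r g b] per line) into position
# and color lists, e.g. as a pre-step before feeding them to Maya's
# particle tools. The column layout here is an assumption.

def load_xyz(lines):
    """Return (positions, colors) from an iterable of 'x y z [r g b]' lines."""
    positions, colors = [], []
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip blank or malformed rows
        x, y, z = (float(v) for v in parts[:3])
        positions.append((x, y, z))
        if len(parts) >= 6:
            colors.append(tuple(float(v) for v in parts[3:6]))
    return positions, colors
```

The resulting lists map directly onto per-particle position and color attributes.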
  5. This question may sound a bit strange, but wouldn’t it be possible to use a hacked Microsoft Kinect to scan the environment?

    Posted by Matt Bser on
  6. Hi Scott,

    I was wondering what kind of workflow you use to process your HDR Images into Lat longs.
    Do you shoot in 3 directions 120 degrees apart and process them in Nuke or some other app?

    Thanks! Your demo is very impressive!

    Posted by Matt Rapp on
    • Hi Matt: I shoot 4 directions with a Nikkor 10.5mm on a Nikon D800. I had to send that 10.5 lens to Germany to have them shave the hood that is fixed to the glass:

      http://www.360pano.de/en/tokina-sigma-nikon.html

      The lat longs are then combined in PTGui, which does an amazing job of automating the stitching process. PTGui will load the Nikon raw files directly, then export an EXR file. Non-latlong images I combine in Photomatix, then match color between the latlongs and rectilinear HDRs in Nuke. You will want to adjust image exposure after creating all the HDRs in something like Nuke.

      Posted by Scott Metzger on
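The exposure-matching step Scott mentions comes down to simple arithmetic in linear (EXR) space: a change of N stops is a multiply by 2^N, which is effectively what an exposure adjustment in Nuke does. A minimal sketch:

```python
# Adjust linear HDR pixel values by a number of photographic stops.
# In linear space, +1 stop doubles the values, -1 stop halves them.

def adjust_exposure(pixels, stops):
    """Scale linear RGB tuples by 2**stops (positive = brighter)."""
    gain = 2.0 ** stops
    return [tuple(c * gain for c in px) for px in pixels]
```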
      • Thanks.

        I’m very interested in the process of turning real environments into digital sets. It’s actually very hard to find others working on the same sort of thing; it seems you’ve been working on this for a while. Are there any groups in Los Angeles, or any online resources you know of, where people are discussing capturing sets? Just a thought. Thanks Scott!

        Posted by Matt Rapp on
  7. Metzger is bauss!!!! Much respect.

    Posted by Alan on
  8. How many scans did you do for each room, and how long was the on-site scanning work? Did you make it a priority to ensure a very minimal amount of occlusion, or did you have to wing it a bit on the modelling?

    Cheers

    Dave

    Posted by David Spittle on
    • The number of scans per room really depended on its size and occluded objects, and whether it was inside or outside. Scanning in sunlight outdoors would be 9 minutes a scan, including color capture of the point cloud; indoor scanning would be about 5 minutes per scan with color. The most scans I captured for a space was around 20, and the least for a room was about 4-5. I would try to scan every set as much as possible. Most of the time you end up scanning ahead of shooting, or during shooting while everyone is on a lunch break; you never at any point want to hold up production. For the larger outdoor areas it would sometimes take 4-6 hours for scanning and photography.

      The Faro scans cars as well; that takes about 1 hour to do. Scanning the environment and being able to pull the scan up in Maya on a MacBook Air, to visually communicate how shots need to be shot for vfx work, was a huge plus of having the scanner. It’s a fantastic tool that can be used in a multitude of ways.

      Posted by Scott Metzger on
      • Cool, I’ve got some scan data from an old job I worked on. I asked the surveyors for the raw data, and I’ve got 28 .pts files totalling 35GB of an exterior site.

        It was a while ago now and I can’t remember whether they gave me the unregistered scans, or just split the registered scan into manageable chunks, or perhaps even thinned it out a bit. I work for another company now so I can’t really ask.

        Any ideas whether this format tends to be registered or not? I know they used the sphere method and a Faro scanner. I was on site during the scan and took 20 or so spherical HDRs.

        So I might have a crack at this – I had planned to try this in Max but your workflow is much better. Just trying to establish whether I need to try and get a trial of Faro Scene or just Geomagic…

        And any ideas on a point cloud plugin for Max? I liked their Project Helix plugin that was in Autodesk Labs a couple of years ago; it hasn’t seen the light of day since, but it’s interesting that AD bought out Alice Labs, so who knows, it could be around the corner.

        Posted by David Spittle on
          • The best thing to do would be to ask for a 1/4-reduced version of the combined registered pointcloud. Faro Scene comes with a fully functional 30-day trial. I haven’t messed with pts files, but I think that might be a Leica format; I’m not sure though. Faro saves scans in the FLS format, which is highly compressed data; you would never reach 35 gigs in FLS. Registering Faro spheres is really cake and shouldn’t take more than 5 minutes. Then all you do is reduce the point cloud and export to xyz data. I think Thinkbox software has pointcloud support via an XMesh plugin, but I’m not 100 percent on this since I mostly mingle with Maya. Pointcloud exports can get very large if you export 1:1; I only recommend 1:1 exports if you need extra detail for smaller areas. When I was handing over data to vendors on the show, I would supply a 1/4-reduced pointcloud along with the Faro Scene files so they could export based off the original 1:1 FLS files.

          Geomagic is a must for heavy noise reduction, or for meshing organic shapes. If you have non-organic shapes, it’s faster to trace the pointcloud with primitives in 3D by hand.

          Posted by Scott Metzger on
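The "1/4 reduced pointcloud" hand-off Scott describes can be sketched as uniform decimation that keeps every Nth point. Scanner software does this more intelligently (spatial subsampling), but stride-based thinning is the basic idea:

```python
# Uniformly thin a point list to roughly a given fraction of its points.
# A crude stand-in for the reduced-resolution exports scanner software makes.

def decimate(points, keep_ratio=0.25):
    """Keep roughly keep_ratio of the points by striding through the list."""
    stride = max(1, round(1.0 / keep_ratio))
    return points[::stride]
```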
          • .pts is a Leica format. You have two choices: either a 5-day trial of Cyclone to register them (if the vendor has not already done so), or you can register them in Geomagic. If the lidar vendor has them as separate yet registered files, just combine them in Geomagic and then downsample. Just remember that survey files come in Z-up, so you will need to re-orient the final point cloud to a Y-up configuration.

            I think Geomagic are still hosting one of my earlier intros to meshing on their site.

            Posted by Craig Crane on
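Craig’s Z-up note, sketched in code: survey data arrives Z-up while Maya is Y-up, and rotating -90 degrees about X maps (x, y, z) to (x, z, -y), so the old Z axis becomes the new up axis:

```python
# Re-orient a Z-up survey point cloud to Y-up (a -90 degree rotation about X):
# the old Z becomes the new Y, and the old Y becomes the new -Z.

def z_up_to_y_up(points):
    """Convert a list of (x, y, z) points from Z-up to Y-up orientation."""
    return [(x, z, -y) for (x, y, z) in points]
```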
        • The Frost plugin from Thinkbox does a good job of meshing within Max. It’s pretty basic compared to the pro meshing apps, but it’s stable and does a great job. After generating the mesh you can run a ProOptimizer pass to bring the poly count down.

          Posted by Matt Rapp on
          • Turns out the scans are not registered; I recall asking them to convert the raw data to this format when I was demoing the Autodesk Project Helix plugin for Max.

            I just need to find a way to convert the .pts files to a format that I can import into Faro Scene, which I’m trialing.

            I also downloaded a demo of Clouds2Max and started trying to manually register the scans: hand-placing sphere primitives as close as I can to one of their locations in the scans, adjusting the pivot points, aligning, then rotating as best I can.

            It’s not perfect by any means, and I’m only using one sphere to do the initial alignment rather than averaging (which I’m assuming Faro would do to some kind of error tolerance on the fly?).

            Clouds2Max handles many many millions of points with ease which is good news.

            Also, with the data I’m using, noise has already been removed by the surveyor, but they thinned out quite a lot of it; it should be enough for the exercise if I can find a method to register the scans…

            Posted by David Spittle on
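The single-sphere alignment David describes is, at its core, just a translation: move one scan so its registration-sphere centre lands on the matching sphere in the reference scan. A minimal sketch (real registration averages several spheres and solves for rotation too; this covers only the translation step):

```python
# Translate a scan's points so a registration-sphere centre coincides with
# the matching sphere centre in the reference scan. Translation only; a full
# registration would also solve for rotation from multiple sphere pairs.

def align_by_sphere(points, sphere_centre, reference_centre):
    """Shift points by the offset between the two sphere centres."""
    dx, dy, dz = (r - s for r, s in zip(reference_centre, sphere_centre))
    return [(x + dx, y + dy, z + dz) for (x, y, z) in points]
```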
            you can just change the pts extension to xyz, or bring it through a file converter… I posted a free one on my Twitter account @ed_oliveras; I think it was http://www.cs.unc.edu/~isenburg/pointzip/… anyway, you can get it into FARO Scene, and they offer a free version, Scene LT, which you can find on our site http://www.go3Dusa.com

            as for meshing tools, check out these:
            1. Mesh Lab: http://meshlab.sourceforge.net/
            2. Cloud Compare: http://www.danielgm.net/cc/

            there are others, but these are popular and FREE!

            A useful tool my friend built, while Autodesk keeps the Alice Labs visualization engine on ice (we’ve heard spring 2013 for an unveil), is http://www.clouds2max.com

            He uses Thinkbox Frost for his meshing.

            submit any questions you guys would like to know more about; we do 100% scanning and go to all the shows for new unpublished or beta stuff.

            Posted by Ed Oliveras on
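The rename-or-convert trick Ed mentions works because a .pts file is mostly plain ASCII rows (commonly `x y z intensity r g b`, with a point-count header line per scan section; check your own export, as the column layout here is an assumption). A minimal converter keeps just the coordinate columns:

```python
# Convert Leica-style .pts lines to bare 'x y z' lines, skipping the
# single-token point-count header each scan section starts with.
# Column layout (x y z intensity r g b) is the common convention; verify
# against your own export.

def pts_to_xyz(pts_lines):
    """Yield 'x y z' lines from an iterable of .pts lines."""
    for line in pts_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # point-count header or blank line
        yield " ".join(parts[:3])
```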
  9. Great presentation, and very timely. Our company is looking at getting a Nikon d800 and we already have the 10.5 lens. I had a few questions to clarify as we will be planning on replicating portions of your process.

    Do you have the D800 or the D800E, and which should I get?
    Is the lens-shaving process only affecting the outer plastic of the lens casing, or does it affect the glass too?

    We have the old Faro scanner; it weighs about 60 lbs and is a beast to take around. Do you know roughly what the scanner you use weighs?
    Thanks
    Aaron

    Posted by aaronshardwork on
    • I have the regular D800 and it works really well. You could always try the D800E to get a slightly sharper image, but you will have to pull out any moiré patterns in processing. I think that would add too many extra steps, since I like to process the raw NEF files from the camera directly in PTGui.

      In terms of the lens shave, it only affects the plastic hood. The glass is never touched. I highly recommend Tobias Vollmer for shaving (info@360pano.de) .

      I can hold the D800 with its tripod in my right hand and the Faro Focus3D scanner in my left; it’s extremely light with the correct tripod. The correct kit for the Faro costs about 48k in total.

      Posted by Scott Metzger on
  10. Other useful and interesting gadgets worth looking at in your field are:
    1. iStar 360 HDR cameras
    2. Matterport 3D scanner
    3. Mantis Vision handheld scanner

    lots more gadgets… anybody seriously considering getting or renting these should come to the SPAR conference in CO this April: https://www.sparpointgroup.com/international/

    I blog for these guys in case you were interested: https://www.sparpointgroup.com/blog.aspx?blogid=14536

    Posted by Ed Oliveras on
  11. A fantastic presentation! Can you elaborate on what kind of artifacting you get with a Sigma 8mm, and whether the process can be replicated on a smaller budget with, say, a Canon 5D?

    many thanks.

    Posted by Stephen Chan on
    • The Sigma has a lot more chromatic aberration, and it’s not as sharp as the 10.5 Nikkor.

      Posted by Scott Metzger on
  12. Great work Scott!

    If I may add, Z+F has developed a new laser scanner, which also acquires HDR imagery ( Z+F Imager 5010C, http://www.zf-laser.com/Z-F-IMAGER-5010C-3D.135.0.html?&L=1). This would eliminate the process of stitching the panorama and aligning it to the model. Let me know if you would like to play around with it (c.held@zf-laser.com).

    Thanks for the stunning presentation!

    -Chris

    Posted by Chris Held on
    • Hi Chris,

      The scanner looks fantastic. Can you tell us a little more about the camera: the resolution per exposure, how many brackets it takes for HDRs, how many stops apart, and what focal length it shoots with?

      many thanks!

      Posted by stelisfilm on
  13. Does anyone have a recommendation for shooting HDRi at the same time as scanning?
    Any suggestions on a viable tripod head that would allow quick swapping between the two, or would placing a second tripod in a ballpark area be adequate?

    Awesome presentation.

    -g

    Posted by jiyles on
