One of the more interesting new products shown at SIGGRAPH 2012 in LA was Tandent’s Lightbrush. It was first previewed at a prior SIGGRAPH, but at that stage it was not yet a product that could be purchased. Lightbrush grew out of computer vision research and allows artists to manipulate the surface colors and the shading of a still picture independently.
While video applications have been demonstrated, for now the product is sold for stills only. There is talk of a video version late next year, but no details or pricing for that version have been published. Photoshop has long been able to select regions and manipulate them, but Lightbrush goes further: it separates the original image into a reflectance map and an illumination map, a split that was previously only possible with computer-generated material. This means you can ‘move’ the shadow of someone on the street, or relight a wall, editing the illumination separately from the underlying base texture. Because the reflectance and illumination are calculated separately, either the base content or the overlying shading can be edited on its own. The result is less of a fake, repaint or scale and more of a genuine illumination adjustment and image reconstruction.
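The split described above corresponds to the classic ‘intrinsic images’ model, in which each pixel is the product of a reflectance value and a shading (illumination) value. Below is a minimal sketch in Python with NumPy assuming that model; the tiny hypothetical arrays stand in for a real decomposition, which Lightbrush computes automatically from a photograph:

```python
import numpy as np

# Intrinsic-images model: image = reflectance * shading, per pixel.
# These hand-made arrays are illustrative only; the real decomposition
# is what Tandent's algorithm recovers from an actual photo.
h, w = 4, 4
reflectance = np.full((h, w), 0.6)   # base surface color (albedo)
shading = np.ones((h, w))            # illumination map
shading[2:, :] = 0.3                 # bottom half sits in shadow

image = reflectance * shading        # "rendering": combine the two layers

# Edit only the illumination layer: lift the shadow without touching
# the underlying texture, then recombine.
edited_shading = shading.copy()
edited_shading[2:, :] = 0.8
relit = reflectance * edited_shading  # top rows unchanged, shadow brightened
```

The point of the sketch is the workflow the article describes: once material and shading live in separate layers, a relighting edit is a change to one array followed by a simple per-pixel multiply.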
We spoke to Richard Friedhoff, CEO and president of Tandent Vision Science, who explained that “we got interest from people at computer vision conferences and we decided to understand their interest – and so we thought we would exhibit our work at SIGGRAPH and we were literally inundated with interest for a tool that would strip out their illumination qualities for very fast cleaning up of textures, for game development and vfx – and that happened last year, and so we decided to build a product.”
The company is not new; it has been doing computer vision research for some time. The product is now shipping and had been in beta for several months. The program takes a standard image as input and works out what the base colors and textures of the objects are, and what light lands on those objects. “It’s basically the reverse of the rendering process,” Friedhoff says. “In rendering you take the lights and apply them to surfaces to make an image; we take an actual image and reverse engineer that, to work out the process that had to take place to create that image in the first place.” Specular and diffuse reflections on objects are treated as illumination properties; from a computer vision point of view, everything is just illumination and material components. Once they are separated, the artist can think of material and shading as layers and edit either as ‘separate channels’.
Previous techniques might have tracked in non-shadow areas from other frames, perhaps with optical flow, but Lightbrush does not work like that: it needs only the one frame. Furthermore, it does not clone in textures; it adjusts within the dynamic range of the image to ‘correct’ the final result. Says Friedhoff: “Our mission when we started working on this was to be able to decompose a still image about which we know nothing, and that’s what it does.” Tandent has been doing fundamental research in this area. The illumination map can be used for more than just adjusting shadows; it can be used to adjust the lighting overall.
Lightbrush has its sensible limits. The product is primarily aimed at the light on object surfaces, not the albedo of a volumetric space or transmissive materials; these fall outside its primary design. To paraphrase Tandent, they are ‘looking at the nature of surfaces, not the spaces between surfaces’. Similarly, specular highlights may well clip, providing no spatial variation to work with, and crushed shadows sometimes offer little as well, “but sometimes when we separate the image people find information in the shadows they did not know was there,” adds Friedhoff.
Ideally the product likes raw or linear files. While gamma-corrected images can be used, clean linear data produces the best results. Clearly the more dynamic range the better; to brighten dark shadows in particular, you need some structural data in them. Lens information is irrelevant and no metadata is required; it is all image processing. As such, processing time is a near-linear function of resolution, though image complexity also affects it. The initial automatic process is very fast and leads into a secondary interactive session for fine-tuning the results: the user gives the program samples of light and dark areas, and based on these the image is adjusted in close to real time.
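For readers working from gamma-encoded sources, the standard sRGB transfer functions show what moving an image to linear light involves. This is a sketch of ordinary sRGB linearization, an assumption for illustration, not a documented Lightbrush preprocessing step:

```python
import numpy as np

# Standard IEC 61966-2-1 sRGB transfer functions. The article only says
# linear data gives the best results; whether Lightbrush linearizes
# internally is not stated, so this is a generic preprocessing sketch.
def srgb_to_linear(c):
    """Convert sRGB-encoded values in [0, 1] to linear light."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Inverse transform, for writing edited results back out."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)
```

Working in linear light matters here because the reflectance-times-illumination model is physically meaningful only when pixel values are proportional to light intensity; a gamma curve breaks that proportionality.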
The product sold at SIGGRAPH for an introductory price of $1500, but the base price is $2500, with discounts for bulk purchases.