This research builds on work shown at SIGGRAPH Asia 2009 in Japan, which demonstrated a fast deblurring method that produces a seemingly crisp image from a single blurred photograph in a few seconds. Two main aspects made the 2009 work noteworthy.
The principle, in simple terms: if you can work out how the camera moved and what that motion did to blur the image in the first place, you can undo the operation and recover a crisp image. This works because each pixel records the light that passed over it during the time the shutter was open, so the blurred photo is effectively the sharp scene smeared along the camera's path. The method requires no special knowledge of the camera and no special rig.
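Put another way, the blurred photo can be modelled as the sharp (latent) image convolved with a blur kernel that traces the camera's path. Here is a minimal NumPy sketch of that model; the function names and the simple horizontal-line kernel are illustrative assumptions, not the paper's code:

```python
import numpy as np

def motion_blur_kernel(length=9, size=15):
    """A toy motion-blur kernel: a normalized horizontal line of ones.
    (A real camera path traces an arbitrary 2-D curve.)"""
    k = np.zeros((size, size))
    row = size // 2
    start = (size - length) // 2
    k[row, start:start + length] = 1.0
    return k / k.sum()

def blur(image, kernel):
    """Model B = I * k as circular convolution via the FFT."""
    kh, kw = kernel.shape
    pad = np.zeros_like(image, dtype=float)
    pad[:kh, :kw] = kernel
    # shift the kernel's center to (0, 0) so the result is not displaced
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(pad)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))
```

Because the kernel sums to one, blurring redistributes light but conserves it, which is exactly the "each pixel accumulates what passed over it" intuition above.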
Two key things made the paper stand out:
First, the team used simple and fast image processing techniques to predict strong edges in the blurred image. These predicted edges feed the maths that works out what the camera did – the ‘kernel’ estimation (the estimate of what happened).
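One classic tool for this kind of edge prediction is the shock filter, which pushes smooth intensity ramps back toward step edges. Below is a minimal NumPy sketch of a basic Osher–Rudin shock filter; it is a simplified stand-in for the paper's pipeline, and the step size, iteration count, and periodic boundaries are illustrative assumptions:

```python
import numpy as np

def shock_filter(img, iterations=20, dt=0.2):
    """Sharpen edges by moving intensity away from the Laplacian's
    zero crossings: I_{t+1} = I_t - dt * sign(lap I) * |grad I|.
    Smooth ramps tend toward step edges, i.e. predicted strong edges."""
    I = img.astype(float).copy()
    for _ in range(iterations):
        gy, gx = np.gradient(I)                      # central differences
        lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
               np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
        I -= dt * np.sign(lap) * np.hypot(gx, gy)    # periodic boundaries
    return I
```

Applied to a blurry ramp, each iteration steepens the transition, which is why filters like this are useful for guessing where the sharp image's edges were.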
Secondly, for the kernel estimation itself, they worked with image derivatives rather than the actual pixel values, using fewer steps and fast Fourier transform (FFT) operations to get the result, suppressing some unwanted artifacts along the way. The result runs an order of magnitude faster than previous work and is just as good or better. Of course, Adobe is expected to use GPU facilities to speed things up further if the tool makes it into Photoshop CS6 – and judging from the buzz around this, they will surely want to.
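The reason the Fourier domain is fast: convolution becomes per-pixel multiplication there, so once the kernel is known, deblurring becomes a regularized division. A minimal Wiener-style sketch in NumPy, assuming the kernel has already been estimated (the `nsr` noise-to-signal constant is an illustrative regularizer, not a value from the paper):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Recover the latent image from B = I * k by dividing in the
    Fourier domain, Wiener-regularized so noise does not explode
    where the kernel's spectrum is near zero."""
    h, w = blurred.shape
    kh, kw = kernel.shape
    pad = np.zeros((h, w))
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(pad)
    B = np.fft.fft2(blurred)
    # Wiener filter: conj(K) / (|K|^2 + noise-to-signal ratio)
    I = B * np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(I))
```

Each FFT costs O(N log N), which is why an FFT-heavy pipeline with fewer iterations can beat earlier spatial-domain optimizers by an order of magnitude.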
Here is the official Adobe video of the event:
And here is the original presentation from SIGGRAPH Asia in December 2009 that led to the Adobe work.
And here is a link to the SIGGRAPH paper – updated for the Adobe contribution.
Here is a before and after from the Adobe research.
OK, and if you liked that, check out this from the same conference!