Adobe Demos Photo Unblurring At MAX 2011
karthikmns writes with word of an amazing demo presented last week at Adobe's annual MAX convention. You'll have to watch the video, but the enthusiastic crowd reaction seems genuine (or at least justified), even in an audience full of Photoshop enthusiasts, as photographs are algorithmically deblurred. (Maybe in the future, cameras will keep records of their own motion in metadata to assist such software efforts, rather than relying on in-built anti-shake software.) No word about when this will turn up for consumers in anything besides demo form, but I suspect similar software's already in use at Ft. Meade and Langley.
Don't Hold Your Breath (Score:4, Interesting)
That being said, I was at MAX and the demo was as amazing as it looks. Essentially, the software determines the motion/jitter of the camera at the time the photo was taken (i.e. figures out what caused the blur) and then undoes it. I can't imagine why they wouldn't include this in a future version of Photoshop.
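Once the motion kernel is known, the "undo" step is classic deconvolution. Here's a minimal numpy sketch using Richardson-Lucy iteration on a 1-D signal, assuming the kernel has already been estimated (which is the hard part the demo actually solves); the signal, kernel length, and iteration count are all illustrative:

```python
import numpy as np

def richardson_lucy(blurred, psf, iters):
    """Iteratively undo a *known* blur kernel (Richardson-Lucy).
    The interesting part of the demo -- estimating the kernel from
    the photo itself -- is assumed solved here."""
    x = np.full_like(blurred, blurred.mean())     # flat initial guess
    psf_flip = psf[::-1]
    for _ in range(iters):
        est = np.convolve(x, psf, mode="same")
        ratio = blurred / np.maximum(est, 1e-12)  # avoid divide-by-zero
        x *= np.convolve(ratio, psf_flip, mode="same")
    return x

# toy 1-D "photo": a few spikes blurred by 9 samples of straight-line motion
truth = np.zeros(128)
truth[[30, 60, 61, 95]] = [1.0, 0.8, 0.8, 1.2]
psf = np.ones(9) / 9.0                            # straight-line motion kernel
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, iters=200)
```

With noise or a mis-estimated kernel, the iteration amplifies artifacts instead, which may be part of why this is still demo-only.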
Re:If the video could be unblurred.. (Score:4, Interesting)
Yeah, it's too bad there's not an easy way for YouTube to display the effective pixel density of a video.
That would take three steps: 1. find edges; 2. pick some edges and do Fourier transforms; and 3. figure out how wide the passband is. YouTube could do that at encode time, but it'd have to be done on keyframes throughout a video, or videos with multiple resolutions edited together (e.g. HD video made with SD file footage) would fool it.
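Steps 2 and 3 can be sketched in a few lines of numpy. Here a full-bandwidth scan line is compared against the same line with its top frequencies zeroed out (a stand-in for SD footage upscaled to HD); the edge-finding step is skipped and the noise-floor threshold is arbitrary:

```python
import numpy as np

def passband_fraction(line, floor=1e-3):
    """Steps 2-3 above: Fourier-transform a scan line and report the
    highest frequency whose energy clears a (made-up) noise floor,
    as a fraction of Nyquist. Step 1, edge-finding, is skipped."""
    spec = np.abs(np.fft.rfft(line))
    spec /= spec.max()
    return np.nonzero(spec > floor)[0][-1] / (len(spec) - 1)

rng = np.random.default_rng(1)
hd = rng.standard_normal(1024)            # full-bandwidth scan line
spec = np.fft.rfft(hd)
spec[len(spec) // 4 :] = 0                # fake "upscaled SD": top 75% removed
sd_upscaled = np.fft.irfft(spec)
# passband_fraction(hd) comes out near 1.0, sd_upscaled near 0.25
```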
Zoom! Enhance! (Score:4, Interesting)
Of course. You can't get back more information than is in the picture. But for a photograph it's enough that it looks good.
Which reminds me of another similar algorithm that worked on human faces. It could restore very low-res images to a sharp, almost perfect face. It's just that the face was completely different from the one in the original picture.
Re:Interpolated missing data is still just a ficti (Score:5, Interesting)
It's cool, but not magic.
Right. I did exactly this with at least one ring image from Voyager 1's encounter with Saturn, and that was in 1980 (although I think I didn't get around to writing the code and actually de-blurring the image for two or three years after it was taken). I believe we used a VAX 11/730 to perform the computations.
FYI, Voyager pictures were 800x800 pixels, taken in monochrome with a filter applied in front of the camera. I don't recall whether this particular picture was a single image or a colour image taken with three filters. If the latter, there would have been an interesting twist: the three images would have been taken 48 seconds apart, so the spacecraft would have moved detectably from one colour to the next, and some semi-clever stuff would have been necessary to deblur the three individual images and then merge them. But I honestly don't remember after all this time whether we had to do that.
Re:Interpolated missing data is still just a ficti (Score:2, Interesting)
> I think it would be better to say that [most of] the data are already present; the data just happen to be initially in an unwanted form.
Not necessarily.
Some techniques of reconstruction use information that is not present. There's a video about reconstructing 3D images of people (with Tom Hanks as an example) which produces a 3D model from data in a picture _AND_ a database of preprocessed perspective angles of known stereotype 3D face models (ggl "morphable", video). I guess this is "thinking outside the box". Literally.
Re:the end. (Score:5, Interesting)
This does NOT fix images that are out of focus. This fixes motion blur. The two are entirely unrelated.
Except that both are examples of convolution and deconvolution. In motion blur, the convolution kernel resembles a straight line in the direction of motion. In unfocused images, the kernel has circular symmetry. I used to write simple deconvolution algorithms about 10 years ago, but only for motion blur, where the kernel was easy to find from the conditions in a well-defined industrial setting. Unfocused images are harder to deal with, because the kernel's Fourier transform falls to zero at certain frequencies, so information there is destroyed.
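The difference between the two kernels shows up directly in their transfer functions. A numpy sketch (grid and kernel sizes are illustrative): a 9-pixel line kernel passes every frequency perpendicular to the motion untouched and only has sinc-like nulls along the motion direction, while the defocus disk's transfer function also dips to near zero, on rings rather than along one axis:

```python
import numpy as np

N = 64                                     # grid and kernel sizes are illustrative
yy, xx = np.mgrid[:N, :N] - N // 2

line = ((yy == 0) & (np.abs(xx) <= 4)).astype(float)   # 9-px motion-blur line
disk = (xx**2 + yy**2 <= 4**2).astype(float)           # radius-4 defocus disk
line /= line.sum()
disk /= disk.sum()

# magnitude of each kernel's transfer function (kernel centred at the origin)
otf_line = np.abs(np.fft.fft2(np.fft.ifftshift(line)))
otf_disk = np.abs(np.fft.fft2(np.fft.ifftshift(disk)))

# otf_line[:, 0] is exactly 1: frequencies perpendicular to the motion survive,
# while sinc-like nulls appear along the motion direction. otf_disk also has
# near-zero values, but they fall on rings instead of along a single axis.
```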
As mentioned in my other post, here [maxent.co.uk] are some examples of more sophisticated image reconstruction from many years ago. When the kernel is unknown, the image can still be reconstructed using statistical techniques (basically because the kernel is the same for all points in the image).
Siggraph 2008 (Score:4, Interesting)
This uses a single image as input, and tries to determine a local prior (L) and a motion kernel (f). It alternates between optimizing each in turn, and produces results similar to the demo seen in the video. Given that Aseem works for Adobe, I suspect this work is now close to release.
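For flavour, here is a toy numpy version of that alternation, with quadratic (Tikhonov) priors standing in for the paper's sparse local prior. It shows the alternating structure only, not the actual SIGGRAPH 2008 algorithm, and every parameter is made up:

```python
import numpy as np

def blind_alternate(y, iters=20, lam=1e-2):
    """Toy blind deconvolution: alternate closed-form updates of the
    image X and kernel K in the Fourier domain, with quadratic
    (Tikhonov) priors standing in for the paper's sparse local prior.
    lam and the iteration count are made-up values."""
    Y = np.fft.fft(y)
    K = np.fft.fft(np.r_[1.0, np.zeros(len(y) - 1)])  # start from identity kernel
    obj = []
    for _ in range(iters):
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)   # image step, K fixed
        K = np.conj(X) * Y / (np.abs(X) ** 2 + lam)   # kernel step, X fixed
        obj.append(np.sum(np.abs(K * X - Y) ** 2)     # regularized objective:
                   + lam * np.sum(np.abs(X) ** 2)     # each step minimizes it in
                   + lam * np.sum(np.abs(K) ** 2))    # one variable, so it never
    return np.real(np.fft.ifft(X)), np.real(np.fft.ifft(K)), obj  # increases

rng = np.random.default_rng(2)
truth = rng.random(64)
psf = np.zeros(64)
psf[:5] = 0.2                                         # 5-sample circular motion blur
y = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf)))
x_est, k_est, obj = blind_alternate(y)
```

Without a real image prior, this alternation happily drifts toward the trivial no-blur answer, which is exactly why the paper's local prior matters.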
Cheers,
Toby Haynes