Adobe Demos Photo Unblurring At MAX 2011

karthikmns writes with word of an amazing demo presented last week at Adobe's annual MAX convention. You'll have to watch the video, but the enthusiastic crowd reaction seems genuine (or at least justified), even in an audience full of Photoshop enthusiasts, as photographs are algorithmically deblurred. (Maybe in the future, cameras will keep records of their own motion in metadata to assist such software efforts, rather than relying on in-built anti-shake software.) No word about when this will turn up for consumers in anything besides demo form, but I suspect similar software's already in use at Ft. Meade and Langley.
  • by cranky_slacker ( 815016 ) on Tuesday October 11, 2011 @11:10AM (#37679554) Homepage
    This demo came during the 'Sneak Peeks' portion of the conference. The technology may never make it to market.

    That being said, I was at MAX, and the demo was as amazing as it looks. Essentially, the software determines the motion/jitter of the camera at the time the photo was taken (i.e. figures out what caused the blur) and then undoes it. I can't imagine why they wouldn't include this in a future version of Photoshop.
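
    A minimal sketch of that "undo" step, assuming the shake kernel has already been estimated somehow (hypothetical NumPy code, not Adobe's implementation):

    import numpy as np

    def wiener_deconvolve(blurred, kernel, nsr=1e-3):
        # Recover an image from `blurred`, given an estimated blur `kernel`.
        # `nsr` is a guessed noise-to-signal ratio; it keeps the division
        # stable where the kernel's frequency response is near zero.
        padded = np.zeros_like(blurred, dtype=float)
        kh, kw = kernel.shape
        padded[:kh, :kw] = kernel
        # Centre the kernel on (0, 0) so frequency-domain multiplication
        # matches spatial convolution.
        padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        H = np.fft.fft2(padded)                    # blur's frequency response
        B = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter
        return np.real(np.fft.ifft2(W * B))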
  • by tepples ( 727027 ) <tepples.gmail@com> on Tuesday October 11, 2011 @11:58AM (#37680182) Homepage Journal

    Yeah, it's too bad there's not an easy way for YouTube to display the effective pixel density of a video

    That would take three steps: 1. find edges; 2. pick some edges and do Fourier transforms; and 3. figure out how wide the passband is. YouTube could do that at encode time, but it'd have to be done on keyframes throughout a video, or videos with multiple resolutions edited together (e.g. HD video made with SD file footage) would fool it.
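
    A rough sketch of steps 2 and 3, assuming a grayscale frame as a 2D NumPy array (hypothetical code; the edge selection of step 1 is skipped and whole rows are used instead):

    import numpy as np

    def effective_resolution_fraction(frame, floor_db=-40.0):
        # Average the magnitude spectra of all rows, then find the highest
        # horizontal frequency still above a noise floor. A sharp frame
        # returns close to 1.0; SD footage upscaled to HD returns ~0.5.
        spectra = np.abs(np.fft.rfft(frame.astype(float), axis=1))
        mean_spec = spectra.mean(axis=0)
        ref = mean_spec[1:].max()                  # normalise, ignoring DC
        mean_db = 20 * np.log10(mean_spec / ref + 1e-12)
        above = np.nonzero(mean_db[1:] > floor_db)[0]
        if len(above) == 0:
            return 0.0
        return (above[-1] + 1) / (len(mean_spec) - 1)  # fraction of Nyquist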

  • Zoom! Enhance! (Score:4, Interesting)

    by Hentes ( 2461350 ) on Tuesday October 11, 2011 @12:04PM (#37680248)

    Of course. You can't get back more information than is in the picture. But for a photograph, it's enough that it looks good.

    Which reminds me of another, similar algorithm that worked on human faces. It could restore very low-res images to a sharp, almost perfect face. It's just that the face was completely different from the one in the original picture.

  • by N7DR ( 536428 ) on Tuesday October 11, 2011 @12:05PM (#37680266) Homepage

    It's cool, but not magic.

    Right. I did exactly this with at least one ring image from Voyager 1's encounter with Saturn, and that was in 1980 (although I think I didn't get around to writing the code and actually de-blurring the image for two or three years after it was taken). I believe we used a VAX 11/730 to perform the computations.

    FYI, Voyager pictures were 800x800 pixels, taken in monochrome with a filter applied in front of the camera. I don't recall whether this particular picture was a single image or a colour image taken with three filters. If the latter, there would have been an interesting twist: the three images would have been taken 48 seconds apart, so the spacecraft would have moved detectably from one colour exposure to the next, and some semi-clever stuff would have been necessary to deblur the three individual images and then merge them. But I honestly don't remember after all this time whether we had to do that.

  • by Anonymous Coward on Tuesday October 11, 2011 @12:45PM (#37680656)

    > I think it would be better to say that [most of] the data are already present; the data just happen to be initially in an unwanted form.

    Not necessarily.

    Some techniques of reconstruction use information that is not present in the picture. There's a video about reconstructing 3D images of people (with Tom Hanks as an example) which produces a 3D model from the data in a picture _AND_ a database of preprocessed perspective angles of known stereotype 3D face models (google "morphable" plus video). I guess this is "thinking outside the box". Literally.

  • Re:the end. (Score:5, Interesting)

    by TeknoHog ( 164938 ) on Tuesday October 11, 2011 @12:46PM (#37680662) Homepage Journal

    This does NOT fix images that are out of focus. This fixes motion blur. The two are entirely unrelated.

    Except that both are examples of convolution, and both can be attacked by deconvolution. In motion blur, the convolution kernel resembles a straight line in the direction of motion. In unfocused images, the kernel has circular symmetry. I used to write simple deconvolution algorithms about 10 years ago, but only for motion blur, where the kernel was easy to find from the conditions in a well-defined industrial setting. Unfocused images are harder to deal with, because the kernel's frequency response goes to zero at certain intervals, so information is destroyed (see the sketch at the end of this comment).

    As mentioned in my other post, here [maxent.co.uk] are some examples of more sophisticated image reconstruction from many years ago. When the kernel is unknown, the image can still be reconstructed using statistical techniques (basically because the kernel is the same for all points in the image).
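
    A quick illustration of the two kernel shapes (my own sketch with made-up sizes, nothing from the article):

    import numpy as np

    def motion_kernel(length=9):
        # Horizontal motion blur: a normalised straight line.
        k = np.zeros((length, length))
        k[length // 2, :] = 1.0
        return k / k.sum()

    def defocus_kernel(radius=4, size=9):
        # Defocus blur: a normalised disk (circularly symmetric).
        y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
        k = ((x ** 2 + y ** 2) <= radius ** 2).astype(float)
        return k / k.sum()

    for name, kern in (("motion", motion_kernel()), ("defocus", defocus_kernel())):
        mtf = np.abs(np.fft.fft2(kern, s=(256, 256)))
        # Frequencies where the response is near zero are the ones whose
        # information is (nearly) destroyed: deconvolution must divide by
        # these tiny values, massively amplifying any noise there.
        print(name, "min response: %.1e," % mtf.min(),
              "near-dead bins: %.1f%%" % (100 * np.mean(mtf < 1e-2)))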

  • by Solandri ( 704621 ) on Tuesday October 11, 2011 @01:10PM (#37680950)
    Yeah, this is standard math. A completely out-of-focus picture actually contains nearly as much information as a sharp photo; it's just smeared by a reversible mathematical transform whose kernel is called a point spread function. Reverse it and you get the in-focus image back. There have been third-party programs [focusmagic.com] to do this for about a decade. The main problems have been processing speed (a decade ago it could take half an hour or more), determining the point spread function (you have both focus and camera shake, and the former can make figuring out the latter really hard), lens/sensor defects and image format compression (the PSF you calculate for a local region may not work well for the entire picture), and boundary conditions.
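
    A toy demonstration that the information really is still there (hypothetical numbers: a small Gaussian PSF whose spectrum never hits zero, and no noise at all):

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))

    # Build a small Gaussian PSF and embed it centred on (0, 0).
    y, x = np.mgrid[-3:4, -3:4]
    psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
    psf /= psf.sum()
    pad = np.zeros_like(image)
    pad[:7, :7] = psf
    pad = np.roll(pad, (-3, -3), axis=(0, 1))

    H = np.fft.fft2(pad)
    blurred = np.real(np.fft.ifft2(H * np.fft.fft2(image)))      # forward blur
    restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))   # exact inverse
    print("max error:", np.abs(restored - image).max())  # ~1e-12: nothing lost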
  • Siggraph 2008 (Score:4, Interesting)

    by tjwhaynes ( 114792 ) on Tuesday October 11, 2011 @02:31PM (#37681886)
    This looks very much like the paper "High-quality Motion Deblurring from a Single Image" [cuhk.edu.hk] by Qi Shan and Jiaya Jia (Department of Computer Science and Engineering, The Chinese University of Hong Kong) and Aseem Agarwala (Adobe Systems, Inc.).

    This uses a single image as input, and tries to determine both the latent (unblurred) image (L) and the motion kernel (f). It alternates between optimizing each in turn, and produces results similar to the demo seen in the video. Given that Aseem works for Adobe, I suspect this work is now close to release.

    Cheers,
    Toby Haynes
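
    For the curious, a heavily simplified sketch of that alternating structure (hypothetical code; the actual paper uses a sparse image prior and a far more careful optimizer, this just alternates two regularized least-squares solves in the Fourier domain):

    import numpy as np

    def pad_center(small, shape):
        # Embed a small kernel in a full-size array, centred on (0, 0).
        out = np.zeros(shape)
        h, w = small.shape
        out[:h, :w] = small
        return np.roll(out, (-(h // 2), -(w // 2)), axis=(0, 1))

    def blind_deconvolve(blurred, ksize=15, iters=20,
                         lam_img=1e-2, lam_ker=1e-1):
        B = np.fft.fft2(blurred)
        kernel = np.zeros((ksize, ksize))
        kernel[ksize // 2, ksize // 2] = 1.0   # initial guess: no blur
        for _ in range(iters):
            # Image step: with the kernel fixed, solve for the latent
            # image (regularized least squares, closed form via FFT).
            K = np.fft.fft2(pad_center(kernel, blurred.shape))
            L = np.conj(K) * B / (np.abs(K) ** 2 + lam_img)
            # Kernel step: with the latent image fixed, solve for the
            # kernel the same way, then project it back onto a small,
            # non-negative, normalised support.
            Kf = np.conj(L) * B / (np.abs(L) ** 2 + lam_ker)
            k_full = np.real(np.fft.ifft2(Kf))
            k_full = np.roll(k_full, (ksize // 2, ksize // 2), axis=(0, 1))
            kernel = np.clip(k_full[:ksize, :ksize], 0.0, None)
            kernel /= kernel.sum() + 1e-12
        return np.real(np.fft.ifft2(L)), kernel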
