Graphics Software Technology

Adobe Demos Photo Unblurring At MAX 2011

karthikmns writes with word of an amazing demo presented last week at Adobe's annual MAX convention. You'll have to watch the video, but the enthusiastic crowd reaction seems genuine (or at least justified), even in an audience full of Photoshop enthusiasts, as photographs are algorithmically deblurred. (Maybe in the future, cameras will keep records of their own motion in metadata to assist such software efforts, rather than relying on in-built anti-shake software.) No word about when this will turn up for consumers in anything besides demo form, but I suspect similar software's already in use at Ft. Meade and Langley.
  • by Bongoots ( 795869 ) * on Tuesday October 11, 2011 @10:00AM (#37679452)

    I'd be able to see the demo!

  • Here's the PC Weenies [pcweenies.com] cartoon about this one...

  • by cranky_slacker ( 815016 ) on Tuesday October 11, 2011 @10:10AM (#37679554) Homepage
    This demo came during the 'Sneak Peeks' portion of the conference. The technology may never make it to market.

    That being said, I was at MAX and the demo was as amazing as it looks. Essentially, the software determines the motion/jitter of the camera at the time the photo was taken (i.e. figures out what caused the blur) and then undoes it. I can't imagine why they wouldn't include this in a future version of Photoshop.
    • I can't watch the vid because of an incredibly slow connection, but I was guessing this is only for motion blur and not for, say, the camera being out of focus. Is that correct?

      I can imagine people saying it's impossible if it's about unblurring out-of-focus pictures, but for motion blur, once the path is extrapolated, it seems like there should be some sort of computer magic that backtracks along the path to build up an impression of what the original image was.
  • A staged demo using images that lend themselves to the kind of interpolated guesswork that this uses is one thing. Making it work with real-world forensics is quite another.
    • by doggo ( 34827 )

      Whatever, Eeyore. Images that lend themselves to the kind of interpolated guesswork that this uses are blurry ones. Y'know, the kind you get in the real world?

    • by jez9999 ( 618189 )

      The guys at CSI have been using this to get faces from dirt particle reflections for decades. I know Slashdot submissions are old, but this is ridiculous.

  • by Dunbal ( 464142 ) *

    Maybe in the future, cameras will keep records of their own motion in metadata to assist such software efforts

    Because we all could use just a little more file size bloat. After all, memory is cheap, right?

    • Yes indeed, we can't add a few bytes of accelerometer data to a 10 megapixel image, that would make those images too large!
    • by Jeng ( 926980 )

      If people cared about file size bloat they wouldn't be purchasing the most megapixels possible.

      After all, memory is cheap.

    • I considered trying to do this once with a photo app on Android ... store the accelerometer data in real time as the shutter was clicked.

      Wasn't helpful without the algorithm they're using though.

  • It doesn't sound like a much harder problem than the deconvolution I learned about in EE undergrad: divide the Fourier transform of the blurred image by the Fourier transform of the "motion kernel", as they call it, to get the sharpened image (a rough sketch in code follows this thread). I routinely use a similar method in the lab to correct for visual aberrations in my diffraction spot imaging equipment, but there the problem is much easier, as the motion function is exactly traced out by the diffraction spots.

    Perhaps getting the "motion kernel" in the first place is the hard part.
    • Yes, this is basically deconvolution. If you don't know the convolution kernel, there are statistical methods to find the most likely solution for a given blurry image. I heard about these techniques about 10 years ago from Prof. Steve Gull, one of the people behind MaxEnt [maxent.co.uk].
    • It sounds similar to what you're familiar with. I would bet the motion kernel is pretty tricky to get right. And, of course, bundling it into a user-friendly piece of software and shipping it adds to the complexity. I wouldn't be surprised if someone made some sort of GIMP add-on that did something along these lines years ago, but one that hasn't been developed to the level Adobe would take it (if they release it) and hasn't attracted the same attention, since there isn't the same backing behind it.
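A minimal numpy sketch of the frequency-domain deconvolution described in this thread, assuming the blur kernel is already known (the function name and the eps parameter are illustrative, not from Adobe's demo). Plain division by the kernel's FFT explodes wherever the kernel spectrum is near zero, so a Wiener-style regularizer stands in for the statistical machinery mentioned above:

```python
import numpy as np

def deconvolve_fft(blurred, kernel, eps=1e-3):
    """Frequency-domain deconvolution: divide the FFT of the blurred
    image by the FFT of the (known) blur kernel, with Wiener-style
    regularization to keep near-zero frequencies from amplifying noise."""
    # Pad the kernel out to the image size so the FFTs line up.
    kpad = np.zeros_like(blurred, dtype=float)
    kh, kw = kernel.shape
    kpad[:kh, :kw] = kernel
    # Shift the kernel's center to the origin so the output isn't translated.
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    B = np.fft.fft2(blurred)
    K = np.fft.fft2(kpad)
    # K* / (|K|^2 + eps) instead of a bare 1 / K.
    est = B * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(est))
```

The hard part, as the replies note, is that the kernel is not known in advance; this covers only the easy half of the problem.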
  • by bradgoodman ( 964302 ) on Tuesday October 11, 2011 @10:12AM (#37679588) Homepage
    There seemed to be a bit of "smoke and mirrors" behind some of these demos. He kept "loading some parameters" for each one. Granted, the video was so blurry you couldn't really see the results.

    I think I'll reserve judgement, though, until I can see it "for real".

    Who were the annoying guys off to the side who loved hearing themselves talk? Really kind of ruined the momentum. This isn't MST3K.

    • by gaspyy ( 514539 )

      Too much paranoia on your part. The things they show at MAX tend to become part of the next software iteration, although usually the UI is completely different. Everything I've seen in the past 2-3 years has materialized one way or another.

      Unblurring is not a new idea, the tough part is (was?) figuring out the deconvolution kernel.

  • Surely you mean Ft. Belvoir [nga.mil].
  • Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.
  • by vossman77 ( 300689 ) on Tuesday October 11, 2011 @10:32AM (#37679854) Homepage

    We've known about deconvolution [wikipedia.org] forever; the trick is figuring out the path of the camera to generate the kernel for the deconvolution. In TFV, he says they use a custom parameter file (that they probably spent months tweaking for each image), lots of computing power, and TADA! An unblurred image.

    Microsoft had something similar a few years ago, where you have a blurred image and a second, underexposed image to do the same thing. See the paper here [microsoft.com] and examples here [cuhk.edu.hk].
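One way to picture the "kernel from the camera's path" step is to rasterize the estimated trajectory into a blur kernel that the deconvolution above can then invert. The path samples here are hypothetical; Adobe estimates the path from the image itself, while Microsoft's approach read it from hardware sensors:

```python
import numpy as np

def kernel_from_path(path_xy, size=31):
    """Rasterize a camera path (in pixel displacements) into a
    normalized motion-blur kernel suitable for deconvolution."""
    k = np.zeros((size, size))
    c = size // 2
    for x, y in path_xy:
        ix, iy = c + int(round(x)), c + int(round(y))
        if 0 <= ix < size and 0 <= iy < size:
            k[iy, ix] += 1.0      # accumulate exposure along the path
    return k / k.sum()            # normalize to preserve brightness

# Example: an 8-pixel horizontal streak, like a sideways camera jerk.
streak = kernel_from_path([(x, 0) for x in range(8)])
```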

  • More like de-streak. This isn't CSI technology come to real life. If you take a picture while moving the camera, it basically retraces the camera's movement to reconstruct a sharper picture.

  • In the case of fixed-base (like security) cameras, there is very little camera shake that would blur the image. So tracking the motion of the camera (via 3-axis accelerometer for example) wouldn't help.

    Unless you can compute separate motion vectors for each element in the image (think people walking in different directions, each face to deblur would have a different motion vector) this would not seem to improve the performance.

    And, of course, the choice of motion vectors would have a huge impact on the reconstruction.

    • by tepples ( 727027 )

      Unless you can compute separate motion vectors for each element in the image (think people walking in different directions, each face to deblur would have a different motion vector)

      Guess what every video codec since MPEG-1 does. Granted, it's a lot more difficult because of the lack of "before motion" and "after motion" images, but there are ways of estimating motion amount from passbands in the Fourier domain.
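For context, the codec-style motion estimation mentioned here is exhaustive block matching: for each block of the current frame, search the previous frame for the offset with the lowest sum of absolute differences. This sketch assumes two frames, which is exactly what a single blurry still lacks, so it illustrates the comparison rather than a deblurring method:

```python
import numpy as np

def block_motion(prev, curr, block=16, search=8):
    """MPEG-style exhaustive block matching: return one motion vector
    (dx, dy) per block, minimizing the sum of absolute differences."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = curr[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        ref = prev[y:y + block, x:x + block].astype(int)
                        sad = int(np.abs(blk - ref).sum())
                        if best_sad is None or sad < best_sad:
                            best, best_sad = (dx, dy), sad
            vectors[(bx, by)] = best
    return vectors
```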

  • ... the "enhance!" command. Yay!
  • by kiwix ( 1810960 ) on Tuesday October 11, 2011 @10:43AM (#37680002)
    Microsoft did a similar demonstration one year ago [slashdot.org].
    • Big difference. Microsoft is tracing the point spread function using accelerometers. Adobe is computing the PSF from the data (and perhaps these loaded parameters). Microsoft's technique seems more novel to me....

      • Microsoft's technique is more limited in that it only works on camera shake. It cannot correct for mis-focus (which, granted, isn't usually a problem with the tiny sensors on most phone cameras).

        Calculating the point spread function from the photo can correct for both, and is the more general-purpose and more powerful technique. I can see using Microsoft's technique to augment the general-purpose one, though. Figuring out the PSF due to camera shake can be really hard when the photo is badly out of focus.
  • I hereby challenge them. Their software versus my fast-moving kids, who often show up in photos as blurs. I think kids have built-in sensors to let them know precisely when a camera is going off, thus enabling them to move at the exact moment to blur and/or ruin the photo.

    • I hereby challenge them. Their software versus my fast-moving kids, who often show up in photos as blurs. I think kids have built-in sensors to let them know precisely when a camera is going off, thus enabling them to move at the exact moment to blur and/or ruin the photo.

      Or... you could just buy a camera with a wide aperture and fantastic noise reduction at high ISO. That would give you shutter speeds fast enough to freeze even the fastest children. I guarantee the camera will be cheaper than Photoshop.

      Case in point: Canon's G12 [amazon.com] is at least $100 USD cheaper than Photoshop.

  • It looks like the filter targets images taken by slow CCDs or similar, where someone moves the camera/phone while taking the pic. The image is in focus, but the exposure time is so long it gets smeared. The analysis appears to figure out how the camera was moving during the capture and reverse that. It's very clever, but it would be nice to see some genuine before & after shots without someone's shaky audience cam and YouTube encode on top. Also, the issue of "parameters" would need some explaining.
    • by pavon ( 30274 )

      This same principle works for unfocused images as well. In both cases, you need to figure out how the image was blurred. In the case of motion blur, the pixels were smeared along a path. In the case of an unfocused image, the pixels are blurred according to a Gaussian (bell curve). Once you have this "blur kernel" (normally called a point-spread function [wikipedia.org] in the field), it is just a matter of using deconvolution [wikipedia.org] techniques to remove the distortion.

      In both cases, the information is there, it is just not in the form you want.
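The defocus case described here reduces to a different kernel shape. A short sketch of a Gaussian point-spread function (sigma chosen arbitrarily; a uniform disk is the other common defocus model), which plugs into any standard deconvolution routine such as the frequency-domain sketch earlier in the discussion:

```python
import numpy as np

def gaussian_psf(size=31, sigma=3.0):
    """Gaussian point-spread function modeling an out-of-focus blur."""
    c = size // 2
    y, x = np.mgrid[-c:c + 1, -c:c + 1]
    k = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()   # normalize to preserve brightness
```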

  • Dammit... now the answer to the CSI "Can you clean that up?" question is yes, and people will continue to expect miracles from technology.

  • by Animats ( 122034 ) on Tuesday October 11, 2011 @11:33AM (#37680496) Homepage

    This isn't new. There's a shareware plug-in, "DeblurMyImage" [adptools.com], for it.

    There are two main cases - focus blur and motion blur. Dealing with focus blur is well understood, because what defocusing does to an image is well understood. Motion blur is harder, because you have to extract a motion estimate first.

  • There are Photoshop plugins that do this, e.g. Topaz InFocus: http://www.topazlabs.com/infocus/ [topazlabs.com]

  • 'Enhance' clatter-clatter

    'Just print the damn photo.'
  • Siggraph 2008 (Score:4, Interesting)

    by tjwhaynes ( 114792 ) on Tuesday October 11, 2011 @01:31PM (#37681886)
    This looks very much like the paper "High-quality Motion Deblurring from a Single Image" [cuhk.edu.hk] by Qi Shan and Jiaya Jia (Department of Computer Science and Engineering, The Chinese University of Hong Kong) and Aseem Agarwala (Adobe Systems, Inc).

    This uses a single image as input, and tries to determine a local prior (L) and a motion kernel (f). It alternates between optimizing each in turn, and produces results similar to the demo seen in the video. Given that Aseem works for Adobe, I suspect this work is now close to release.

    Cheers,
    Toby Haynes
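The alternating structure the paper describes can be illustrated with a much older technique: blind Richardson-Lucy deconvolution (Fish et al., 1995), which also switches between refining the kernel and refining the latent image. This is emphatically not Shan and Jia's algorithm, which relies on a carefully designed local prior, but the outer loop has the same shape:

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_rl(blurred, psf_size=15, outer=20, inner=5):
    """Blind deconvolution by alternating Richardson-Lucy updates:
    refine the kernel with the image held fixed, then the image with
    the kernel held fixed, and repeat."""
    blurred = blurred.astype(float)
    img = np.full_like(blurred, blurred.mean())
    psf = np.full((psf_size, psf_size), 1.0 / psf_size ** 2)
    eps = 1e-12
    # Center crop to trim image-sized correlations down to PSF size.
    crop = tuple(slice((s - psf_size) // 2, (s - psf_size) // 2 + psf_size)
                 for s in blurred.shape)
    for _ in range(outer):
        for _ in range(inner):   # kernel update, image held fixed
            ratio = blurred / (fftconvolve(img, psf, mode='same') + eps)
            psf = psf * fftconvolve(ratio, img[::-1, ::-1], mode='same')[crop]
            psf = np.clip(psf, 0.0, None)
            psf /= psf.sum() + eps
        for _ in range(inner):   # image update, kernel held fixed
            ratio = blurred / (fftconvolve(img, psf, mode='same') + eps)
            img = img * fftconvolve(ratio, psf[::-1, ::-1], mode='same')
    return img, psf
```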

  • by dbIII ( 701233 ) on Tuesday October 11, 2011 @10:23PM (#37686624)
    I attended an interesting presentation in 1990 on transmission electron microscopy being used to determine, down to the atomic level, the structure of a growing area of a tooth. Calcium and other atoms of interest are far too small to image, so you get something blurry. The structure was determined computationally by working out what a series of candidate structures would look like after being blurred by the limited resolution of the microscope, and then comparing that to bitmaps of the captured images.
    I know that is a different approach, but people have been working on getting information from defocused images for a long time.
