Adobe Demos Photo Unblurring At MAX 2011
karthikmns writes with word of an amazing demo presented last week at Adobe's annual MAX convention. You'll have to watch the video, but the enthusiastic crowd reaction seems genuine (or at least justified), even in an audience full of Photoshop enthusiasts, as photographs are algorithmically deblurred. (Maybe in the future, cameras will keep records of their own motion in metadata to assist such software efforts, rather than relying on in-built anti-shake software.) No word about when this will turn up for consumers in anything besides demo form, but I suspect similar software's already in use at Ft. Meade and Langley.
If the video could be unblurred.. (Score:5, Funny)
I'd be able to see the demo!
Re: (Score:2)
Re: (Score:3)
Indeed, the video is quite poor quality. Rather disappointing :/ I did choose 720p and fullscreen to see if I could see the difference better, but it doesn't really help much.
Yeah, it's too bad there's not an easy way for YouTube to display the effective pixel density of a video - that one would be maybe 60p.
That video was another fine example of the megapixel myth.
Re:If the video could be unblurred.. (Score:4, Interesting)
Yeah, it's too bad there's not an easy way for YouTube to display the effective pixel density of a video
That would take three steps: 1. find edges; 2. pick some edges and do Fourier transforms; and 3. figure out how wide the passband is. YouTube could do that at encode time, but it'd have to be done on keyframes throughout a video, or videos with multiple resolutions edited together (e.g. HD video made with SD file footage) would fool it.
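Those three steps can be faked, crudely, in a few lines: measure how much of a frame's spectral energy survives above half the Nyquist limit. A rough sketch (the function name and the 0.5 cutoff are my own arbitrary choices; this is not anything YouTube actually does):

```python
import numpy as np

def effective_sharpness(frame):
    """Fraction of spectral magnitude above half the Nyquist frequency.
    A frame upscaled from a lower resolution scores lower, because its
    spectrum has little energy beyond the original passband."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
    h, w = frame.shape
    yy, xx = np.mgrid[-h // 2 : h - h // 2, -w // 2 : w - w // 2]
    r = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)  # normalized radius
    return f[r > 0.5].sum() / f.sum()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))              # stand-in for native-resolution detail
low = rng.random((16, 16))
upscaled = np.kron(low, np.ones((4, 4)))  # crude 4x upscale, like SD footage in an HD video
```

A real implementation would, as the parent notes, have to repeat this on keyframes throughout the video rather than trust a single frame.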
Re:If the video could be unblurred.. (Score:4, Funny)
Hmm...
"drawing a laoud applause" "not yet made it clear weather" "will be shipping quite a few number of units"
laoud? weather? few number of units?
I tried saying "Enhance!" a few times, but that didn't un-blur the article's spelling/grammar/word choice+usage. ;)
Re: (Score:2)
Get Chloe to retask a grammar satellite.
We must decode the OP's comments or the terrorists have already won!
Re: (Score:3)
Re: (Score:2)
I feel seasick from the demo. Perhaps they should invest in a Steadicam... If that is too expensive, how about a tripod!
Re: (Score:2)
Wouldn't help much. Adobe's unblurring only deals with motion blur, not out-of-focus blur.
PC Weenies Cartoon Take (Score:2)
Here's the PC Weenies [pcweenies.com] cartoon about this one...
Don't Hold Your Breath (Score:4, Interesting)
That being said, I was at MAX and the demo was as amazing as it looks. Essentially, the software determines the motion/jitter of the camera at the time the photo was taken (i.e. figures out what caused the blur) and then undoes it. I can't imagine why they wouldn't include this in future versions of Photoshop.
Re: (Score:3)
I can imagine people saying it's impossible if it's about unblurring out-of-focus pictures, but for motion blur, once the path is extrapolated, it seems like there should be some sort of computer magic that backtracks along the path to build up an impression of what the original image was.
Already in use in Hollywood (Score:2)
Re: (Score:2)
Whatever, Eeyore. Images that lend themselves to the kind of interpolated guesswork that this uses are blurry ones. Y'know, the kind you get in the real world?
Re: (Score:2)
The guys at CSI have been using this to get faces from dirt particle reflections for decades. I know Slashdot submissions are old, but this is ridiculous.
Re: (Score:2)
Re: (Score:3)
Including that makes a cell phone camera just a little larger than desirable, and makes taking a snapshot take just a fraction more time.
Re: (Score:2)
It's not always possible to hold a camera steady. If you're moving in addition to the camera (either because you're following something, or because you're on a moving platform), or you have to use a slow shutter speed, etc.
There's also potential here for image stabilization in video. Software image stabilization for video can work wonders, but it generally can't compensate for motion blur. The result is that you get a steady image with motion blur going in different directions, which looks pretty odd. This
Re: (Score:2)
It's all a matter of helping in bits. Optical IS gets you a bit, better sensors (higher usable ISO) help, software algorithms that can compensate for motion help, etc.
Most cameras already have MEMS gyroscopes onboard to do optical IS. The problem is that you can only compensate for so much by shifting the lens. If the camera recorded the motion data at sufficiently high resolution in the EXIF data, the postprocessing software could use this motion data to help guide the motion blur analysis.
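If a camera ever did record its gyro trace, turning it into a blur kernel might look something like this sketch. Everything here (the function name, the sample format, the small-angle shortcut) is hypothetical, not any real camera's EXIF scheme:

```python
import numpy as np

def shake_to_kernel(omega, dt, focal_px, size=15):
    """Integrate hypothetical gyro samples (rad/s, one (wx, wy) pair
    every dt seconds during the exposure) into an image-plane blur
    kernel. Small-angle approximation: pixel shift ~ focal length
    in pixels times the accumulated angle."""
    angles = np.cumsum(np.asarray(omega, float) * dt, axis=0)  # radians
    path = angles * focal_px                                   # pixel displacements
    kernel = np.zeros((size, size))
    c = size // 2
    for dx, dy in path:                    # rasterize the camera path
        x, y = int(round(c + dx)), int(round(c + dy))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] += 1.0
    return kernel / kernel.sum()           # normalize: blur preserves brightness

# a steady pan to the right during a 1/30 s exposure, sampled 30 times
samples = [(0.02, 0.0)] * 30
k = shake_to_kernel(samples, dt=1 / 900, focal_px=3000, size=15)
```

The resulting kernel is a short horizontal streak, exactly the "blur kernel" the deconvolution step would then need.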
Yeah (Score:2)
Maybe in the future, cameras will keep records of their own motion in metadata to assist such software efforts
Because we all could use just a little more file size bloat. After all, memory is cheap, right?
Re: (Score:3)
Re: (Score:3)
If people cared about file size bloat they wouldn't be purchasing the most megapixels possible.
After all, memory is cheap.
Re: (Score:2)
I considered trying to do this once with a photo app on Android ... store the accelerometer data in real time as the shutter was clicked.
Wasn't helpful without the algorithm they're using though.
deconvolution? (Score:2)
Perhaps getting the "motio
Re: (Score:2)
Re: (Score:2)
"I'm just going to load some parameters..." (Score:3)
I'll think I'll reserve judgement though until I can see it "for real".
Who were the annoying guys off to the side who loved hearing themselves talk? Really kind of ruined the momentum. This isn't MST3K.
Re: (Score:2)
Too much paranoia on your part. The things they show at MAX tend to become part of the next software iteration, although usually the UI is completely different. Everything I've seen in the past 2-3 years has materialized one way or another.
Unblurring is not a new idea, the tough part is (was?) figuring out the deconvolution kernel.
Re: (Score:2)
Re: (Score:3)
This caught my attention immediately, and I think you are right. He loaded different parameters for each photo, leading me to believe that there was a significant amount of pre-processing done even before the "analysis" step he demonstrated.
Wouldn't be surprising. Most filters in Photoshop have a fair amount of control for both artistic and practical reasons. During a demo, you don't want to be fiddling around moving sliders back and forth on a 'slow computer' (his complaint - I mean really Adobe, can't you buy some fast laptops for a demo?). So the smart thing to do would be to save at least some of your parameters (like you can do with most Photoshop filters) and push a button and make your audience swoon.
Ft. Meade? (Score:2)
Re: (Score:2)
Yes, NSA, where they do electronic, communications, and computer intelligence processing. And only that.
As opposed to NGA, or NRO, both of which are involved with imagery intel.
Try to get your agencies right.
Re: (Score:2)
Yes, NSA, where they do electronic, communications, and computer intelligence processing. And only that.
That's what they want you to think....
How's Yer Sheep? (Score:2)
Deconvolution (Score:3)
We've known about deconvolution [wikipedia.org] forever; the trick is figuring out the path of the camera to generate the kernel for the deconvolution. In TFV, he says they use a custom parameter file (that they probably spent months tweaking for each image), lots of computing power and TADA! unblurred image.
Microsoft had something similar a few years ago, where you have a blurred image and a second underexposed image to do the same thing. See the paper here [microsoft.com] and examples here [cuhk.edu.hk]
Not De-Blur (Score:2)
More like de-streak. This isn't CSI technology come to real life. If you take a picture while moving the camera it will basically retrace the camera's movement to make a better picture of it.
How would this help fixed cameras? (Score:2)
In the case of fixed-base (like security) cameras, there is very little camera shake that would blur the image. So tracking the motion of the camera (via 3-axis accelerometer for example) wouldn't help.
Unless you can compute separate motion vectors for each element in the image (think people walking in different directions, each face to deblur would have a different motion vector) this would not seem to improve the performance.
And, of course, the choice of motion vectors would have a huge impact on the rec
Re: (Score:2)
Unless you can compute separate motion vectors for each element in the image (think people walking in different directions, each face to deblur would have a different motion vector)
Guess what every video codec since MPEG-1 does. Granted, it's a lot more difficult because of the lack of "before motion" and "after motion" images, but there are ways of estimating motion amount from passbands in the Fourier domain.
One step closer to... (Score:2)
Microsoft did it one year ago (Score:3, Informative)
Re: (Score:3)
Big difference. Microsoft is tracing the point spread function using accelerometers. Adobe is computing the PSF from the data (and perhaps these loaded parameters). Microsoft's technique seems more novel to me....
Re: (Score:3)
Calculating the point spread function from the photo can correct for both, and is the more general-purpose and more powerful technique. I can see using Microsoft's technique to augment the general purpose one though. Figuring out the PSF due to camera shake can be really hard when the photo is badly out of focu
Challenge! (Score:2)
I hereby challenge them. Their software versus my fast moving kids who often show up in photos as blurs. I think kids have built in sensors to let them know precisely when a camera is going off, thus enabling them to move at the exact moment to blur and/or ruin the photo.
Re: (Score:2)
I hereby challenge them. Their software versus my fast moving kids who often show up in photos as blurs. I think kids have built in sensors to let them know precisely when a camera is going off, thus enabling them to move at the exact moment to blur and/or ruin the photo.
Or... you could just buy a camera with a wide aperture and fantastic noise reduction at high ISO. That would give you shutter speeds that would freeze even the fastest children. I guarantee the camera will be cheaper than Photoshop.
Case in point: Canon's G12 [amazon.com] is at least $100 USD cheaper than Photoshop.
Blur != focus (Score:2)
Re: (Score:2)
This same principle works for unfocused images as well. In both cases, you need to figure out how the image was blurred. In the case of motion blur the pixels were smeared along a path. In the case of an unfocused image, the pixels are blurred according to a gaussian (bell curve). Once you have this "blur kernel" (normally called a point-spread function [wikipedia.org] in the field), it is just a matter of using deconvolution [wikipedia.org] techniques to remove the distortion.
In both cases, the information is there, it is just not in the for
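The parent's description maps onto just a few lines of NumPy. A toy sketch of deconvolution with a known blur kernel, here Wiener filtering with a hand-picked regularizer (`nsr`), which is my own choice and has nothing to do with Adobe's actual algorithm:

```python
import numpy as np

def blur(img, psf):
    """Apply the blur model: blurred = img convolved with the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def wiener(blurred, psf, nsr=1e-4):
    """Wiener deconvolution: divide out the PSF in the Fourier domain,
    regularized so frequencies the PSF nearly destroyed don't explode."""
    H = np.fft.fft2(psf)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

rng = np.random.default_rng(1)
img = rng.random((64, 64))      # stand-in for a sharp photo
psf = np.zeros((64, 64))
psf[0, :5] = 1 / 5              # 5-pixel horizontal motion streak
blurred = blur(img, psf)
restored = wiener(blurred, psf)

err_before = np.abs(blurred - img).mean()
err_after = np.abs(restored - img).mean()
```

The hard part in real photos, as the thread keeps pointing out, is that the PSF is unknown and has to be estimated from the image itself.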
CSI Effect (Score:2)
Dammit .. now the answer to the CSI "Can you clean that up?" question is yes, and people will continue to expect miracles from technology.
This has been done for years. (Score:3)
This isn't new. There's a shareware plug-in, "DeblurMyImage" [adptools.com], for it.
There are two main cases - focus blur and motion blur. Dealing with focus blur is well understood, because what defocusing does to an image is well understood. Motion blur is harder, because you have to extract a motion estimate first.
Nothing New (Score:2)
There are Photoshop plugins that do this, e.g. Topaz InFocus: http://www.topazlabs.com/infocus/ [topazlabs.com]
'Enhance' clatter-clatter (Score:2)
'Just print the damn photo.'
Siggraph 2008 (Score:4, Interesting)
This uses a single image as input, and tries to determine a local prior (L) and a motion kernel (f). It switches between optimization of each in turn, and produces results similar to the demo seen in the video. Given that Aseem works for Adobe, I suspect this work is now close to release.
Cheers,
Toby Haynes
1990 (Score:3)
I know that is a different approach but people have been working on getting information from defocused images for a long time.
Re:Interpolated missing data is still just a ficti (Score:4, Insightful)
Did you watch the video? It makes unreadable text readable. That falls into the category of making missing data suddenly appear.
Re:Interpolated missing data is still just a ficti (Score:5, Insightful)
I think it would be better to say that [most of] the data are already present; the data just happen to be initially in an unwanted form.
Re: (Score:2, Interesting)
> I think it would be better to say that [most of] the data are already present; the data just happen to be initially in an unwanted form.
Not necessarily.
Some techniques of reconstruction use information that is not present. There's a video about reconstructing 3D images of people (with Tom Hanks as an example) which produces a 3D model from data in a picture _AND_ a database of preprocessed perspective angles of known stereotype 3D face models (ggl "morphable", video). I guess this is "thinking outside
Re:Interpolated missing data is still just a ficti (Score:4, Informative)
It's a rather "expensive" (CPU-intensive) operation, and indeed having sensor data about how the camera shook during exposure would significantly help in restoring the image. Interestingly, even cheap smartphones with crappy cameras will often already have a movement sensor on board, so there are some possibilities to improve image quality right after taking a picture; all it takes is a bit of software. How long until someone here whips up an improved Android camera app?
I'm probably under-informed, but I haven't heard of any cameras with a full-blown movement sensor, although I know some of them can work out portrait vs landscape by now. Sounds like camera manufacturers have some catching up to do in the hardware department.
Re:Interpolated missing data is still just a ficti (Score:5, Insightful)
No, and really no to everyone else. This is making _obfuscated_ data suddenly become visible.
It characterizes the motion of the camera from the blur, then reverses it: essentially an image stabilization algorithm. It's like making voices audible over loud music by figuring out what the song is and subtracting it from the mix.
It's cool, but not magic. They aren't even pretending to add in missing data like a CSI zoom. Nor does it even seem to take care of simple out of focus situations. So let's not get too excited, well, unless you've got a cheap/slow camera.
Re: (Score:2)
Re:Interpolated missing data is still just a ficti (Score:5, Interesting)
It's cool, but not magic.
Right. I did exactly this with at least one ring image from Voyager 1's encounter with Saturn, and that was in 1980 (although I think I didn't get around to writing the code and actually de-blurring the image for two or three years after it was taken). I believe we used a VAX 11/730 to perform the computations.
FYI, Voyager pictures were 800x800 pixels, taken in monochrome with a filter applied in front of the camera. I don't recall whether this particular picture was a single image or a colour image taken with three filters. If the latter then there would have been an interesting twist: the three images would have been taken 48 seconds apart, so the spacecraft would have moved detectably from one colour to the next, so some semi-clever stuff would have been necessary to deblur three individual images and then merge them. But I honestly don't remember after all this time whether we had to do that.
Re: (Score:2)
A microscopist would call this "deconvolution."
Re:Interpolated missing data is still just a ficti (Score:4, Interesting)
Re: (Score:3)
Yes. As far as I know the process is called deconvolution.
What I think is new about what Adobe shows here is that it doesn't just compensate for out-of-focus or other instrumental effects, but for camera motion (yes, I watched the video). It determines the likely motion the camera made during the exposure, and then uses that as some kind of matrix for deconvolution.
What makes that tricky compared to a classical point spread function that only includes instrumental effects, is that it's probably n
Re: (Score:2)
It makes unreadable text readable. That falls into the category of making missing data suddenly appear.
I don't know the details of that feature, but wouldn't be surprised that the "repair" mode comes into two flavors: automatic and manual
- the automatic mode would try to guess from embedded data like Exif and the picture itself what it is supposed to show.
- the manual mode would ask the user if the part to be repaired is text, landscape, face...
In the case of a text, the algorithm could either compare each letter with the ones it knows (from fonts), or words from a dictionary - the user would select a
Zoom! Enhance! (Score:4, Interesting)
Of course. You can't get back more information than is in the picture. But for a photograph it's enough that it looks good.
Which reminds me of another similar algorithm that worked on human faces. It could restore very low-res images to a sharp, almost perfect face. It's just that the face was completely different from the one in the original picture.
Re: (Score:2)
Re: (Score:2)
But that's just a computer-generated illusion, not a reflection of reality.
I wouldn't go that far, I'd say it's more like making an educated guess -- and while it's true that a guess is a guess and you should never take it for fact, a guessing tool that is consistently 95% accurate is still incredibly useful, even if just to narrow down the places that humans should then go and investigate by hand.
Re: (Score:2)
At best it will make something ugly LOOK a little better.
Or in the case of cosmetics ads, make something that looks good look a little uglier.
Re:Interpolated missing data is still just a ficti (Score:5, Funny)
How hard can it be? I mean, they've been doing it in movies since at least the 80s. Hell, even the $500 Dell desktop on CSI: Miami can do it.
"we've got a convenience store video feed of the getaway car, the camera was recording in 480i from 300 yards away"
"can you sharpen it up a little?"
"sure. one moment... ok got it. License plate is California JGL-711. Ok just a bit more... yeah, looks like the registration expires March 2012. Wait, let me clean it up some more, yeah it looks like there's a small identifying scratch on the trunk lid about a half inch long shaped like a boomerang. Oh wait, this is the new version of the software, let me zoom in a bit further, yeah I'm pretty sure I'm seeing loose skin cells on the edge of the trunk lid, maybe our missing person is in the trunk!"
"good work, now where's my sunglasses?"
yeeeeeeeeeeeeeeeeeeeeeeeeeeeaaaaahhhh!
Re: (Score:2)
You totally missed the clue. Boomerang scratch? Going under the I-37 bridge as the crossing arm lowers. We know EXACTLY where they are!
Re: (Score:3)
Zoom in on that raindrop. Enhance! [youtube.com]
Re: (Score:2)
Zoom in on that raindrop. Enhance! [youtube.com]
I thought I had seen most of Red Dwarf but I don't remember that scene... hilarious. I'll be impressed when Adobe comes up with the ability to "uncrop" :)
Re: (Score:2)
Eventually you zoom in beyond the Planck distance and know the mind of God. All crimes are simultaneously solved, and the dominion of quantum peace prevails.
Re: (Score:3)
Yes and no - No, you can't magically create information not present in the original image.
You can, however, calculate the most likely "true" value for any point on the image, given a sufficiently accurate model of the distortion present (in this case, an incorrect focal plane). Whether or not
Re: (Score:2)
Re: (Score:2)
(in this case, an incorrect focal plane)
This case involves motion blur, not incorrect focus.
Re: (Score:2)
Interpolated missing data is still just a fiction
Yes it is. However, in the present case they reuse *existing* data. It requires a lot of processing, but this is not an artifice: they really use computing power to first determine the camera movement, and from this information they can re-compute each pixel, removing as much of the blur resulting from the camera movement as they can, which gives much better results than a straightforward generic edge enhancement.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Actually, it could quite reasonably make data that was there, but obscured in such a way that a human eye can't make it out, suddenly appear. The demo shows an example of taking a photo of a poster that's blurred beyond reading, and getting perfectly crisp sharp text back.
A fiction our brain naturally employs anyway. (Score:2, Insightful)
It's not entirely about whether a digitally de-blurred image is more accurate than an inherently sharp image.
It's also about whether the sensory impression made on a human by a digitally de-blurred image is a more accurate model of the reality than the sensory impression made by a blurred image. Of course your occipital lobes do plenty of interpolating of their own, so surely the question becomes which system (digital or organic) produces the more reliable interpolations.
Maybe if a person studied the blu
Re: (Score:2)
This will make things LOOK pretty. It won't make missing data suddenly appear. At best it will make something ugly LOOK a little better. But that's just a computer-generated illusion, not a reflection of reality.
It's not that simple. Two things are likely happening here: first, the image is analysed and the probable motion trajectory of the camera while the shutter was open is calculated. Then a deconvolution algorithm is used to reverse the motion. This is entirely doable. The information is there in the image; the trick is just how to extract it.
A gaussian blur, for example, can be applied "backwards" and the sharp original recovered, if you know the parameters used. So if Adobe's motion trajectory analysis i
Re: (Score:3)
The data here isn't missing, just obfuscated. A common lab for EEs is to unblur a picture in this way. It's much harder in real life, where you need to find what they're calling the blur kernel, but there's no reason for it to be impossible.
Re: (Score:3)
This does work and isn't an illusion. (Score:3)
We have done this in my image processing class. It isn't CSI bullshit.
It won't make missing data suddenly appear.
The thing is that the data isn't missing, it is just distributed throughout the image. For example consider an unfocused camera. Instead of each point in the scene mapping to a single point in the image, it results in a gaussian centered at that point, and these are all summed together. In signal processing terms, you can think of the blurred image as being the convolution of the desired image and a gaussian function (plus some noise):
xb
Re: (Score:2)
This will make things LOOK pretty. It won't make missing data suddenly appear. At best it will make something ugly LOOK a little better. But that's just a computer-generated illusion, not a reflection of reality.
Not really. If I take a picture of a sign and my hand is shaking as I take the picture, you already know it's a fixed, flat surface. If you can algorithmically find exactly how my hand was shaking and so clear up the image, that's not an illusion. Think of it as many images superimposed on each other even within the short shutter time; this aligns them so the picture becomes clearer. It won't work for the generic case, but for a certain class of pictures this is close to magic.
Re: (Score:2)
That's not true; the data is there. This is not for OOF (out of focus) photos due to focus error (so if you have a "bad copy" of a lens, or you didn't micro-adjust your lens, or if your lens is decentered, or you simply blew it on the shot it won't help), but for motion blur. All of the data is ther
Re: (Score:2)
If you have a fine detail that only covers a small portion of a pixel and is thus not resolvable, but you have multiple "frames" and you know exactly how the subject and the camera moved between frames (easier for a static object obviously) then you should be able to solve the system for much higher effective resolution. It has always been my assumption that this sort of thing would be used in spy satellites and other systems where you want to do much better than the diffraction limit.
At one point some year
Re: (Score:3)
But that's just a computer-generated illusion, not a reflection of reality.
Technically, any picture you take with a digital camera is just a "computer-generated illusion", even in raw format. It's not reality. It's a programmer's interpretation of the data from the camera's sensors. The camera's sensors detect a different range of light than the human eye; filters are used to try to keep it within the human range. And when you view the picture, what are you going to use to view it with so that the picture is displayed in its truest form?
Even the images you see with your own eyes are
Re: (Score:2)
Those are digits in the 0-9 range, right? So you have a key space of 10^13, which is 0x918 4e72 a000, or 44 bits. So the 90 pixels could easily be more than enough. My information theory is a bit wobbly, but I think that works. Unless your encoding is horribly inefficient you've got plenty of samples there. Also if you're being smart you could probably read the barcode diagonally to gain some extra information.
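For what it's worth, the parent's numbers check out; a quick check (plain arithmetic, nothing to do with the demo):

```python
import math

keyspace = 10 ** 13                    # 13 decimal digits, each 0-9
print(hex(keyspace))                   # prints 0x9184e72a000
bits = math.ceil(math.log2(keyspace))  # smallest bit count covering the key space
print(bits)                            # prints 44
```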
Re: (Score:2)
Each pixel element consists of at least three eight- or sixteen-bit values, so that's at least sixteen million values per pixel. Assuming monochrome, that's 256 or 65536 values per pixel.
I did some experiments with those digital photograph postcard printing booths and a scanner. You could easily create an image, print it out, and scan it back in again to recover a few hundred kilobytes.
The trick is to create a couple of horizontal and vertical bands of coordinates encoded in binary. Then you can see what the r
Re: (Score:3)
simply the end of photography. bleh.
If you think photography is simply about getting an un-blurry image, you know nothing of photography.
Re: (Score:2)
Assuming it works, which I highly doubt it ever will, this could flood the market with those photos that were compositionally perfect, just out of focus. That being said, I'm not holding my breath.
Re:the end. (Score:4, Informative)
Re:the end. (Score:5, Interesting)
This does NOT fix images that are out of focus. This fixes motion blur. The two are entirely unrelated.
Except that both are examples of convolution and deconvolution. In motion blur, the convolution kernel resembles a straight line in the direction of motion. In unfocused images, the kernel has circular symmetry. I used to write simple deconvolution algorithms about 10 years ago, but only for motion blur, where the kernel was easy to find from the conditions in a well-defined industrial setting. Unfocused images are harder to deal with, because the convolution kernel goes to zero at certain intervals, so information is destroyed.
As mentioned in my other post, here [maxent.co.uk] are some examples of more sophisticated image reconstruction from many years ago. When the kernel is unknown, the image can still be reconstructed using statistical techniques (basically because the kernel is the same for all points in the image).
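The two kernel shapes described above are easy to build and compare: a line for motion, a disc for defocus (the sizes here are arbitrary illustration values):

```python
import numpy as np

def motion_kernel(length, size=21):
    """Motion blur: energy smeared along a (here horizontal) line."""
    k = np.zeros((size, size))
    c = size // 2
    k[c, c - length // 2 : c - length // 2 + length] = 1.0
    return k / k.sum()

def defocus_kernel(radius, size=21):
    """Defocus blur: energy spread over a disc (circular symmetry)."""
    c = size // 2
    yy, xx = np.mgrid[:size, :size]
    k = ((yy - c) ** 2 + (xx - c) ** 2 <= radius ** 2).astype(float)
    return k / k.sum()

m = motion_kernel(9)   # direction-dependent: transposing it changes it
d = defocus_kernel(4)  # circularly symmetric: rotations leave it unchanged
```

That symmetry difference is one reason the two cases behave differently under deconvolution, as the parent explains.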
Re: (Score:2)
Simultaneously ignorant, and elitist. Congratulations, here's your (organic, fair trade) no-tea.
Re: (Score:2)
Ignorance + elitism = .... wait, you're from Ohio?!
Re: (Score:2)
no, it's not just about getting an unblurry image. But to get one you have to know a little about how to get such a picture. That's why pictures which are not digitally processed with Photoshop (or Gimp, whatever) are the best. That's my opinion; maybe yours is different because you shoot many blurred photos, then yeah.. why not. Have fun.
Spoken like one who takes a few shots here and there, and declares himself a photographer, capable of determining what is best for the entire field. A person who makes such sweeping comments about a field as diverse as photography, is at best ignorant.
You'll also find that most professional photographers (as in, it's their profession and they get paid to do it) agree that there is a level of processing that is acceptable in photoshop, even for newsprint where the standard is very high. Writing off Photosh
Not the end, could help with kids' photos (Score:2)
Until you get them up to speed, you could not fix some photos for them.
I see all sorts of good uses for this, including running some old, old photos we have around the house through it to see what other details pop.
photography never ends, it just changes and gets better. Eventually someone had to figure out how to compensate for the photographer's mistakes.
Re: (Score:2)
Just print the damn thing!
Re: (Score:2)
Haha yeah!
Except that you clearly haven't watched the video or understood the process as it doesn't work like that.