Refocusable Plenoptic Light-Field Photography
virgil_disgr4ce writes "Wired is reporting that a Stanford student using about 90,000 microlenses has developed a plenoptic camera whose images can be refocused, via software, after they are exposed." From the article: "'We just think it'll lead to better cameras that make it easier to take pictures that are in focus and look good,' said Ng's adviser, Stanford computer science professor Pat Hanrahan."
You know what this means... (Score:5, Funny)
"Say Sayonara to Blurry Pics"??? (Score:3, Informative)
This technology doesn't do anything to prevent camera shake. Most modern cameras are extremely good at autofocusing on the correct subject in short-depth-of-field situations. The camera designed by the Stanford guys is an amazing invention and will revolutionize action, sports, and scientific photography (espec
Re:"Say Sayonara to Blurry Pics"??? (Score:2)
Re:"Say Sayonara to Blurry Pics"??? (Score:4, Informative)
The microlens approach doesn't require any moving parts. It allows you not only to refocus, but also to extend focus as if you were using a very high f-stop for large depth of field, without the associated noise from shooting in low light.
The downside is that it requires many pixels to produce a good image, but since pixel counts grow exponentially with time per Moore's law, it will soon be a winning proposition, even for cameras in mobile phones.
Optical stabilization, on the other hand, is as expensive as ever, requires many moving parts, and does not allow focus extension.
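For the curious, the refocusing itself is conceptually simple: shift each sub-aperture view in proportion to its offset from the center, then average. Here's a minimal shift-and-add sketch in Python/NumPy; the `(U, V, H, W)` light-field layout and the integer-pixel shifts are my own simplifications, not the actual Stanford pipeline:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add synthetic refocusing.

    lightfield: array of shape (U, V, H, W) -- one HxW image per
    sub-aperture view (u, v). Each view is shifted in proportion to
    its offset from the central view, scaled by alpha, then averaged;
    scene points at the matching depth line up and come out sharp,
    while everything else blurs.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping `alpha` moves the virtual focal plane through the scene after the fact; a real implementation would interpolate sub-pixel shifts instead of rounding.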
Re:"Say Sayonara to Blurry Pics"??? (Score:2)
Re:"Say Sayonara to Blurry Pics"??? (Score:4, Informative)
Re:"Say Sayonara to Blurry Pics"??? (Score:2)
The shake will be particularly bad when you hold the camera upside down (when shooting vertical format) as shown in the article. Your hands are supposed to be under the camera, not over it.
Conventional wisdom says that the slowest speed at which you can shoot 35 mm film hand-held is the reciprocal of the focal length of the lens. If you are using a 50 mm lens, that's 1/50 sec. If you are using a 200 mm lens, that's 1/200 sec. I have found that if I hold the camera properly (both hands under it) I can usually shoot at
Re:"Say Sayonara to Blurry Pics"??? (Score:3, Interesting)
Autofocus cameras have to focus on something... and many times I've had them focus on the wrong thing. There isn't really anything you can do at that point except reshoot... or use a system such as the one they describe.
This would be of great value to me; I have many photos where the image is otherwise perfect except that the focus point is off.
Re:"Say Sayonara to Blurry Pics"??? (Score:2)
Re:"Say Sayonara to Blurry Pics"??? (Score:4, Informative)
Re:"Say Sayonara to Blurry Pics"??? (Score:2)
innovation (Score:5, Insightful)
Re:innovation (Score:2)
But you are going to run into the inevitable trolling twit who will complain that this is just a way of "collecting and recording light with a new twist, and no one can own that, man!" Any second now, just wait.
Re:innovation (Score:4, Informative)
Reinnovation (Score:2)
Off the top of my head... parallax barrier cameras and newer parallax barrier 3D displays... same thing with lenticular screens... Um... This was so much easier a year ago.
Re:innovation (Score:2, Insightful)
Re:innovation (Score:2)
True, but patent rights might motivate somebody to actually make a product we could use in a shorter time frame.
"It stems from early-20th-century work on integral photography, which experimented with using lens arrays in front of film, and an early-1990s plenoptic camera developed at MIT and used for range finding... Turning Ng's inv
Re:innovation (Score:2)
Re:innovation (Score:5, Informative)
Re:innovation (Score:2)
Re:innovation (Score:2)
Re:innovation (Score:2)
Smarter thinking (Score:2)
Re:Smarter thinking (Score:4, Interesting)
Re:Smarter thinking (Score:2)
Re:Smarter thinking (Score:2)
Re:innovation (Score:2)
I've often felt the same way about various things I've heard about. I wonder, is there anything scientific to this? If someone tells me X has been done (even if it hasn't), am I more likely to come up with a way to accomplish X? Or is it just my imagination?
Don't insects have prior art? (Score:2)
Re:innovation (Score:2)
To put it another way, maybe you "wouldn't have thought of it," but surely it doesn't follow that nobody would have or did.
Re:The "root" of innovation (Score:2)
oh so 1996 (Score:5, Informative)
Re:oh so 1996 (Score:2, Funny)
However, I'm sorry that slashdot hasn't been perfectly tailored to your needs. I'm sure Rob & co will get right on to that!
More like oh so this past summer at SIGGRAPH 2005 (Score:3, Informative)
Re:oh so 1996 (Score:2)
But it's nice to actually build a compact, instantaneous light-field-capturing physical artifact, don't you think?
I worry, though, about the impact on resolution. It's a bit more information than a 2D image, and the sample I saw shows it
Re:oh so 1996 (Score:2)
It's still a recent result (page says april 2005) and in case you missed it, it's the same researcher (Ren Ng) that's mentioned on that Siggraph page.
They've presumably made progress in 9 years. That isn't worth reporting on?
What kind of focusing? (Score:4, Interesting)
Re:What kind of focusing? (Score:2)
Re:What kind of focusing? (Score:2)
3d Images (Score:2, Interesting)
Re:3d Images (Score:3, Informative)
Re:3d Images (Score:2)
Take each focal plane (say, the plane of "best" focus). Throw its pixel RGB values into a 2x2 matrix. Now take the next plane and lay it behind the first as another 2x2x1 slab. Repeat for n focal planes, giving a 2x2xn image. Now the challenge is displaying it.
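That stacking step reads naturally as array code. A toy sketch in Python/NumPy, keeping the 2x2 plane size of the example above (the constant-valued planes are just placeholders, not real extracted focal planes):

```python
import numpy as np

n_planes = 4  # number of focal planes extracted from the light field
# Each plane: 2x2 pixels of RGB values (toy-sized, per the example).
planes = [np.full((2, 2, 3), float(i)) for i in range(n_planes)]

# Lay each plane behind the previous one along a new depth axis:
# the result is a 2 x 2 x n_planes volume of RGB values.
volume = np.stack(planes, axis=2)
```

Displaying it is indeed the hard part: you'd need a volumetric or multi-view display, or a viewer that lets you scrub along the depth axis.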
It's fun. (Score:5, Interesting)
Basically, what we see as solid with two eyes may not be solid at all. So, much like IR/UV cameras, this new toy has a dark side.
Re:It's fun. (Score:4, Insightful)
With two eyes you can already see the effect - holding a hand in front of your face doesn't stop you seeing what's behind it until you completely cover both eyes, etc.
Re:It's fun. (Score:2, Interesting)
Re:It's fun. (Score:3, Funny)
Re:It's fun. (Score:4, Funny)
Re:It's fun. (Score:3, Funny)
Sweet.
Just like in movies and TV! (Score:5, Funny)
Re:Just like in movies and TV! (Score:2)
Re:Just like in movies and TV! (Score:3, Informative)
Re:Just like in movies and TV! (Score:3, Informative)
Re:Just like in movies and TV! (Score:3, Informative)
I wonder about the effect on resolution and sensitivity of this technique. Modern autofocus on little point and shoot cameras is pretty good at what it does... a lot of blurry pictures are due to camera shake because poor lighting requires long exposure times. The article even mentions "poor lighting" although it somehow assumes that this technique will fix that too.
Re:Just like in movies and TV! (Score:2, Informative)
No, you can't, and you're completely right about that. But in the out-of-focus picture, (mostly) all the information is still there, and the question is how to transform it so the desired subject is in focus. If you have the convolution model, you can write an inverse function using the Fourier transform. For a quick mathematical formula, see here [uiowa.edu] and scroll down a little until you find secti
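A minimal sketch of that inverse-filtering idea in Python/NumPy. The `eps` regularization here is my own crude stand-in for proper Wiener filtering, and real defocus blur destroys some frequencies outright, so the near-exact recovery this toy achieves is optimistic:

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-6):
    """Estimate the sharp image from a blurred one via Fourier-domain
    inverse filtering, given the point spread function (PSF).

    Blur is modeled as circular convolution: G = F * H in the
    frequency domain, so F is recovered by a regularized division
    (eps keeps near-zero frequencies of H from blowing up).
    """
    H = np.fft.fft2(psf, s=blurred.shape)          # PSF spectrum
    G = np.fft.fft2(blurred)                       # blurred spectrum
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)    # regularized G / H
    return np.real(np.fft.ifft2(F))
```

In practice you'd also need to know (or estimate) the PSF, which, as noted below in the thread, should be derivable from the optics.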
Intangible Pluralistic Brain-wave Phrenology (Score:5, Funny)
Obligatory... (Score:5, Funny)
At what price in resolution? (Score:5, Interesting)
Obviously taking a camera that's designed to record light intensity and modifying it to record light intensity and direction isn't free. In the worst case, you're decreasing your effective resolution by the number of new lenses, or by a factor of 90,000. I don't think that's quite what happens though, because many of these lenses will be recording essentially the same information, and while only one may be perfectly focussed on part of the frame, nearby lenses can probably contribute color and intensity information as well. If we assume a 2Mpixel image is "good", the article's comment that the student's using a 16Mpixel camera but that an 8Mpixel camera might be good enough seems to support a roughly 4x to 8x decrease in effective resolution. Can the poster who claims to have heard the actual discussion at Siggraph comment?
That's a high price to pay for not having to use the viewfinder. It's cool tech, and I'm sure there are practical uses for it somewhere, but I don't think consumer cameras are the place for it just yet.
At some point, extra resolution is pointless (Score:2)
Re:At some point, extra resolution is pointless (Score:2)
Re:At what price in resolution? (Score:2)
Re:At what price in resolution? (Score:2)
Getting the least out of your 16MB camera (Score:5, Insightful)
Look at the sample images. Even the sharpest-focused regions are soft-focused. This is a 16-megapixel camera with an effective resolution less than 1/3 that of VGA. Granted, the images can be refocused and depth information can be extracted, but do you really want to have to buy a 188-megapixel plenoptic camera to get sharp 1-megapixel images? Is focusing really that hard?
Re:Getting the least out of your 16MB camera (Score:3, Insightful)
However, I imagine this might be useful for some kinds of analysis photography, especially when dealing with high-speed motion. Those kinds of shots usually require a large aperture to gather enoug
Re:Getting the least out of your 16MB camera (Score:2)
It is if you want the exact same shot with different depths in-focus.
TFA has some good examples of this in the form of splashing water, but imagine how much more information you could extract from e.g. the Zapruder film if it had been captured this way. It's not like you can ask Kennedy to go back for another take.
Re:Getting the least out of your 16MB camera (Score:3, Insightful)
I suspect that there is now a use for that 300-megapixel sensor.
Considering that we already have gigabit memory chips, I can see that it's plausible to have gigapixel light sensors (sometime in the future).
Given that 4 (good-looking, low-noise) megapixels would satisfy most non-professional photographers, I think it's not unreasonable to sacrifice pixel count for ease of use.
Has some useful features (Score:2)
I was working a year ago on a 3D imaging system that used parallax barriers. We would've killed to have had the kind of continuously-focusable output this camera could produce; one of our biggest problems was deciding what part of the image to focus upon, and then keeping the camera and light steady between shots--especially outdoors. We would have huge problems on cloudy days because the ambient light would change so much between shots at different depths of focus.
Combi
Re:Getting the least out of your 16MB camera (Score:2)
In general, focussing *is* that hard if you're not there or don't have time to do it.
Re:Getting the least out of your 16MB camera (Score:4, Insightful)
What I'm getting at is, some moments happen literally in the blink of an eye, and they only happen once in a lifetime. So in that split second where you're trying to take a shot and have no time to double-check, won't you be sorely disappointed if your ticket to a Pulitzer was ruined by the wrong f-stop setting? Or the wrong focus?
Back in the day of 8 MB CF cards, a 6-megapixel camera's 6 MB RAW was insane. But in this day of 4 GB CF cards and memory prices what they are, 6 or even 16 MB RAWs are but a drop in the bucket. Heck, even with today's memory capacities, if you had a camera that produced a 188 MB RAW, it'd still be perfectly acceptable to any photographer, considering the possibilities this new technology opens up.
Can't get something for nothing (Score:5, Informative)
Re:Can't get something for nothing (Score:2)
An alternative is using deconvolution to retrieve the focussed image from the defocussed one. You'd have to know the point spread function, which I think you should be able to derive from knowledge of the optics.
Re:Can't get something for nothing (Score:2)
Re:Can't get something for nothing (Score:2)
Re:Can't get something for nothing (Score:2)
Why stop at three dimensions? (Score:4, Funny)
http://en.wikipedia.org/wiki/The_Endochronic_Prop
This will not only ensure that your photo of Auntie May is in focus, but the camera will make sure that the image is captured at a time when her eyes are open and she's smiling.
Plenoptic eyeglass (Score:3, Interesting)
The next step is to pair the cameras and the LED image emitters, similar to night-vision goggles, to make a really kewl pair of corrective lenses. Truly the ultimate nerdwear!
Megapixels (Score:2)
Re:Megapixels (Score:2)
Fly Eye from the Fly Guy (Score:2, Funny)
Patents brought to you by the fly people.
So much for thinking 32MB was decent storage. (Score:3, Insightful)
A blanket solution. (Score:3, Interesting)
Re:A blanket solution. (Score:3, Insightful)
Pictures sharpen as the size of the hole diminishes (i.e., large hole = very blurry, small hole = less blurry), but there is a limit to how small the hole can get before diffraction makes shrinking it counter-productive.
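That trade-off has a classic closed form: Lord Rayleigh's rule of thumb puts the optimal pinhole diameter around d = 1.9 * sqrt(f * lambda), balancing geometric blur against diffraction. A quick sketch (the 1.9 constant and the 550 nm green-light wavelength are the usual textbook choices, not anything from the article):

```python
import math

def optimal_pinhole_mm(focal_length_mm, wavelength_nm=550.0):
    """Rayleigh's rule of thumb: d = 1.9 * sqrt(f * lambda).
    Holes smaller than this lose sharpness to diffraction;
    larger ones lose it to simple geometric blur."""
    f_m = focal_length_mm / 1000.0
    lam_m = wavelength_nm * 1e-9
    return 1.9 * math.sqrt(f_m * lam_m) * 1000.0  # back to mm
```

For a 100 mm "focal length" pinhole camera this lands around half a millimeter, which matches the sizes pinhole photographers actually drill.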
Large-area plenoptic plates (Score:2)
With some improvements to the manufacturing technology, we could have a revolutionary camera based on a large, flat plate of microlenses, scaled up to whatever the manufacturing allows, or even multiple plates tiled together, with the software allowing for the seams. And that's it - you could stick them to anything - phones, for example, or walls or whatever.
The linked paper [stanford.edu] shows how to u
Re:Large-area plenoptic plates (Score:2)
Essentially a CMOS Retina is a massively parallel computer with processing logic at each and every pixel location.
Application for holographic movies? (Score:2)
You don't really lose resolution (Score:5, Informative)
The best way to think of it is: take a standard good-quality camera with big pixels, subdivide each pixel into a grid of 12x12 or so tiny pixels - more like the size of pixels in cell-phone cameras - and put a microlens over it. You get the same spatial resolution as the good camera, roughly the same noise characteristics, and the ability to refocus and pull other light-field tricks like Hitchcock zooms.
You just have to be aware that, treated as a light field, the data is very noisy, like a crappy cell-phone camera's; but when you add up pixels to make a focused image, the noise drops back to regular good-camera levels.
It's just harder to deal with the amount of data you get off a large sensor with tiny pixels, and they're also harder to build, but neither point is a showstopper and these are mere engineering issues...
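The noise claim above is just the statistics of averaging: combining N independent noisy subpixels cuts the noise by sqrt(N). A quick simulation, using the 12x12 grid from the comment above (the noise level and signal are made-up illustrative numbers, not measurements):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sub = 12 * 12        # tiny subpixels behind each microlens
sigma = 12.0           # per-subpixel read noise (cell-phone quality)
trials = 10_000

# Each trial: read all 144 subpixels of one macro-pixel, then average
# them into a single focused-image pixel value.
readings = 100.0 + rng.normal(0.0, sigma, size=(trials, n_sub))
averaged = readings.mean(axis=1)
# Averaging 144 readings shrinks the noise by sqrt(144) = 12,
# so the macro-pixel noise drops from 12.0 to about 1.0.
```

So the per-subpixel data really is cell-phone noisy, and the summed macro-pixels really do come back to good-camera territory.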
Re:You don't really lose resolution (Score:5, Insightful)
A question: can you refocus colors independently to correct for chromatic aberration of the lens?
Re:You don't really lose resolution (Score:4, Interesting)
The other problems that you've swept under the rug seem to me to be more important, at least in the near term. If you take a CCD and replace each of its sensor sites with a 12x12 array, as you suggest, you're talking over a 100-fold increase in the data to be processed. While I haven't read the technical papers on the subject, it seems like the processing is more complicated than the processing that goes on in a standard digicam, which probably means at least a 200x increase in processing requirements. If you wait for Moore's law to save you, that's 10 years. Budgeting for a more expensive image processor will shave maybe a year or two off that number, but it's still fairly long-term research.
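That "10 years" figure follows from simple doubling arithmetic (the 18-month doubling period is the usual Moore's-law rule of thumb, nothing more):

```python
import math

processing_factor = 200   # rough extra processing estimated above
doubling_months = 18      # assumed Moore's-law doubling period

doublings = math.log2(processing_factor)    # ~7.6 doublings needed
years = doublings * doubling_months / 12.0  # ~11.5 years, about a decade
```

Budgeting for a beefier image processor up front just removes a doubling or two from the wait, which is where the "shave a year or two" figure comes from.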
You could reduce the processing needed in-camera by storing closer-to-raw data and doing the processing at a workstation later, but then you have the problem of a data stream that's ~100x as large. Even with very fast flash storage, that would take 30+ seconds to write a single image, and you could only fit a few images onto a 1 GB card. You also introduce the problem that the photographer doesn't get feedback on what he or she actually shot, and unless you can also post-process to correct for motion blur, chromatic aberration, etc., you still need that functionality.
It all sounds interesting, and I applaud research into what useful things could be done with likely future technology, but (and maybe I'm misreading the situation) it sounds like the core research is being cast as a thing we could be doing RSN, which I highly doubt. As a technique to make use of sensor densities that would normally exceed the capabilities of the lens they're attached to in order to do something useful, this is interesting. As a technique to be applied to today's or near-future sensors and cameras, I find it less interesting.
Re:You don't really lose resolution (Score:2)
Maybe you can reduce the size of the pixels without increasing the noise in the final image, but be warned: as those pixels get noisy, the quality of your refocusing will drop. There is always a cost to reducing pixel size.
It also doesn't work to make really tiny pixels; you basically can't go below 2 µm x 2 µm for a pixel, otherwise the pixel size is too close
Re:You don't really lose resolution (Score:2)
I would gladly pay for it if it had the refocusing ability. So would any sport photographer. Imagine, you can just snap away and focus later! Forget sports, add weddings and any other even semi-critical photography and this becomes extremely useful.
Not to mention the fact that many modern digital cameras (*cough* Canon *cough*) have a difficult time focusing accurately with ultra-wide lenses; this is especially true for digital enlargements. So, perfect foc
Re:You don't really lose resolution (Score:2)
X-Ray enhancement? (Score:3, Interesting)
Re:X-Ray enhancement? (Score:4, Informative)
Medical X-ray photographs are taken simply by placing film (or, these days, a digital detector) behind the body and illuminating it with X-rays. No focussing is involved.
Idea has no practical application! (Score:4, Insightful)
They reduce resolution by a factor of 180 but only improve depth of field by a factor of 7. This is particularly silly because the only reason they have a shallow depth of field is that they are using a huge, expensive sensor. If they switched to a small, cheap sensor like you find in any cheap digicam (1/1.8"), they would get the same improvement and save $14,800.
The low-light performance of this small sensor would be just as good as their large one - if you use the same huge pixels that they do (to produce a 0.08 MP image), you will get the same low-light performance.
If you want more details on why this idea has no use, check out this thread:
http://luminous-landscape.com/forum/index.php?sho
Interesting article, no practical application.
Re:Idea has no practical application! (Score:2)
If the idea has no practical application, why are you complaining about a specific implementation of it?
Ten years ago digital cameras were pretty naff. Low quality images, expensive to make.
Now they're better than film for many uses.
Another ten years of improvement and suddenly we have 16 megapixel images using this technology. It's still better image quality than your screen can display or your printer produce on paper, and you've been able to play with the focus, the depth of field, everything else this g
Re:Idea has no practical application! (Score:2)
Umm, I guess it was to point out that a $10,000, 4lb/3kg, 0.08-megapixel camera is not very practical. You planning on buying one?
Another ten years of improvement and suddenly we have 16 megapixel images using this technology
NO, we are not going to have 16-megapixel versions of this. The laws of physics don't change very quickly; the wavelength of visible light is pretty stable, and as a result, the
Re:Idea has no practical application! (Score:2)
Ideal for Ground-based Telescopes? (Score:2)
Re:There is an old adage about photos. (Score:2)
Yes, because in addition to the neat new lens, the package also includes a guy who'll come by and hold a gun to your head, forcing you to sharpen the image.
Re:Make it easier to take pictures... (Score:2)
Re:Absolutely Amazed (Score:2, Funny)
Re:3D imaging (Score:2)