Camera Lets You Shift Focus After Shooting 155
Zothecula writes "For those of us who grew up with film cameras, even the most basic digital cameras can still seem a little bit magical. The ability to instantly see how your shots turned out, then delete the ones you don't want and manipulate the ones you like, is something we would have killed for. Well, light field cameras could be to today's digital cameras what digital was to film. Among other things, they allow users to selectively shift focus between various objects in a picture after it's been taken. While the technology has so far been inaccessible to most of us, that is set to change with the upcoming release of Lytro's consumer light field camera."
Re:Fake (Score:5, Informative)
No. This is known as plenoptic imaging, and the basic idea behind it is to use an array of microlenses positioned at the image plane, which causes the underlying group of pixels for a given microlens to "see" a different portion of the scene, much in the way that an insect's compound eyes work. Using some mathematics, you can then reconstruct the full scene over a range of focusing distances.
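To make the reconstruction step concrete: refocusing is essentially shift-and-add. Each microlens direction yields a slightly offset low-resolution view of the scene, and summing the views with shifts proportional to their angular offset selects a focal plane. A minimal sketch in NumPy (the array layout and the `refocus` name are my own illustration, not any real camera's SDK):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-add over sub-aperture views.

    lightfield: array of shape (U, V, H, W), one H x W sub-aperture
    image per (u, v) viewing direction (a simplified model).
    alpha: focus parameter; 0 keeps the captured focal plane, other
    values shift the synthetic plane nearer or farther.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # shift each view proportionally to its angular offset
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With alpha = 0 this just averages the views; sweeping alpha moves the synthetic focal plane, which is all "focusing after the fact" amounts to.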
The problem with this approach, which many astute photographers pointed out when the original research paper on the topic (authored by the same guy running this company) first made the rounds, is that it requires an imaging sensor with extremely high pixel density, yet the resulting images have relatively low resolution. This is because you are essentially splitting the light coming through the main lens into many, many smaller images that tile the sensor. So you might need, say, a 500-megapixel sensor to capture a 5-megapixel plenoptic image.
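The arithmetic is just division: final resolution = sensor resolution / (pixels behind each microlens). With the illustrative numbers from the example above:

```python
# Back-of-the-envelope plenoptic resolution cost (illustrative
# numbers, not actual Lytro specs).
sensor_mp = 500            # hypothetical ultra-dense sensor, in megapixels
pixels_per_lens = 10 * 10  # assume a 10x10 pixel tile under each microlens
output_mp = sensor_mp / pixels_per_lens
print(output_mp)  # 5.0 megapixels of usable plenoptic image
```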
Although Canon last year announced the development of a prototype 120-megapixel APS-H image sensor (with a pixel density rivaling that of recent digital compact point-and-shoot cameras, just on a wafer about 20x the area), it is clear that we are nowhere near the densities required to achieve satisfactory results with light field imaging. Furthermore, you cannot increase pixel density indefinitely, because a pixel obviously cannot be made smaller than the wavelength of the light it is intended to capture. And even if you could approach this theoretical limit, you would have significant obstacles to overcome, such as maintaining acceptable noise and dynamic range performance, as well as the processing power needed to record and store that much data. On top of that, there are optical constraints: the system would be limited to relatively slow f-numbers. It would not work for, say, f/2 or faster, due to the structure of the microlenses.
In summary, this is more or less clever marketing and selective advertisement to increase the hype over the idea. In practice, any such camera would have extremely low resolution by today's standards. The prototype the paper's author built had a resolution that was a fraction of a typical webcam's; a production model is extremely unlikely to achieve better than 1-2 megapixel resolution.
Re:I want it all (Score:5, Informative)
The website about the camera doesn't have enough details, either, but this paper [stanford.edu] does give a reasonable idea of what's going on.
Re:Interesting. (Score:4, Informative)
... demonstrated to be a working principle [stanford.edu].
The paper includes graphics and formulas... a fuckload more detail than the story link gives us...
Re:Interesting. (Score:4, Informative)
It's called a Plenoptic Camera [wikipedia.org]. You put a bunch of microlenses on top of a regular sensor. Each microlens corresponds to a single pixel of the final 2D image, but the many sensor pixels under it capture that pixel from several slightly different directions in the light field. Then you can apply different mapping algorithms to go from each sub-array to the final pixel, refocusing the image, changing the perspective slightly, etc. So color-wise it's just a regular camera. What you get is two extra angular dimensions (the image contains 4 dimensions of information, 2 spatial plus 2 angular, instead of 2).
Of course, the drawback is that you lose a lot of spatial resolution since you're dividing down the sensor resolution by a constant. I doubt they can do anything interesting with less than 6x5 pixels per lens, so a 25 megapixel camera suddenly takes 1 megapixel images at best. The Wiki article does mention a new trick that overcomes this to some extent though, so I'm not sure what the final product will be capable of.
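The sub-array-to-pixel mapping described above can be sketched as a simple regrouping of the raw sensor readout: each u x v tile of pixels under a microlens holds one sample per viewing direction, so the raw frame reshapes into a stack of low-res sub-aperture views. This is a toy rectangular-grid model (real cameras use hexagonal lens arrays and need calibration), and the function name is my own:

```python
import numpy as np

def to_subapertures(raw, u, v):
    """Regroup a raw plenoptic sensor readout into sub-aperture views.

    raw: (H*u, W*v) array where each u x v tile of sensor pixels sits
    under one microlens (simplified rectangular-grid model).
    Returns an array of shape (u, v, H, W): one low-res image per
    viewing direction.
    """
    H, W = raw.shape[0] // u, raw.shape[1] // v
    tiles = raw.reshape(H, u, W, v)     # split into per-lens tiles
    return tiles.transpose(1, 3, 0, 2)  # reorder to (u, v, H, W)
```

View (i, j) comes out equal to `raw[i::u, j::v]`, i.e. every u-th row and v-th column of the sensor, which is exactly the "one final pixel per lens" picture, and also why the output resolution drops by the constant factor u*v.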
"hyper stereo" cameras (Score:5, Informative)
I think the famous bullet-dodging scene in the first Matrix movie was shot with a type of hyper-stereo camera, albeit a whole row of them. The output light field was reconfigured to expand point-of-view into time.
Re:Interesting. (Score:5, Informative)
Read this paper [stanford.edu] (or at least skim it) - these are called plenoptic cameras.
It doesn't do any particular voodoo. I suppose you could distill it down to the point where the camera is (in function) a compound eye.