Technology

Camera Lets You Shift Focus After Shooting

Zothecula writes "For those of us who grew up with film cameras, even the most basic digital cameras can still seem a little bit magical. The ability to instantly see how your shots turned out, then delete the ones you don't want and manipulate the ones you like, is something we would have killed for. Well, light field cameras could be to today's digital cameras what digital was to film. Among other things, they allow users to selectively shift focus between various objects in a picture after it's been taken. While the technology has so far been inaccessible to most of us, that is set to change with the upcoming release of Lytro's consumer light field camera."

  • Re:Fake (Score:5, Informative)

    by wickerprints ( 1094741 ) on Wednesday June 22, 2011 @05:26PM (#36535098)

    No. This is known as plenoptic imaging, and the basic idea behind it is to use an array of microlenses positioned at the image plane, which causes the underlying group of pixels for a given microlens to "see" a different portion of the scene, much in the way that an insect's compound eyes work. Using some mathematics, you can then reconstruct the full scene over a range of focusing distances.

    The problem with this approach, which many astute photographers pointed out when we read the original research paper on the topic (authored by the same guy running this company), is that it requires an imaging sensor with extremely high pixel density, yet the resulting images have relatively low resolution. This is because you are essentially splitting up the light coming through the main lens into many, many smaller images which tile the sensor. So you might need, say, a 500-megapixel sensor to capture a 5-megapixel plenoptic image.

    Although Canon last year announced the development of a prototype 120-megapixel APS-H image sensor (with a pixel density rivaling that of recent digital compact point-and-shoot cameras, just on a wafer about 20x the area), it is clear that we are nowhere near the densities required to achieve satisfactory results with light field imaging. Furthermore, you cannot increase pixel density indefinitely, because the pixels obviously cannot be made smaller than the wavelength of the light they are intended to capture. And even if you could approach this theoretical limit, you would have significant obstacles to overcome, such as maintaining acceptable noise and dynamic range performance, as well as the processing power needed to record and store that much data. On top of that, there are optical constraints--the system would be limited to relatively slow f-numbers. It would not work for, say, f/2 or faster, due to the structure of the microlenses.

    In summary, this is more or less some clever marketing and selective advertising to increase the hype over the idea. In practice, any such camera would have extremely low resolution by today's standards. The prototype that the paper's author made had a resolution that was a fraction of that of a typical webcam; a production model is extremely unlikely to achieve better than 1-2 megapixel resolution. (A toy sketch of the sampling geometry and the arithmetic follows.)
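    A minimal sketch of the sampling geometry and arithmetic described in this comment, in Python/NumPy. The 10x10 pixels-per-microlens figure, the toy sensor size, and the pixel-pitch number are illustrative assumptions, not specs from the paper or from Lytro:

        import numpy as np

        # Idealized plenoptic raw frame: each microlens covers an s x s block
        # of sensor pixels (a real camera would also need calibration,
        # vignetting correction, demosaicing, etc.).
        def subaperture_views(raw, s):
            """Rearrange an (H*s, W*s) raw image into an (s, s, H, W) stack
            of sub-aperture views, one view per angular sample (v, u)."""
            H, W = raw.shape[0] // s, raw.shape[1] // s
            lf = raw.reshape(H, s, W, s)     # axes: (y, v, x, u)
            return lf.transpose(1, 3, 0, 2)  # axes: (v, u, y, x)

        raw = np.random.rand(3000, 4000)     # toy 12 MP sensor, 10x10 px/lens
        views = subaperture_views(raw, 10)   # 100 views, each only 300x400 px

        # The resolution cost is plain division: 100 sensor pixels per lens
        # buy one output pixel, so a 500 MP sensor yields a 5 MP image.
        print(500e6 / (10 * 10) / 1e6)       # -> 5.0

        # Pixel pitch also has little room left to shrink: a ~1.4 um pitch
        # (typical of 2011-era compacts) is already within ~2.5x of the
        # ~0.55 um wavelength of green light.
        print(1.4 / 0.55)                    # -> ~2.5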

  • Re:I want it all (Score:5, Informative)

    by pjt33 ( 739471 ) on Wednesday June 22, 2011 @05:28PM (#36535138)

    The website about the camera doesn't have enough details, either, but this paper [stanford.edu] does give a reasonable idea of what's going on.

  • Re:Interesting. (Score:4, Informative)

    by X0563511 ( 793323 ) on Wednesday June 22, 2011 @05:29PM (#36535140) Homepage Journal

    ... demonstrated to be a working principle [stanford.edu].

    The paper includes graphics and formulas... a fuckload more detail than the story link we were given...

  • Re:Interesting. (Score:4, Informative)

    by marcansoft ( 727665 ) <hector@TOKYOmarcansoft.com minus city> on Wednesday June 22, 2011 @05:32PM (#36535166) Homepage

    It's called a Plenoptic Camera [wikipedia.org]. You put a bunch of microlenses on top of a regular sensor. Each lens is the equivalent of a single 2D image pixel, but the many sensor pixels under it capture several variations of that pixel in the light field. Then you can apply different mapping algorithms to go from that sub-array to the final pixel, refocusing the image, changing the perspective slightly, etc. So color-wise it's just a regular camera. What you get is an extra two angular dimensions (the image contains 4 dimensions of information instead of 2).

    Of course, the drawback is that you lose a lot of spatial resolution since you're dividing down the sensor resolution by a constant. I doubt they can do anything interesting with less than 6x5 pixels per lens, so a 25 megapixel camera suddenly takes 1 megapixel images at best. The Wiki article does mention a new trick that overcomes this to some extent though, so I'm not sure what the final product will be capable of. (A rough sketch of the shift-and-add refocusing idea follows.)
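    A rough sketch of one such mapping algorithm, the simple shift-and-add refocus. This illustrates the principle only; it is not necessarily Lytro's pipeline, and the stack layout and alpha parameter are assumptions:

        import numpy as np

        def refocus(views, alpha):
            """Shift-and-add refocus over an (s, s, H, W) sub-aperture stack.
            alpha sets the synthetic focal plane, in pixels of shift per unit
            of angular offset from the center view; alpha = 0 keeps the focus
            as shot."""
            s = views.shape[0]
            c = (s - 1) / 2.0
            out = np.zeros(views.shape[2:])
            for v in range(s):
                for u in range(s):
                    # Integer shifts for brevity; real code would interpolate.
                    dy = int(round(alpha * (v - c)))
                    dx = int(round(alpha * (u - c)))
                    out += np.roll(views[v, u], (dy, dx), axis=(0, 1))
            return out / s**2

        # The comment's own arithmetic, for reference: 25 MP at 6x5 px/lens.
        print(25e6 / (6 * 5) / 1e6)  # -> ~0.83 effective megapixels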

  • by peter303 ( 12292 ) on Wednesday June 22, 2011 @05:35PM (#36535212)
    There has been a fair amount of computer science research over the last decade into what you could do if you took a picture with a plane of cameras instead of just one or two. The resulting dataset is called a "light field". You can re-composite the pixels to change depth of focus, look around or through occluding obstacles, dynamically change the point of view, etc. As digital webcams became dirt cheap, people started building these hyper-cameras and experimenting with them. People learned you could do relatively interesting things with small 4x4 or 5x5 arrays of cameras. Later on they discovered you could do this with one camera with a multi-part lens, then reconfigure the output pixels in the computer in real time. I've seen all these systems demo'ed at SIGGRAPH over the years. Now someone appears to be commercializing one. (A toy sketch of the array-as-light-field layout follows this comment.)

    I think the infamous bullet-dodging scene in the first Matrix movie was shot with a type of hyper-stereo camera, albeit a row of them rather than a grid. The output light field was reconfigured to expand point of view into time.
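    Data-wise, the camera-array setup this comment describes is the same object as the microlens stack: arrange the grid of frames into an (N, N, H, W) array and the same shift-and-add math applies. A toy sketch, where the 4x4 grid and frame size are made-up numbers:

        import numpy as np

        # Hypothetical 4x4 grid of webcams, each producing a (240, 320) frame.
        frames = [[np.random.rand(240, 320) for _ in range(4)]
                  for _ in range(4)]
        lightfield = np.stack([np.stack(row) for row in frames])
        print(lightfield.shape)  # -> (4, 4, 240, 320)

        # The shift-and-add refocus sketched above applies directly; with a
        # large enough shift, a near occluder smears out and the scene behind
        # it "shows through" -- the synthetic-aperture trick.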
  • Re:Interesting. (Score:5, Informative)

    by X0563511 ( 793323 ) on Wednesday June 22, 2011 @05:36PM (#36535228) Homepage Journal

    Read this paper [stanford.edu] (or at least skim it) - these are called plenoptic cameras.

    It doesn't do any particular voodoo. I suppose you could distill it down to the point where the camera is (in function) a compound eye.
