Graphics Software Technology

Refocusable Plenoptic Light-Field Photography

virgil_disgr4ce writes "Wired is reporting that a Stanford student has developed a plenoptic camera that uses about 90,000 microlenses to capture images that can be refocused, via software, after they are exposed." From the article: "'We just think it'll lead to better cameras that make it easier to take pictures that are in focus and look good,' said Ng's adviser, Stanford computer science professor Pat Hanrahan."
  • by doxology ( 636469 ) <cozzyd@ m i t . e du> on Tuesday November 22, 2005 @12:28AM (#14087909) Homepage
    Better Porn!
  • innovation (Score:5, Insightful)

    by Lord Ender ( 156273 ) on Tuesday November 22, 2005 @12:33AM (#14087929) Homepage
    As soon as I heard of this, I immediately realized how to do it. But I would not have thought to do it on my own. This kind of smart thinking is why we have a patent system. The patent system was not designed to protect business methods, such as completing a sale using n clicks instead of n+1.
    • This kind of smart thinking is why we have a patent system. The patent system was not designed to protect business methods, such as completing a sale using n clicks instead of n+1.

      But you are going to run into the inevitable trolling twit who will complain that this is just a way of "collecting and recording light with a new twist, and no one can own that, man!" Any second now, just wait.
      • Re:innovation (Score:4, Informative)

        by Directrix1 ( 157787 ) on Tuesday November 22, 2005 @01:04AM (#14088080)
        If you look at this site: Stanford Lightfield Project [stanford.edu], you will see that the basic premise behind defining a light field and mathematically manipulating it has been around since the '30s. What's cool here is the camera. In fact, being in the photography business myself, I was just telling my father a couple of months ago how easy it would be to refocus an image if there were a lens that captured a grid of images with slightly different perspectives from each cell. Refocusing the light field is a pretty obvious benefit of this system, one I would deem not worthy of a patent, as it is just a way to mathematically manipulate a light field.
        • All sorts of optical technologies have been around for a long time, and only nowadays are people saying "Of course! Add this to this and I get a great thing!"

          Off the top of my head... parallax barrier cameras and newer parallax barrier 3D displays... same thing with lenticular screens... Um... This was so much easier a year ago.
    • Re:innovation (Score:2, Insightful)

      by Anonymous Coward
      To the contrary, this kind of smart thinking is exactly why we don't need a patent system. Did this guy get a patent? No! Patent rights are not what is motivating him at all. Furthermore, this guy didn't invent this idea. Practically no worthwhile invention is invented out of the blue by a single person, at least not any more. People have been researching this for years, building on the accomplishments of previous experiments, publishing their results in peer-reviewed journals, that sort of thing.
      • To the contrary, this kind of smart thinking is exactly why we don't need a patent system. Did this guy get a patent? No! Patent rights are not what is motivating him at all.

        True, but patent rights might motivate somebody to actually make a product we could use in a shorter time frame.

        "It stems from early-20th-century work on integral photography, which experimented with using lens arrays in front of film, and an early-1990s plenoptic camera developed at MIT and used for range finding... Turning Ng's inv
    • Re:innovation (Score:5, Informative)

      by RedWizzard ( 192002 ) on Tuesday November 22, 2005 @01:00AM (#14088063)
      As soon as I heard of this, I immediately realized how to do it. But I would not have thought to do it on my own. This kind of smart thinking is why we have a patent system. The patent system was not designed to protect business methods, such as completing a sale using n clicks instead of n+1.
      The patent system is not meant to protect an idea either. It's meant to protect a non-obvious implementation of an idea.
    • Heh, this is how the eye/brain processes information (vaguely speaking). I wonder if you could argue that that constituted prior art. Guess it depends on whether you're an Intelligent Design nut.
    • Even better, you can do this with a single lens and an ordinary camera: just take one photo in focus and one photo at a different focal plane. Voila, all the information you need to reconstruct the phase front for perfect focus. Bonus: if it is in focus, then one of the photos is probably good enough right from the start without any signal processing. This whole 90,000-lenslet camera seems like the hard way.
    • I've often felt the same way about various things I've heard about. I wonder, is there anything scientific to this? If someone tells me X has been done (even if it hasn't), am I more likely to come up with a way to accomplish X? Or is it just my imagination?

    • Insects' eyes are made up of zillions of individual "facets", each with its own microlens and microretina.
    • Do you really think this guy was the first to think of it? Surely you've had like 10 great ideas this week that you don't have the time, resources, or expertise to develop. How would you feel if next year, you finally get around to it, but then get sued by some jerk (or incorporated group of jerks) who beat you to the patent office?

      To put it another way, maybe you "wouldn't have thought of it," but surely it doesn't follow that nobody would have or did.

  • oh so 1996 (Score:5, Informative)

    by griffster ( 529186 ) * on Tuesday November 22, 2005 @12:34AM (#14087934)
    http://graphics.stanford.edu/projects/lightfield/ [stanford.edu] If you've attended SIGGRAPH for the last 8 or 9 years, you'll yawn with me.
    • by Anonymous Coward
      ...and the other 99.9% of us, who haven't, can be very interested by this article.

      However, I'm sorry that slashdot hasn't been perfectly tailored to your needs. I'm sure Rob & co will get right on to that!
    • Sounds like this is a popularized writeup about the work that was just published at SIGGRAPH in July. So it's more recent than 1996.
    • you're right. lightfields are cool. and they are very old news.

      but it's nice to actually build a compact instantaneous lightfield capturing physical artifact, don't you think?

      i worry though about the impact on resolution. it's a bit more information than a 2d image, and the sample i saw shows it
    • Oh wow, you heard about it in 1996. Good for you. But why does that deserve a high moderation?

      It's still a recent result (page says april 2005) and in case you missed it, it's the same researcher (Ren Ng) that's mentioned on that Siggraph page.

      They've presumably made progress in 9 years. That isn't worth reporting on?
  • by Dekortage ( 697532 ) on Tuesday November 22, 2005 @12:35AM (#14087938) Homepage
    I'm curious... how adjustable is the post-processing focusing? E.g. depth of field, f/stop, etc. Do you basically get to adjust ANY of that after the image is recorded?
    • If you think about how this type of camera records an image you will see that depth of field is actually just a byproduct of the current photographic process and f-stop is a method of controlling that byproduct. So in answer to your question, yes, you can adjust depth of field in post processing by changing the focus curves.
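
    The "focus curves" adjustment the parent describes is essentially shift-and-add refocusing over the sub-aperture views. A minimal numpy sketch, assuming the light field has already been decoded into a grid of sub-aperture images (the array layout and the alpha parameter are illustrative assumptions, not taken from Ng's code):

        import numpy as np

        def refocus(subviews, alpha):
            """Shift-and-add synthetic refocusing.

            subviews -- array of shape (U, V, H, W): a UxV grid of sub-aperture
                        images, each HxW (greyscale for simplicity)
            alpha    -- relative focal depth; 0 keeps the captured focal plane,
                        other values slide the virtual plane nearer or farther
            """
            U, V, H, W = subviews.shape
            out = np.zeros((H, W))
            for u in range(U):
                for v in range(V):
                    # Each view is shifted in proportion to its offset from the
                    # centre of the aperture, then accumulated.
                    du = int(round(alpha * (u - U // 2)))
                    dv = int(round(alpha * (v - V // 2)))
                    out += np.roll(subviews[u, v], shift=(du, dv), axis=(0, 1))
            return out / (U * V)   # averaging keeps the original brightness

        # Sweeping alpha over a small range turns one exposure into a focal stack.
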
  • 3d Images (Score:2, Interesting)

    by Anonymous Coward
    I wonder if the image data gathered by such a camera could somehow be transformed into basic 3d depth information. If so, this could be the beginning of 3D imaging for the consumer.
    • Re:3d Images (Score:3, Informative)

      by griffster ( 529186 ) *
      There was a demo by Sony at GDC 2005 where they had a next generation "eye-toy" that could (essentially) extract a Z buffer with the captured image. They had some very cool demos... the most memorable was a virtual butterfly that flew around the head of the demonstrator and then landed on his arm :)
    • Why not?

      Take each focal plane (let's say one plane of "best" focus). Throw the pixel RGB values there into a 2x2 matrix. Now take the next plane and lay it behind in the same way--2x2x1. Repeat for "n" focal planes giving a 2x2xn image. Now the challenge is displaying it.
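
    Picking that "plane of best focus" per pixel is a standard depth-from-focus trick: refocus at several planes and keep, for each pixel, the plane where local contrast peaks. A rough sketch, assuming a stack of already-refocused greyscale images (the focus measure and window size are arbitrary choices, not from the article):

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def depth_from_focal_stack(stack):
            """stack: array of shape (n_planes, H, W), one refocused greyscale
            image per focal plane. Returns an HxW map of the plane index where
            each pixel is sharpest -- a crude proxy for depth."""
            measures = []
            for img in stack:
                # Focus measure: locally averaged squared Laplacian
                # (high where edges are crisp).
                measures.append(uniform_filter(laplace(img.astype(float)) ** 2, size=9))
            return np.argmax(np.stack(measures), axis=0)
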
  • It's fun. (Score:5, Interesting)

    by Duncan3 ( 10537 ) on Tuesday November 22, 2005 @12:40AM (#14087966) Homepage
    Having seen this stuff in action first hand, it's cool as heck. Also a tad scary. Miniblinds not closed 100%? Then you can see in. Tree in the way? No problem.

    Basically, what we see as solid with 2 eyes may not be solid at all. So, much like IR/UV cameras, this new toy has a dark side.
  • by malraid ( 592373 ) on Tuesday November 22, 2005 @12:41AM (#14087970)
    Have you seen how in movies and TV they can zoom and then sharpen any image using software? Well, it seems that technology is finally coming to real life!
    • ya, reminds me of a CSI (Vegas) episode. They're examining some blurry security video from a crime scene, and one of the crime technicians says 'focus in on his eye!'... so they zoom in and guess what, they can read a newspaper or something reflecting in the perp's eyeball.. lol
  • by millennial ( 830897 ) on Tuesday November 22, 2005 @12:43AM (#14087984) Journal
    I can make up really technical sounding names, too!
  • by millennial ( 830897 ) on Tuesday November 22, 2005 @12:45AM (#14087997) Journal
    Countdown until you hear about someone using one on CSI: 5... 4... 3...
  • by ottffssent ( 18387 ) on Tuesday November 22, 2005 @12:49AM (#14088012)
    The linked article comments that there's an effective loss of resolution, but goes no further.

    Obviously taking a camera that's designed to record light intensity and modifying it to record light intensity and direction isn't free. In the worst case, you're decreasing your effective resolution by the number of new lenses, or by a factor of 90,000. I don't think that's quite what happens though, because many of these lenses will be recording essentially the same information, and while only one may be perfectly focussed on part of the frame, nearby lenses can probably contribute color and intensity information as well. If we assume a 2Mpixel image is "good", the article's comment that the student's using a 16Mpixel camera but that an 8Mpixel camera might be good enough seems to support a roughly 4x to 8x decrease in effective resolution. Can the poster who claims to have heard the actual discussion at Siggraph comment?

    That's a high price to pay for not having to use the viewfinder. It's cool tech, and I'm sure there are practical uses for it somewhere, but I don't think consumer cameras are the place for it just yet.
    • Ren Ng gave a talk on this work last April at the University of Washington, and IIRC, he argued that the resolutions of CCDs are increasing exponentially, and after a certain point, the extra resolution is pointless, so why not use that extra resolution to encode additional information not normally captured? I believe he also speculated that the rate of resolution improvements isn't nearly as high as it could be, and technology that could take advantage of the extra resolution would motivate development of
    • From TFA, you end up with as many pixels as there are lenses, i.e. 90,000, and indeed the sample images in the article are about 300x300, i.e. 90,000 pixels.

      • Are you sure? That doesn't make sense (except perhaps for initial research). Each of the hundreds of sensor elements under each lens will be imaging a slightly different object, at a slightly different angle. Not taking this into account could explain the softness in even the in-focus parts of the images though.
  • by Tsar ( 536185 ) on Tuesday November 22, 2005 @12:50AM (#14088017) Homepage Journal
    Yes, the plenoptic camera has some neat benefits, including the ability to reconstruct the field of view from the perspective of any point on its objective lens. But for the image to contain all that information, it by necessity does NOT contain information that it otherwise would--in this case, resolution.

    Look at the sample images. Even the sharpest-focused regions are soft-focused. This is a 16-megapixel camera with an effective resolution less than 1/3 that of VGA. Granted, the images can be refocused and depth information can be extracted, but do you really want to have to buy a 188-megapixel plenoptic camera to get sharp 1-megapixel images? Is focusing really that hard?
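
    The arithmetic behind that complaint, for anyone who wants to check it (the per-microlens figure is inferred from the numbers in TFA rather than stated by the researchers):

        sensor_pixels   = 16_000_000     # the 16-megapixel sensor mentioned in TFA
        microlenses     = 90_000         # one output pixel per microlens
        pixels_per_lens = sensor_pixels / microlenses   # ~178, roughly a 13x13 patch
        output_side     = int(microlenses ** 0.5)       # ~300, hence the ~300x300 samples

        # Keeping the same ~178:1 ratio, a sharp 1-megapixel output would need roughly
        # 178 million sensor pixels -- the same ballpark as the "188-megapixel" figure above.
        print(pixels_per_lens, output_side)
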
    • I don't think this technology will ever be useful to typical snapshooters or photographers. For the former, just stick an f16 lens on a small-sensor digicam and you'll have near-infinite DOF for most shots, and the latter generally prefer narrow DOF and know where they will be focusing before pressing the shutter.

      However, I imagine this might be useful for some kinds of analysis photography, especially when dealing with high-speed motion. Those kinds of shots usually require a large aperture to gather enoug
    • Is focusing really that hard?

      It is if you want the exact same shot with different depths in-focus.

      TFA has some good examples of this in the form of splashing water, but imagine how much more information you could extract from e.g. the Zapruder film if it had been captured this way. It's not like you can ask Kennedy to go back for another take.
    • I suspect that there is now a use for that 300-megapixel sensor.

      Considering that we already have gigabit memory chips, I can see that it's plausible to have gigapixel light sensors (sometime in the future).

      Given that 4 (good-looking, low-noise) megapixels would satisfy most non-professional photographers, I think it's not that unreasonable to sacrifice pixel count for ease of use.

    • Is focusing really that hard?

      I was working a year ago on a 3D imaging system that used parallax barriers. We would've killed to have had the kind of continuously-focusable output this camera could produce; one of our biggest problems was deciding what part of the image to focus upon, and then keeping the camera and light steady between shots--especially outdoors. We would have huge problems on cloudy days because the ambient light would change so much between shots at different depths of focus.

      Combi
    • Think special cases. Security cameras. Unrepeatable events. Quasi-autonomous robots.

      In general, focussing *is* that hard if you're not there or don't have time to do it.
    • by Viceice ( 462967 ) on Tuesday November 22, 2005 @11:26AM (#14090503)
      You obviously aren't a photographer. Many award-winning shots are accidents, taken at times when, for example, a photographer is running for his life in a hail of bullets, simply pressing the shutter as he runs, not even looking in the viewfinder.

      What I'm getting at is, some moments happen literally in the blink of an eye and they only happen once in a lifetime. So in that split second where you are trying to take a shot and have no time to double-check, won't you be sorely disappointed if your ticket to a Pulitzer was ruined by the wrong f-stop setting? Or the wrong focus?

      Back in the day of 8MB CF cards, a 6-megapixel 6MB RAW was insane. But in this day of 4GB CF cards and memory prices what they are, 6MB or even 16MB RAWs are but a drop in the bucket. Heck, even with today's memory capacities, if you had a camera that produced a 188MB RAW, it'd still be perfectly acceptable to any photographer, considering the possibilities for photography this new technology gives you.
  • by Deep Fried Geekboy ( 807607 ) on Tuesday November 22, 2005 @01:00AM (#14088059)
    The more potential focal points you want, the less resolution you can have for any particular one of them. You have to record information for all possible focal points on the CCD. Conceptually it's no different from, say, dividing the CCD into four parts and recording an image with a different focal point on each of the quarters, then post processing to combine them as required. I think. So photographically speaking the image is degraded compared to just getting the focal point right in the first place. Which isn't to say there aren't cool things you can do with it.
    • This method allows you to open up the aperture to get more light while retaining large depth of field.

      An alternative is using deconvolution to retrieve the focussed image from the defocussed one. You'd have to know the point spread function, which I think you should be able to derive from knowledge of the optics.
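
    That deconvolution approach is typically done with a Wiener filter. A bare-bones sketch, assuming the point spread function is already known or measured (the regularisation constant k stands in for the real noise-to-signal ratio and is purely illustrative):

        import numpy as np

        def wiener_deconvolve(blurred, psf, k=0.01):
            """Estimate the sharp image from a defocused one.

            blurred -- 2-D defocused image
            psf     -- point spread function, same shape as `blurred`, centred at
                       the origin (i.e. already wrapped/fftshift-ed)
            k       -- assumed noise-to-signal ratio; larger values damp noise
                       amplification at frequencies the optics barely pass
            """
            H = np.fft.fft2(psf)
            G = np.fft.fft2(blurred)
            # Wiener filter: conj(H) / (|H|^2 + k), applied in the frequency domain.
            F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft2(F_hat))
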
  • by ian_mackereth ( 889101 ) on Tuesday November 22, 2005 @01:07AM (#14088088) Journal
    Give the lenses a coating of resublimated Thiotimoline
    http://en.wikipedia.org/wiki/The_Endochronic_Properties_of_Resublimated_Thiotimoline [wikipedia.org]

    This will not only ensure that your photo of Auntie May is in focus, but the camera will make sure that the image is captured at a time when her eyes are open and she's smiling.

  • Plenoptic eyeglass (Score:3, Interesting)

    by rewinn ( 647614 ) on Tuesday November 22, 2005 @01:08AM (#14088091) Homepage

    The next step is to pair the cameras and the LED image emitters, similar to night-vision goggles, to make a really kewl pair of corrective lenses. Truly the ultimate nerdwear!

  • Finally, something more to cameras than megapixels.
  • using about 90,000 microlenses

    Patents brought to you by the fly people.
           
  • by illumina+us ( 615188 ) on Tuesday November 22, 2005 @01:54AM (#14088223) Homepage
    From the looks of it, this takes hundreds of images and stores them in one file, then uses software to create a single, desired image. This means that conventional storage will no longer be enough, for while one image now takes up several hundred kilobytes to a couple of megabytes (JPEG compression), this new method will take up hundreds of times that.
  • A blanket solution. (Score:3, Interesting)

    by Belseth ( 835595 ) on Tuesday November 22, 2005 @02:00AM (#14088242)
    You could always go to a pinhole camera and eliminate the problem entirely. Alright, so you'd need 10,000 ASA film or .1 lux for video, but focus would never be an issue. I've always been a massive fan of pinhole cameras. It's also a handy trick for those of us with failing eyesight for reading fine print. There have also been lensless cameras that use a rotating slit. There are 360 cameras that use the principle. Fun with optics.
    • by HuguesT ( 84078 )
      Except real-world pinhole cameras are always blurry instead of always sharp...

      This is due to the fact that the pictures sharpen as the size of the hole diminishes (i.e. large hole = very blurry, small hole = less blurry), but there is a limit to how small the hole can be before it becomes counter-productive due to diffraction.
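
    That limit has a well-known rule of thumb (usually attributed to Lord Rayleigh): the sharpest pinhole diameter is roughly 1.9 * sqrt(focal_length * wavelength). A quick check, with the focal length and wavelength chosen purely for illustration:

        from math import sqrt

        focal_length = 0.05      # pinhole-to-film distance, 50 mm, chosen for illustration
        wavelength   = 550e-9    # green light, in metres

        d = 1.9 * sqrt(focal_length * wavelength)   # Rayleigh's rule of thumb
        print(f"optimal pinhole diameter: {d * 1000:.2f} mm")   # ~0.32 mm; smaller only adds diffraction blur
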

  • So, now that we have arrays of microlenses & software-based focusing, why do we need a conventional lens at all?

    With some improvements to the manufacturing technology, we could have a revolutionary camera based on a large, flat plate of microlenses, scaled up to whatever the manufacturing allows, or even multiple plates tiled together, with the software allowing for the seams. And that's it - you could stick them to anything - phones, for example, or walls or whatever.

    The linked paper [stanford.edu] shows how to u

  • The best 3D technology we have now still sucks. Its basic premise consists of showing a different image to each of your two eyes, but those images are taken with standard photo equipment, so some portions are blurry and others are sharp. This really makes people nauseous at 3D movies. It sucks, really takes away the realism! So I wonder if a retinal projection of a 3D movie shot with these cameras could make the focus more natural. Basically, it would read the depth of your retinal focus and adjust the image
  • by mrmojo ( 841397 ) on Tuesday November 22, 2005 @02:50AM (#14088394)
    I'm one of the guys who works on this stuff at Stanford. I should point out that it's not fair to say you lose resolution, because good cameras have large pixels to reduce noise over a finite exposure time. Lightfield cameras, because they add up a whole lot of individual pixel samples to produce an image pixel, can get away with much much smaller pixels, because the noise goes down as you sum up the pixel values.

    The best way to think of it is to take a standard good-quality camera with big pixels, subdivide each pixel into a grid of 12x12 or so tiny pixels - more like the size of pixels in cell phone cameras - and put a microlens over it. You get the same spatial resolution as the good camera, roughly the same noise characteristics, and the ability to refocus and pull other light field tricks like Hitchcock zooms.

    You just have to be aware that, treated as a light field, the data is very noisy, like a crappy cell phone camera's, but when you add up pixels to make a focused image, the noise drops back to regular good-camera levels.

    It's just harder to deal with the amount of data you get off a large sensor with tiny pixels, and they're also harder to build, but neither point is a showstopper and these are mere engineering issues...
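
    A toy Monte Carlo of that noise argument, assuming Poisson shot noise plus Gaussian read noise (all numbers invented for illustration): summing the subpixels recovers the shot-noise behaviour of one big pixel, but each extra readout adds its own read noise, which is essentially the objection raised a few comments below.

        import numpy as np

        rng = np.random.default_rng(0)

        photons    = 10_000    # photons one "big" pixel would collect in the exposure
        subpixels  = 144       # the 12x12 subdivision described above
        read_noise = 5.0       # electrons RMS per readout, an illustrative value
        trials     = 20_000

        # One big pixel: one Poisson draw for the light, one readout.
        big = rng.poisson(photons, trials) + rng.normal(0, read_noise, trials)

        # 144 tiny pixels, each seeing 1/144 of the light, each read out separately,
        # then summed to form one image pixel.
        tiny = (rng.poisson(photons / subpixels, (trials, subpixels))
                + rng.normal(0, read_noise, (trials, subpixels))).sum(axis=1)

        print("big pixel        std:", round(big.std(), 1))   # ~100: shot noise dominates
        print("summed subpixels std:", round(tiny.std(), 1))  # ~117: sqrt(144)*5 of read noise added in quadrature
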

    • by (negative video) ( 792072 ) <me@NospaM.teco-xaco.com> on Tuesday November 22, 2005 @03:56AM (#14088557)
      The best way to think of it is take a standard good quality camera with big pixels, subdivide each pixel into a grid of 12x12 or so tiny pixels - more like the size of pixels in cell phone cameras - and put a microlens over it. You get ... roughly the same noise characteristics, ...
      The space between the pixels tends to be hard to shrink, so as you add pixels an ever-increasing fraction of the image sensor tends to become dead zones. Using Foveon-style stacked detectors instead of a filter mosaic would, of course, help quite a bit.

      A question: can you refocus colors independently to correct chromatic aberration of the lens?

    • by ottffssent ( 18387 ) on Tuesday November 22, 2005 @03:57AM (#14088560)
      That's not completely fair. If I understand you, what you're saying is that in fact you DO lose resolution, but the loss in resolution can be compensated for by higher-resolution sensors and because you don't have to increase the physical size of the sensor, the production costs won't go up too much. I don't know enough about CCDs and CMOS sensors to know what the probable increase in cost would be, but it sounds fairly minor. At least for CPUs, I know die size is a stronger indicator of manufacturing cost than transistor count, though the latter obviously plays a role.

      The other problems that you've swept under the rug seem to me to be more important, at least in the near term. If you take a CCD and replace each of its sensor sites with a 12x12 array, as you suggest, you're talking over a 100-fold increase in the data to be processed. While I haven't read the technical papers on the subject, it seems like the processing is more complicated than the processing that goes on in a standard digicam, which probably means at least a 200x increase in processing requirements. If you wait for Moore's law to save you, that's 10 years. Budgeting for a more expensive image processor will shave maybe a year or two off that number, but it's still fairly long-term research.

      You could reduce the processing needed in-camera by storing closer-to-raw data and doing the processing at a workstation later, but then you have the problem of a data stream that's ~100x as large. Even with very fast flash storage, that would take 30+ seconds to write a single image, and you could only fit a few onto a 1GB card. Also, you introduce the problem that the photographer doesn't get feedback as to what he or she actually shot, and unless you can also post-process to correct for motion blur, chromatic aberration, etc. you still need that functionality.

      It all sounds interesting, and I applaud research into what useful things could be done with likely future technology, but (and maybe I'm misreading the situation) it sounds like the core research is being cast as a thing we could be doing RSN, which I highly doubt. As a technique to make use of sensor densities that would normally exceed the capabilities of the lens they're attached to in order to do something useful, this is interesting. As a technique to be applied to today's or near-future sensors and cameras, I find it less interesting.
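
    Rough numbers behind those storage estimates, treating the base raw size and card speed as assumptions typical of 2005-era gear rather than figures from the article:

        base_raw_mb = 4        # an ordinary digicam raw file, assumed ~4 MB
        multiplier  = 100      # the ~100x data increase estimated above
        card_mb_s   = 12       # a fast-for-2005 CompactFlash write speed, assumed
        card_gb     = 1

        shot_mb  = base_raw_mb * multiplier           # ~400 MB per light-field exposure
        write_s  = shot_mb / card_mb_s                # ~33 s to flush one shot
        per_card = (card_gb * 1000) // shot_mb        # ~2 shots per 1 GB card
        print(f"{shot_mb} MB per shot, ~{write_s:.0f} s to write, {per_card} per {card_gb} GB card")
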
    • Sure it's fair to say you lose resolution - the output image of your 16MP camera is 0.08MP. How do you want to pass this off as "no loss of resolution"???

      Maybe you can reduce the size of the pixels without increasing the noise in the final image, but be warned - as these pixels get noisy, the quality of your refocusing will drop. There is always a cost to reducing pixel size.

      It also doesn't work to make really tiny pixels; you basically can't go below 2um x 2um for a pixel, otherwise the pixel size is too close
      • who is going to pay $5000 for a 1.5MP camera?
        I would gladly pay for it if it had the refocusing ability. So would any sport photographer. Imagine, you can just snap away and focus later! Forget sports, add weddings and any other even semi-critical photography and this becomes extremely useful.
        Not to mention the fact that many modern digital cameras (*cough* canon *cough*) have a difficult time with focusing accuracy with ultra wide lenses, this is especially true for digital enlargements. So, perfect foc
    • Except that recombining images gives you about a sqrt(N) reduction in noise, while partitioning your pixel will give you about an N increase in noise. Each subpixel of your 12x12 partitioned pixel will be about 1/144th as sensitive to light as the full pixel, and summing up the parts will only get you back to 1/12th the sensitivity of the original.
  • X-Ray enhancement? (Score:3, Interesting)

    by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Tuesday November 22, 2005 @04:44AM (#14088674) Journal
    Could this be used to sharpen what we see in an x-ray image of a person? Take an x-ray of the whole body and then refocus to concentrate on one particular cross-sectional plane?
     
    • by HuguesT ( 84078 ) on Tuesday November 22, 2005 @05:29AM (#14088817)
      Not with this camera, as X-Rays are hardly ever focussed (they don't bend easily!). Here is an image of a rare and expensive X-ray focussing mirror [nasa.gov]. You have to use grazing incidence for it to work.

      Medical X-ray photographs are simply taken by placing film (or, these days, a digital detector) behind the body and lighting it with X-rays. No focussing is involved.
  • by Crspe ( 307319 ) on Tuesday November 22, 2005 @05:02AM (#14088736)
    I saw this article about a week back. I am quite sure that this will never see a practical application ... They take a 16MP input image to produce a 0.08MP output image!!! They are using a $15000 camera system to produce images one quarter the size of VGA!!! Say what you want, but there are better ways to improve DOF.

    They reduce resolution by a factor of 180, but only improve depth of field by a factor of 7. This is particularly silly because the only reason they have bad depth of field in the first place is that they are using a huge, expensive sensor. If they would switch to a small, cheap sensor like you find in any cheap digicam (1/1.8"), they would get the same improvement and save $14800.

    The light performance of this small sensor would be just as good as their large one - if you use the same huge pixels that they do (to produce a 0.08MP image), you will get the same low-light performance.

    If you want more details on why this idea has no use, check out this thread:
    http://luminous-landscape.com/forum/index.php?showtopic=9354 [luminous-landscape.com]

    Interesting article, no practical application.

    • If the idea has no practical application, why are you complaining about a specific implementation of it?

      Ten years ago digital cameras were pretty naff. Low quality images, expensive to make.

      Now they're better than film for many uses.

      Another ten years of improvement and suddenly we have 16 megapixel images using this technology. It's still better image quality than your screen can display or your printer produce on paper, and you've been able to play with the focus, the depth of field, everything else this g
      • If the idea has no practical application, why are you complaining about a specific implementation of it?
        umm, I guess it was to point out that a $10000, 4lb/3kg, 0.08 megapixel camera is not very practical. You planning on buying one?

        Another ten years of improvement and suddenly we have 16 megapixel images using this technology
        NO, we are not going to have 16 megapixel versions of this. The laws of physics don't change very quickly; the wavelength of visible light is pretty stable, and as a result, the
  • Could the principles be used to correct for atmospheric distortion of stellar images in ground-based telescopes? It would seem so if the image was spread over more than a few pixels. Could it also be used to correct for optical flaws in a lens?
