New Technique Creates 3D Images Through a Single Lens

Zothecula writes "A team at the Harvard School of Engineering and Applied Sciences (SEAS) has come up with a promising new way to create 3D images from a stationary camera or microscope with a single lens. Rather than relying on expensive hardware, the technique uses a mathematical model to generate images with depth, and could find use in a wide range of applications, from more compelling microscopy imaging to a more immersive experience in movie theaters."
Comments:
  • by Anonymous Coward on Wednesday August 07, 2013 @05:02PM (#44502433)

    "Harvard researchers have found a way to create 3D images by juxtaposing two images taken from the same angle but with different focus depths"

  • by harvestsun ( 2948641 ) on Wednesday August 07, 2013 @05:29PM (#44502755)
    Except what they're actually doing has nothing to do with juxtaposition. They're inferring the angle of the light at each pixel, and then using that angle to dynamically construct new perspectives. The person who wrote the article on Gizmodo just didn't know what he was talking about.
  • by Rui del-Negro ( 531098 ) on Wednesday August 07, 2013 @06:52PM (#44503631) Homepage

    Having two lenses is not a requirement to capture stereoscopic images. It can be done with a single (big) lens, and two slightly different sensor locations. But you're limited by the distance between those two sensors, and a single large lens isn't necessarily cheaper or easier to use than two smaller ones.

    What this system does is use the out-of-focus areas as a sort of "displaced" sensor - like moving the sensor within a small circle, still inside the projection cone of the lens - thereby simulating two (or more) images captured at the edges of the lens.

    But, unless the lens is wider than the distance between two eyes, you can't really use this to create realistic stereoscopic images at a macroscopic scale. The information is simply not there. Even if you can extract accurate depth information, that is not quite the same as 3D. A Z-buffer is not a 3D scene; it's not sufficient for functional stereoscopy.

    Microscopy is a different matter. In fact, there are already several stereoscopic microscopes and endoscopes that use a single lens to capture two images (with offset sensors). Since the subject is very small, the parallax difference between the two images can be narrower than the width of the lens and still produce a good 3D effect. Scaling that up to macroscopic photography would require lenses wider than a human head.
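
To make the two technical points in the comments above concrete (inferring per-pixel depth or ray angle in order to synthesize new perspectives, and why a depth map alone is not a full 3D scene), here is a minimal illustrative sketch in Python. This is not the Harvard team's method: it simply forward-warps a single image using a per-pixel depth map. The function name, the baseline parameter, and the toy image/depth data are all made up for illustration; only numpy is assumed. The disocclusion holes the warp leaves behind are exactly the missing information the parent comment describes when it says a Z-buffer is not sufficient for functional stereoscopy.

import numpy as np

def synthesize_view(image, depth, baseline=8.0, eps=1e-6):
    """Forward-warp `image` (H, W, 3) to a horizontally shifted viewpoint.

    Each pixel is shifted by a disparity proportional to 1/depth, so nearer
    pixels move more. `baseline` is an illustrative scale factor (pixels of
    shift at depth == 1). Returns the warped image and a boolean mask of
    disocclusion holes (target pixels no source pixel mapped to).
    """
    h, w = depth.shape
    disparity = baseline / (depth + eps)          # nearer pixels shift more
    new_view = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)                # keep the nearest surface per target pixel

    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs + disparity).astype(int)     # target column after the shift
    valid = (xt >= 0) & (xt < w)

    for y, x_src, x_dst in zip(ys[valid], xs[valid], xt[valid]):
        if depth[y, x_src] < zbuf[y, x_dst]:      # the nearer pixel wins
            zbuf[y, x_dst] = depth[y, x_src]
            new_view[y, x_dst] = image[y, x_src]

    holes = np.isinf(zbuf)                        # background the original camera never recorded
    return new_view, holes

# Toy example: a bright square "floating" in front of a darker background.
img = np.full((64, 64, 3), 50, dtype=np.uint8)
img[20:44, 20:44] = 200
dep = np.full((64, 64), 10.0)
dep[20:44, 20:44] = 2.0                           # the square is closer than the background

shifted, holes = synthesize_view(img, dep)
print("disoccluded pixels:", holes.sum())         # non-zero: the new viewpoint needs data that was never captured

The non-zero hole count is the whole point: a single image plus accurate depth lets you shift the viewpoint a little, but anything hidden behind a foreground object in the original capture simply is not there to be revealed, which is why the parallax you can reconstruct is bounded by what the lens actually saw.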
