New Technique Creates 3D Images Through a Single Lens

Zothecula writes "A team at the Harvard School of Engineering and Applied Sciences (SEAS) has come up with a promising new way to create 3D images from a stationary camera or microscope with a single lens. Rather than relying on expensive hardware, the technique uses a mathematical model to generate images with depth, and could find use in a wide range of applications, from more compelling microscopy imaging to a more immersive experience in movie theaters."
  • by Anonymous Coward on Wednesday August 07, 2013 @04:02PM (#44502433)

    "Harvard researchers have found a way to create 3D images by juxtaposing two images taken from the same angle but with different focus depths"

  • We don't care too much how the wonks do it as long as it doesn't involve headgear beyond what moviegoers walk into the theater with.
    • Point is... this is about taking a 3D image with a single everyday consumer-level DSLR camera from a single viewpoint, not projecting a 3D image for people to look at.

    • You might not care about that, but I personally care more about the crap they show on screen. As long as what they're showing was shot in 2D and turned into 3D using computer manipulation, I'm not interested.

      From the sound of this, they can use one lens to create an image that's effectively 3D and do so for the entire scene, rather than portions. That I'd consider seeing.

      When they get that down, worrying about the eyewear will make some sense. At this point, the 3D just isn't good enough in most cases.

      • If you need special eye wear or need to stand in a certain position, it's not 3D, merely stereo.
        • Re: (Score:3, Insightful)

          by hedwards ( 940851 )

          No, that's 3D. 3D is when the eyes see slightly different images and interpret them as a scene with depth.

          The definition you're using is highly non-standard and completely misses the point. A movie will always require that you sit in the right place, just as one doesn't typically watch a Broadway play from backstage. Or aren't those plays 3D?

        • Right and wrong. If you're narrowing the definition to the source, you're correct; however, if your brain interprets what it sees in three dimensions, then you're seeing 3D. Or simply: projected image - not 3D; visualised image - 3D.

          • Your brain is confused: some cues tell it it's viewing a 3D volume while others tell it it's viewing a 2D plane, so you get headaches.
      • by jovius ( 974690 )

        True. At least the movie studios have the material for reprocessing, but it lacks the information about what's behind the objects. It's pretty funny that something shot in 3D is turned into a 2D projection for the screen. Everything is still flat. Curved screens are better, but it's still basically 2D...

  • by Anonymous Coward

    "students use focus stacking to make wobble gifs" would have really captured the meat of the article in a single sentence.

    • After watching the demonstration in the YouTube video shown in the article, it definitely looks that way: the wobble gif technology was repurposed and then declared a promising new way to make 3D images. However, what would happen if a way could be devised to take, say, 100 depths per frame and run it at 24 fps? Is that the direction they're trying to go? (A sketch of the idea follows below.)

      Although I didn't get the feeling from the article it was in their plans, most likely someone somewhere has considered the idea or
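      For anyone who wants to play with the "100 depths per frame at 24 fps" thought: once you have a moment field, one wobble cycle is just a sweep of the virtual viewing angle. A hypothetical snippet, reusing the synthesize_view() helper from the sketch earlier in the thread (an illustrative name, not a real API):

          import numpy as np

          def wobble_frames(img, Mx, My, n_frames=24, max_angle=3.0):
              # One full wobble cycle: 24 frames is one second at the 24 fps
              # mentioned above. max_angle is in the same (arbitrary) units
              # that the illustrative synthesize_view() helper expects.
              frames = []
              for t in np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False):
                  frames.append(synthesize_view(img, Mx, My, max_angle * np.sin(t), 0.0))
              return frames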

  • Saarland University developed a reconfigurable camera add-on, the kaleidocam [mpi-inf.mpg.de], which can do 3D [youtube.com] as well as many other things. It allows you to take a single picture that is split by the device into multiple images that appear on the sensor as an array of smaller images. Possible functions include:

    • Multi-spectral imaging (including simulation of different white points and source lighting)
    • Light field imaging (3D, focal length change, depth of field change)
    • Polarised imaging (e.g. glass stress, pictures of smoke in
  • by Rui del-Negro ( 531098 ) on Wednesday August 07, 2013 @05:52PM (#44503631) Homepage

    Having two lenses is not a requirement to capture stereoscopic images. It can be done with a single (big) lens, and two slightly different sensor locations. But you're limited by the distance between those two sensors, and a single large lens isn't necessarily cheaper or easier to use than two smaller ones.

    What this system does is use the out-of-focus areas as a sort of "displaced" sensor - like moving the sensor within a small circle, still inside the projection cone of the lens - thereby simulating two (or more) images captured at the edges of the lens.

    But, unless the lens is wider than the distance between two eyes, you can't really use this to create realistic stereoscopic images at a macroscopic scale. The information is simply not there. Even if you can extract accurate depth information, that is not quite the same as 3D. A Z-buffer is not a 3D scene; it's not sufficient for functional stereoscopy.

    Microscopy is a different matter. In fact, there are already several stereoscopic microscopes and endoscopes that use a single lens to capture two images (with offset sensors). Since the subject is very small, the parallax difference between the two images can be narrower than the width of the lens and still produce a good 3D effect. Scaling that up to macroscopic photography would require lenses wider than a human head.
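    A quick back-of-the-envelope check of that "wider than the distance between two eyes" constraint: the baseline this trick can simulate is bounded by the lens's entrance pupil, roughly focal length divided by f-number. The lenses below are arbitrary examples:

        INTEROCULAR_MM = 65.0  # typical human eye separation

        for name, focal_mm, f_number in [("50mm f/1.8", 50, 1.8),
                                         ("85mm f/1.8", 85, 1.8),
                                         ("400mm f/2.8", 400, 2.8)]:
            pupil_mm = focal_mm / f_number  # entrance pupil = max usable baseline
            verdict = "exceeds" if pupil_mm > INTEROCULAR_MM else "falls short of"
            print(f"{name}: ~{pupil_mm:.0f} mm pupil {verdict} the {INTEROCULAR_MM:.0f} mm eye baseline")

    Only unusually large apertures clear the bar, which fits the point above about everyday lenses.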

  • by EmperorOfCanada ( 1332175 ) on Wednesday August 07, 2013 @09:07PM (#44505089)
    It is a cool idea, but they are rotating the "3D" image by about 1 degree. If they had even halfway good 3D data they could have rotated a whole lot more. My guess is that after 1 degree their "3D" turns into a spiky mess. Man, I am getting sick of this popular-science news: "Science has a way to make flying cars a reality in 5 years."

    I am not doubting that 3D information can be extracted from focal data, I am doubting that these guys can do it.
  • A. Orth and K. B. Crozier, "Light field moment imaging", Optics Letters (2013). Non-paywalled version from Crozier's web page: http://crozier.seas.harvard.edu/publications-1/2013/76_Orth_OL_2013.pdf

    Posting this so commenters here can be a little more informed about what these guys are really doing. From a quick reading (I'm not an expert), this is different from the usual depth-from-defocus approach. It allows for some 3D information, but not a lot, obviously. Still, it could be useful.

    All the best.
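    For reference, the core relation of the paper as I read it (my own transcription, so treat with care): the focal derivative of the intensity I is tied to a per-pixel moment field M by a continuity equation, which the substitution I*M = grad(U) reduces to a Poisson problem:

        \frac{\partial I}{\partial z} + \nabla_{xy} \cdot \left( I \, \mathbf{M} \right) = 0,
        \qquad I \, \mathbf{M} = \nabla_{xy} U
        \;\Longrightarrow\;
        \nabla_{xy}^{2} U = -\frac{\partial I}{\partial z}

    Solving that from just two focal planes is what keeps the hardware cheap, and also why the recoverable parallax is so limited.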
