New Technique Creates 3D Images Through a Single Lens
Zothecula writes "A team at the Harvard School of Engineering and Applied Sciences (SEAS) has come up with a promising new way to create 3D images from a stationary camera or microscope with a single lens. Rather than relying on expensive hardware, the technique uses a mathematical model to generate images with depth, and it could find use in a wide range of applications, from more compelling microscopy imaging to a more immersive experience in movie theaters."
What they actually did (Score:5, Informative)
"Harvard researchers have found a way to create 3D images by juxtaposing two images taken from the same angle but with different focus depths"
Re:What they actually did (Score:5, Informative)
Re: (Score:2)
Point is... (Score:2)
Re: (Score:2)
Point is... this is about taking a 3D image from a single, every-day consumer-level DSLR camera at a single viewpoint, not projecting a 3D image for people to look at.
Re: (Score:3)
You might not care about that, but I personally care more about the crap they show on screen. As long as what they're showing was shot in 2D and turned into 3D using computer manipulation, I'm not interested.
From the sound of this, they can use one lens to create an image that's effectively 3D and do so for the entire scene, rather than portions. That I'd consider seeing.
When they get that down, then worrying about the eyewear will make some sense. At this point the 3D just isn't good enough in most cases.
Re: (Score:3)
Re: (Score:3, Insightful)
No, that's 3D. 3D is when the eyes see slightly differing images and interpret them as a scene with depth.
The definition you're using is highly non-standard and completely misses the point. A movie will always require that you be sitting in the right place. Just as one doesn't typically watch a Broadway play from backstage. Or aren't those plays 3D?
Re: (Score:2)
A hologram is considered 3D, because it captures a continuous interference pattern representing any number of viewing angles on a 3D object. ... but not the back of it, so it's not "true" 3D either.
Re: (Score:2)
Re: (Score:2)
Right and wrong. If you're narrowing the definition to the source, you're correct; however, if your brain interprets what it sees in three dimensions, then you're seeing 3D. Or simply: Projected image - not 3D. Visualised image - 3D.
Re: (Score:2)
Re: (Score:2)
True. At least the movie studios have the material for reprocessing, but even that lacks information about what's behind the objects. It's pretty funny that something shot as 3D is turned into a 2D projection for the screen. Everything is still flat. Curved screens are better, but still, it's basically 2D...
Re: (Score:2)
Re: (Score:2)
The result does not impress me either:
You can do TRUE 3D through a lens, because you can get depth from the interference of the input light field over the finite aperture (≈ lens) size. That requires only A SINGLE IMAGE.
One example of this approach is the Double-Helix Point Spread Function (DHPSF), developed at the University of Colorado by Prof. Piestun using a specific phase mask:
http://3.bp.blogspot.com/-oX6BL98Bi8Y/TvstLYSp5jI/AAAAAAAABLI/fDKeFKvKWs0/s400/MS+Double-Helix+PSF.JPG [blogspot.com]
If you have the angle, you will have a me
Re: (Score:2)
I'm not sure how you can achieve the DHPSF with standard optical equipment. How do you measure the phase mask from a single image? Do you have a reference?
Re: (Score:3)
I've done some quick research into what you suggest:
http://www.stanford.edu/group/moerner/sms_3Dsmacm.html
Basically you need a lot more than a single 2D image. You need a stack, and from the stack you can measure the angle you suggest. Your very own link illustrates this. What this technique allows you to do is measure the depth of point-like objects to a very good resolution, better than can usually be done with confocal imaging, but this is not easily applicable to other modalities.
I think you might be con
Re: (Score:1)
(Sorry for the late answer)
No, you don't need a stack. The illustration of the stack is just telling you that the PSF rotates with depth, by showing you multiple transverse slices of the EM field during propagation. But imaging particles (i.e. point sources) at different distances from the aperture produces delta functions convolved with different PSFs. Each PSF encodes the particle's distance (which is related to the curvature of the input wavefront).
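For reference, here's roughly how the depth decoding works as I understand it (a toy sketch, not Piestun's actual pipeline; the linear calibration numbers are placeholders): the orientation of the two-lobe pattern around each particle maps to depth through a pre-measured calibration curve.

    import numpy as np

    def dhpsf_depth(patch, nm_per_degree=20.0, angle0_deg=0.0):
        """Toy depth decoder for a double-helix PSF.
        patch: small image cropped around one particle, containing both lobes.
        The lobe-pair angle is taken as the principal axis of the intensity
        distribution; nm_per_degree and angle0_deg stand in for a real
        angle-to-depth calibration."""
        h, w = patch.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        p = patch - patch.min()
        p = p / p.sum()
        cx, cy = (p * xx).sum(), (p * yy).sum()
        # Second moments of the intensity give the lobe-pair orientation.
        sxx = (p * (xx - cx) ** 2).sum()
        syy = (p * (yy - cy) ** 2).sum()
        sxy = (p * (xx - cx) * (yy - cy)).sum()
        angle = 0.5 * np.degrees(np.arctan2(2 * sxy, sxx - syy))
        return (angle - angle0_deg) * nm_per_degree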
As for the phase mask it must be inserted at the aperture (eithe
Re: (Score:2)
Sorry for the late answer too.
How, then, can you have in a single 2D frame two (or more) point sources, both in focus, at significantly different depths from the aperture? In a system with a large numerical aperture, like in confocal microscopy, I think you cannot, since by definition you have a very shallow depth of field. Hence the angle only allows one to compute the depth with more precision. It seems this idea only works for point sources as well.
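For a sense of scale (back-of-the-envelope only; this is the usual axial-resolution approximation, with numbers picked purely as an example):

    # Rough axial extent of the in-focus region for a high-NA objective:
    # dz ~ 2 * n * lambda / NA^2
    n, wavelength_nm, NA = 1.515, 550, 1.4     # oil immersion, green light
    print(2 * n * wavelength_nm / NA**2)       # ~850 nm of depth of field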
So overall, what you suggest is a different idea applicable to a d
Re: (Score:2)
You're confused.
This is taking 3D pictures with a non-3D camera. Not viewing 3D images.
Re: (Score:2)
wrong title. (Score:1)
"students use focus stacking to make wobble gifs" would have really captured the meat of the article in a single sentence.
Re: (Score:2)
After watching the demonstration in the YouTube video shown in the article, it definitely looks that way: the wobble gif technique was re-purposed and then declared a promising new way to make 3D images. However, what would happen if a way could be devised to take, say, 100 depths per frame and run it at 24 fps? Is that the direction they're trying to go?
Although I didn't get the feeling from the article that it was in their plans, most likely someone somewhere has considered the idea or
Kaleidocamera can do this as well (Score:2)
Saarland University developed a reconfigurable camera add-on, the kaleidocam [mpi-inf.mpg.de], which can do 3D [youtube.com] as well as many other things. It allows you to take a single picture that is split by the device into multiple images that appear on the sensor as an array of smaller images. Possible functions include:
Re: (Score:2)
No. Lytro's software allows refocusing in post (at a huge cost in terms of resolution). It does not try to extract any parallax information from the image.
Re: (Score:2)
Lytro's basic building block is an array of microlenses, a rather expensive piece of hardware that also limits effective resolution dramatically. This is why the title here touts "single lens".
The microlens array captures the light field, which is what Lytro uses for computational refocusing. However, capturing the light field in microscopic imaging of a translucent sample does allow you to adjust the viewing angle of the sample after the fact (to some degree), and therefore do 3D imagery at microscopic scales.
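For anyone wondering how the viewing-angle part works: each microlens images the main-lens aperture, so taking the same pixel under every microlens assembles a view through that part of the aperture. A rough numpy sketch, assuming a tidy rectangular grid layout rather than Lytro's actual raw format:

    import numpy as np

    def sub_aperture_view(raw, nu, nv, u, v):
        """Extract one viewpoint from a plenoptic sensor image.
        raw: 2D sensor image where each microlens covers an nu x nv block
        of pixels (this clean grid layout is an assumption).
        (u, v): which pixel to keep under each microlens, i.e. which part
        of the main-lens aperture to look through."""
        return raw[v::nv, u::nu]

Sweeping (u, v) across the block and stacking the resulting views gives exactly the perspective wiggle.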
Re: (Score:3)
The irony of Lytro is that people fail to realize that depth of field is inherently a function of resolving power. When Lytro destroys resolving power to create alterable depth of field in post, all they are really doing is creating a means of artificially limiting depth of field, not a means of enhancing it. With sufficiently good techniques for simulating OOF areas and image reduction, the same ability could be offered without the immense penalties and with conventional optics (except no one would wan
Re: (Score:2)
You don't understand what Lytro did. It's not about depth of field; it's about capturing the light rays (i.e. in different directions) rather than one set of pixels. One of the things you can do with that is alter depth of field, but you can also alter focus, so you're focusing nearer or further, or shifting perspective to look behind things. But since the Lytro is capturing light across a large sensor, not from two points, you can shift up/down/left/right and by variable amounts, not just flip between left
Re: (Score:2)
Oh, I knew they could extract (very limited) parallax information from the plenoptic image data, I just didn't know they had coded that into their software (they didn't have it the last time I checked, they were only doing refocusing).
Re: (Score:2)
Not true. http://www.wired.com/gadgetlab/2012/11/lytro-3d-feature/ [wired.com]
Lytro is desperate to find an application where their technology is relevant. Now they are claiming perspective shift as a feature they've "launched".
The world is really excited about a 1 MP camera these days, especially one that can wiggle the perspective a few mm in each direction or reduce the depth of field, just not both at the same time. ;)
Re: (Score:2)
I stand corrected. Last time I'd checked out their software all it could do was refocus. Once they finally support simultaneous refocusing and wiggling (which is technically possible, by limiting the amount of each)... their cameras will still be just as useless.
Not exactly new, and pretty limited (Score:4, Informative)
Having two lenses is not a requirement to capture stereoscopic images. It can be done with a single (big) lens, and two slightly different sensor locations. But you're limited by the distance between those two sensors, and a single large lens isn't necessarily cheaper or easier to use than two smaller ones.
What this system does is use the out-of-focus areas as a sort of "displaced" sensor - like moving the sensor within a small circle, still inside the projection cone of the lens - and therefore simulating two (or more) images captured at the edges of the lens.
But, unless the lens is wider than the distance between two eyes, you can't really use this to create realistic stereoscopic images at a macroscopic scale. The information is simply not there. Even if you can extract accurate depth information, that is not quite the same as 3D. A Z-buffer is not a 3D scene; it's not sufficient for functional stereoscopy.
Microscopy is a different matter. In fact, there are already several stereoscopic microscopes and endoscopes that use a single lens to capture two images (with offset sensors). Since the subject is very small, the parallax difference between the two images can be narrower than the width of the lens and still produce a good 3D effect. Scaling that up to macroscopic photography would require lenses wider than a human head.
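To put rough numbers on "the information is simply not there" (simple geometry, example values only): the aperture diameter is the largest stereo baseline a single lens can give you.

    import math

    def parallax_deg(baseline_mm, distance_mm):
        """Angular parallax for a given stereo baseline and subject distance."""
        return math.degrees(2 * math.atan(baseline_mm / 2 / distance_mm))

    print(parallax_deg(28, 2000))   # ~0.8 deg: 50mm f/1.8 (~28 mm pupil) at 2 m
    print(parallax_deg(65, 2000))   # ~1.9 deg: human eyes at the same distance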
Re: (Score:2)
No, it isn't. The only information you can get is what's carried by the light hitting the lens. That's effectively limited to parallax information between the edges of the lens (in reality, less than that, but let's pretend). In other words, as I wrote above, "unless the lens is wider than the distance between two eyes, you can't really use this to create realistic stereoscopic images at a macroscopic scale".
Re: (Score:2)
No, because the light rays coming from the Crab Nebula are all effectively parallel. This relies on light coming from the sample at various angles to the lens; essentially, the sample must be close to the optical system.
In radio astronomy, you can get some 3D information from radio sources, because radiotelescopes can measure the phase directly.
Re: (Score:2)
You could... if your lens was about the size of a galaxy. ;-)
Cool idea but... (Score:3)
I am not doubting that 3D information can be extracted from focal data, I am doubting that these guys can do it.
Non-paywalled version (Score:2)
A. Orth and K. B. Crozier, "Light field moment imaging", non-paywalled version:
From Crozier's web page: http://crozier.seas.harvard.edu/publications-1/2013/76_Orth_OL_2013.pdf
So commenters here can be a little better informed about what these guys are really doing. From a quick reading (I'm not an expert), this is different from the usual depth-from-defocus approach. It allows for some 3D information, but obviously not a lot. Still, it could be useful.
All the best.
Re: (Score:3)
Sorry, with the clicky:
non-paywalled version [harvard.edu] of the article.