Stanford Team Developing Super 3D Camera
Tookis writes "Most of us are happy to take 2D happy snaps with single lens digital cameras. Imagine if you had a digital camera that could perceive the distance of all objects in its field of vision more accurately than your own eyes and brain. That's exactly what a team of researchers from Stanford University is working on — and it could even be affordable for ordinary consumers."
Sounds cool (Score:1)
Re: (Score:2)
Re: (Score:1, Interesting)
Re:Sounds cool (Score:4, Funny)
1) Don't have children and/or have never tallied what you actually cost to house and maintain.
or
2) Live in a box, eat strays that you catch yourself, and don't bother with doctors or hygiene.
Re:Sounds cool (Score:4, Funny)
Say what? (Score:1)
Re: (Score:2, Interesting)
Image analysis will be more accurate, in turn improving image search engine utility, giving robots better spatial vision, allowing Big Brother to identify bombs and brunettes more accurately, etc.
Re: (Score:3, Funny)
Re: (Score:2)
Wait. (Score:3, Insightful)
Re:Wait. (Score:5, Funny)
We've already got 3D pr0n; they're called girls.
Re:Wait. (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Re:Wait. (Score:5, Funny)
First of all they are laden with pretty nasty Digital Rights Management. If you try to access one with your digits and you don't have the proper authorization, you're going to get whacked. And it's harder than you'd think to get authorization. The one I tried seemed to have been encumbered with the ForePlay(tm) DRM system. Man, you practically have to jump through hoops to get any access at all.
Also, you'd think that you pay once and it's yours forever, right? That's not how it works. It's kind of a pay-per-use situation. You've got to buy dinner, movie, etc. Then once you've spent all the cash, you have to negotiate the whole ForePlay system and then finally you get access -- maybe. These things seem to be pretty flaky, because most of the time I just got the "headache" response. What's worse is that as more time goes by, you have to spend progressively more money. And even with that expenditure, somehow you end up with less and less access.
Oh and did I mention that you're only supposed to have one at a time? That's right. Let's say your primary girl is in headache mode, you aren't supposed to be able to get access to another girl. You just have to wait until the first one comes back on line. *And* most of them are equipped with spyware that calls you up every couple of hours and says inane things like, "Whatcha doin'?"
So, like I said, they're OK, I guess. But probably they won't be that popular with most people.
Re:Wait. (Score:5, Funny)
Re: (Score:1)
Re: (Score:2)
Re: (Score:3, Funny)
Re: (Score:3, Funny)
You might think yours is GPL'd right now, but I think you're going to find later, when you start thinking about distributing copies, that the EULA is going to come up and bite you on the arse. At some point, they ALL have a clause about using other systems.
Me, I think I'm pretty lucky. Mine is expensive, but she brings me cans of beer and watches the football with me, while the dinner is being cooked and the washing machine is doing its thing. I've hacked the access system so ForePlay is minimal, but on
Re: (Score:3, Funny)
Re: (Score:1)
There are at least two posts saying that unless you're a loser, you'll have a hot babe who doesn't mind if you cheat. Well, good for you, Fabio! Now open up the windows, 'cause it stinks like bullsh** in here.
Duh. (Score:2)
Device drivers (Score:2)
Re: (Score:2, Funny)
Yeah, but whenever I go into the locker room to view that "real" porn, I get arrested.
Of course, I guess it still ends up with sex. It's just that it's then with a guy named Bubba who's sharing my cell.
Re: (Score:1)
Re: (Score:2)
What's awesome about that is you get the full depth of the scene available to you and don't even have to worry about having the other cameras in the picture... just edit them out afterward, since they'll all be at the same distance.
Re: (Score:1)
Closely related recent development from Adobe? (Score:4, Informative)
It looks like here we've got an image sensor that would allow you to use your own lens, again provided that whatever camera body it found its way into had the right adapter. They also mention that it doesn't necessarily need an objective lens, though, and that's interesting...
Re: (Score:2)
Not correcting you or anything, but I believe Adobe's innovation comes from using the Photoshop application along with the compound lens. So it's not only the adapter that will be required, but also the new Photoshop application, so that the compound image can be rendered in 3D.
But the primary difference, I believe, is that 19 objective lenses take one single image compounded in 19 s
Re: (Score:2)
Uses (Score:5, Funny)
This could revolutionize the entire practice of voyeurism! Stanford == science for the masses.
Re: (Score:1)
Astronomy photos... (Score:1)
Re: (Score:2)
Prior art (Score:2)
Lightfields (Score:5, Informative)
I wonder... (Score:2)
Re: (Score:1)
If you use a high-res 16bpp b/w digital camera, you can produce "true" HDR images by using the same technique as an early Russian photographer - simply rotate between red, green and blue filters. You now have a 48bpp colour image.
There are two problems with this:
1. 16bpp is still not enough to represent a true high dynamic range, and
2. the change in colour filters requires time, bringing home the root of all modern HDR capture problems: scenes almost never remain static!
Now I know this is a little nit-picky, but it's certainly worth the mention. Modern consumer hardware just doesn't cut it when acquiring HDR images. You need plenty of time to capture all your exposures and, in your case, colour planes, and this time is often unavailable.
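For anyone who wants to try the three-filter trick, here is a minimal sketch in Python/NumPy. The file names and the assumption of three perfectly aligned 16-bit monochrome captures are mine, purely for illustration:

import numpy as np
import imageio.v3 as iio

# Three aligned 16-bit monochrome exposures, one per colour filter.
# (Hypothetical file names; real captures would also need registration,
# since the scene has to hold still between filter changes.)
r = iio.imread("shot_red_filter.png")    # uint16 array, H x W
g = iio.imread("shot_green_filter.png")
b = iio.imread("shot_blue_filter.png")

# Stack into a 48-bits-per-pixel colour image (16 bits per channel).
colour = np.dstack([r, g, b])            # uint16 array, H x W x 3
iio.imwrite("colour_48bpp.png", colour)

The registration caveat is exactly point 2 above: any motion between the three shots shows up as colour fringing.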
Re: (Score:2)
If, in 1915, you could take superb photos of the natural world with clunky colour filters, then a pinwheel on a stepper motor should be vastly superior. The device needs only to respond 3 times faster than y
Re: (Score:2)
Re: (Score:2)
If you have a monochrome CCD image sensor and have interchangeable filters, then you can keep your images to the full resolution of the sensor, and have a much easier time sharpening the image.
Re: (Score:2)
Re: (Score:2)
The point is to actually capture the red data, the green data, and the blue data from the scene. Sure, I can take a grayscale image of a scene and then artificially tint it, but that doesn't actually tell me anything about how much red, or blue, or green there really is. The original poster isn't trying to tint the image; he's trying to capture the red, green, and blue data from it. And he recognizes that he can do this at higher resolution using three shots with a monochrome sensor and solid-color filters.
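To make the resolution point concrete, here is a toy NumPy comparison; the RGGB layout below is the standard Bayer pattern, and the random "scene" is invented for the demo:

import numpy as np

# A tiny synthetic full-colour "scene".
scene = np.random.randint(0, 65536, (4, 4, 3), dtype=np.uint16)

# Three-shot monochrome capture: every pixel records every channel.
three_shot = scene.copy()                 # full resolution in R, G and B

# Bayer (RGGB) capture: every pixel records exactly one channel,
# so the two missing channels at each site must be interpolated later.
bayer = np.zeros((4, 4), dtype=np.uint16)
bayer[0::2, 0::2] = scene[0::2, 0::2, 0]  # R sites
bayer[0::2, 1::2] = scene[0::2, 1::2, 1]  # G sites
bayer[1::2, 0::2] = scene[1::2, 0::2, 1]  # G sites
bayer[1::2, 1::2] = scene[1::2, 1::2, 2]  # B sites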
Re: (Score:2)
Research paper (Score:4, Informative)
Re: (Score:2)
They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than pixels in standard digital cameras.
...
The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip.
I thought the prevailing wisdom was that smaller pixels equaled noisier images, assuming the sensor size stayed the same. Did I miss something in TFA which explains how really small pixels somehow change this dynamic?
http://www.google.com/search?q=pixel+size+noise [google.com]
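The usual back-of-the-envelope shot-noise argument behind that search, with purely illustrative numbers (the photon flux figure is invented):

import math

# Photon shot noise: SNR ~ sqrt(N), where N is photons collected,
# and N scales with pixel area for a fixed exposure and light level.
photons_per_um2 = 1000                      # invented flux for the demo

for pitch_um in (2.8, 0.7):                 # typical pixel vs the 0.7 um in TFA
    n = photons_per_um2 * pitch_um ** 2
    print(f"{pitch_um} um pixel: N = {n:.0f}, SNR = {math.sqrt(n):.1f}")

# 2.8 um -> SNR ~ 88.5 per pixel; 0.7 um -> SNR ~ 22.1 per pixel.
# But averaging a 4x4 block of the small pixels recovers a factor of
# sqrt(16) = 4, which seems to be the trade being made: spend the
# extra pixels on depth information rather than on raw per-pixel SNR.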
Re: (Score:2)
Just imagine... (Score:4, Funny)
Re: (Score:2)
Re:Just imagine... (Score:5, Funny)
How does this compare to Apple's tech? (Score:1)
The Stanford camera uses a dense array of micro-cameras with one main objective lens for large scans, or, for macro, just the array without said lens. Apple's patent filing is for a much larger (physically), sparser array in an integral camera-display: a compound camera whose micro-cameras are interspersed with the pixels of a display.
One would expect that Apple's method could provide similar z-axis data, no?
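The underlying geometry would be ordinary two-view triangulation, and a wider baseline helps. A hedged sketch in Python (all parameter names and numbers are mine, not from either filing):

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Classic stereo triangulation: z = f * B / d. A wider baseline B
    # (e.g. cameras spread across a whole display) gives finer depth
    # resolution for the same pixel-level disparity measurement.
    return focal_px * baseline_m / disparity_px

# e.g. f = 1400 px, B = 0.25 m, d = 35 px  ->  z = 10.0 m
print(depth_from_disparity(1400, 0.25, 35))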
Ooh, bluescreen technology (Score:5, Interesting)
Also, I'm not quite sure I'm understanding this right, but would this mean the camera is NEVER out of focus? Like, you'll be able to make out every detail of my thumbprint on the corner of the lens and also see the face of the person I'm photographing and ALSO read the inscription on the wall half a mile behind them?
Man, this thing sounds really cool.
Re: (Score:3, Interesting)
They've made some progress on the manufacturing front. Last time I saw this idea posted to /. they were talking about placing a sheet of small lenses in front of a standard camera CCD at the focal point of the main camera lens.
From what I understood the last time, each small lens intercepts all the light at that focal point and splits it up on the small pixel grid behind it. So instead of just getting the intensity of the light at that point, you also capture vector information about where that light entered the main lens.
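If that description is right, the classic thing it buys you is synthetic refocusing: shift each sub-aperture view in proportion to its offset in the lenslet grid, then average. A rough NumPy sketch; the (U, V, H, W) layout and the integer-shift model are my simplifications:

import numpy as np

def refocus(views, alpha):
    # views: (U, V, H, W) array holding one sub-aperture image per
    # viewpoint in the lenslet grid; alpha selects the synthetic
    # focal plane (alpha = 0.0 leaves the original focus untouched).
    U, V, H, W = views.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

Sweeping alpha refocuses after the fact, which is presumably why an "always in focus" camera is plausible at all.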
Re: (Score:1, Funny)
Not Necessarily New (Score:1)
Re: (Score:1)
Re: (Score:3, Informative)
http://graphics.stanford.edu/papers/lfcamera/ [stanford.edu]
Re: (Score:1)
I'd be happy if _any_ part of my camera's shots were in focus...
Camera not the problem (Score:2)
Re: (Score:1)
Super 3D? (Score:1)
Re: (Score:2)
Re: (Score:2, Funny)
Re: (Score:2)
I would assume the term is used in the same sense that Super 8mm was used to denote a higher-quality image than that typically provided by Standard 8mm on similar technology. The difference came from film/image management rather than objective lens improvement. I won't bore you with the details, but if you RTFA, you'll notice that the analogy applies quite nicely.
Your simplistic analysis and comment lead me to believe that you misunderstood the reference.
Wow (Score:2)
Majority Report (Score:1)
Existing 3D technology (Score:4, Informative)
Stereographic imagery has existed since before the creation of the camera. 3D cameras have undergone several bouts of popularity. As a child, I remember my grandfather getting out his ancient 3D camera, and my father had a 3D adapter for his regular camera. 3D lenses are now available for digital SLRs [loreo.com], and if you are interested in video, you can even get a box that converts 2D TV to 3D TV in realtime [yahoo.net]. (Note: CRT TV required. That aside, I've got one, and it works much better than I expected.)
Among the advantages of the system described in the article is that it actually has depth information for everything in the image; that depth can be used for measurements or to pick out things in the image at specific depths. It can also be done with one lens, so the 3D image can be rotated while preserving the 3D effect. With conventional stereo imagery, you have to use two lenses, and if you turn the camera sideways to take the picture, you can only ever look at it sideways afterward.
In all, I think this new system sounds like a great advance, and I hope they'll license it cheaply so it can become widely used.
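To make "pick out things at specific depths" concrete, here is a hedged sketch that assumes the camera delivers an aligned per-pixel depth map (that output format is my assumption, not something stated in TFA):

import numpy as np

def select_by_depth(image, depth_m, near_m, far_m):
    # Keep only pixels whose estimated depth falls in [near_m, far_m]
    # metres; everything else is zeroed out.
    mask = (depth_m >= near_m) & (depth_m <= far_m)
    cut = np.zeros_like(image)
    cut[mask] = image[mask]
    return cut

# e.g. isolate whatever sits 2-3 m from the camera:
# subject = select_by_depth(image, depth_map, 2.0, 3.0)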
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Artsy photographers like me are all about the bokeh, which means the out-of-focus areas in a photo. We use it to draw attention to the subject, and to make a pleasingly abstract blurred background out of the dumpster or whatever we're shooting against.
We often pay big money for lenses that create pleasing bokeh.
If I can say that everything more than 3.1m away
Quantum mechanics inefficiency (Score:1)
An array of smaller cameras, put on a single 2D pixel sensor (I gather). The problem with this is that the picture to be taken is 3D, not 4D. Thus there is one extra, unnecessary dimension. This means that for a 100 MPixel (100^4) sensor, there will be about 100^3 voxels in the resulting 3D image, while it should have been 464^3 if it had been efficient.
One simple way to make it efficient is to make a short movie with an ordinary camera, whi
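The arithmetic above checks out, for what it's worth (illustrative Python only):

# A 100 MPixel sensor supplies 1e8 raw samples.
samples = 100_000_000

# Spent on a 4D light field of side n: n**4 = 1e8  ->  n = 100,
# so the derived 3D image has only 100**3 = 1e6 voxels.
n = round(samples ** 0.25)        # 100

# Spent directly on a 3D grid of side m: m**3 = 1e8  ->  m ~ 464.
m = round(samples ** (1 / 3))     # 464
print(n, m)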
Re: (Score:2)
Is it really less efficient? It's true that you have more data points than you "really need" in the sensor. But since you combine them by averaging, you would expect this to increase your signal-to-noise ratio, and give you more effective bits of precision.
In short, I think one needs to sit down and do a thorough information-theoretic analysis of this scheme, because it's not obvious (to me) that it's actually less (or more) efficient.
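The averaging claim, at least, is easy to sanity-check numerically; a tiny simulation with synthetic Gaussian noise (all numbers invented):

import numpy as np

rng = np.random.default_rng(0)
signal, sigma, k = 100.0, 10.0, 16   # k small pixels per output pixel

noisy = signal + rng.normal(0.0, sigma, size=(k, 100_000))
print(noisy[0].std())                # ~10.0: one small pixel alone
print(noisy.mean(axis=0).std())      # ~2.5: averaging k buys sqrt(k)

So the raw per-pixel noise penalty of tiny pixels is real, but averaging claws back exactly the sqrt(k) you'd expect; whether the scheme nets out ahead is the information-theoretic question the parent raises.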
Already a working product? (Score:1)
I read this article yesterday; it seems related: Skjut från höften ("Shoot from the hip") [nyteknik.se] (in Swedish, but has some pretty pictures!)
It's about a Stanford scientist, Ren Ng, who has made a camera where the focus plane can be set after the shot has been taken, using a set of microlenses just the way this article describes. Should be related, but how could a camera already be working if these guys just published?
Already released (Score:2)
A ZBUFFER? (Score:2)
Just think of all the depth of field stuff you could do in postprocessing.
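A minimal sketch of that idea, assuming the camera hands you an aligned z-buffer next to a greyscale image (the layered Gaussian blur is my stand-in for a real lens model):

import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, depth_m, focus_m, strength=2.0):
    # Blur each (coarsely quantised) depth layer in proportion to its
    # distance from the chosen focal plane, then composite the layers.
    out = np.zeros_like(image, dtype=np.float64)
    img = image.astype(np.float64)
    for layer in np.unique(np.round(depth_m)):
        mask = np.round(depth_m) == layer
        blurred = gaussian_filter(img, sigma=strength * abs(layer - focus_m))
        out[mask] = blurred[mask]
    return out

# Refocus the same shot to 1 m or 10 m in postprocessing:
# near = synthetic_dof(image, zbuf, 1.0); far = synthetic_dof(image, zbuf, 10.0)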
Narrow range of viewpoint; 2D happy snaps win (Score:2)
Old Story (Score:2)
I can't believe there was no mention of their web site in either this Slashdot posting or the article.
Watch the movie!
http://graphics.stanford.edu/papers/lfcamera/ [stanford.edu]
http://graphics.stanford.edu/papers/lfcamera/lfcamera.avi [stanford.edu]
Also from Stanford (Score:2)
Depth perception for AI/robots (Score:2)
Moreover... imagine the interface options now. Suddenly we have Minority Report.
Resolution/Accuracy (Score:1)
plenoptic camera (Score:1)
Somewhat off-topic - CMY filters instead of BRGR (Score:1)
The biggest downside I see of current cameras is that they need a lot of light for an image; if you could get by with 2 photons/colour, I would think you would end up with a much more sensitive camera, but I've only seen that in some astronomy cameras...
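On the CMY idea: each complementary filter passes roughly two-thirds of the visible band instead of one-third, which is where the sensitivity win would come from. A hedged sketch of the recovery step, assuming idealised, noise-free filters:

import numpy as np

# Idealised complementary filters, in additive-light terms:
#   C = G + B,   M = R + B,   Y = R + G
# so each CMY sample collects roughly twice the photons of an RGB one.
def cmy_to_rgb(c, m, y):
    r = (m + y - c) / 2.0
    g = (c + y - m) / 2.0
    b = (c + m - y) / 2.0
    return np.stack([r, g, b], axis=-1)

The catch is that the subtractions amplify noise in the recovered channels, which is presumably part of why CMY mosaics never displaced Bayer RGB despite the light advantage.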