Stanford Team Developing Super 3D Camera

Tookis writes "Most of us are happy to take 2D happy snaps with single-lens digital cameras. Imagine a digital camera that could perceive the distance of every object in its field of vision more accurately than your own eyes and brain can. That's exactly what a team of researchers from Stanford University is working on — and it could even be affordable for ordinary consumers."
This discussion has been archived. No new comments can be posted.
  • *imagines a 3D digital photo frame*
    • Sounds obvious. Is anyone surprised that machines can do things better, longer, and more reliably than a human body? And how exactly does a machine "perceive"?
      • Re: (Score:1, Interesting)

        by Anonymous Coward
        How many cars do you know of that still run after, say, 80 years? The human body is a far superior machine, and far less expensive. Plus, you get one for free! Shoot, you can even start your own production plant and create your own, with a little female assistance of course.
        • by gnick ( 1211984 ) on Wednesday March 19, 2008 @11:25PM (#22803544) Homepage

          The human body is a far superior machine, and far less expensive.
          A human less expensive than a car? You obviously either:
          1) Don't have children and/or have never tallied what you actually cost to house and maintain.
          or
          2) Live in a box, eat strays that you catch yourself, and don't bother with doctors or hygiene.
        • by daem0n1x ( 748565 ) on Thursday March 20, 2008 @05:41AM (#22804828)
          Humans are cheap (and fun) to manufacture but the maintenance fee is a nightmare.
          • Humans are cheap (and fun) to manufacture
            Well only the beginning of the manufacturing process is fun. The end of the manufacturing process most certainly isn't fun for the woman, and no part of the process is cheap. Sailing a yacht across a private lake doesn't cost anything, by those metrics.
    • Re: (Score:2, Interesting)

      by baffled ( 1034554 )
      Imagine how robust image editing will be. Instead of contrast-based edge detection, you'll have 3D-surface-based object detection.

      Image analysis will be more accurate, in turn improving image search engine utility, giving robots better spatial vision, allowing big brother to identify bombs and brunettes more accurately, and so on.
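
      Something along these lines, maybe (a toy numpy sketch; the depth map, function name and the 10 cm jump threshold are all invented for illustration):

        import numpy as np

        def depth_edges(depth, jump=0.10):
            """Mark pixels where the depth map jumps by more than `jump` metres.
            Unlike contrast-based edge detection, this finds object boundaries
            even when foreground and background happen to have similar colors."""
            dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
            dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
            return (dy > jump) | (dx > jump)

        # Toy scene: a near object (1 m) in front of a far wall (5 m).
        depth = np.full((4, 4), 5.0)
        depth[1:3, 1:3] = 1.0
        print(depth_edges(depth).astype(int))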
    • by SeaFox ( 739806 )
      Holographic Diorama
  • Wait. (Score:3, Insightful)

    by More_Cowbell ( 957742 ) * on Wednesday March 19, 2008 @10:19PM (#22803232) Journal
    This story has been up for over four minutes and no comments about revolutionizing the pr0n industry?
    • Re:Wait. (Score:5, Funny)

      by edwardpickman ( 965122 ) on Wednesday March 19, 2008 @10:23PM (#22803264)
      This story has been up for over four minutes and no comments about revolutionizing the pr0n industry?

      We've already got 3D pr0n, they're called girls.

      • Re:Wait. (Score:5, Funny)

        by More_Cowbell ( 957742 ) * on Wednesday March 19, 2008 @10:35PM (#22803328) Journal

        We've already got 3D pr0n, they're called girls.
        Wait... are we still on Slashdot?
        • of course, but he means one of those creatures that modeled for the vaginal orifice on our fleshlights
        • We've already got 3D pr0n, they're called girls.
          Wait... are we still on Slashdot?
          For now, but we're all headed to the strip clubs... bring singles!
      • Re: (Score:2, Funny)

        We've already got 3D pr0n, they're called girls.

        Yeah, but whenever I go into the locker room to view that "real" porn I get arrested.

        Of course, I guess it still ends up with sex. It's just that it's then with a guy named Bubba who's sharing my cell. :(
    • That's the first place my brain went lol. You snap a pic and analyze the particular shapes and stuff later on your comp. Actually that sounds illegal rofl. But this wouldn't work for like fly-around 3D models of stuff. It's like sending out a virtual sheet in one direction and as soon as it hits something, it's done. So you'll only ever catch the front side of an object and any object partially covering a second object would kinda ruin it. As soon as you pan like 5 degrees in one direction, the 2nd ob
      • Actually it would work for the 3D fly around models... you'd just need one camera for every 5 degrees (your number not mine)... so 72 cameras on a track, all taking a picture at the same time or if it's a static object, just put it on a turntable and do it at your leisure with one camera.

        What's awesome about that is you get the full depth of the scene available to you and don't even have to worry about having the other cameras in the picture... just edit them out after since they'll all be at the same dista
    • Knowing the exact 3D distance to the target isn't that useful. It really does nothing to change the basic method of staying in motion until you hit the warm wet spot.
  • by neocrono ( 619254 ) on Wednesday March 19, 2008 @10:21PM (#22803242)
    This sounds like sort of a flip of what Adobe announced recently with their "compound eye" camera lens [audioblog.fr]. The benefit with that, I suppose, is that you'd be able to use your existing camera body provided the lens had the right adapter.

    It looks like here we've got an image sensor that would allow you to use your own lens, again provided that whatever camera body it found its way into had the right adapter. They also mention that it doesn't necessarily need an objective lens, though, and that's interesting...
    • The benefit with that, I suppose, is that you'd be able to use your existing camera body provided the lens had the right adapter.

      Not correcting you or anything, but I believe Adobe's innovation comes from using the Photoshop application along with the compound lens. So it's not only the adapter that would be required, but also the new Photoshop application, so that the compound image can be rendered as 3D.

      But the primary difference, I believe, is that 19 objective lenses taking one single image compounded in 19 s
      • The two methods are actually remarkably similar; it's just that one handles it in the lens and the other on the sensor. Both have advantages... with the former you don't need a new sensor, meaning you can use it on any camera body that accepts the lens, including any converter kits for fixed-lens cameras. The latter means you can use -any- lens you'd like (from long focal lengths through to fisheye lenses, although the result will be somewhat odd in the latter case) as long as it fits on the camera body that has the
  • Uses (Score:5, Funny)

    by explosivejared ( 1186049 ) <hagan.jaredNO@SPAMgmail.com> on Wednesday March 19, 2008 @10:22PM (#22803250)
    But there are a number of other possibilities for a depth-information camera: biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings...

    ... that cute girl next door, the cute girl that works across the street, the cute girl walking down the street.

    This could completely revolutionize the practice of voyeurism! Stanford == science for the masses.
    • Perhaps you could model all three so they can all make out with each other... while you go post on Slashdot.
  • ... are going to be a bitch to store!
    • A database of objects with nothing more than xarc, yarc, distance, color, brightness? Sounds a lot smaller than actual image data.
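
      For the sake of argument, a packed record like that could be quite small (the field choices here are pure guesswork):

        import struct

        # Hypothetical record for one surface point / object sample:
        # angular x, angular y, distance in metres (float32 each),
        # plus R, G, B and brightness (one byte each).
        POINT = struct.Struct("<fff4B")
        print(POINT.size, "bytes per record")            # 16 bytes per record

        # One sample: 0.10 rad right, 0.05 rad up, 3.2 m away, a dull red.
        blob = POINT.pack(0.10, 0.05, 3.2, 180, 40, 40, 200)
        print(len(blob))                                 # 16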
  • The insects are calling.

  • Lightfields (Score:5, Informative)

    by ka9dgx ( 72702 ) on Wednesday March 19, 2008 @10:58PM (#22803436) Homepage Journal
    The work they've been doing on lightfields is pretty innovative. I first heard about this when Robert Scoble interviewed [podtech.net] Marc Levoy [stanford.edu] and got some cool demos into the video. I've done some lightfield experiments [flickr.com] with my trusty Nikon D40, it's interesting to see what new ideas [flickr.com] you can come up with for using a camera once you get into it.
    • If you use a high-res 16bpp b/w digital camera, you can produce "true" HDR images by using the same technique as an early Russian photographer - simply rotate between red, green and blue filters. You now have a 48bpp colour image. If you now apply the 3D techniques, you would get a far more realistic 3D image (as you have far better data to work with).
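
      In numpy terms it's roughly the Prokudin-Gorskii trick (assuming three aligned 16-bit exposures shot through red, green and blue filters; the random frames below just stand in for real shots):

        import numpy as np

        def combine_trichrome(r, g, b):
            """Stack three 16-bit monochrome exposures (taken through R, G and B
            filters) into a single 48-bit-per-pixel colour image."""
            assert r.shape == g.shape == b.shape
            return np.stack([r, g, b], axis=-1)   # uint16 x 3 channels = 48bpp

        h, w = 480, 640
        r = np.random.randint(0, 2**16, (h, w), dtype=np.uint16)
        g = np.random.randint(0, 2**16, (h, w), dtype=np.uint16)
        b = np.random.randint(0, 2**16, (h, w), dtype=np.uint16)
        color = combine_trichrome(r, g, b)
        print(color.shape, color.dtype)           # (480, 640, 3) uint16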
      • If you use a high-res 16bpp b/w digital camera, you can produce "true" HDR images by using the same technique as an early Russian photographer - simply rotate between red, green and blue filters. You now have a 48bpp colour image.

        There are two problems with this:
        1. 16bpp is still not enough to represent a true high dynamic range, and
        2. the change in colour filters requires time, bringing home the root of all modern HDR capture problems: scenes almost never remain static!

        Now I know this is a little nit-picky, but it's certainly worth the mention. Modern consumer hardware just doesn't cut it when acquiring HDR images. You need plenty of time to capture all your exposures and, in your case, colour planes, and this time is ofte

        • by jd ( 1658 )
          16bpp is for a single pre-processed image, so it only gives you one colour plane, as you are filtering out the unwanted planes. This means you actually have 48bpp for the post-processed image; 16bpp is merely the capability of a typical device, so we work around that by only capturing part of the range at a time.

          If, in 1915, you could take superb photos of the natural world with clunky colour filters, then a pinwheel on a stepper motor should be vastly superior. The device needs only to respond 3 times faster than y

      • Is there any point in changing filters? Modern DSLRs (e.g. a Nikon D80) have options for simulating different coloured filters in B&W mode, I'm sure you could do the same thing in post processing on a computer with a single 16bpp B&W image.
        • by mikael ( 484 )
          Modern DSLRs have a monochrome CCD image sensor. But there is a color filter array [wikipedia.org] above this which converts each group of 2x2 elements into a GRBG (Bayer) pattern. You lose half the full resolution that way. You also get color bleed from adjacent elements, which can be difficult to correct.

          If you have a monochrome CCD image sensor and interchangeable filters, then you can keep your images at the full resolution of the sensor, and have a much easier time sharpening the image.
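
          A quick illustration of the resolution cost (assuming a GRBG layout; a real camera would demosaic rather than just slice, so this is only the gist):

            import numpy as np

            def split_bayer_grbg(raw):
                """Slice a raw GRBG mosaic into its colour planes. Red and blue
                each get only a quarter of the sensor's pixels (green gets half),
                which is what you give up versus a true monochrome sensor."""
                g1 = raw[0::2, 0::2]   # green, even rows
                r  = raw[0::2, 1::2]   # red
                b  = raw[1::2, 0::2]   # blue
                g2 = raw[1::2, 1::2]   # green, odd rows
                return r, (g1, g2), b

            raw = np.arange(16, dtype=np.uint16).reshape(4, 4)   # stand-in for sensor data
            r, greens, b = split_bayer_grbg(raw)
            print(r.shape, b.shape)    # (2, 2) (2, 2) from a 4x4 mosaic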
          • Yes I'm aware of that, and I see why you would want to use a camera with a monochrome CCD (or CMOS) sensor. I was just wondering whether there is any reason to use coloured filters when you could artificially colour the images in post-processing?
            • The point is to actually capture the red data, the green data, and the blue data from the scene. Sure, I can take a grayscale image of a scene and then artificially tint it, but that doesn't actually tell me anything about how much red, or blue, or green there really is. The original poster isn't trying to tint the image; he's trying to capture the red, green, and blue data from it. And he recognizes that he can do this at higher resolution using three shots with a monochrome sensor and solid-color filte

              • by jd ( 1658 )
                Exactly. Although I now see I could do the same with a prism that split out the red, green and blue, using three different cameras. However, this would seem to be a more expensive option and the prism must absorb some of the light energy. Nonetheless, the prism method seems to be the popular method for very high-end photography, to judge from the companies selling the components. Thoughts on how the two methods would compare in practice would be appreciated.
  • Research paper (Score:4, Informative)

    by FleaPlus ( 6935 ) on Wednesday March 19, 2008 @11:21PM (#22803524) Journal
    For anyone interested in more than the press release, here's a link to their paper [stanford.edu], "A 3MPixel Multi-Aperture Image Sensor with 0.7µm Pixels in 0.11µm CMOS."
    • They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than pixels in standard digital cameras.
      ...
      The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip.

      I thought the prevailing wisdom was that smaller pixels equaled noisier images, assuming the sensor size stayed the same. Did I miss something in TFA which explains how really small pixels somehow change this dynamic?

      http://www.google.com/search?q=pixel+size+noise [google.com]

      • They did address this... the pixels are organized into 256-pixel arrays, each the same color, which sit behind their own lens (or are focused at the same point within the greater lensed image lightfield, depending on how you want to set it up). This means there won't be what they called 'cross-talk' between pixels/sensors for different colors/wavelengths, resulting in less noise.
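
        A toy demonstration of why binning 256 same-colour pixels helps with noise (numbers invented; real sensors have other noise sources, so take the sqrt(256) figure loosely):

          import numpy as np

          rng = np.random.default_rng(0)

          # Pretend sensor: a flat grey scene plus read noise, on a very fine pixel pitch.
          signal, read_noise = 100.0, 10.0
          fine = signal + rng.normal(0.0, read_noise, size=(1024, 1024))

          # Bin each 16x16 block (256 tiny pixels sharing one lenslet) into one output value.
          binned = fine.reshape(64, 16, 64, 16).mean(axis=(1, 3))

          print(fine.std())    # ~10
          print(binned.std())  # ~10 / sqrt(256) = ~0.6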

  • by madbawa ( 929673 ) on Wednesday March 19, 2008 @11:24PM (#22803540) Journal
    ....Goatse in 3D!!!! Yay!!
  • Also note Apple's tech [slashdot.org] discussed here a couple of years ago.

    The Stanford camera uses a dense array of micro-cameras with one main objective lens for large scans, or, for macro, just the array without said lens. Apple's patent filing is for a much larger (physically), sparser array in an integral camera-display: a display and compound camera made of micro-cameras interspersed with the display's pixels.

    One would expect that Apple's method could provide similar z-axis data, no?

  • by ZombieRoboNinja ( 905329 ) on Wednesday March 19, 2008 @11:44PM (#22803628)
    That doesn't even require a blue screen! Just tell it to cancel out everything > 5 feet away and you're set. That'll be fun for webcam stuff.

    Also, I'm not quite sure I'm understanding this right, but would this mean the camera is NEVER out of focus? Like, you'll be able to make out every detail of my thumbprint on the corner of the lens and also see the face of the person I'm photographing and ALSO read the inscription on the wall half a mile behind them?

    Man, this thing sounds really cool.
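
    Something like this, presumably, once you have a per-pixel depth map (array names and the five-foot cutoff are just for illustration):

      import numpy as np

      def depth_key(frame, depth, cutoff_m=1.52):
          """Virtual green screen: keep pixels closer than `cutoff_m` metres
          (about 5 feet) and blank out everything behind them."""
          keyed = frame.copy()
          keyed[depth > cutoff_m] = 0      # or composite in any background you like
          return keyed

      # Toy webcam frame and depth map.
      frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
      depth = np.random.uniform(0.5, 5.0, (480, 640))
      print(depth_key(frame, depth).shape)   # (480, 640, 3)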
    • Re: (Score:3, Interesting)

      They've made some progress on the manufacturing front. Last time I saw this idea posted to /. they were talking about placing a sheet of small lenses in front of a standard camera CCD at the focal point of the main camera lens.

      From what I understood the last time, each small lens intercepts all the light at that focal point and splits it up on the small pixel grid behind it. So instead of just getting the intensity of the light at that point you also capture vector information about where that light entere

    • Re: (Score:1, Funny)

      by Keyboarder ( 965386 )
      Wait! Blue screen technology with no blue screen? You mean Linux?
  • Before Adobe announced their camera, I did some research. There are existing patents that cover using multiple lenses on various types of surfaces (like insect compound eyes), allowing the very same thing: software that can capture 3D images. Using multiple sensors like this is a way to capture light as a vector and not just as a pixel of intensity.
    • Re: (Score:3, Informative)

      by kilraid ( 645166 )
      There are pictures shot with the Stanford prototype, and they date back to 2005! Oh and be gentle with the 74 MB video...

      http://graphics.stanford.edu/papers/lfcamera/ [stanford.edu]

      • by Ant P. ( 974313 )
        Yep, I remember those from the last time this exact same story was posted on /. over a year ago.

        I'd be happy if _any_ part of my camera's shots were in focus...
    • Cameras that take 3D images have been around since the beginning of photography. The real impediment to 3D images is not the camera but the display. Most require glasses of some kind for viewing, and for most people that extra inconvenience does not offset the benefits of 3D.
      • Stereo != 3D. These cameras would actually produce 3D data; with stereo, you have to do really complicated pattern matching to try to produce depth values. Most humans can do it instinctively. Except me, born with squiffy eyes and practically no depth perception.
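
        For contrast, here's the sort of crude block matching a plain stereo pair needs before you get any depth at all, which a sensor that measures depth directly would sidestep (toy code: tiny window, integer disparities, no refinement):

          import numpy as np

          def disparity(left, right, max_d=16, win=3):
              """Brute-force block matching: for each left-image pixel, find the
              horizontal shift into the right image with the smallest sum of
              absolute differences. Depth is then roughly baseline * focal / disparity."""
              h, w = left.shape
              half = win // 2
              disp = np.zeros((h, w), dtype=np.int32)
              for y in range(half, h - half):
                  for x in range(half + max_d, w - half):
                      patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
                      costs = [np.abs(patch - right[y - half:y + half + 1,
                                                    x - d - half:x - d + half + 1].astype(np.int32)).sum()
                               for d in range(max_d)]
                      disp[y, x] = int(np.argmin(costs))
              return disp

          # Sanity check: a "right eye" that is just the left view shifted 4 pixels.
          left = np.random.randint(0, 256, (32, 64)).astype(np.uint8)
          right = np.roll(left, -4, axis=1)
          print(disparity(left, right)[16, 40])   # -> 4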
  • What the hell is "Super 3D"? You take a picture, with some data about the 3D structure. That sounds like 3D. Super 3D would be, perhaps, 4D. Or maybe something that doesn't exactly give 4D data, but gives the impression of it. This isn't that -- this is 3D. Are all scientific journalists retards, or does Slashdot just pick the biggest ones?
    • by tsa ( 15680 )
      No, super 3D is just 3D but then very much so.
    • I would assume the term is used in the same sense that Super 8mm was used to denote a higher-quality image than that typically provided by Standard 8mm on similar technology. The difference came from film/image management rather than objective-lens improvement. I won't bore you with the details, but if you RTFA, you'll notice that the analogy applies quite nicely.

      Your simplistic analysis and comment lead me to believe that you misunderstood the reference.

  • by tsa ( 15680 )
    I read TFA and WOW, it's such a very simple idea, easy to make, and has such enormously cool implications! Simple ideas are often the best!
  • You know, we're coming even closer to Minority Report tech with this. Presumably you could shoot videos with this stuff and you could get that cool projection tech that makes hologram-type videos. Plus you can modify Wii controllers to make those awesome multi-touch screens. With a bit of money, we could even clone Tom Cruise and have him fight crime with jetpacks and sonic blasters. I'm telling you, 2054 man, just a few decades away...
  • by TheMCP ( 121589 ) on Thursday March 20, 2008 @01:14AM (#22803996) Homepage
    Before everyone gets excited over 3D porn, I think we should consider existing 3D technology, and how this differs.

    Stereographic imagery has existed since before the creation of the camera. 3D cameras have undergone several bouts of popularity. As a child, I remember my grandfather getting out his ancient 3D camera, and my father had a 3D adapter for his regular camera. 3D lenses are now available for digital SLRs [loreo.com], and if you are interested in video, you can even get a box that converts 2D TV to 3D TV in realtime [yahoo.net]. (Note: CRT TV required. That aside, I've got one, and it works much better than I expected.)

    Among the advantages of the system described in the article is that it actually has depth information for everything in the image; using that, you can take measurements or pick out things in the image at specific depths. It also can be done with one lens, so the 3D image can be rotated while preserving the 3D effect. With conventional stereo imagery, you have to use two lenses, and if you turn the camera sideways to take the picture, you can only ever look at it sideways afterward.

    In all, I think this new system sounds like a great advance, and I hope they'll license it cheaply so it can become widely used.
    • by ch-chuck ( 9622 )
      Anyone can take 3D photos of relatively static scenes with one camera and some special software [photoalb.com] that'll display nicely with LCD shutter glasses. Just take one picture, move the camera over a few inches, then take another picture. Some 3D photos I've seen have a car on a distant road in one eye that is not there in the other eye, so you know there was some time elapsed between shots.

    • The biggest benefit described, from a manufacturing POV, is that it's all on the sensor chip... meaning that you can get greater fidelity with a less accurate lens, which lowers the cost considerably. The lens is often the most expensive part of a camera, and chip manufacturing can certainly become much more efficient, especially with the process they describe where sensors overlap each other, so that even if a pixel is DOA there will be no loss in overall quality.

    • If it works as advertised, then in addition to my photos always being in focus, I can selectively de-focus parts of them in the laboratory later.

      Artsy photographers like me are all about the bokeh, which means the out-of-focus areas in a photo. We use it to draw attention to the subject, and to make a pleasingly abstract blurred background out of the dumpster or whatever we're shooting against.

      We often pay big money for lenses that create pleasing bokeh.

      If I can say that everything more than 3.1m away
  • The camera is in practice a 4D sensor, organised as an array of arrays: an array of smaller cameras put on a single 2D pixel sensor (I gather).

    The problem with this is that the picture to be taken is 3D, not 4D, so there is one extra, unnecessary dimension. This means that for a 100 MPixel (100^4) sensor, there will be about 100^3 voxels in the resulting 3D image, while it should have been 464^3 if it had been efficient.

    One simple way to make it efficient is to make a short movie with an ordinary camera, whi
    • Is it really less efficient? It's true that you have more data points than you "really need" in the sensor. But since you combine them by averaging, you would expect this to increase your signal-to-noise ratio, and give you more effective bits of precision.

      In short, I think one needs to sit down and do a thorough information-theoretic analysis of this scheme, because it's not obvious (to me) that it's actually less (or more) efficient.
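
      Running the numbers from the two posts above (a quick sanity check, not the thorough analysis you're asking for):

        import math

        samples = 100**4                       # a "100 MPixel" sensor = 1e8 raw samples
        voxels_4d = 100**3                     # what the 4D lightfield layout yields, per the parent
        ideal_side = round(samples ** (1/3))   # ~464: the cube you could fill if every sample became a voxel

        redundancy = samples // voxels_4d      # ~100 raw samples land on each output voxel
        snr_gain = math.sqrt(redundancy)       # averaging N samples improves SNR by sqrt(N), so ~10x

        print(samples, voxels_4d, ideal_side, redundancy, snr_gain)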

  • I read this article yesterday; it seems related: Skjut från höften ("Shoot from the hip") [nyteknik.se] (in Swedish, but it has some pretty pictures!)

    It's about a Stanford scientist, Ren Ng, who has made a camera where the focal plane can be set after the shot has been taken, using a set of microlenses just the way this article describes. It should be related, but how could a camera already be working if these guys have only just published?

  • This is old news; a high-technology firm has already released one of these stereo cameras. [techfever.net]
  • A Zbuffer for digital cameras? Yas pleez!

    Just think of all the depth of field stuff you could do in postprocessing.
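
    A crude sketch of that, assuming you get a Z buffer alongside the image (it just blends toward one pre-blurred copy rather than doing a proper per-depth convolution, and it leans on scipy for the blur):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def fake_dof(image, depth, focus_m, max_sigma=6.0):
          """Synthetic depth of field from a Z buffer: pixels near `focus_m`
          stay sharp, pixels far from it are blended toward a blurred copy."""
          blurred = np.stack([gaussian_filter(image[..., c], sigma=max_sigma)
                              for c in range(image.shape[-1])], axis=-1)
          # Weight is 0 at the focal plane and ramps to 1 a couple of metres away.
          weight = np.clip(np.abs(depth - focus_m) / 2.0, 0.0, 1.0)[..., None]
          return (1 - weight) * image + weight * blurred

      image = np.random.rand(240, 320, 3)               # stand-in photo
      depth = np.random.uniform(1.0, 8.0, (240, 320))   # stand-in Z buffer, in metres
      print(fake_dof(image, depth, focus_m=2.5).shape)  # (240, 320, 3)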
  • This seems like interesting and cool technology. But I'm not sure exactly how far it takes us, because if the total distance between the most extreme lenses in the array is only a few inches, it's not as if you could reconstruct the full scene and synthesize views from any viewpoint: the background objects are still concealed behind the foreground objects and the lenses don't have much capability to look "around" them, which in turn means that the finished product will still have to be "viewed" from a very
  • Cool, but old story.
    I can't believe there was no mention of their web site on either this Slashdot posting or the article.

    Watch the movie!

    http://graphics.stanford.edu/papers/lfcamera/ [stanford.edu]
    http://graphics.stanford.edu/papers/lfcamera/lfcamera.avi [stanford.edu]
  • Light Field Photography with a Hand-Held Plenoptic Camera [stanford.edu]. A regular camera with a special lens that emulates the "thousands of tiny lens" from the thing in the article. Includes pictures and a video of how the focus of images taken with the camera can be adjusted as a post processing step.
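
    The refocusing step itself is surprisingly small once the lightfield has been decoded into a grid of sub-aperture views; something like this shift-and-add sketch (integer-pixel shifts only, whereas real implementations interpolate):

      import numpy as np

      def refocus(lightfield, alpha):
          """Shift-and-add refocusing of a 4D lightfield L[u, v, y, x]:
          shift each sub-aperture view in proportion to its (u, v) position,
          then average. Varying `alpha` moves the synthetic focal plane."""
          U, V, H, W = lightfield.shape
          out = np.zeros((H, W))
          for u in range(U):
              for v in range(V):
                  dy = int(round(alpha * (u - U // 2)))
                  dx = int(round(alpha * (v - V // 2)))
                  out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
          return out / (U * V)

      # Toy 9x9 grid of 64x64 views, e.g. decoded from behind the microlenses.
      lf = np.random.rand(9, 9, 64, 64)
      print(refocus(lf, alpha=1.5).shape)   # (64, 64)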
  • Finally there is a method to give depth perception to AI and robots without a million lines of code being written for mediocre results. They will know where things are in relation to each other and to themselves via real-time lightfield info... and once they map it out internally, they won't even need to look at things to know where they are; they can just triangulate based on last known location and whatever is in their field of vision.

    Moreover... imagine the interface options now. Suddenly we have Minority Rep
  • Anyone know what the Z data resolution and accuracy is? What about the size of the head? I'm trying to get the topography of the inside of some very small objects and I'd love to use something like this.
  • Dunno, seems like some earlier research on a similar idea might be more useful: the plenoptic camera. Just one big CCD and a lot of math. CCDs are getting bigger all the time, and most people don't really need all the pixels they have already, so sure, why not use some of those pixels for depth? But sticking with one CCD will probably be cheaper. http://graphics.stanford.edu/papers/lfcamera/ [stanford.edu]
  • Why aren't more cameras using CMY filters instead of BRGR filters?

    The biggest downside I see of current cameras is that they need a lot of light for an image; if you can get 2 photons/color I would think that you would end up with a much more sensitive camera, but I've only seen that in some astronomy cameras...
