
Seeing Around Corners With Dual Photography

An anonymous reader writes "This project (which is part of this year's SIGGRAPH) has absolutely blown my mind. Basically they photograph an object with the photosensor at one point, and the light projector at another, and use the Helmholtz reciprocity algorithm to virtually switch the locations of the camera and projector, showing exactly what the light source "sees"! If that doesn't make sense to you, check out the research page and make sure to watch the 60MB video at the bottom. The playing card trick will leave you speechless!"
  • by REBloomfield ( 550182 ) on Tuesday May 10, 2005 @08:14AM (#12487085)
    make sure to watch the 60MB video at the bottom. The playing card trick will leave you speechless!"

    The exploding server one has already rendered me speechless. Why in the name of god do they do it!

  • by nmg196 ( 184961 ) * on Tuesday May 10, 2005 @08:15AM (#12487089)
    ..it would be much easier.
    • Goldfinger's solution was much sexier.
      • by kevinank ( 87560 ) * on Tuesday May 10, 2005 @10:54AM (#12488500) Homepage
        Rather than dual photography I would be more inclined to describe the method as real-world ray tracing. A focused pixel of light is captured for each pixel of the light source, then the scene is transformed so that the camera image is in the plane of the light source and the lighting function discovered earlier is inverted.

        The article claims that there is no need to describe the geometry of the scene, and I understand why that is true for the structure of the subject, but it seems as though the geometry of the light and camera would still have to be known. Anything that isn't in view of the camera in the first image is unlit in the second image, and vice versa, but I don't understand how you would determine what transformation would result in that exchange without any information on the camera-light geometry in relation to the scene.
  • Wasn't there a scene in Blade Runner where he used something like that?
    • Wasn't it Arnie in Total Recall? (Disclaimer: I may be totally and utterly wrong.)
      • Can't remember Arnie using a photo like that in Total Recall but in Blade Runner, Deckard "3D analyses" a photo he confiscated from Leon's apartment to see around a corner and get a pic of Zhora, thanks to a reflection in a mirror.
    • Are you referring to the scene where he zooms into a photograph using an automated magnifier?

      From what I remember, he caught the reflection of the dancers from a mirror partially visible through the bathroom door.
  • Never! (Score:5, Funny)

    by beders ( 245558 ) on Tuesday May 10, 2005 @08:15AM (#12487093) Homepage
    make sure to watch the 60MB video at the bottom

    I find it highly unlikely that many will manage that :0

  • I think we are going to need a couple of mirrors of this file or get a torrent set up....

    I am trying, I have 26 meg of the file down now, but the speed of my download is definitely slowing.
  • Quick! (Score:2, Funny)

    by Jozer99 ( 693146 )
    Quick, shine a light into your monitor and take a picture. Then use their software to capture an image of their exploding server!
  • This page was slashdotted before the story went public. Sigh.
  • around corners? (Score:2, Interesting)

    by psyon1 ( 572136 )
    Where does seeing around corners come in?
    • Re:around corners? (Score:5, Informative)

      by Anonymous Coward on Tuesday May 10, 2005 @08:27AM (#12487186)
      Seeing around corners is really stretching it. You switch positions with the light source, so you can technically look at the scene from a point which is "around a corner". What they so casually mention as "structured lighting" is really the key to the whole algorithm and means that the light source shines a pattern on the scene which then allows the camera to retrace where every bit of light it sees is coming from. This means that the light source needs to be part of the scheme. You won't be able to switch yourself into the position of arbitrary lights on the street.
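      For the curious, a toy sketch of what "structured lighting" buys you. This is a simplified binary-stripe scheme assuming mostly direct light and hypothetical project()/capture() helpers, not the paper's actual measurement of the full light transport:

```python
import numpy as np

# Toy structured-light decode: log2(N) binary stripe patterns let each
# camera pixel recover the index of the projector column that lit it,
# instead of scanning N projector pixels one by one.
# project() and capture() are hypothetical helpers: project() drives the
# projector with a per-column on/off pattern, capture() returns a camera
# frame as a float image in [0, 1].
PROJ_COLS = 1024
CAM_SHAPE = (480, 640)
THRESHOLD = 0.5
n_patterns = int(np.log2(PROJ_COLS))          # 10 patterns instead of 1024

column = np.zeros(CAM_SHAPE, dtype=np.int64)  # per-camera-pixel column index
for bit in range(n_patterns):
    stripes = ((np.arange(PROJ_COLS) >> bit) & 1).astype(float)
    project(stripes)                           # show vertical on/off stripes
    lit = capture() > THRESHOLD                # camera pixels that saw light
    column |= lit.astype(np.int64) << bit

# column[y, x] now names the projector column whose light reached camera
# pixel (x, y) -- the correspondence that makes the viewpoint swap possible.
```

      With strong interreflections or caustics a single threshold like this breaks down, which is one reason the actual technique measures the complete light transport between projector and camera instead.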
    • Re:around corners? (Score:2, Informative)

      by indy ( 23876 )
      The parent is right. You will not be able to see things that were hidden from the camera.

      All you are going to see is the scene as if the camera and light source had switched places. Everything that was hidden from the camera in the original image will fall into black shadow regions in the generated image.
      • Re:around corners? (Score:5, Informative)

        by MankyD ( 567984 ) on Tuesday May 10, 2005 @10:03AM (#12488024) Homepage
        Half truth:

        If you watch the video, the very last demonstration is of them generating the image of a King (of hearts?) that was not directly visible to the camera. Rather, its face was reflected onto the page of an open book - much more complicated than just, say, a mirror. The card's reflection is not visible in the still image of the book and is only made possible through pixel scanning with the projector.

        In sum, they are seeing around a corner and are seeing something the camera could not see (directly).
  • rays? (Score:3, Insightful)

    by dhbiker ( 863466 ) on Tuesday May 10, 2005 @08:17AM (#12487108) Homepage
    isn't this just the same in principle as ray tracing? Or am I missing something?
    • Re:rays? (Score:4, Informative)

      by Wyzard ( 110714 ) on Tuesday May 10, 2005 @08:49AM (#12487332) Homepage

      If you mean in the sense that POV-Ray does, then no, this is very different. It's an "image-based" rendering technique, which means that you create new images using photographs and other such real-world measurements as input. Conventional ray tracing gives you pictures of models built in the computer's memory, which might approximate a real-world object.

      The important difference is that you don't have to build a computer model of the geometry you're trying to render. This is both a help because many real-world objects are hard to model accurately in a computer, and a hindrance because you can only render pictures of objects that you actually have in the real world.

  • ...is the /. effect from downloading a 60MB file.

    -Mr. Fusion
  • n/t (Score:4, Funny)

    by Dacmot ( 266348 ) on Tuesday May 10, 2005 @08:17AM (#12487116)
    The playing card trick will leave you speechless!"
    ...
  • by aug24 ( 38229 ) on Tuesday May 10, 2005 @08:18AM (#12487121) Homepage
    Clicky! [66.102.9.104]

    Anyone please mirror the movie?

    J.

  • ARTICLE CONTENTS (Score:5, Informative)

    by Anonymous Coward on Tuesday May 10, 2005 @08:21AM (#12487143)
    Dual Photography

    Abstract

    We present a novel photographic technique called dual photography, which exploits Helmholtz reciprocity to interchange the lights and cameras in a scene. With a video projector providing structured illumination, reciprocity permits us to generate pictures from the viewpoint of the projector, even though no camera was present at that location. The technique is completely image-based, requiring no knowledge of scene geometry or surface properties, and by its nature automatically includes all transport paths, including shadows, interreflections and caustics. In its simplest form, the technique can be used to take photographs without a camera; we demonstrate this by capturing a photograph using a projector and a photo-resistor. If the photo-resistor is replaced by a camera, we can produce a 4D dataset that allows for relighting with 2D incident illumination. Using an array of cameras we can produce a 6D slice of the 8D reflectance field that allows for relighting with arbitrary light fields. Since an array of cameras can operate in parallel without interference, whereas an array of light sources cannot, dual photography is fundamentally a more efficient way to capture such a 6D dataset than a system based on multiple projectors and one camera. As an example, we show how dual photography can be used to capture and relight scenes.

    (a) Conventional photograph of a scene, illuminated by a projector with all its pixels turned on. (b) After measuring the light transport between the projector and the camera using structured illumination, our technique is able to synthesize a photorealistic image from the point of view of the projector. This image has the resolution of the projector and is illuminated by a light source at the position of the camera. The technique can capture subtle illumination effects such as caustics and self-shadowing. Note, for example, how the glass bottle in the primal image (a) appears as the caustic in the dual image (b) and vice-versa. Because we have determined the complete light transport between the projector and camera, it is easy to relight the dual image using a synthetic light source (c) or a light modified by a matte captured later by the same camera (d).
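    For those who want the gist in code, here is a minimal numpy sketch of the abstract's central idea, assuming an idealised monochrome setup and hypothetical project()/capture() helpers. The paper's structured-illumination scheme is far more efficient than lighting one projector pixel at a time, but the brute-force version makes the role of Helmholtz reciprocity (the matrix transpose) obvious:

```python
import numpy as np

# Hypothetical helpers: project(p) drives the projector with the flat
# pattern p; capture() returns the camera frame as a flat float array.
PROJ_PIXELS = 64 * 64   # projector resolution (kept tiny for the sketch)
CAM_PIXELS = 48 * 48    # camera resolution

# 1. Measure the light transport matrix T (camera pixels x projector
#    pixels), one column per projector pixel.
T = np.zeros((CAM_PIXELS, PROJ_PIXELS))
for j in range(PROJ_PIXELS):
    pattern = np.zeros(PROJ_PIXELS)
    pattern[j] = 1.0            # light a single projector pixel
    project(pattern)
    T[:, j] = capture()         # the camera's response to that pixel

# 2. Primal image: what the camera sees with every projector pixel on.
primal = T @ np.ones(PROJ_PIXELS)

# 3. Dual image: Helmholtz reciprocity says transport is symmetric under
#    exchange of source and sensor, so the transpose of T describes light
#    travelling the other way. The scene appears as seen from the
#    projector, lit from the camera's position.
dual = (T.T @ np.ones(CAM_PIXELS)).reshape(64, 64)
```

    Relighting the dual image with a synthetic light source, as in panel (c), amounts to replacing np.ones(CAM_PIXELS) with the desired illumination pattern.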
  • by Sir_Real ( 179104 ) on Tuesday May 10, 2005 @08:23AM (#12487156)
    Seeing that R-ing the F-ing A is an impossibility for me right now, due to an inexcusable lack of .torrent or google cache link, I'll just post some outright fabrications about its content.

    This technology proves that there was a third gunman on the grassy knoll. This technique is like what they did in the Matrix, except "backwards." With this technology, any man can find the g-spot. When you look at the videos upside down, you can see into the past.
  • A Mirror? (Score:5, Funny)

    by Bob(TM) ( 104510 ) on Tuesday May 10, 2005 @08:24AM (#12487165)
    Doesn't it seem a little funny that we need a mirror to get a look at this movie?
  • Another application (Score:5, Interesting)

    by Technician ( 215283 ) on Tuesday May 10, 2005 @08:29AM (#12487192)
    With a video projector providing structured illumination, reciprocity permits us to generate pictures from the viewpoint of the projector, even though no camera was present at that location.

    Other than using electrons instead of light, that's how a scanning electron microscope works. An object is scanned (raster scan) and one or more sensors near the target pick up the reflections to generate an image. In the SEM the image appears as viewed from the scanning electron beam source.

    In the optical one mentioned in the article, the light source is a raster scanning projector which lights a target. The image is produced from photodiodes picking up reflected light.

    These two systems are very much alike. One uses photons and the other electrons. The end image is generated the same way.
  • by Alizarin Erythrosin ( 457981 ) on Tuesday May 10, 2005 @08:30AM (#12487202)
    Note: I haven't read the paper yet, but it is downloading.

    It seems like this might have some military applications as a result. Imagine sticking a photo-resistor array under a door or through a window and then getting "viewpoints" from any of the lights in the room. Could aid in target acquisition and elimination.

    Not sure how well it works for something like that, but this is a rather impressive (at least to me) research project.
    • by Technician ( 215283 ) on Tuesday May 10, 2005 @08:43AM (#12487299)
      It seems like this might have some military applications as a result. Imagine sticking a photo-resistor array under a door or through a window and then getting "viewpoints" from any of the lights in the room. Could aid in target acquisition and elimination.


      If you can get to the article, it mentions that the light source is a projector. The projector controls the resolution. How it works is that a raster-scanning video projector lights the objects. A photoresistor (in my opinion way too slow; a fast photodiode or a photomultiplier tube would be better) picks up the reflected light from the object scanned by the light projector.

      A simple street light or the ceiling light in the room will not modulate the light to provide an image signal on a photo sensor slid under a door. On the other hand, if they were doing a video presentation, and the presenter walked between the projector and the screen, and you had a photoresistor slid under the door, you would be able to see his arm movements.

      You would get the best image when the projector was not showing a slide, but showing a blank screen. Use a CRT projector, not an LCD. LCDs don't raster scan.
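      To put that scenario into code (a back-of-the-envelope sketch, not anything from the paper): if the detector is sampled in lockstep with a raster-scanning projector showing a blank white frame, the one-dimensional stream of brightness readings folds straight back into a two-dimensional image. The read_detector() helper below is hypothetical:

```python
import numpy as np

# One brightness sample per projector pixel, in raster-scan order.
# Assumes the detector is synchronised to the projector's scan.
ROWS, COLS = 480, 640

samples = np.empty(ROWS * COLS)
for k in range(ROWS * COLS):
    samples[k] = read_detector()   # hypothetical: light level while pixel k is lit

# Because the scan order is known, the sample stream is simply an image
# as seen from the projector's point of view.
image = samples.reshape(ROWS, COLS)
```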
      • Actually, you could theoretically get a good image if the person simply had a TV on in the room, tuned to a known channel--the bigger the TV, the better. You could synchronize your sensor to the channel and use it to normalize your light readings. This could even be done asynchronously at a later time.
        • Actually, you could theoretically get a good image if the person simply had a TV on in the room, tuned to a known channel--the bigger the TV, the better.

          And just how are you going to image such little things as facial features? Or image big items like how many people are in the room?

          To work, the light source must scan the target. If I had a light detector in the corner of a room tucked under the door, and they were showing a slideshow with a projector, and the presenter walked in front of the screen, th
      • True, but as with most research, somebody will most likely pick this up and add to it, perhaps using infrared or some other form of light not visible to humans.

        Honestly, I'm not really sure where it can lead, but it should be an interesting path as it goes along.
    • Noted.

      There may be military applications for this - however, this is not magic - you cannot stick a photo-resistor array (or camera) under a door and see behind obstacles.

      This is simply a more efficient way of gathering information about a scene. The light source used for the paper is structured, so unless the people in the room are using some pretty specialist lighting equipment you'll see nothing more than a camera would.

    • Or imagine sticking a miniaturised camera under a door or through a window and then getting a clear viewpoint of the room!
  • You can take a picture using an unstructured light source and a structured receiver (e.g. a light bulb and a camera). Or you can use a structured light source (e.g. an LCD projector) and an unstructured light sensor (e.g. a photodiode.)

    OK, the stitching together is harder in the latter case, maybe an awful lot harder, but unless I have missed something really big it is a statement of the nearly obvious. Anyone remember the scanning electron microscope? By collecting backscattered electrons, you could use on

  • Structured light. (Score:4, Informative)

    by Anonymous Coward on Tuesday May 10, 2005 @08:41AM (#12487284)
    They make the point that if you illuminate an object with a projector, you can get the image with a photocell. That's because the projector scans the image with a light beam. If you know when you see the reflection, you know where the light beam was when it reflected because you have prior knowledge of the scanning pattern. That technique has been used forever. It's like the flying spot scanners that predate camera tubes.

    The 3D part is obtained when you offset the detector and the projector. If I look at a particular point on an object and scan the object with a beam of light, I can get the distance between me and the object as a function of the scanning angle.
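    That second step is ordinary active triangulation. As a small worked example (a textbook formula, not the dual-photography algorithm itself), with the projector and detector a known baseline apart and both angles measured from the baseline:

```python
import math

def depth_from_angles(baseline_m, alpha, beta):
    """Perpendicular distance from the baseline to the lit point.

    alpha: angle of the projected beam (radians, from the baseline)
    beta:  angle at which the detector sees the bright spot (radians)
    """
    return baseline_m * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# 0.5 m baseline, beam at 60 degrees, spot seen at 70 degrees -> about 0.53 m
print(depth_from_angles(0.5, math.radians(60), math.radians(70)))
```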
    • Re:Structured light. (Score:5, Informative)

      by Technician ( 215283 ) on Tuesday May 10, 2005 @08:59AM (#12487398)
      It's like the flying spot scanners that predate camera tubes.


      Wow, you remember those?

      For those who don't know what they are, it's simply a CRT with a blank raster and a photo detector, usually a photomultiplier tube (fast, and available before photodiodes). The flying spot was simply the bright spot on the CRT. If you put movie film in front of the CRT, the brightness detected by the photodetector was modulated by the film in between. This was the standard way of showing movies on television in the early days. The flying spot scanner was built into a movie projector with a CRT for the lamp and a photomultiplier tube where the projection lens would go.

      In this example, it's a very big flying spot scanner. The light source is a projector (a raster-scanning light source), the target is a 3D object instead of movie film, and the detector is offset, so the 3D object casts shadows from the detector's point of view.

      The scanned image looks as if it were viewed from the light source, with shadows as if the light came from the photo detector.
      • by Bigman ( 12384 )
        In fact, I can't see how this is a million miles away from what Logie Baird did with a mechanical scanner, other than being more general.

        Oh, comments above have to be interpreted in the light of the fact that I can't RTFA because of /.ing - !

        Ian
  • by tonywestonuk ( 261622 ) on Tuesday May 10, 2005 @08:42AM (#12487287)
    ... a form of this technique has been done before. Take a bar code, for example. A bar code can be read in two ways:
    • {usual method} a laser scans over the barcode, and a light sensor picks up the changing intensity of light as it is either reflected or absorbed by the pattern; or
    • a camera takes a photo of the barcode in one go.

    All these people are doing is using the first barcode technique to take a picture of the scene. Instead of using a laser, an animation of a moving white dot is sent to the projector. The camera is then treated like a light sensor: for each point in the animation, the camera is queried for the brightness of (perhaps) the brightest dot in its field of view. Gradually the picture is built up, pixel by pixel, until finally a picture is formed in memory. This picture would be from the perspective of the projector.
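    A rough sketch of that loop, treating the whole camera as one big light sensor. The project() and capture() helpers are hypothetical; summing the frame collapses the camera to a single brightness reading per projected dot:

```python
import numpy as np

PROJ_ROWS, PROJ_COLS = 256, 256
dual = np.zeros((PROJ_ROWS, PROJ_COLS))

for r in range(PROJ_ROWS):
    for c in range(PROJ_COLS):
        dot = np.zeros((PROJ_ROWS, PROJ_COLS))
        dot[r, c] = 1.0
        project(dot)                   # white dot at projector pixel (r, c)
        dual[r, c] = capture().sum()   # total light the camera sees

# dual is the scene from the projector's perspective, built up dot by dot.
# (Slow: one camera frame per projector pixel, which is why the paper uses
# a more efficient structured-illumination scheme.)
```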
    • I can't RTFA, but I'm pretty sure that what you describe is not what they're doing. The remarkable claim that they make is that from images of a three-dimensional scene that are captured at a particular camera location, they can render an image that the camera would have seen from a different location (namely the location of the illuminator). Furthermore, they do this without a priori knowledge of the scene geometry. In your barcode example, you need a priori knowledge of the position of the source and t
      • It is what they are doing. First suppose that you rasterize from the projector one pixel at a time. Getting the scene geometry in this scenario is standard "off-the-shelf" computer vision. Think about the barcode example. You don't need a priori knowledge at all because you know (1) the ray along which the laser is pointing and (2) the ray along which you have seen the point. It's fairly trivial to reconstruct the geometry.

        But there are two catches: (1) when you see a point in the scene it might not be al

  • by capsteve ( 4595 ) * on Tuesday May 10, 2005 @08:58AM (#12487383) Homepage Journal
    I totally lack any scientific degrees, but this technique looks an awful lot like raytracing in reverse (or even a real-world application of algebra)... the projector is necessary to help map the way certain areas of the subject react to light based on the surface quality, and using pixel-level illumination from the projector recreates the camera... FUCKING BRILLIANT.

    this technique works because of the lcd/dlp array in a projector, but i wonder if it can be reproduced if the light source is already a pinpoint (christmas light, or very small bulb). what happens when the light source is very broad, like that of a computer monitor/TV? i wonder if this technique could also be used to extrapolate what someone is watching/reading/viewing on screen? taking another stab from a raytracing perspective, i wonder if an environment could be revealed through image analysis, aka reverse-HDRI?

    hats off to the dually photo boys of stanford and cornell... keep up the cool work.
  • Torrent (Score:5, Informative)

    by spadadot ( 879731 ) on Tuesday May 10, 2005 @09:10AM (#12487504)
    Only the first part for now:

    http://dload.digitalriviera.com/DualPhotography-part1.mp4.torrent [digitalriviera.com]

    Second part in 30 minutes!

    First torrent I host, I hope it's ok.
  • by UnknowingFool ( 672806 ) on Tuesday May 10, 2005 @09:31AM (#12487718)
    Seeing Around Corners With Dual Photography

    Was I the only one that saw that as:
    Seeing Around Corners With Dual Pornography.

    I need more coffee.

  • by dohboy ( 449807 ) on Tuesday May 10, 2005 @09:31AM (#12487720)
    Don't blame their webserver/fileserver if you can't see the movie they raved about.

    It is the laziness and irresponsibility of the Slashdot editors not to provide a BitTorrent link. I am disgusted that Slashdot raves about a site/file/mpeg and then DDoSes it so that nobody sees it. This is particularly bad when a hobbyist site is crushed.

    Mod me into oblivion, I don't care.

    • They linked to Stanford.

      Who would imagine that we could /. Stanford. This is not Podunk U!

      Oh well, I guess the Graphics department at Stanford isn't receiving any love from their IT department.
      • We didn't /. Stanford. Almost all the research groups in the CS department run their own servers and the same is true of the graphics folks. It's simply one server that's being hammered and it can't handle the capacity. Bandwidth and network latency are fine - just the server itself does not have enough processing power/memory to handle all the requests (it's probably not much better than your desktop).

        By the way, one of the guys, Levoy, is awesome. He did all that digital modelling of the statue of David
    • Yep. This is why Slashdot should cache pages along with the associated images and videos. Presto! No more Slashdot effect. And saying "But what about mirrordot?" is not valid, because people only go to mirrordot after the original site has already been crushed into oblivion. And the argument in the FAQ is total BS, too. Oh sure, Slashdot is really concerned that the site in question won't get its precious ad revenue when people are viewing the cached version.

      NEWS FLASH: The only people who will be
      • OTOH, by having people go to mirrordot after the site goes down, they can be assured that they got the maximum number of hits they can handle (and therefore the maximum amount of ad revenue they could get at the time) before people start viewing the mirrors and bypassing the ads. Flawed, I know, but it should not be ignored.
        • I guess so, but I think the whole question of giving the site the hits that it "deserves" is bogus, because all those hits are coming from Slashdot users. In other words, the site wouldn't even get those hits if not for Slashdot. So if Slashdot chooses to cache the page for its own users, how can the owner of the site complain?

          Besides, the traffic to the site will still increase, simply because the site will be getting free advertising on Slashdot. The story will fall off the Slashdot front page in a da
  • I'll throw my poor server into the flames

    http://www.whaleweb.net/mirror.html [whaleweb.net]

    2x 1.1Mbit DSL lines + PacketShaper
    *ducks behind table*
  • by marat ( 180984 ) on Tuesday May 10, 2005 @09:34AM (#12487738) Homepage
    1. The reverse transformation for any interesting case (note that no hidden places are actually revealed in their example!) will always be close to singular, which means in practice that noise (due to rasterization, finite precision, and plain measurement error) will eat any signal in the result.

    2. You need to know not only the amplitude but the *phase* of the source signal, which for light means you have to use a coherent light source and exploit interference at the receiver.

    1 + 2 = holography, so what is new?

    (Read the article, but still downloading the movie)
  • Another mirror... (Score:5, Informative)

    by Malcolm Scott ( 567157 ) on Tuesday May 10, 2005 @09:37AM (#12487768) Homepage
    Another mirror here [retrosnub.co.uk]. No guarantees as to how long it will stay up; if it pushes me close to my monthly bandwidth limit I'll kill it...
  • I haven't read TFA yet, but how involved is the maths behind that project? Is it simple trigonometry? In particular: Is it possible to build such a setup at home from consumer LCD/DLP projectors?

    Could I image my hot neighbour's bedroom and see her make out in her bed from the perspective of her bedroom's ceiling light? That would be killer ;)

  • http://graphics.stanford.edu.nyud.net:8090/papers/dual_photography/ [nyud.net]

    Come on kids, Coral Cache is the way to go. No more direct linking to servers that go down quicker than, well, you know.
  • Torrent file (Score:4, Informative)

    by Bisqwit ( 180954 ) <bisqwitNO@SPAMiki.fi> on Tuesday May 10, 2005 @09:50AM (#12487912) Homepage
  • by peter303 ( 12292 ) on Tuesday May 10, 2005 @10:09AM (#12488075)
    Several projects at SIGGRAPH last year addressed the question of what you could do with a planar array of cameras. You could consider this the natural extension of stereoscopy (two cameras) or a cost-effective approximation of real-time holography. Some of this research is motivated by the fact that commodity digital cameras and real-time digital image processing computers can be bought at low prices and assembled like RAID disk arrays or cluster computers.

    Applications of these arrays included several kinds of real-time 3D TV (without silly glasses). The Stanford group pushed "conformal imaging", that is, a cube of image planes at various depths and all viewpoints. This has the effect of looking around corners and through keyholes: if there is a path for light to get through, you can probably extract a complete image. This does involve some mathematical massaging of the multiple-camera images. Cheap graphics processing units (GPUs) from game machines can be reprogrammed to process images in real time.
  • Watching TV (Score:3, Insightful)

    by Doc Ruby ( 173196 ) on Tuesday May 10, 2005 @11:44AM (#12489023) Homepage Journal
    To analyze the projector's image quickly, they need to control the projector, sampling its pixels' images to factor out redundant pixels. Trojan-horse programs which control the projector probably won't trigger current antivirus SW. Any screen can now spy on you, if a camera can only get a glimpse of its reflected light. Combined with laser microphones [mtmi.vu.lt], you're on candid camera! Beware untrusted screensavers!
