Capturing 3D Surfaces Simply With a Flash Camera 131

MojoKid writes with this excerpt from Hot Hardware (linking to a video demonstration): "Creating 3D maps and worlds can be extremely labor intensive and time consuming. Also, the final result might not be all that accurate or realistic. A new technique developed by scientists at The University of Manchester's School of Computer Science and Dolby Canada, however, might make capturing depth and textures for 3D surfaces as simple as shooting two pictures with a digital camera — one with flash and one without. First an image of a surface is captured without flash. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine if the brightness difference is a function of depth or color. By taking a second photo with flash, however, the accurate colors of all visible portions of the surface can be captured. The two captured images essentially become a reflectance map (albedo) and a depth map (height field)."
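A minimal sketch of that two-exposure idea in Python/NumPy (illustrative only: the synthetic arrays and the simple ratio model are assumptions for demonstration, not the paper's actual algorithm):

    import numpy as np

    # Illustrative inputs: two grayscale exposures in [0, 1] from a fixed
    # camera. In practice these would be loaded from the two photographs.
    ambient = np.random.rand(480, 640)                    # no-flash shot
    flash = np.clip(ambient + 0.5 * np.random.rand(480, 640), 0.0, 1.0)  # flash shot

    # The flash-lit shot approximates the reflectance (albedo) map,
    # since the flash evens out the ambient shading.
    albedo = flash

    # Dividing the no-flash shot by the albedo removes the color term,
    # leaving shading that correlates with depth (a crude height field).
    shading = ambient / np.maximum(albedo, 1e-4)
    height = 1.0 - np.clip(shading, 0.0, 1.0)             # brighter = shallower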
  • Amateurs. (Score:5, Funny)

    by bigtallmofo ( 695287 ) * on Wednesday August 27, 2008 @01:04PM (#24768391)
    Creating 3D maps and worlds can be extremely labor intensive and time consuming.

    Bah! I completed my last project in exactly 6 days and used nothing but voice commands. It turned out so well I sat on my couch and ate Cheetos the entire next day. Today, there are over 6 billion users and we're only now starting to run into scalability issues.

    -God


  • by jeffb (2.718) ( 1189693 ) on Wednesday August 27, 2008 @01:05PM (#24768409)
    ...all sorts of problems become simple. I'd love to take a picture with some mirrors, some windows, maybe a reflective sign or two in the background, and see the funhouse effects that result. Oh, and don't forget emissive elements (lamps), which will appear to recede to infinity.
    • Re: (Score:3, Insightful)

      by Squapper ( 787068 )
      Yeah, this only seems to work with Lambertian surfaces in flat-lit environments.

      That's not the biggest problem, though. I am a 3D artist, and it's a pain to try to make a tiling texture map out of a picture containing more than three channels, due to stupid limitations in all 2D applications.
      It's often more efficient to first make the color texture tile, then create a heightmap from that data. I guess that's why they are targeting scientific applications such as archaeology, which require more accuracy, a
      • Try using a compositing program. Something like Nuke will let you paint on all the layers using an OpenEXR file. It's kind of a cheat, but it can be done.
      • Flat lighting is still pretty easy to come by. Some call it shadow; photographers call it skylight. Beyond that, it's pretty easy to buy some diffusers and lights, if not cheap in this day and age. To me this looks generally applicable to any pseudo-surface flat enough that inverse-square falloff on the flash is negligible. The low equipment cost on this is the key; now if only we could get them to cough up source code, it would make for some kick-butt amateur game development tech.
  • Quite old news (Score:5, Informative)

    by gardyloo ( 512791 ) on Wednesday August 27, 2008 @01:05PM (#24768411)

    Slashdot (can't be bothered to find it) had a story several years ago about the (then old!) technique of capturing complicated 3D objects, such as car engines, by using two flash images, with the flash in a slightly different location for each. Thresholding the difference between the images gives very nice edge detection, along with very accurate depth information.

    A project I'm working on uses the technique to capture information about arrowheads/spearheads.
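    A toy version of that multi-flash idea in Python/NumPy (the function name and threshold are illustrative assumptions, not the original technique's code):

        import numpy as np

        def depth_edges(left_flash, right_flash, thresh=0.15):
            """Crude depth-edge detector from two shots lit by flashes at
            slightly different positions: shadows flip sides at depth
            discontinuities, so thresholding the difference between the
            images highlights silhouette (depth) edges."""
            diff = np.abs(left_flash.astype(float) - right_flash.astype(float))
            return diff > thresh * diff.max()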

    • Re:Quite old news (Score:5, Informative)

      by jellomizer ( 103300 ) on Wednesday August 27, 2008 @01:12PM (#24768495)

      But this time the camera stays fixed, and one shot is taken without flash and the other with it. That allows 3D cameras to be made on the cheap with just a firmware upgrade (one click of the camera takes two shots, one without flash and the next with). Your way is different, as it requires the camera to have two flashes, and thus requires new cameras to be made.

      • You're right -- my way requires two flashes (it really doesn't, but we found it slightly more effective that way). The old slashdot article which I mention (but don't reference) also talked about only needing one camera. I think that it said that Chilton's Repair Manuals was using both techniques to produce their series of DVDs. Of course, I could be really wrong!

      • by Firehed ( 942385 )

        That won't work nearly as well unless you know the location and intensity of the ambient lighting sources for the non-flash image. In theory, you could make a fairly simple system that has two strobes (one on either side of the lens) that are powerful enough to overcome ambient, and use that slight difference to map out the texture (though for maximum effect, you'd really want them offset from the lens by 45 degrees or so). The advantage to that approach is that you could fire off two shots in such quick succession

      • Hacking something together from cheap hotshoe-to-PC adapters and cords and some switches is still not exactly rocket science, and good flashes are actually pretty cheap to come by.
        • No, but releasing a product for consumption would be more of an issue. If I wanted to market a 3D camera, and I already make a camera, and I can just change the firmware and keep all the mass-production techniques the same, I will do that rather than redesign a camera with two flashes, making it bigger, more battery-hungry, and more expensive, just for a feature that someone would use only every so often. A firmware upgrade could quickly and easily add an extra sales bullet without much extra cost.

      • Or the revolutionary invention of a flash on an extension lead.

    • Re:Quite old news (Score:5, Informative)

      by glyph42 ( 315631 ) on Wednesday August 27, 2008 @01:28PM (#24768731) Homepage Journal
      NOT old news. Google for "2008 siggraph papers". Read the paper. Google for "2004 siggraph papers". Read about the old paper. Note the differences. Tim Rowley posts links to the papers from each year, so his site is recommended. Virtually all of these image-processing-related news items can be read long before they reach slashdot simply by keeping up with the latest papers from siggraph. In case you're lazy, the old paper is "Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering Using a Multi-Flash Camera". Oddly, it's offline now. But I do have a copy of it on my hard drive. If you're not lazy, I HIGHLY recommend perusing all of the years' papers listed on Tim's site.
      • http://groups.csail.mit.edu/graphics/pubs/siggraph2004_nprcamera.pdf [mit.edu]

        Perhaps the previous slashdot story wasn't "old" -- if you count things post-2004 as "new". However, even the paper in the .pdf notes that people have been concertedly using these techniques since 1998, and I happen to know that a lot of the work was pioneered as early as the mid-1940s with depth maps and stereograms. The new work IS nice, but it's not totally new.

        • by glyph42 ( 315631 )
          Good find with the link.

          The new work IS nice, but it's not totally new.

          Of course. Not much work is totally new. But it's new enough to be accepted into Siggraph, which is not an easy conference to get into.

        • So things have to be completely revolutionary in order to count as new? There's no such thing as evolutionary development? Your link is appreciated, and helps further the discussion, but why bookend it in such a "haughty" tone implying that the work is a dupe or nothing worth noting? Lots of papers in the same field will seem similar, but each can often provide a valuable new insight building on the last one. To imply that nothing is new because someone did something in the 1940s is asinine and arrogant

          • Please. When I write papers, I reference works all the way back to Newton, Galileo, and even before (a nice habit inculcated in me by my former advisors and current boss), and I *know* that much of what I do is not new (or, if it is new, it is usually only new in the context of the field in which it's placed).

            What I was apparently being "haughty" about was the breathless way in which advancements are lauded on the front page of slashdot as though they're revolutionary. To not acknowledge

      • We were trying a lot of different approaches at Canon Research Europe in the nineties. We tried using an ordinary camera with a built-in flash. It works, but with the provisos other people have pointed out - subject should be still, opaque, and matte, yada yada. I think there was already prior patent art even then.

        It's still neat, though.

    • Hi!

      I know they're not as conspicuous as they could be, but related stories are frequently included near the body of the new story. It took me a while to dig this one up (I remembered posting it, but that was several thousand posts ago, and a few years, too), so I hope people notice it.

      https://science.slashdot.org/article.pl?sid=04/12/01/0238222 [slashdot.org]

      Cheers,

      timothy

  • They make a version of Flash for digital cameras? Is it secure?

    • Re: (Score:3, Funny)

      by MBCook ( 132727 )
      Yes, but for some odd reason it lacks any kind of image capture support.
      • by fbjon ( 692006 )
        Well duh, it's for projecting images, not capturing them. It only supports one color and the blink tag, though.
  • This is quite unusual for a university. Many schools have a department of computer science or a school of computer science. But combining that with a school of Dolby Canada is quite unusual. What kind of degrees in Dolby Canada do they offer? :-)

    • Re: (Score:3, Funny)

      by gstoddart ( 321705 )

      What kind of degrees in Dolby Canada do they offer?

      Primarily "Blinding Yourself with Science", with a minor in "Sound and Signal Processing".

      Cheers

  • Warning: (Score:5, Funny)

    by Anonymous Coward on Wednesday August 27, 2008 @01:14PM (#24768537)

    TFA requires Flash.

  • by sm62704 ( 957197 ) on Wednesday August 27, 2008 @01:18PM (#24768587) Journal

    Why didn't you just link to the more informative New Scientist [newscientist.com] article that the blog you linked quoted?

    • by discards ( 1345907 ) on Wednesday August 27, 2008 @01:43PM (#24768915)
      Because it's his blog and he would like some traffic.
    • Re: (Score:3, Interesting)

      by RyoShin ( 610051 )

      Because the NewScientist article doesn't get him the 18 billion ad impressions.

      Seriously, look at the page in Firefox with Adblock. Seems... kinda bare, right? It did to me, so I opened it in Opera (where I don't have ad blocking set up), and almost every single blank space had an ad.

      These are the kind of sites that require AdBlock.

      • by sm62704 ( 957197 )

        These are the kind of articles that shouldn't be posted on slashdot's front page. It's not like his was the only submission.

  • That's really freakin cool. How long before there's a GIMP plugin for this? I'd like it by 3pm Pacific please.

  • Eight years ago, a manager in my lab thought that you could use a digital camera to get a 3D mesh model of whatever you photographed. It's a digital camera, right? It took months for us to explain what a digital camera really was. Maybe he should have been teaching us!

  • by Rui del-Negro ( 531098 ) on Wednesday August 27, 2008 @01:28PM (#24768717) Homepage

    This is just a way to automatically generate surface bump maps. It does not really capture depth information (like a Z-buffer).

    Conceptually it seems simple enough: take a photo with shadows from a light source not in line with the camera, take another where all the shadows are in line with the camera (making them virtually invisible), tell the software which direction the light is coming from in the first photo, and let it figure out the relative height of each pixel by analysing the difference between it and the uniform (flash-lit) version, after averaging the brightness of the two. It's similar to the technique some film scanners use to automatically remove scratches.

    I can think of a lot of cases where it won't work at all (shiny objects, detached layers, photos with multiple "natural" light sources, photos with long shadows), but still, for stuff like rock or tree bark textures it should save a lot of time. As the video suggests, this should be pretty useful for archaeologists.
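    A toy sketch of the recipe described above, in Python/NumPy (illustrative assumptions throughout; this is not the paper's algorithm):

        import numpy as np

        def relative_height(side_lit, flash_lit, light_from_left=True):
            """Divide out the flash-lit (albedo) image so only directional
            shading remains, then integrate that shading along the light
            direction to accumulate a crude relative height per pixel."""
            shading = side_lit / np.maximum(flash_lit, 1e-4)  # strip color
            slope = shading - shading.mean()                  # signed slope cue
            if not light_from_left:
                slope = slope[:, ::-1]
            height = np.cumsum(slope, axis=1)                 # integrate columns
            return height if light_from_left else height[:, ::-1]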

    • Re: (Score:2, Informative)

      by collywally ( 1223456 )
      Actually, you can use a bump map (which just changes the angle light is reflected at without deforming the actual surface) to create a displacement map (which actually moves the polygons up and down). You just have to play a little with the depth to get it right. And when using something like RenderMan, which does displacement almost as fast as other renderers do bump maps, it doesn't take long to figure out the right depth.
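      A sketch of that bump-to-displacement step in Python/NumPy (all names and the depth factor are illustrative, not RenderMan's API):

          import numpy as np

          def displace(vertices, normals, height_map, uvs, depth=0.05):
              """Push each vertex along its normal by the height sampled at
              its UV coordinate, scaled by a depth factor you tune by eye,
              as the parent suggests."""
              h, w = height_map.shape
              px = (uvs[:, 0] * (w - 1)).astype(int)       # UV -> pixel column
              py = (uvs[:, 1] * (h - 1)).astype(int)       # UV -> pixel row
              offset = (height_map[py, px] - 0.5) * depth  # centre around zero
              return vertices + normals * offset[:, None]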
      • Re: (Score:3, Informative)

        Well, yes and no. The problem isn't how you use the map (to fake the normals or actually displace vertices); the problem is what kind of maps this technique can create. And my point is that it can't handle (for example) the Z-range of something like a person's face. Anything deep enough to actually cast shadows over other (relevant) parts of the geometry will break it (a shadow will appear much darker, and the algorithm will assume it's a surface facing away from the light, or a hole). Use the result as a displacement

  • Outside the box (Score:2, Insightful)

    by Anonymous Coward

    Probably has significant potential in the pr0n industry.

  • First an image of a surface is captured with flash. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine if the brightness difference is a function of depth or color. By taking a second photo without flash, however, the accurate colors of all visible portions of the surface can be captured.

    This is reversed: the flash-lit image will show you the reflectance (and possibly some depth) information, whereas the non-flash-lit image will show you the bare color map for the scene (provided the scene is properly lit to begin with). FTFY!

    • by sm62704 ( 957197 ) on Wednesday August 27, 2008 @01:57PM (#24769069) Journal

      No. With flash (light source coming from the camera), you get the colors without shadows, i.e. without color perspective. Without flash (light source at an angle to the model/subject), the deeper parts show in shadow (known to us former art students as "color perspective").

      You could actually do this with two flashes, provided one was on the camera and one to the side. The fact that it flashes has nothing to do with it; it has to do with the angle of the light sources.

      • by Sj0 ( 472011 )

        Sounds like what you'd actually want is one with no depth information from lighting at all, and one with only depth information from flash.

        The fully lit one would contain the base colours; the flash one would drop off in brightness as the square of distance.

        Of course, as a voxelmap, I'd argue that it's not very useful to 99% of applications...
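        A toy sketch of that inverse-square point in Python/NumPy (names and the constant floor are illustrative assumptions, not from TFA):

            import numpy as np

            def depth_from_falloff(flash_img, ambient_img, albedo):
                """Flash irradiance drops off roughly as 1/d^2, so after
                subtracting ambient light and dividing out albedo, per-pixel
                brightness gives a relative distance d ~ 1/sqrt(I)."""
                flash_only = np.maximum(flash_img - ambient_img, 1e-6)
                intensity = flash_only / np.maximum(albedo, 1e-6)  # remove color
                return 1.0 / np.sqrt(intensity)                    # arbitrary units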

      • Re: (Score:3, Interesting)

        by jeffmeden ( 135043 )

        That's contrary to the article abstract. They describe using the difference between a diffusely lit scene (no shadows) and a flash-lit scene (shadows only due to the deviation of the flash angle), where the brightness delta is used to fudge a distance/reflectivity calculation. Shadow detection is not a part of it, at least in this particular paper.

  • Much like the printing press, I can only assume this technology will find its first commercial success in pornography. Some angles are worth hiding.

    • 3d goatse! Awesome!

      Also, that would be quite a depth calculation!

      • 3d goatse! Awesome!

        After reading that sentence, I was dismayed to discover that nobody has invented bleach that can be used on one's mind.

    • Dear Sir,
            I would very much like to see what version of the Gutenberg Bible you have been reading.

  • Why a flash? (Score:4, Interesting)

    by phorm ( 591458 ) on Wednesday August 27, 2008 @01:53PM (#24769035) Journal

    Why not cameras that use different wavelengths of light, etc? For example, one that works in visible light, and one that works in infrared?

    How about the use of different polarized lenses to block certain wavelengths of light?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      RTFA. Because it is a cheap method. This way you do not need expensive infrared cameras or polarizers or, as mentioned in the article, laser equipment.

      And the great thing is, the results are perceived to be as good as those obtained with more expensive equipment.

    • And of what use would that be? Reflectance is not very dependent on wavelength for most materials. Only thin materials with thickness in the range of lambda/2 to lambda/4, and transparent materials with an optical density different from that of the surrounding medium at surfaces (refraction and dispersion), show big differences across wavelengths. Other materials' reflectance is only slightly dependent on wavelength. The magnetic permeability basically defines how strong it is via the Fresnel equations for dielectric reflection.
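      For reference, the textbook normal-incidence Fresnel reflectance for a dielectric interface with refractive indices n_1 and n_2 (a standard result, not from TFA):

          $$ R = \left( \frac{n_1 - n_2}{n_1 + n_2} \right)^{2} $$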

    • Because every camera has a flash, and what you are suggesting requires specialist equipment costing a lot of money, as well as calibration, etc. (possibly custom/purpose-made equipment, which would increase the cost significantly)?

      The article makes direct reference to cost. Hell, you could go with laser 3D imaging if money were no object, right?

      • by phorm ( 591458 )

        I'm not sure about digital snapshot cameras, but I've seen plenty of security/web/etc. cameras that do IR. Filters for a camera may be somewhat affordable as well.

    • Because using different wavelengths may get you some weird surface interpretations.

    • by 4D6963 ( 933028 )

      Why not cameras that use different wavelengths of light

      It's called a colour camera. It's quite a common sort of device these days. It uses filters in the form of a grid to dedicate certain pixels to a wavelength band of light. And most are sensitive to infrared: just point your camera at a hot electric plate before it glows orange to see it glow purple in your camera, or point it at the end of your remote control while you're pushing a button.

    • Why not cameras that use different wavelengths of light, etc? For example, one that works in visible light, and one that works in infrared?

      Why? What would this give you?

      How about the use of different polarized lenses to block certain wavelengths of light?

      This is already done to capture specular information.

  • I noticed this when I was in Photoshop: if you pick a circular brush and choose white on a black background, you can "paint" quasi-3D-ish landscapes, because of the way perspective works. And you can turn it into a height map; Supreme Commander uses a similar/same method.

    It sounds like they just figured out how to use photographic techniques to make a height map.

  • Can anyone elucidate why this is so whizbang neato when we've had 3D photography ever since someone with a camera figured out about parallax [wikipedia.org]? Why is this different from stereoscopy [wikipedia.org]?

    Bemused,

    • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Wednesday August 27, 2008 @02:22PM (#24769361) Homepage

      Parallax and stereoscopy both require the camera to be in two (or, ideally with parallax, more) positions. The ingenious thing about this idea (watch the video, it's good) is that the camera doesn't need to be moved. By taking two shots in the same spot, one with flash and one without, you can get a good depth map.

      Now it's not as good as a laser scanner, but it's much cheaper and faster and smaller (since you could use any little camera). It's a very simple but ingenious idea. I'm quite surprised by the amount of detail they are able to get this way.

      Of course, it could be argued that parallax and stereoscopy are ways of viewing images with pseudo-depth, as opposed to taking them (at least for the purposes of this article). Parallax has no real depth but helps simulate the effect in the brain. Stereoscopy has no depth either, but works just like the eyes to give the brain the data it needs to reconstruct depth.
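      For comparison, the textbook pinhole-stereo relation (a standard result, not from TFA) shows why stereoscopy needs two separated viewpoints: depth Z follows from focal length f, baseline B, and per-pixel disparity d, so with a single viewpoint (B = 0) there is no depth signal at all:

          $$ Z = \frac{f\,B}{d} $$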

      • Both approaches require taking two photographs, so I confess I don't see too much difference that way. Part of what I'm confused about, I guess, is why it's easier to reconstruct 3D-ness from flash+nonflash rather than from parallax. Per your point, yes, stereoscopy has no depth per se, but then neither does flash+nonflash, really, which appears to be suggested by this bit:

        ...one aspect that researchers are still working on is how to capture an image that incorporates more than one surface field, such as

        • by MBCook ( 132727 )

          You're right that this still requires two pictures, but they are taken from the same point of view. You don't have to move the camera, re-focus, etc. To get stereoscopy to look right for human eyes, the cameras need to be just the right distance apart; otherwise things look weird or out of scale. I'd imagine you'd have a similar issue with computer processing. To get much depth with parallax, I think you need to have the camera shots a good distance apart as well, especially if you are trying to photograph

        • by Sj0 ( 472011 )

          Here's my take on it:

          Parallax and stereoscopy won't give you 3d information. There is no depth field. You've got two 2d images from which 3d information can be speculated, but no 3d information.

          This technique sounds like it would give you two arrays: The first array would be a colour map, the second array would be a height map. This would be done basically by taking the image without any flash, which would have no distance cues based on distance from the flash lens, and comparing it to the image with flash,

          • "You won't be able to see behind the object or anything insane like that, but you could concievably take two pictures of someone's face, and get a 3d snapshot of the face which would require only small changes to look normal."

            I seem to recall a short story somewhere(can't remember where, or by who) where the protagonist was working with the same kind of technology but found that he COULD see the back of objects. If I remember correctly, he could see the back of objects, but when he went and actually looked

            • by 4D6963 ( 933028 )
              You mean like what happens to some people during near death experiences?
              • Not sure what you mean by that.

                Another story that came to mind was "The Sun Dog" by Stephen King.

                • by 4D6963 ( 933028 )
                  I'm talking about people having NDEs who report seeing themselves out of their bodies and being able to fly across the room and even read some inscription under the operating table.
          • by 4D6963 ( 933028 )

            Parallax and stereoscopy won't give you 3d information. There is no depth field. You've got two 2d images from which 3d information can be speculated, but no 3d information.

            What does it even mean "3D information can be speculated but no 3D information"? The information is either there or it's not. Of course it only gives you information on what both cameras see, which in some cases might even be all the 3D information you can get from the scene (picture shooting a "bumpy" wall, or really anything else which you can see in its entirety from some point), but it's a bunch of information you can extract "3D information" from.

  • by Anonymous Coward

    I wonder how well this works with faces. If it works well, it could be an easy way to create 3D head busts for the "icons" in your contact list.

    • by Kaetemi ( 928767 )

      Eyetronics (http://www.eyetronics.com/) has a similar technology (they flash different patterns and compile the 3D image from that). It's the company that does the 3D scans for a lot of movies and games these days. I saw a live demonstration of that once, where people could just go sit in a picture booth and have their face photographed to a 3D file. It works really fast, and the result is OK (as long as the system is synchronized correctly).

      They also did a presentation where they did say that they were

  • by Anonymous Coward

    Caution: Do not use a camera, flash or not, around minors, some Asians, some tribes of Africa and South America, or anyone under the protection of the United States federal government. Use of a camera in any of these situations can result in physical harm or jail time.

  • by JoshDM ( 741866 ) on Wednesday August 27, 2008 @02:26PM (#24769407) Homepage Journal

    "shooting two pictures with a digital camera -- one with flash and one without. "

    This difference has already been well-expressed across the internet for years. [imageshack.us]

  • But it could still be good for quick-and-dirty bumpmaps.

  • This actually isn't all that different from some methods I've seen for generating 3D geometry of a subject using cameras and lighting. One method in particular uses cameras mounted in strategic locations around the subject while a DLP projector rapidly displays a series of light and dark line patterns across the subject's surface, then shoots photos of the lines.

    Not quite as cool as a 3D scanner using lasers, but it seems to be easier on subjects like humans or animals that tend to move a lot.
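    A sketch of how such stripe-pattern (structured light) systems are commonly decoded, assuming Gray-coded patterns, in Python/NumPy (illustrative, not any particular vendor's pipeline):

        import numpy as np

        def decode_gray(patterns):
            """`patterns` is a list of thresholded (boolean) camera images,
            one per projected stripe pattern, most significant bit first.
            Returns a per-pixel projector-column index; triangulating the
            camera ray against that projector column then yields depth."""
            bits = [p.astype(np.uint32) for p in patterns]
            binary = bits[0].copy()          # Gray-to-binary: b0 = g0
            code = binary.copy()
            for g in bits[1:]:
                binary = binary ^ g          # b_i = b_(i-1) XOR g_i
                code = (code << 1) | binary  # append the next binary bit
            return code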

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...