Capturing 3D Surfaces Simply With a Flash Camera
MojoKid writes with this excerpt from Hot Hardware (linking to a video demonstration): "Creating 3D maps and worlds can be extremely labor-intensive and time-consuming. Also, the final result might not be all that accurate or realistic.
A new technique developed by scientists at The University of Manchester's School of Computer Science and Dolby Canada, however, might make capturing depth and textures for 3D surfaces as simple as shooting two pictures with a digital camera — one with flash and one without. First, an image of a surface is captured without flash. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine whether a brightness difference is a function of depth or of color.
By taking a second photo with flash, however, the accurate colors of all visible portions of the surface can be captured. The two captured images essentially become a reflectance map (albedo) and a depth map (height field)."
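As a rough illustration of the idea (my own toy formulation, not the researchers' actual algorithm), here is a small numpy sketch in which the flash image is treated as a pure reflectance (albedo) map and the per-pixel ratio of the no-flash image to the flash image isolates the ambient shading that carries the depth cue:

```python
import numpy as np

def decompose(flash, no_flash, eps=1e-6):
    """Return (albedo, shading) estimates from a flash/no-flash pair.

    Toy assumptions: the flash exposure washes out ambient shading and
    records reflectance only, so dividing it out of the no-flash
    exposure leaves a color-independent shading estimate.
    """
    albedo = flash / flash.max()          # normalized reflectance map
    shading = no_flash / (flash + eps)    # color-free shading estimate
    return albedo, shading

# Synthetic 4x4 grayscale example: uniform albedo, plus a depth-like
# shading gradient that only the ambient (no-flash) exposure sees.
true_albedo = np.full((4, 4), 0.8)
true_shading = np.linspace(0.2, 1.0, 16).reshape(4, 4)
flash = true_albedo                      # flash image: reflectance only
no_flash = true_albedo * true_shading    # ambient light modulated by the surface

albedo, shading = decompose(flash, no_flash)
```

On this synthetic pair the recovered shading matches the gradient that was baked in, which is exactly the separation the summary describes: reflectance from the flash shot, depth cues from the ratio.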
Quite old news (Score:5, Informative)
Slashdot (can't be bothered to find it) had a story several years ago about the (then old!) technique of capturing complicated 3D objects, such as car engines, by using two flash images, with the flash in a slightly different location for each. Thresholding the difference between the images gives very nice edge detection, along with very accurate depth information.
A project I'm working on uses the technique to capture information about arrowheads/spearheads.
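The two-flash variant described above can be sketched in a few lines: difference the two differently-lit images and threshold, so pixels whose shadowing changed between exposures show up as edges. This is a toy illustration with synthetic data, not the project's actual code:

```python
import numpy as np

def edge_mask(img_flash_left, img_flash_right, thresh=0.2):
    """Mark pixels where the two flash exposures disagree strongly.

    Depth discontinuities cast shadows in different places depending on
    the flash position, so a large absolute difference flags an edge.
    """
    diff = np.abs(img_flash_left.astype(float) - img_flash_right.astype(float))
    return diff > thresh

# Synthetic example: a raised feature shadows one side or the other
# depending on which flash fired.
left = np.ones((6, 6));  left[2:4, 1] = 0.1    # shadow left of the bump
right = np.ones((6, 6)); right[2:4, 4] = 0.1   # shadow right of the bump
mask = edge_mask(left, right)
```

The four shadowed pixels (two per exposure) are the only ones flagged, giving the clean edge map the parent comment describes.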
Re:Quite old news (Score:5, Informative)
But this time the camera stays fixed and one shot is taken without flash, the other with it. That would allow 3D cameras to be made on the cheap with just a firmware upgrade (one click of the camera takes two shots, one without flash and the next with). Your way is different, since it requires the camera to have two flashes, which means building new cameras.
Just buy 3d camera (Score:3, Informative)
Homemade:
http://www.ghouse.com/daniel/stereoscopy/equipment/index.html [ghouse.com]
http://www.teamdroid.com/how-to-make-a-cheap-digital-camera/ [teamdroid.com]
Store:
http://www.3dstereo.com/viewmaster/cam-kal.html [3dstereo.com]
Don't get too excited (Score:4, Informative)
This is just a way to automatically generate surface bump maps. It does not really capture depth information (like a Z-buffer).
Conceptually it seems simple enough: take a photo with shadows from a light source not in line with the camera, take another where all the shadows are in line with the camera (making them virtually invisible), tell the software which direction the light is coming from in the first photo, and let it figure out the relative height of each pixel by analysing the difference between it and the uniform (flash-lit) version, after averaging the brightness of the two. It's similar to the technique some film scanners use to automatically remove scratches.
I can think of a lot of cases where it won't work at all (shiny objects, detached layers, photos with multiple "natural" light sources, photos with long shadows), but still, for stuff like rock or tree bark textures it should save a lot of time. As the video suggests, this should be pretty useful for archaeologists.
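The height-recovery step described above can be caricatured in one dimension. This is my own toy formulation (Lambertian surface, known light angle, tilts all below the light angle, no shadow handling), not the published method:

```python
import numpy as np

def heights_from_shading(shading, light_angle):
    """Recover a 1-D relative height profile from shading values.

    Toy model: shading = cos(tilt - light_angle) for surface tilts
    below the light angle, so tilt = light_angle - arccos(shading);
    the slope is tan(tilt), and cumulatively summing slopes gives
    relative height (up to a constant offset). Real systems solve
    this in 2-D with regularization and shadow handling.
    """
    tilt = light_angle - np.arccos(np.clip(shading, -1.0, 1.0))
    slope = np.tan(tilt)
    return np.cumsum(slope)

# Synthetic check: build shading from known tilts, recover the heights.
light = np.pi / 3                           # light 60 degrees off-camera
true_tilt = np.array([0.1, 0.2, 0.3, 0.2])  # all below the light angle
shading = np.cos(true_tilt - light)         # Lambertian shading values
heights = heights_from_shading(shading, light)
```

On this synthetic profile the recovered heights match the cumulative sum of the true slopes, which is the "relative height of each pixel" step the comment describes; the flash-lit photo's role in the real technique is to supply the albedo that this toy simply assumes away.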
Re:Article has a minor gaffe (Score:4, Informative)
No. With flash (light source coming from the camera), you get the colors without shadows, i.e. without color perspective. Without flash (light source at an angle to the model/subject), the deeper parts show in shadow (known to us former art students as "color perspective").
You could actually do this with two flashes, provided one was on the camera and one to the side. The fact that it flashes has nothing to do with it; what matters is the angle of the light sources.
Re:Why a flash? (Score:2, Informative)
RTFA. Because it is a cheap method. This way you do not need expensive infrared cameras or polarizers or, as mentioned in the article, laser equipment.
And the great thing is, the results are perceived as being as good as those obtained with more expensive equipment.
There's a reason for that... (Score:4, Informative)
Obligatory XKCD [xkcd.com]
Re:Don't get too excited (Score:3, Informative)
Well, yes and no. The problem isn't how you use the map (to fake the normals or actually displace vertices), the problem is what kind of maps this technique can create. And my point is that it can't handle (for example) the Z-range of something like a person's face. Anything deep enough to actually cast shadows over other (relevant) parts of the geometry will break it: a shadow will appear much darker, and the algorithm will assume it's a surface facing away from the light (or a hole). Use the result as a displacement map and it'll look very weird.
Panasonic (IIRC, possibly JVC or someone else) was working on a video camera that could capture a Z-buffer in real time (meant to be used as a replacement for chroma-keying), but I don't think they ever put a usable product out the door. The techniques used in Radiohead's "House of Cards" video look interesting, too, but also not really usable in most cases.
Anyway, the technique mentioned in this article should still be practical for bas-reliefs and shallow matte surfaces, which is what archaeologists deal with most of the time.
P.S. - Dense geometry (required by displacement maps) isn't particularly slow to render with high-end shading (raytracing / photons / GI / QMC / whatever). But those are always painfully slow (compared to basic non-GI, shadow-mapped, non-bouncing renderers), and the denser meshes required for good displacement mapping still take up huge amounts of RAM, so bump mapping still has its place.