Capturing 3D Surfaces Simply With a Flash Camera
MojoKid writes with this excerpt from Hot Hardware (linking to a video demonstration): "Creating 3D maps and worlds can be extremely labor intensive and time consuming. Also, the final result might not be all that accurate or realistic.
A new technique developed by scientists at The University of Manchester's School of Computer Science and Dolby Canada, however, might make capturing depth and textures for 3D surfaces as simple as shooting two pictures with a digital camera — one with flash and one without. First an image of a surface is captured without flash. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine if the brightness difference is a function of depth or color.
By taking a second photo with flash, however, the accurate colors of all visible portions of the surface can be captured. The two captured images essentially become a reflectance map (albedo) and a depth map (height field)."
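The flash/no-flash separation the summary describes can be sketched in a few lines. Everything below is purely illustrative, not the researchers' actual algorithm: images are plain nested lists, and the ratio of the two shots is only a crude stand-in for their height-field recovery.

```python
# Toy sketch of the flash/no-flash idea: the flash shot approximates pure
# reflectance (albedo), while the ambient shot mixes albedo with shading.
# Dividing the two cancels the albedo, leaving a shading term that can be
# treated as a crude depth cue. Names here are invented for illustration.

def shading_map(ambient, flash, eps=1e-6):
    """Per-pixel ratio ambient/flash, with albedo cancelled."""
    return [[a / (f + eps) for a, f in zip(ra, rf)]
            for ra, rf in zip(ambient, flash)]

# A dark pixel and a bright pixel under identical shading produce the same
# ratio, so surface colour no longer masquerades as depth.
flash   = [[0.2, 0.9]]               # albedo only (flash overwhelms ambient)
ambient = [[0.2 * 0.5, 0.9 * 0.5]]   # same albedos, both shaded by 0.5
print(shading_map(ambient, flash))   # both ratios ~ 0.5
```

The point of the toy: once the albedo cancels, whatever brightness variation remains is attributable to geometry rather than surface colour, which is exactly the ambiguity the second photo resolves.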
Amateurs. (Score:5, Funny)
Bah! I completed my last project in exactly 6 days and used nothing but voice commands. It turned out so well I sat on my couch and ate Cheetos the entire next day. Today, there are over 6 billion users and we're only now starting to run into scalability issues.
-God
Re:Amateurs. (Score:5, Funny)
Yeah, but look at how bloated your operating system is!
Re:Amateurs. (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Funny)
There's a reason for that... (Score:4, Informative)
Obligatory XKCD [xkcd.com]
Re:Amateurs. (Score:5, Funny)
Your project is a case study in bad management, though. Sure, you completed the whole thing in six days, but what are we left with? Documentation that's cryptic at best, and literally billions of bugs.
Re:Amateurs. (Score:5, Funny)
And don't get me started on that unhandled divide-by-zero exception!
Re:Amateurs. (Score:5, Funny)
The divide-by-zero exception is hardly fair. How can he fix a bug that we can't even replicate? As soon as the LHC comes on-line, we can file an official bug report. Until then, let him off the hook.
Re:Amateurs. (Score:5, Funny)
He does at least seem to fix hacking vulnerabilities, though. According to accounts there used to be a lot more magic around only a few centuries ago. Or maybe the talent just matured and moved over to the more challenging but reliable fields of reverse engineering and repurposing the apparently intentional features.
If only similar attention were directed to safety...
Re: (Score:1)
heh, good one! no mod points here at the moment, though.
but you have to admit, at least this world doesn't have script kiddies. you can't take someone else's hacks and use them yourself like you can for any microsoft system. gotta love that little bit of attention to detail, i mean.. imagine what would happen if everyone could just telekinese some deadly shit on anyone else's head.
chaos, mayhem and no more politicians.
yup, great job, god!
now, folks, please excuse me while i go look for some exploits.
Re: (Score:2)
at least this world doesn't have script kiddies.
Ever heard of grimoires? Sounds like script collections to me.
Just made obsolete by patches.
Re: (Score:2, Funny)
Re: (Score:3, Funny)
Re: (Score:2)
Re:Amateurs. (Score:4, Funny)
Gameplay sucks, just one endless grind.
Re:Amateurs. (Score:5, Funny)
Obviously, you haven't unlocked the right minigame. It's a short game, but it makes grinding fun.
Oh, you make it sound so easy... (Score:5, Funny)
Unfortunately unlocking the minigame can be nearly impossible if you have the wrong arbitrarily-assigned game character. Of course you could modify your character and change your character's gear to make it a little easier, but that's even more work and expense and doesn't make a big difference. There's also a way to pay your way into one minigame session but you'll have to be discreet about it unless you want to start another minigame that involves a lot of not-fun stuff like carefully balancing a slippery bar of soap.
Re: (Score:2)
The warning in your sig thoroughly disregarded, that post is a perfect example of why most slashdotters aren't playing that minigame.
Re: (Score:2, Funny)
Re: (Score:3, Funny)
I used to think so too. But once I got my HDTV set up, the resolution on my back yard's just not that impressive.
Re: (Score:2, Funny)
Re: (Score:2)
I'm taking Improved Daydreaming next patch. They dropped Advanced Sexology from the Codemonkey tree - got to get through the day _somehow_ after all.
NERF!
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
3D maps? Voice commands? Sitting on the couch? Deckard does that stuff much faster than you, nooblet.
If you make enough simplifying assumptions... (Score:5, Interesting)
Re: (Score:3, Insightful)
That's not the biggest problem, though. I am a 3D artist, and it's a pain to try to make a tiling texture map out of a picture containing more than three channels, due to stupid limitations in all 2D applications.
It's often more efficient to first make the color texture tile, then create a heightmap from that data. I guess that's why they are targeting scientific applications such as archaeology, which requires more accuracy, a
Re: (Score:1)
Re: (Score:2)
Quite old news (Score:5, Informative)
Slashdot (can't be bothered to find it) had a story several years ago about the (then old!) technique of capturing complicated 3D objects, such as car engines, by using two flash images, each with the flash located in a slightly different position. Thresholding the difference between the images gives very nice edge detection, along with very accurate depth information.
A project I'm working on uses the technique to capture information about arrowheads/spearheads.
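The thresholding step the parent describes can be sketched roughly like this. It's a toy on nested lists; the function name and threshold value are made up for illustration, and a real pipeline would of course work on registered photographs, not hand-typed rows:

```python
def edge_mask(img_a, img_b, thresh=0.15):
    """Mark pixels where two differently-lit shots disagree strongly.
    Large differences tend to occur near depth discontinuities, since a
    shifted flash moves the shadows cast by edges."""
    return [[abs(a - b) > thresh for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

left_flash  = [[0.8, 0.8, 0.2, 0.8]]   # shadow falls on pixel 2
right_flash = [[0.8, 0.8, 0.8, 0.3]]   # shadow falls on pixel 3
print(edge_mask(left_flash, right_flash))
# -> [[False, False, True, True]] : the edge region stands out
```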
Re:Quite old news (Score:5, Informative)
But this time the camera stays fixed, and one shot is taken without flash and the other with it. That would allow 3D cameras to be made on the cheap with just a firmware upgrade (one click of the camera takes two shots, one without flash, the next with). Your way is different, as it requires the camera to have two flashes, and thus the making of new cameras.
Re: (Score:2)
You're right -- my way requires two flashes (it really doesn't, but we found it slightly more effective that way). The old slashdot article which I mention (but don't reference) also talked about only needing one camera. I think that it said that Chilton's Repair Manuals was using both techniques to produce their series of DVDs. Of course, I could be really wrong!
Re: (Score:2)
That won't work nearly as well unless you know the location and intensity of the ambient lighting sources for the non-flash image. In theory, you could make a fairly simple system that has two strobes (one on either side of the lens) that are powerful enough to overcome ambient, and use that slight difference to map out the texture (though for maximum effect, you'd really want them offset from the lens by 45 degrees or so). The advantage to that approach is that you could fire off two shots in such quick successi
Re: (Score:2)
Re: (Score:2)
No, but releasing a product for consumption would be more of an issue. If I wanted to market a 3D camera, and I already make a camera whose firmware I can change while keeping all the mass-production techniques the same, I'll do that rather than redesign a camera with two flashes, which would be bigger, use more battery, and cost more, all for a feature people would use only every so often. A firmware upgrade is a quick, easy way to add an extra sales bullet without much extra cost.
Re: (Score:2)
Or the revolutionary invention of a flash on an extension lead.
Re:Quite old news (Score:5, Informative)
Re: (Score:2)
http://groups.csail.mit.edu/graphics/pubs/siggraph2004_nprcamera.pdf [mit.edu]
Perhaps the previous slashdot story wasn't "old" -- if you count things post-2004 as "new". However, even the paper in the .pdf notes that people have been concertedly using these techniques since 1998, and I happen to know that a lot of the work was pioneered as early as the mid-1940s with depth maps and stereograms. The new work IS nice, but it's not totally new.
Re: (Score:2)
The new work IS nice, but it's not totally new.
Of course. Not much work is totally new. But it's new enough to be accepted into Siggraph, which is not an easy conference to get into.
Re: (Score:2)
So things have to be completely revolutionary in order to count as new? There's no such thing as evolutionary development? Your link is appreciated, and helps further the discussion, but why bookend it in such a "haughty" tone implying that the work is a dupe or nothing worth noting? Lots of papers in the same field will seem similar, but each can often provide a new valuable insight building on the last one. To imply that nothing is new because someone did something in the 1940s is asinine and arrogant
Re: (Score:2)
Please. When I write papers, I reference works all the way back to Newton, Galileo, and even before (a nice habit inculcated in me by my former advisors and current boss), and I *know* that much of what I do is not new (or, if it is new, it usually only new in the context of the field in which it's placed).
What I was apparently being "haughty" about was the breathless way in which advancements are lauded on the front page of slashdot as though they're revolutionary. To not acknowledge
Canon research, about 1999 (Score:2)
It's still neat, though.
yes, it's one of the above "related links" (Score:2, Interesting)
Hi!
I know they're not as conspicuous as they could be, but there are frequently stories included near the body of the new story. It took me a while to dig this one up (I remembered posting it, but that was several thousand posts ago, and a few years, too), so I hope people notice it.
https://science.slashdot.org/article.pl?sid=04/12/01/0238222 [slashdot.org]
Cheers,
timothy
Re: (Score:2)
My project isn't *extremely* concerned with precision, but for a monochromatic light source and a nice background, one can easily obtain depths to ~1/50 mm from shadow-shifts. This is about one part in 500 of the object height. For two monochromatic sources, the precision increases to about 1/70 mm. More sources increase the precision a bit, but due to specularity and diffraction effects, white light decreases the precision a little bit.
Flash in a camera? (Score:2, Funny)
They make a version of Flash for digital cameras? Is it secure?
Re: (Score:3, Funny)
Re: (Score:2)
The School of Computer Science and Dolby Canada (Score:2)
This is quite unusual for a university. Many schools have a department of computer science or a school of computer science. But combining that with a school of Dolby Canada is quite unusual. What kind of degrees in Dolby Canada do they offer? :-)
Re: (Score:3, Funny)
Primarily "Blinding Yourself with Science", with a minor in "Sound and Signal Processing".
Cheers
Warning: (Score:5, Funny)
TFA requires Flash.
Re: (Score:2)
Mod parent up funny?
A question for mojokid (Score:5, Insightful)
Why didn't you just link to the more informative New Scientist [newscientist.com] article that the blog you linked quoted?
Re:A question for mojokid (Score:5, Insightful)
Re: (Score:3, Interesting)
Because the NewScientist article doesn't get him the 18 billion ad impressions.
Seriously, look at the page in Firefox with AdBlock. Seems... kinda bare, right? It did to me, so I opened it in Opera (where I don't have ad blocking set up) and almost every single blank space had an ad.
These are the kind of sites that require AdBlock.
Re: (Score:2)
These are the kind of articles that shouldn't be posted on slashdot's front page. It's not like his was the only submission.
Just buy 3d camera (Score:3, Informative)
Homemades.
http://www.ghouse.com/daniel/stereoscopy/equipment/index.html [ghouse.com]
http://www.teamdroid.com/how-to-make-a-cheap-digital-camera/ [teamdroid.com]
Store
http://www.3dstereo.com/viewmaster/cam-kal.html [3dstereo.com]
Now where's the download link for the GIMP plugin? (Score:2)
That's really freakin cool. How long before there's a GIMP plugin for this? I'd like it by 3pm Pacific please.
The more things change the more they stay the same (Score:1)
8 years ago a manager in my lab thought that you could use a digital camera to get a 3D mesh model of whatever you photographed. It's a digital camera, right? It took months for us to explain what a digital camera really was. Maybe he should have been teaching us!
Don't get too excited (Score:4, Informative)
This is just a way to automatically generate surface bump maps. It does not really capture depth information (like a Z-buffer).
Conceptually it seems simple enough: take a photo with shadows from a light source not in line with the camera; take another where all the shadows are in line with the camera (making them virtually invisible); tell the software which direction the light is coming from in the first photo; and let it figure out the relative height of each pixel by analysing the difference between it and the uniform (flash-lit) version, after averaging the brightness of the two. It's similar to the technique some film scanners use to automatically remove scratches.
I can think of a lot of cases where it won't work at all (shiny objects, detached layers, photos with multiple "natural" light sources, photos with long shadows), but still, for stuff like rock or tree-bark textures it should save a lot of time. As the video suggests, this should be pretty useful for archaeologists.
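A toy version of the height-recovery step the parent describes, under heavy simplifying assumptions: a 1-D profile, a known light angle, and the flash/no-flash ratio already reduced to the cosine of the normal-to-light angle. This is a sketch of the general shape-from-shading idea, not the paper's method, and every name in it is invented:

```python
import math

def heights_from_ratio(ratio, light_angle=math.pi / 4):
    """Turn per-pixel shading ratios into relative heights by deriving a
    slope at each pixel and integrating the slopes left to right."""
    heights, h = [0.0], 0.0
    for r in ratio:
        r = max(-1.0, min(1.0, r))               # keep acos in its domain
        angle_to_light = math.acos(r)            # normal vs. light direction
        slope = math.tan(angle_to_light - light_angle)  # normal vs. vertical
        h += slope                               # integrate slope into height
        heights.append(h)
    return heights

# A uniformly-lit patch (ratio == cos(light_angle) everywhere) comes out
# flat, i.e. every recovered height is ~0:
print(heights_from_ratio([math.cos(math.pi / 4)] * 3))
```

This also makes the parent's failure cases concrete: a cast shadow drives the ratio toward zero, which the integrator misreads as a steep slope rather than an occlusion.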
Re: (Score:2, Informative)
Re: (Score:3, Informative)
Well, yes and no. The problem isn't how you use the map (to fake the normals or actually displace vertices), the problem is what kind of maps this technique can create. And my point is that it can't handle (for example) the Z-range of something like a person's face. Anything deep enough to actually cast shadows over other (relevant) parts of the geometry will break it (a shadow will appear much darker, and the algorithm will assume it's a surface facing away from the light, or a hole). Use the result as a dis
Outside the box (Score:2, Insightful)
Probably has significant potential in the pr0n industry.
Article has a minor gaffe (Score:2)
First an image of a surface is captured with flash. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine if the brightness difference is a function of depth or color. By taking a second photo without flash, however, the accurate colors of all visible portions of the surface can be captured.
This is reversed: the flash-lit image will show you the reflectance (and possibly some depth) information, whereas the non-flash-lit image will show you the bare color map for the scene (provided the scene is properly lit to begin with). FTFY!
Re:Article has a minor gaffe (Score:4, Informative)
No: with flash (light source coming from the camera), you get the colors without shadows, i.e. without color perspective. Without flash (light source at an angle to the model/subject), the deeper parts show in shadow (known to us former art students as "color perspective").
You could actually do this with two flashes, provided one was on the camera and one to the side. The fact that it flashes has nothing to do with it; it has to do with the angle of the light sources.
Re: (Score:2)
Sounds like what you'd actually want is one with no depth information from lighting at all, and one with only depth information from flash.
The fully lit one would contain the base colours; the flash one would drop off in brightness as the square of distance.
Of course, as a voxelmap, I'd argue that it's not very useful to 99% of applications...
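The inverse-square idea above can be turned into a toy depth estimate. The assumptions are heavy and entirely this sketch's own, not anything from the article: a pure point flash, no ambient light in the flash-only term, and a fully-lit shot that gives the albedo directly; function and parameter names are invented:

```python
def depth_from_falloff(albedo, flash_only, flash_power=1.0, eps=1e-9):
    """If flash_only = albedo * flash_power / d**2 at each pixel, then
    solving for distance gives d = sqrt(flash_power * albedo / flash_only)."""
    return [[(flash_power * a / (f + eps)) ** 0.5
             for a, f in zip(ra, rf)]
            for ra, rf in zip(albedo, flash_only)]

# A pixel with albedo 0.5 at distance 2 returns 0.5 * 1 / 2**2 = 0.125
# of the flash, so the model recovers a depth of ~2:
print(depth_from_falloff([[0.5]], [[0.125]]))
```

As the parent notes, the resulting per-pixel distances are closer to a voxel-style range map than to the bump maps most applications actually want.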
Re: (Score:3, Interesting)
That's contrary to the article abstract. They describe using the difference between a diffuse lit scene (no shadows) and a flash lit scene (shadows only due to deviation of flash angle) where the brightness delta is used to fudge a distance/reflectivity calculation. Shadow detection is not a part of it, at least in this particular paper.
so.... (Score:2)
Much like the printing press, I can only assume this technology will find its first commercial success in pornography. Some angles are worth hiding.
hello.jpg (Score:2)
3d goatse! Awesome!
Also, that would be quite a depth calculation!
Re: (Score:2)
3d goatse! Awesome!
After reading that sentence, I was dismayed to discover that nobody has invented bleach that can be used on one's mind.
Re: (Score:1)
Dear Sir,
I would very much like to see what version of the Gutenberg Bible that you have been reading.
Re: (Score:2)
The Gutenberg Bible was a commercial success?
No. But only because of all the damned pirates running off copies without compensating the authors.
Why a flash? (Score:4, Interesting)
Why not cameras that use different wavelengths of light, etc? For example, one that works in visible light, and one that works in infrared?
How about the use of different polarized lenses to block certain wavelengths of light?
Re: (Score:2, Informative)
RTFA. Because it is a cheap method. This way you do not need expensive infrared cameras or polarizers or, as mentioned in the article, laser equipment.
And the great thing is, the results are perceived to be as good as those obtained from more expensive equipment.
Re: (Score:2)
And of what use would that be? Reflectance is not very dependent on wavelength for most materials. Only thin materials close to the range of lambda/2 to lambda/4, and transparent materials with an optical density different from the surrounding medium at surfaces (refraction and dispersion), show big differences with wavelength. Other materials' reflectance is only lightly dependent on wavelength. The magnetic permeability basically defines how strong it is, via the Fresnel equation for dielectric reflection.
Re: (Score:2)
Because every camera has a flash, and what you are suggesting requires specialist equipment costing a lot of money, plus calibration, etc. (possibly custom/purpose-made equipment, which would increase the cost significantly)?
The article makes direct reference to cost. Hell, you could go laser 3D imaging if money was no object, right?
Re: (Score:2)
I'm not sure about digital snapshot cameras. But I've seen plenty of security/web/etc cameras that do IR. Filters for a camera may be somewhat affordable as well.
Re: (Score:2)
Image quality in those is terrible. Filters could work also.
Re: (Score:2)
Because using different wavelengths may get you some weird surface interpretations.
Re: (Score:2)
Why not cameras that use different wavelengths of light
It's called a colour camera. It's quite a common sort of device these days. It uses filters in the form of a grid to dedicate certain pixels to a wavelength band of light. And most are sensitive to infrared: just point your camera at a hot electric plate before it glows orange and watch it glow purple in your camera, or point it at the end of your remote control while you're pushing a button.
Re: (Score:2)
Why not cameras that use different wavelengths of light, etc? For example, one that works in visible light, and one that works in infrared?
Why? What would this give you?
How about the use of different polarized lenses to block certain wavelengths of light?
This is already done to capture specular information.
Sounds like gradient maps... (Score:1, Redundant)
I noticed this when I was in Photoshop: if you pick a circular brush and choose white on a black background, you can "paint" quasi-3D-ish landscapes, because of the way perspective works. And you can turn that into a height map; Supreme Commander uses a similar (or the same) method.
It sounds like they just figured out how to use photographic techniques to make a height map.
Hello, what about Victorian-era stereographs? (Score:2)
Can anyone elucidate why this is so whizbang neato when we've had 3D photography ever since someone with a camera figured out about parallax [wikipedia.org]? Why is this different from stereoscopy [wikipedia.org]?
Bemused,
Re:Hello, what about Victorian-era stereographs? (Score:5, Interesting)
Parallax and stereoscopy both require the camera to be in two (or ideally with parallax more) positions. The ingenious thing about this idea (watch the video, it's good) is that the camera doesn't need to be moved. By taking two shots in the same spot, one with flash and one without, you can get a good depth map.
Now it's not as good as a laser scanner, but it's much cheaper and faster and smaller (since you could use any little camera). It's a very simple but ingenious idea. I'm quite surprised by the amount of detail they are able to get this way.
Of course it could be argued that parallax and stereoscopy are ways of viewing images with pseudo-depth as opposed to taking them (at least for the purpose of this article). Parallax has no real depth, but helps simulate the effect in the brain. Stereoscopy has no depth, but works just like the eyes to give the brain the data it needs to reconstruct the depth.
Whizbang for lighting & textures, not 3D-ness (Score:2)
Both approaches require taking two photographs, so I confess I don't see too much difference that way. Part of what I'm confused about, I guess, is why it's easier to reconstruct 3D-ness from flash+nonflash rather than from parallax. Per your point, yes, stereoscopy has no depth per se, but then neither does flash+nonflash, really, which appears to be suggested by this bit:
Re: (Score:2)
You're right that this still requires two pictures, but they are taken from the same point of view. You don't have to move the camera, re-focus, etc. To get stereoscopy to look right for human eyes, the cameras need to be just the right distance apart, otherwise things look weird or out of scale. I'd imagine you'd have a similar issue with computer processing. To get much depth with parallax I think you need to have the camera shots a good distance apart as well, especially if you are trying to photograph
Re: (Score:2)
Here's my take on it:
Parallax and stereoscopy won't give you 3d information. There is no depth field. You've got two 2d images from which 3d information can be speculated, but no 3d information.
This technique sounds like it would give you two arrays: the first array would be a colour map, the second array would be a height map. This would be done basically by taking the image without any flash, which would have no distance cues based on distance from the flash lens, and comparing it to the image with flash,
Re: (Score:2)
"You won't be able to see behind the object or anything insane like that, but you could concievably take two pictures of someone's face, and get a 3d snapshot of the face which would require only small changes to look normal."
I seem to recall a short story somewhere(can't remember where, or by who) where the protagonist was working with the same kind of technology but found that he COULD see the back of objects. If I remember correctly, he could see the back of objects, but when he went and actually looked
Re: (Score:2)
Re: (Score:2)
Not sure what you mean by that.
Another story that came to mind was "The Sun Dog" by Stephen King.
Re: (Score:2)
Re: (Score:2)
Parallax and stereoscopy won't give you 3d information. There is no depth field. You've got two 2d images from which 3d information can be speculated, but no 3d information.
What does "3D information can be speculated but no 3D information" even mean? The information is either there or it's not. Of course it only gives you information on what both cameras see, which in some cases might even be all the 3D information you can get from the scene (picture shooting a "bumpy" wall, or really anything else that you can see in its entirety from some point), but it's a bunch of information you can extract "3D information" from.
How well does this work with faces? (Score:1, Interesting)
I wonder how well this works with faces. If it works well, it could be an easy way to create 3D head busts for "icons" in your contact list.
Re: (Score:1)
Eyetronics (http://www.eyetronics.com/) has a similar technology (they flash different patterns, and compile the 3D image from that). It's the company that does the 3D scans for a lot of movies and games these days. I saw a live demonstration of that once, where people could just go sit in a picture booth and have their face photographed into a 3D file. It works really fast, and the result is OK (as long as the system is synchronized correctly).
They also did a presentation where they did say that they wer
The Small Print (Score:1, Funny)
Caution: Do not use camera, flash or not, around minors, some Asians, some tribes of Africa and South America, or anyone under the protection of the United States federal government. Use of the camera in any of these situations can result in physical harm or jail time.
The differences with having Flash in photos (Score:5, Funny)
"shooting two pictures with a digital camera -- one with flash and one without. "
This difference has already been well-expressed across the internet for years. [imageshack.us]
Not true 3D (Score:2)
But still could be good for quick and dirty bumpmaps.
3D geometry (Score:2)
This actually isn't all that different from some methods I've seen for generating 3D geometry of a subject using cameras and lighting. One method in particular uses cameras mounted in strategic locations around the subject while a DLP projector rapidly displays a series of light and dark line patterns across the subject's surface, then shoots photos of the lines.
Not quite as cool as a 3D scanner using lasers, but it seems to be easier on subjects like humans or animals that tend to move a lot.
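The structured-light scheme described above boils down to giving every projector column a unique on/off signature across the pattern sequence. A minimal decoder sketch, using plain binary stripes rather than whatever pattern set a real scanner uses, with the final triangulation step omitted:

```python
def decode_stripes(frames):
    """Each frame is a list of per-pixel booleans (lit or not) under one
    binary stripe pattern. Reading the bits across frames recovers which
    projector column illuminated each pixel; triangulating that column
    against the camera ray would then yield depth (not shown here)."""
    n_pixels = len(frames[0])
    codes = []
    for p in range(n_pixels):
        code = 0
        for frame in frames:
            code = (code << 1) | (1 if frame[p] else 0)
        codes.append(code)
    return codes

# Two patterns are enough to distinguish four projector columns:
frames = [[False, False, True, True],   # coarse stripe
          [False, True, False, True]]   # fine stripe
print(decode_stripes(frames))  # -> [0, 1, 2, 3]
```

N patterns distinguish 2^N columns, which is why a handful of projected frames suffices even at high resolution, and why the method tolerates moving subjects better than a slow laser sweep.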
They could have been making HDR (Score:2)
They could have been making a panoramic [wikipedia.org] high dynamic range [wikipedia.org] image. From the wiki: "Probably the first practical application of HDRI was by the movie industry in late 1980s and, in 1985"