Photosynth Team Does It Again
STFS found an update to the Photosynth stories that we already ran. You might remember the amazing Photo Tourism demos. Well, this new version kicks things up several notches, with paths and color correction that smooth the transitions between photos taken in different lighting conditions. As before, this stuff is worth your time. Check it out.
fascinating (Score:2, Informative)
Science fiction and VR have primed me to believe that someday we would all be walking around some imaginary digital world (oh wait, WoW anyone?), but this is "virtualization" of the real world. Like Google Street View on crack. I am simultaneously in awe of the technological achievement and embarrassed that my life in computers hasn't yet created anything so cool.
I, for one, welcome our new PhotoSynth overlords.
Video (Score:5, Informative)
Obligatory link to the youtube video [youtube.com] (not a rickroll, I promise!)
Thanks, Network Mirror!
Re:I'm confused by all this (Score:5, Informative)
It needs neither input of coordinates nor a rough 3D layout. It generates its own 3D model by analyzing the photographs programmatically; you don't even need to tell the program they were taken in the same area. The photographs are then automatically mapped onto the generated 3D model, and finally it lets you move freely through the generated 3D world, selecting the best photo for your current viewpoint while applying perspective remapping, color correction, and lens correction.
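To make the first step concrete, here is a toy sketch of the matching stage (not Photosynth's actual code; the real system uses SIFT descriptors and bundle adjustment, and the 4-D descriptors below are invented): each feature in one photo is paired with its nearest neighbour in the other photo, and a pair is kept only when the best match is clearly closer than the runner-up (Lowe's ratio test).

```python
# Toy descriptor matching between two photos. Real feature descriptors
# are high-dimensional (e.g. 128-D SIFT); these 4-D vectors are made up.

def dist(a, b):
    # Euclidean distance between two descriptors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_descriptors(desc_a, desc_b, ratio=0.7):
    """Return (i, j) index pairs where desc_a[i] clearly matches desc_b[j]."""
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Ratio test: accept only if the best match is much closer
        # than the second-best candidate.
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches

photo1 = [(0.0, 1.0, 0.0, 2.0), (5.0, 5.0, 5.0, 5.0), (9.0, 0.0, 1.0, 0.0)]
photo2 = [(9.1, 0.0, 1.1, 0.0), (0.1, 1.0, 0.0, 2.1), (3.0, 9.0, 1.0, 9.0)]
print(match_descriptors(photo1, photo2))  # [(0, 1), (2, 0)]
```

The middle descriptor of `photo1` has no clear counterpart in `photo2`, so the ratio test drops it instead of forcing a bad match, which is what keeps the later 3D reconstruction from being polluted by false correspondences.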
Re:I'm confused by all this (Score:5, Informative)
From what I took away from the original demo, they were doing everything algorithmically. The original demo showed a wireframe of Notre Dame generated entirely from amateur pictures, then overlaid with those same pictures to give it texture. So yes, it is quite impressive. I'd be surprised if Google weren't doing something similar for Google Maps, though.
Re:No sense to limit how many photos you take... (Score:5, Informative)
So, those are the ones I can think of off the top of my head.
Re: Exposure time (Score:4, Informative)
NTSC is worse than you described; you have two 1/60th second exposures interlaced together. Utterly worthless for still frames.
Once progressive HD video cameras become cheap, then video will suck slightly less for the average family archive.
Re:No sense to limit how many photos you take... (Score:2, Informative)
Image processing is getting better, and I am definitely keen on using software to process video to synthetically generate high-definition stills. I have limited time and money to spend chasing the perfect photo.
Re:Wow (Score:3, Informative)
There are other features that I don't see how they're getting, such as the zones the photos were shot from. That takes an awful lot of extrapolation. What's the difference between a photographer 10 feet away and a photographer 200 feet away with a good zoom lens? Almost nothing, except maybe a little focal distortion at the edge of the photo. That varies with the quality of the camera and lens anyway.
Perspective changes a lot based on where the camera is; a big zoom lens does nothing to change the perspective, it just makes the image larger.
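A quick numeric check of that point, using the pinhole model u = f·x/z (all numbers below are invented for illustration): moving the camera changes the relative layout of objects at different depths, while a longer focal length only scales the whole image uniformly.

```python
# Pinhole projection: u = f * x / z is the horizontal image coordinate of
# a point x across and z away from the camera. Numbers are made up.

def project(f, x, z):
    return f * x / z

# Two statues 1 ft apart sideways, one 10 ft behind the other, lined up
# with a camera 10 ft from the front statue.
close_gap = project(50, 2.0, 20.0) - project(50, 1.0, 10.0)

# Back the camera up 190 ft and zoom in 20x so the front statue stays the
# same size in the frame; the statues no longer line up (parallax).
far_gap = project(1000, 2.0, 210.0) - project(1000, 1.0, 200.0)

print(close_gap)          # 0.0  -> the statues overlap from up close
print(round(far_gap, 2))  # 4.52 -> clearly separated from far away
```

That depth-dependent shift is exactly the cue that lets the software tell a nearby camera from a distant one with a zoom lens.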
Their process finds machine-recognisable points in each photo, then looks for matching points between photos. Once you know that two photos are of the same subject, you can use the separation between these known points to work out the relative viewing position of each camera. It only takes about 4-5 common points on different planes to pinpoint where each camera is relative to the other cameras. I can visualise how this process could be completely automated.
At the end of the process they have a 3D model of where all these identifiable points are relative to each other, and they know where to project the plane of each photo within that model.
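For the curious, here's a minimal sketch of that last step under simplified assumptions (camera positions and ray directions already known, pure Python, not Photosynth's actual code): each matched image point defines a ray out of its camera, and the 3D position of the point is recovered by intersecting the two rays with the classic midpoint method.

```python
# Triangulate one 3D point from two cameras whose positions and viewing
# rays toward the point are already known. Real pipelines solve cameras
# and points jointly (bundle adjustment); this is just the geometry.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    """Point closest to both rays o + t*d (midpoint of closest approach)."""
    r = [b - a for a, b in zip(o1, o2)]
    # Minimise |o1 + t1*d1 - (o2 + t2*d2)|^2 over t1, t2: a 2x2 system.
    a11, a12, a22 = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    b1, b2 = dot(d1, r), dot(d2, r)
    det = a11 * a22 - a12 * a12  # zero only for parallel rays
    t1 = (a22 * b1 - a12 * b2) / det
    t2 = (a12 * b1 - a11 * b2) / det
    p1 = [o + t1 * d for o, d in zip(o1, d1)]  # closest point on ray 1
    p2 = [o + t2 * d for o, d in zip(o2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras 4 ft apart, both looking at a point 5 ft ahead of centre.
print(triangulate((-2, 0, 0), (2, 0, 5), (2, 0, 0), (-2, 0, 5)))
```

With noisy real photos the two rays never quite intersect, which is why the midpoint (or a full least-squares solve over all points and cameras) is used rather than an exact intersection.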