Graphics Software

Photosynth Team Does It Again

STFS points us to an update to the Photosynth stories we've already run. You might remember the amazing photo tourism demos. This new version kicks things up several notches, adding paths and color correction to transition more smoothly between photos taken in different lighting conditions. As before, this stuff is worth your time. Check it out.
This discussion has been archived. No new comments can be posted.

  • And THIS is why I tend to take huge numbers of photos and never delete any... Technology like this will take care of easy geotagging, dates are already in the EXIF data, and people can be tagged with face recognition soon enough.

    That done, I'll be able to navigate my tens of thousands of photos by asking for things like photos taken of the kids while outside at the cottage when they were 3 years old.

    Also, remember to backup! :)

  • by BitterOldGUy ( 1330491 ) on Thursday August 14, 2008 @09:19AM (#24597987)
    It looks like taking a video would be easier. That way, you wouldn't have to spend time stringing all the stills together - if I understood correctly.
  • Re:Wow (Score:5, Insightful)

    by ttapper04 ( 955370 ) on Thursday August 14, 2008 @09:22AM (#24598017) Journal
    Microsoft had better not repeat Google's slight miscalculation. The credits given to the Flickr accounts suggest that users must have had to opt in, unlike Street View. This Photosynth system would be incredibly powerful if it used all Flickr images or crawled the web. People are clearly visible everywhere in this system, and some may become upset.
  • by Anonymous Coward on Thursday August 14, 2008 @09:34AM (#24598169)
    If this was an OSS project, your post would have been rated "flamebait".
  • by Minwee ( 522556 ) <dcr@neverwhen.org> on Thursday August 14, 2008 @09:40AM (#24598229) Homepage
    This is described in their SIGGRAPH paper, which was prominently linked from the article.

    It's a bit dense and involves some cross references, but here's a part which may answer some of your questions. For more detail you could always read the paper yourself.

    We use our previously developed structure from motion system to recover the camera parameters for each photograph along with a sparse point cloud [Snavely et al. 2006]. The system detects SIFT features in each of the input photos [Lowe 2004], matches features between all pairs of photos, and finally uses the matches to recover the camera positions, orientations, and focal lengths, along with a sparse set of 3D points. For efficiency, we run this system on a subset of the photos for each collection, then use pose estimation techniques to register the remainder of the photos. A more principled approach to reconstructing large image sets is described in [Snavely et al. 2008].
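    The pairwise matching step the paper mentions can be sketched in a few lines. This is a toy stand-in, not the paper's actual code: plain NumPy, made-up descriptor arrays, and Lowe's ratio test for rejecting ambiguous matches.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors between two images using Lowe's ratio test.

    desc_a: (N, D) array of descriptors from image A
    desc_b: (M, D) array of descriptors from image B (M >= 2)
    Returns (i, j) index pairs whose nearest neighbor in B is clearly
    closer than the second-nearest (Lowe 2004 rejects the rest as ambiguous).
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every descriptor in B
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((int(i), int(best)))
    return matches
```

    In the real pipeline these matches then feed bundle adjustment, which solves for the camera poses and the sparse 3D points simultaneously.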

  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Thursday August 14, 2008 @09:46AM (#24598305) Homepage

    It looks like taking a video would be easier.

    Depending on what you are trying to do... My original point was that technology like this will make it possible to navigate the swamps of data we're accumulating.

    I like having a lot of family photos, but traditional albums won't do when we have literally thousands of them. Stuff like this can make it possible to easily call up photos based on suitable criteria. Like I said, we need other parts too, like face recognition, but summing it all up we'll eventually have a feasible way to navigate a huge amount of photographic data.

  • by Tim C ( 15259 ) on Thursday August 14, 2008 @10:01AM (#24598513)

    Yes, and this is nothing like that. That was apparently creating additional information that simply wasn't in the original photo. This is using a whole bunch of photos of the same scene, taken at different times, angles, etc., to automatically build up a 3D model. Nothing is being enhanced; you're "merely" being shown the most appropriate, pre-existing photo based on your location and view direction in the generated 3D model.

    Damn cool tech, but not the same as that used in Blade Runner (or CSI, or any other "enhance this photo to make that illegible squiggle that's beyond the resolution of the photo readable" plot device)

  • by dave420 ( 699308 ) on Thursday August 14, 2008 @10:23AM (#24598853)
    Microsoft have turned Photo Tourism into something incredibly more powerful. But don't let that get you off your high horse. Some of us don't play the "them" and "us" game.
  • Re:Wow (Score:5, Insightful)

    by YrWrstNtmr ( 564987 ) on Thursday August 14, 2008 @11:22AM (#24599821)
    Huh? Why not get out there, meet people from those countries, eat the food they eat, get drunk with them, and actually experience the world?

    Of course! Because every family has the time and resources to visit every possible interesting place on the planet.
  • Re:Wow (Score:5, Insightful)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday August 14, 2008 @11:41AM (#24600143) Journal

    Huh? Why not get out there, meet people from those countries, eat the food they eat, get drunk with them, and actually experience the world?

    Ummm, because we can't afford it? Taking six people to Greece would consume our family vacation budget for 3-4 years. I'd rather stay closer to home and spend more time with my kids.

  • 4D support? (Score:3, Insightful)

    by Roger W Moore ( 538166 ) on Thursday August 14, 2008 @01:20PM (#24601799) Journal

    That done, I'll be able to navigate my tens of thousands of photos by asking for things like photos taken of the kids while outside at the cottage when they were 3 years old.

    That raises an interesting concept. Could they do a 4D orbit? For example, identify pictures of your kids at different ages, and then you could watch them grow up in front of your eyes. Or watch how a city street changes over a decade? That would be really interesting... shame it will probably only ever be available for Windows.

  • Re:Wow (Score:3, Insightful)

    by loraksus ( 171574 ) on Thursday August 14, 2008 @03:58PM (#24604707) Homepage

    There are other features that I don't see how they're getting, such as the zones where photos were shot from. That takes an awful lot of extrapolation.

    I suspect it isn't as complex as you think - EXIF tags usually include focus distance and focal length. Also included is the sensor size or camera model, which will tell you the effective focal length.
    When you combine that info with the apparent size of the object in the photo (i.e. the Statue of Liberty is x percent of the frame high), you should be able to get a reasonable estimate of where the picture was shot from.
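    A back-of-the-envelope version of that estimate, assuming the simple pinhole camera model (the function and the numbers below are illustrative, not anything Photosynth is known to do):

```python
def subject_distance(real_height_m, focal_length_mm, sensor_height_mm, frame_fraction):
    """Estimate camera-to-subject distance with the pinhole model.

    frame_fraction: how much of the frame height the subject spans (0..1].
    The subject's image on the sensor is frame_fraction * sensor_height_mm tall;
    similar triangles then give distance = real_height * focal / image_height.
    """
    image_height_mm = frame_fraction * sensor_height_mm
    return real_height_m * focal_length_mm / image_height_mm

# Statue of Liberty figure (~46 m) filling half the frame on a full-frame
# sensor (24 mm tall) shot at 100 mm: 46 * 100 / (0.5 * 24) ~ 383 m away.
```

    That pins down a circle of possible positions around the subject; the feature matching against other photos is what resolves the bearing.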

    For relatively isolated objects (like the Statue of Liberty), I'd assume you'd need a single shot with a known location to act as an anchor (possible with cameras that support GPS) - but I wouldn't be surprised if a mathematician could get around that. Perhaps the angle of the sun at the time/date (EXIF again), but I'd assume that would take significant processing and have all sorts of things that could screw it up.

    I know DxO can analyze a JPEG or raw file to get the model of a lens - presumably it's stored in EXIF somewhere - and lens distortion can be corrected with an x% pincushion adjustment to the photo based on known values - DxO has a fairly huge database and I wouldn't be surprised if they were using some of their tech.
    Either that, or the guys here fudged it by only using pictures from a specific make/model of camera.
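    For what it's worth, the usual one-parameter radial model behind those pincushion/barrel corrections looks something like this (a sketch only; tools like DxO fit several more parameters per lens, and sign conventions vary between packages):

```python
def correct_radial(x, y, k1, cx=0.0, cy=0.0):
    """Apply a one-parameter radial distortion correction to a point.

    (x, y) are point coordinates, (cx, cy) the image center; each point is
    pushed away from (k1 > 0) or pulled toward (k1 < 0) the center by a
    factor that grows with the squared radius: r' = r * (1 + k1 * r^2).
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    return cx + dx * scale, cy + dy * scale
```

    Points at the center are untouched and the correction grows toward the corners, which matches how pincushion/barrel distortion actually behaves.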

    I suspect they would use the distance estimation in EXIF to eliminate the statues, etc. - although I'm guessing a fair number of statues would be eliminated anyway because they aren't scaled properly. Autofocus distance estimates can be off, but not usually by hundreds of meters.

    As for cross-photo white balance / color gimpiness across the frame, that can be relatively easily corrected - Autostitch (free) and Autopano Pro (the "pro version" of Autostitch) do it, and they've licensed their stuff to a bunch of other companies.
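    A crude version of that cross-photo color correction - just matching per-channel means, far simpler than what Autostitch or Photosynth actually do - might look like:

```python
import numpy as np

def match_color_means(src, ref):
    """Align one photo's color balance to another by scaling each RGB
    channel so its mean matches the reference image's channel mean.

    src, ref: float arrays of shape (H, W, 3) with values in [0, 1].
    Returns the gain-adjusted copy of src, clipped back into [0, 1].
    """
    gains = ref.mean(axis=(0, 1)) / (src.mean(axis=(0, 1)) + 1e-8)
    return np.clip(src * gains, 0.0, 1.0)
```

    Real stitchers do this in overlap regions only and blend the transition, but per-channel gains are the core idea.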

    Also... keep in mind they really aren't displaying high-res imagery - so they can estimate/tweak a bit. I don't know if that will scale, or if it does, what the processing requirements will be, but it's probably not a huge concern.

    It's clearly not as simple as they make it look (they really only used a small number of image sets), but I don't think it's a Photoshop job.
    The question is whether it will scale.

  • Keep the Faith (Score:2, Insightful)

    by zuperduperman ( 1206922 ) on Thursday August 14, 2008 @07:37PM (#24608287)

    Ok folks, don't worry!

    Just keep chanting the mantra that Microsoft never innovates anything and everything will be ok.

    I'm sure there will be a linux port of this soon and then we can all go back to complaining about how Microsoft copies everything from Apple.
