Photosynth Team Does It Again

STFS found an update to the Photosynth stories we've already run. You might remember the amazing photo tourism demos. Well, this new version kicks things up several notches, adding paths and color correction to transition more smoothly between photos taken in different lighting conditions. As before, this stuff is worth your time. Check it out.
  • color (Score:4, Interesting)

    by catbertscousin ( 770186 ) on Thursday August 14, 2008 @09:06AM (#24597867)
    The color matching section was quite impressive given the wide variety of lighting and color temperatures in the starting photos; if they wrote their own software to do that, it certainly counts as R&D.
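
    The team's actual color-correction method isn't described here, but per-channel histogram matching is a common baseline for pulling photos with different lighting and color temperature toward a shared reference. A minimal numpy sketch under that assumption (the function name is mine, not Photosynth's, and the real system presumably blends corrections smoothly along the transition path):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap each channel of `source` so its histogram matches
    `reference`. Both are HxWx3 uint8 arrays; a crude stand-in for
    the smooth correction shown in the demo."""
    result = np.empty_like(source)
    for c in range(3):
        src = source[..., c].ravel()
        ref = reference[..., c].ravel()
        _, s_idx, s_counts = np.unique(
            src, return_inverse=True, return_counts=True)
        r_vals, r_counts = np.unique(ref, return_counts=True)
        s_cdf = np.cumsum(s_counts) / src.size
        r_cdf = np.cumsum(r_counts) / ref.size
        # Send each source quantile to the reference value at the
        # same quantile.
        mapped = np.interp(s_cdf, r_cdf, r_vals)
        result[..., c] = mapped[s_idx].reshape(source.shape[:2]).astype(np.uint8)
    return result
```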
  • by pz ( 113803 ) on Thursday August 14, 2008 @09:07AM (#24597883) Journal

    Very cool stuff! Does anyone know (are any of the project team members here?) how much foreknowledge of the object being orbited is required?

    For example, is a 3D wireframe model necessary?

    Is a filtering of the photos necessary to ensure that they are all of the same subject?

    What level of pre-processing is required on the photos, either automated, or manual?

    How well does the system fare when the object being photographed isn't absolutely static? A drawbridge, for example, changes shape. Or Niagara Falls. Or a flag. Or a single person.

    Anyone know?

  • Re:Wow (Score:4, Interesting)

    by ErroneousBee ( 611028 ) <neil:neilhancock DOT co DOT uk> on Thursday August 14, 2008 @09:16AM (#24597969) Homepage

    Seems a bit simplistic to me. I'd have thought they'd turn the photos into a virtual world, using the colour-corrected photos to create wireframes and bumpmaps, and then be able to apply whatever lighting and other effects they like to the world. That would give you much more freedom, and let you use other methods (e.g. LIDAR) to populate the database.

    Creating 3D models would also allow you to remove transient objects (people), or add objects to the scene, e.g. what David would look like on the empty plinth in Trafalgar Square.
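
    Removing transient objects from registered photos is a classic trick worth illustrating: given enough aligned shots, a per-pixel median keeps the static scene and drops passers-by. A minimal sketch, assuming the photos are already aligned to a common viewpoint (the hard part, which a system like Photosynth would have to solve first):

```python
import numpy as np

def remove_transients(aligned_photos):
    """Per-pixel median over a stack of pre-aligned photos.

    aligned_photos: list of HxWx3 uint8 arrays of the same scene.
    Anything present in fewer than half the frames (tourists,
    pigeons, parked cars) simply falls out of the median.
    """
    stack = np.stack(aligned_photos).astype(np.float32)
    return np.median(stack, axis=0).astype(np.uint8)
```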

    I suspect the reason they've done it this way is more about the patents than practical application.

  • Security (Score:5, Interesting)

    by robvangelder ( 472838 ) on Thursday August 14, 2008 @09:21AM (#24598013)

    I was on an ocean cruise recently, and a little girl was lost... Ship's Security were looking for her.

    I later heard she had been found, and as I walked back to my cabin I thought of this software.

    Every corridor of the ship has cameras.

    The parent could recall the last time she was with the child. An operator could then fly through a 3d map of the ship, from that point in time, with recorded video overlaid, following the girl in fast-forward until the current time was reached.

    The flying would be like spectator mode in first-person-shooter computer games.
    An observer could even be automatically tethered to the missing person.
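
    A hypothetical sketch of that tethered-observer playback, with all names mine; re-identifying the child across cameras and rendering recorded video onto a 3D ship model are assumed solved elsewhere:

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    camera_id: str   # which corridor camera saw her
    start: float     # seconds since midnight
    end: float

def playback_segments(sightings, last_seen, now):
    """Return the ordered (camera, start, end) clips an operator
    would fly through, fast-forwarding from the last confirmed
    sighting up to the present moment."""
    clips = [s for s in sightings if s.end > last_seen and s.start < now]
    clips.sort(key=lambda s: s.start)
    return [(s.camera_id, max(s.start, last_seen), min(s.end, now))
            for s in clips]
```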

  • Re:fascinating (Score:2, Interesting)

    by Tryfen ( 216209 ) on Thursday August 14, 2008 @09:25AM (#24598041) Homepage

    Read The Light of Other Days [wikipedia.org] by Arthur C. Clarke and Stephen Baxter.

  • by MobyDisk ( 75490 ) on Thursday August 14, 2008 @09:28AM (#24598075) Homepage

    I've seen some of these articles about Photosynth, and there seems to be a lot of hype. But... I don't get it.

    I see that Photosynth can glue a series of images together so that you can zoom into and move around a scene through a seizure-inducing flicker of correlated viewpoints. This group seems to have made a virtual walk-through using it. But I am unclear on:
    1) What is the point?
    2) What is the breakthrough?

    As for #1, Photosynth is ugly. I would much rather have a few good-quality, same-lighting photos to look at than have my eyes torn out trying to make sense of this. So unless my brain works differently from everyone else's, the point is not an aesthetic one; it must be a technological one. Is it the promise that we could one day use this to combine amateur images into a real 3D scene? Why would that matter when doing it with professional images is easy and looks much better?

    As for #2, without reading the entire paper I'm unclear how much of this was done automatically. If someone manually entered the GPS coordinates and direction of these photos and then wrote a program to glue them together, I see a lot of hard work but no science. If it required creating a rough 3D layout and the system extracted the camera positions programmatically, that is impressive. If it was able to build this from nothing but the images themselves, then holy moly, that's amazing. But I can't tell from the video which of these it is.

    Can someone explain this to me and why I should be interested?

  • Re:color (Score:5, Interesting)

    by Gewalt ( 1200451 ) on Thursday August 14, 2008 @09:35AM (#24598173)

    The color matching section was quite impressive given the wide variety of lighting and color temp in the starting photos; if they wrote their own software to do that, it sure counts as R/D.

    AFAIK, Adobe created the technology first, in response to the automation needs of the pornography industry. It seriously helped a lot of "studios" color-match an entire set just by having a wizard scan the pics and correct them all.

  • Re:Wow (Score:2, Interesting)

    by Anonymous Coward on Thursday August 14, 2008 @09:51AM (#24598383)
    I imagine that's the ultimate goal. But what they have now is still amazingly impressive...

    The next step toward that goal would be making it automatically determine what's part of the structure and what's 'in the way' (a tourist, a security guard, a pigeon...). It would be annoying to have a tourist baked into the 3-d model just because the set happened to include a ton of pictures of them posing with the object you want modeled.

    Still, as it stands now, it's an amazing way to experience a historical landmark you might not be able to afford to visit. Imagine showing your kids the Parthenon, the Sphinx, the Great Pyramid, the Statue of Liberty, and the Kremlin. Not static pictures, but a 3-d, photorealistic experience (because it's populated by photographs, natch). It's the kind of thing that, if I'd seen it in a movie 10 years ago, I'd have laughed at for being stupid, because computers can't do THAT...
  • by replicant108 ( 690832 ) on Thursday August 14, 2008 @09:52AM (#24598401) Journal

    There was some discussion recently about the possibility of building an open source photosynth - and creating an 'open voxel space' map of the planet.

    Anyone know if there's been any progress on this?

    http://lists.burri.to/pipermail/geowanking/2008-June/005373.html [burri.to]

  • by dave420 ( 699308 ) on Thursday August 14, 2008 @10:11AM (#24598645)
    You just give it the photos; it figures out the rest. It works by stitching them together in 3D, so if a photo of one part of the subject isn't overlapped by at least one other photo, it won't be part of the finished "model". If it can match up most of a scene in an image, the image can still be used. If you download the old demo, you can see the Yosemite set, which shows what happens with movement (hikers climbing a mountain). I'm sure it'll only get better.

    Another great example in the old demo is where they simply searched Flickr for "Notre Dame" and then constructed the entire cathedral. It picked up a photo of a poster in someone's house and seamlessly integrated it into the model: it recognised what the poster showed and where on the cathedral it belonged, and placed the image exactly where it should be in the finished "model".

    Of course, this is just stuff I've gleaned from watching the demo videos, using the demo, and reading as much as I can about it, so I might be wrong on some of it; that was just the impression I got. If I'm far off, I'd appreciate being put right, as this technology is nothing short of stunning.
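
    For the curious: Photosynth grew out of the Photo Tourism work, which finds local feature (SIFT) matches between image pairs and then recovers camera positions with structure from motion. A rough sketch of the first step, deciding whether two photos overlap, using OpenCV; this is not Photosynth's code, and the thresholds are arbitrary:

```python
import cv2

def photos_overlap(path_a, path_b, min_matches=30):
    """Decide whether two photos see the same part of a scene.

    Detects SIFT keypoints, matches descriptors, and applies Lowe's
    ratio test; enough surviving matches suggests real overlap that
    a structure-from-motion solver could later turn into 3D camera
    positions.
    """
    sift = cv2.SIFT_create()
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    _, desc_a = sift.detectAndCompute(img_a, None)
    _, desc_b = sift.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return False  # one image has no usable features
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_matches
```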
  • Re:color (Score:5, Interesting)

    by Gewalt ( 1200451 ) on Thursday August 14, 2008 @11:26AM (#24599899)
    Actually, I was a rabid Adobe forum troll when some self-declared porn studios started clamoring for the feature. The other people it would have been useful for actually dismissed it; they didn't seem to want that step of their workflow automated. But once the feature was added, everyone seemed to appreciate it. Of course, Adobe is not normally one to listen to and assimilate feedback, especially not from their forums, so it could have just been coincidence.
  • Re:Wow (Score:5, Interesting)

    by JWSmythe ( 446288 ) * <jwsmytheNO@SPAMjwsmythe.com> on Thursday August 14, 2008 @01:30PM (#24601949) Homepage Journal

        I thought one of the previous stories said it would do that.

        What I was curious about is: how? A distinctive photograph could be associated. But even with one of the examples in the display, the Statue of Liberty, if this is automated, how would it distinguish the real Statue of Liberty from, say, a souvenir sitting on my coffee table? Basing the match on size and distinctive shapes, it would match either one. Basing it on those plus the background objects is impossible: the system already has to tolerate changes in the foreground (people, or extra objects like light poles that are absent from otherwise very similar views), and background objects like clouds come and go, leaving entirely different images.

        For not quite as distinguishable objects, it would be a lot harder. Say you used the Statue of Liberty as your starting point. If you were to travel into Manhattan, there are many very similar shapes for buildings and storefronts. Sure, unique buildings would be obvious, but for every obvious building, there are dozens of almost identical buildings.

        Even then, you would have to know the city. Similar architecture can show up in a variety of cities, and be close enough to match. Cameras may record timestamps embedded in the original image (assuming unedited photos are added to the system), but there is nothing useful like geographic coordinates included.

        All the photos were shot from the same perspective, as if they were taken by one or more photographers of about the same height. There should have been a more significant difference between the view of, say, a 4' tall child and that of a 6'8" tall man. I don't claim to be a "great" photographer, but I'm pretty good. One of the things that separates someone who takes snapshots from someone who takes photographs is composing the photograph to illustrate the view, and that frequently involves changing height and angle. Maybe you want to lie on the ground for one shot and climb a ladder for another.

        I took some photographs at the World Trade Center on 9/9/2001. Those photographs aren't just of the skyline, although I did take some snapshots at the time too. Some are composed looking up towards the tops of the buildings from the ground, others looking down while leaning on the glass of an observation-deck window. Photography isn't documenting a first-person view; it's beautifying and romanticizing a view, without necessarily changing anything in the composition of the photograph.

        There are other features I don't see how they're getting, such as the zones the photos were shot from. That takes an awful lot of extrapolation. What's the difference between a photographer 10 feet away and a photographer 200 feet away with a good zoom lens? Almost nothing, except maybe a little focal distortion at the edges of the photo, and that varies with the quality of the camera and lens anyway.
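
        There is actually a geometric answer to the 10-feet-versus-200-feet question: the two shots can frame the subject identically, but the parallax between near and far parts of the scene differs enormously, and that is what a reconstruction solver keys on. A toy pinhole-camera calculation with invented numbers:

```python
# Toy pinhole model showing why a close wide shot and a distant zoom
# shot are distinguishable even with identical framing. All numbers
# are made up for illustration.

def image_offset(lateral, depth, camera_dist, focal):
    """Projected horizontal position of a point `lateral` units
    off-axis, `depth` units behind the subject plane, seen by a
    pinhole camera `camera_dist` away with focal length `focal`."""
    return focal * lateral / (camera_dist + depth)

near = dict(camera_dist=3.0, focal=3.0)    # ~10 ft away, wide lens
far = dict(camera_dist=60.0, focal=60.0)   # ~200 ft away, long zoom

for name, cam in ("near", near), ("far", far):
    fg = image_offset(0.5, 0.0, **cam)  # edge of the subject
    bg = image_offset(0.5, 2.0, **cam)  # a point 2 units behind it
    print(f"{name}: foreground {fg:.3f}, background {bg:.3f}, "
          f"parallax {fg - bg:.3f}")
# near: parallax 0.200; far: parallax 0.016. The subject is framed
# identically in both, but the background shifts very differently,
# and that difference is what lets a solver place the camera.
```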

        I did a little project years ago, sitting in the hills just under the Hollywood sign. We were on top of a hill, so I had a good panoramic view. I tried to keep the horizon centered and shot frames the whole way around. When I stitched them together in Gimp, I noticed that each frame had variations in its color. It wasn't because of AWB; the camera (good for its time) had some weird variance, so there was a difference in color from the left side of the frame to the right. Two shots from the same camera at the same settings came out significantly different.
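
        That per-frame color drift is what panorama stitchers correct with gain compensation: estimate a per-frame color multiplier from the overlapping strips so adjacent frames agree. A crude chain-style sketch (real stitchers solve all the gains jointly; the fixed overlap width here is an assumption):

```python
import numpy as np

def gain_compensate(frames, overlap=64):
    """Scale each successive frame so the mean color of its left
    overlap strip matches the previous frame's right strip.
    frames: list of HxWx3 uint8 arrays in left-to-right order."""
    out = [frames[0].astype(np.float32)]
    for frame in frames[1:]:
        frame = frame.astype(np.float32)
        prev_strip = out[-1][:, -overlap:]   # right edge of previous
        cur_strip = frame[:, :overlap]       # left edge of current
        gain = (prev_strip.mean(axis=(0, 1))
                / (cur_strip.mean(axis=(0, 1)) + 1e-6))
        out.append(np.clip(frame * gain, 0, 255))
    return [f.astype(np.uint8) for f in out]
```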

        I would be willing to suggest that the demo shown isn't a demonstration of a functional piece of software. It is a good example of what can be generated with a computer. I could do the same thing in Gimp or Photoshop. If my job let me play like this for a few weeks, I could have made a better example of vaporware.
