Graphics Software

2D To 3D Object Manipulation Software Lends Depth to Photographs

Iddo Genuth (903542) writes "A group of four students from Carnegie Mellon University and the University of California, Berkeley has developed free software that combines regular 2D images with free 3D models of objects to create unbelievable video results. The software (currently for Mac OS X only) allows users to perform 3D manipulations, such as rotations, translations, scaling, deformation, and 3D copy-paste, on objects in photographs. However, unlike much 3D object-manipulation software, the team's approach seamlessly reveals hidden parts of objects in photographs and produces plausible shadows and shading."
This discussion has been archived. No new comments can be posted.

  • by roman_mir ( 125474 ) on Thursday August 07, 2014 @12:11PM (#47623305) Homepage Journal

    How can images be admissible in court in our modern technological age of 3D manipulation of 2D images? Sure, they still have visual artifacts (in the video presentation for this technology, for example, when the airplanes are turned into 3D their propellers are not changed; the same propeller image from the original 2D picture is kept for the 3D model), but eventually all of these will go away, and it may become impossible to detect that the image in front of you was manipulated at all.

    Eventually this will also apply to video footage.

    Add the digital augmentation of reality into the mix (Google Glass, etc.) and you can't rely even on recorded information. We know that people are not good at remembering the details of what they saw, but if we cannot be sure of images and video (and obviously audio) either, then this type of data becomes useless in courts. That's an interesting development in itself, never mind the fact that you can now turn a picture into a movie if you want.

  • A question on this (Score:5, Interesting)

    by DigitAl56K ( 805623 ) on Thursday August 07, 2014 @12:30PM (#47623457)

    While those results look impressive, in some of the demos where objects are seamlessly moved around, how are they filling in the original background (or what looks like it)? The video largely explains how the model is textured, lit, environment mapped, and rendered with shadow projection, calculated perspective, and depth of field, but I didn't hear much about re-filling the background. I assume they're cloning or intelligently filling texture à la Photoshop, or perhaps in every case where they showed something being animated it was a new clone of an existing object placed into a new area of the photo?

  • by Bryan Ischo ( 893 ) * on Thursday August 07, 2014 @12:45PM (#47623617) Homepage

    I agree there was some trickery there. Since they did not address this at all, I assume the answer is simply that they had to manually paint in the parts of the photos that were revealed when other parts were removed. Having to point that out in the video would take away from the apparent magic, which is probably why they didn't mention it (and that's somewhat disingenuous if you ask me). It's possible that they provide some tool that attempts to automatically fill in the background, and if so it appears it was used in some of the examples (such as when the apple, or whatever it was, was moved in the painting, and the revealed area looked more like the cloudy background than like the table the apple was sitting on), but there's no way they automatically compute the background for anything that is not on top of a pattern or a more or less flatly shaded surface. I also noticed that in some examples they were merely adding new objects to the scene (such as the NYC taxi cab example), and although they started with a scene that looked like the cab was already there and moved it to reveal painted chevrons underneath, it's likely those chevrons were already in the photo and didn't need to be recreated.
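    For the curious: a minimal sketch of the simplest kind of automatic hole filling, plain diffusion inpainting in NumPy. This is purely illustrative and is not the team's actual method (the function name and parameters here are made up for the example), but it shows exactly why automatic fill only works over flat or smoothly shaded regions: the hole is filled by smearing the surrounding pixels inward, so any real structure hidden behind the moved object is lost.

    ```python
    import numpy as np

    def diffusion_inpaint(img, mask, iters=200):
        """Fill masked pixels by repeatedly averaging their 4-neighbours.

        img  : 2-D float array (a grayscale image)
        mask : boolean array, True where the pixel must be reconstructed

        Note: this is a toy diffusion fill, not the paper's algorithm.
        """
        out = img.copy()
        # crude initial guess: the mean of all known (unmasked) pixels
        out[mask] = out[~mask].mean()
        for _ in range(iters):
            # average of the four axis-aligned neighbours (wraps at edges,
            # which is fine as long as the hole is in the interior)
            avg = (np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0) +
                   np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 4.0
            # only the unknown pixels are updated; known pixels stay fixed
            out[mask] = avg[mask]
        return out

    # On a flat 0.5-gray background the hole fills in perfectly...
    img = np.full((8, 8), 0.5)
    img[3:5, 3:5] = 0.0                  # "object" we removed
    mask = np.zeros((8, 8), dtype=bool)
    mask[3:5, 3:5] = True
    filled = diffusion_inpaint(img, mask)
    ```

    ...but on anything textured, the same smearing produces the telltale blur, which is consistent with the cloudy-background artifact in the apple example above.
    
    
    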

    In short: they glossed over that detail and used examples that didn't require explaining it, but it's certainly an issue a real user would have to address, and it doesn't happen as "magically" as the video would suggest.

    BTW, CMU alum here. Went back to campus for the first time in nearly 20 years earlier this year. My how things have changed. I suppose every college is the same way now, but holy crap it's so much more cushy than it used to be! Guess all that cush keeps the computer science juices flowing ...
