2D To 3D Object Manipulation Software Lends Depth to Photographs 76
Iddo Genuth (903542) writes "A group of students from Carnegie Mellon University and the University of California, Berkeley have developed free software that combines regular 2D images with free 3D models of objects to create unbelievable video results. The group of four students created the software (currently for Mac OS X only), which allows users to perform 3D manipulations, such as rotations, translations, scaling, deformation, and 3D copy-paste, on objects in photographs. However, unlike much existing 3D object manipulation software, the team's approach seamlessly reveals hidden parts of objects in photographs and produces plausible shadows and shading."
Re: (Score:2)
or if there were a linux-only porn 3D rendering engine, it would surely bring the Year of the Linux Desktop to pass
Re: (Score:2)
You misuse the language; "whores" by definition charge money.
Re: (Score:2)
looks like "touchA (copyrighted) in my browser". If you can touch her A for free that's not whoring
Carnegie Melloned (Score:4, Funny)
No longer is it Photoshopped, but instead we say it's been Carnegie Melloned.
Re: (Score:2)
Re: (Score:2)
am i missing something? (Score:1)
isn't this just texture mapping onto a 3d model?
Re: (Score:2)
The same way that Avatar was just computer animation, like Toy Story.
Re: (Score:1)
Not really. This is quite a simple process: pick a model, apply a texture to it, manipulate at will. More impressive is code that generates the model from the image itself.
I'm impressed (Score:3)
Re: (Score:1)
More or less, it's an evolution of previous work.
Among other previous work:
Rendering Synthetic Objects into Legacy Photographs (2011)
http://www.youtube.com/watch?v=hmzPWK6FVLo
3-Sweep: Extracting Editable Objects from a Single Photo, SIGGRAPH ASIA 2013
http://www.youtube.com/watch?v=Oie1ZXWceqM
Ugh (Score:2)
Re: (Score:2)
Re: (Score:2)
To make the high for their joy to come out.
Re: (Score:1)
Actually, if you follow strict English rules, it should be "a software" or "softwares" -- the fact that we've nounized "software" doesn't make it right. Kind of like math vs maths -- maths is correct, but US English chooses math instead, as the abbreviation has been nounized.
Re: (Score:3)
Ahh! Making of the understanding for peoples... (Score:3)
...informations to better builds the good!
Bad informations with for the good people so making of the understanding isn't!
Should images even be admissible in court anymore? (Score:4, Interesting)
How can images be admissible in court in our modern technological age of 3D manipulation of 2D images? Sure, they still have visual artifacts (in the video presentation for this technology, for example, when the airplanes are turned into 3D, their propellers are not changed; the same image of a propeller is kept for the 3D model as was in the original 2D picture), but eventually all of these will go away, and it may become impossible to detect that an image in front of you was manipulated at all.
Eventually this will also apply to video footage.
Add the digital augmentation of reality into the mix (Google Glass, etc.) and you can't even rely on recorded information. We know that people are not good at remembering the details of what they saw, but if we cannot be sure of images and video (and obviously audio) either, then this type of data becomes useless in courts. That's an interesting development in itself, never mind the fact that you can now turn a picture into a movie if you want.
Re: (Score:3)
Pictures and video are used in court, but only after someone testifies that they haven't been modified. If the defense argues that they have been modified, then a jury weighs the merits of that claim.
Re: (Score:3)
The problem is that as technique improves, the theory that the photo/video was altered in a way that can't be detected becomes ever more plausible.
It was easy to take the witness's word for it when the alternative would have involved millions of dollars in equipment and would likely have been trivial to detect.
Re: (Score:3)
a jury weighs the merits of that claim
Unfortunately, I wouldn't trust the average juror to weigh a head of lettuce.
A question on this (Score:5, Interesting)
While those results look impressive, in some of the demos where objects are seamlessly moved around, how are they filling in the original background (or what looks like it)? The video largely explains how the model is textured, lit, environment mapped, and rendered with projected shadows, calculated perspective, and depth of field, but I didn't hear much about re-filling the background. I assume they're cloning or intelligently filling texture à la Photoshop, or perhaps in every case where they showed something being animated, it was a new clone of an existing object placed into a new area of the photo?
Re:A question on this (Score:5, Interesting)
I agree there was some trickery there. Since they did not address this at all, I am assuming that the answer is simply that they had to manually paint in the parts of the photos that were revealed when other parts were removed. Having to point that out in the video would take away from the apparent magic, which is probably why they didn't mention it (and that's somewhat disingenuous if you ask me). It's possible that they provide some tool that attempts to automatically fill in the background, and if so it would appear that it was used in some of the examples (such as when the apple, or whatever it was, was moved in the painting: the area that was revealed looked more like the cloudy background than like the table the apple was on). But there's no way that they automatically compute the background for anything that is not on top of a pattern or a more or less flatly shaded surface. I also noticed that in some examples they were merely adding new objects to the scene (such as the NYC taxi cab example); although they started with a scene that looked like the cab was already there and moved it to reveal painted chevrons underneath, it's likely that those chevrons were already in the photo and didn't need to be recreated.
In short: they glossed over that detail and used examples that didn't require explaining it, but it's certainly an issue that a real user would have to address, and it doesn't happen as "magically" as the video makes it appear.
BTW, CMU alum here. Went back to campus for the first time in nearly 20 years earlier this year. My how things have changed. I suppose every college is the same way now, but holy crap it's so much more cushy than it used to be! Guess all that cush keeps the computer science juices flowing ...
Re: (Score:2)
If you've used a recent version of Photoshop, their content-aware fill often does an amazing job at automatically filling in hidden backgrounds [youtube.com].
Re: (Score:1)
Removing objects from images and filling in the missing space with some other content from the rest of the image based on 'awareness' has been available for some time now, it is called 'content awareness [photoshopessentials.com]' in Photoshop and 'Resynth [patdavid.net]' in Gimp.
Re: (Score:2)
I'm downloading the open source software now to test it out. I assume it is very similar to the content-aware fill used in Photoshop.
Re: (Score:2)
Thanks, AC!
For anyone else interested, I found this video on PatchMatch:
http://www.youtube.com/watch?v... [youtube.com]
Not free as in freedom (Score:5, Informative)
ACADEMIC OR NON-PROFIT ORGANIZATION NONCOMMERCIAL RESEARCH USE ONLY
Re: (Score:1)
Though the source code has supposedly been released under GPLv2, according to their website. Confusing.
http://www.cs.cmu.edu/~om3d/co... [cmu.edu]
Re: (Score:2)
Re: (Score:1)
Cool, but... detection? (Score:2)
I would love to know how easy such manipulation is to detect. Is it harder or easier to detect than a Photoshop edit?
At some point, Photoshop-type effects will become undetectable.
Re: (Score:2)
No, it seems they are using inpainting:
We compute a mask for the object pixels, and use this mask to inpaint the background using the PatchMatch algorithm [Barnes et al. 2009]. For complex backgrounds, the user may touch up the background image after inpainting.
Thus, only one image is required.
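The quoted step (mask out the object's pixels, then fill the hole from the surrounding image) can be sketched crudely. This is *not* PatchMatch, which copies whole patches and searches for good matches; it's just a naive nearest-known-neighbor fill over a toy grayscale grid, to show what "inpaint the background using a mask" means in the simplest possible terms:

```python
# Naive hole filling: repeatedly assign masked pixels the average of
# their already-known neighbors until the hole is filled. PatchMatch
# does far better by copying coherent patches, but the interface is
# the same idea: image + mask in, filled image out.

def inpaint(image, mask):
    """image: 2D list of floats; mask: 2D list of bools, True = unknown."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    unknown = {(y, x) for y in range(h) for x in range(w) if mask[y][x]}
    while unknown:
        progress = []
        for (y, x) in unknown:
            vals = [img[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in unknown]
            if vals:
                progress.append(((y, x), sum(vals) / len(vals)))
        if not progress:
            break  # a region with no known border can't be filled
        for (y, x), v in progress:
            img[y][x] = v
            unknown.discard((y, x))
    return img

background = [[1.0] * 5 for _ in range(5)]  # flat toy background
hole = [[False] * 5 for _ in range(5)]
for y in (1, 2, 3):
    for x in (1, 2, 3):
        hole[y][x] = True                   # "object" pixels to remove

filled = inpaint(background, hole)
```

On a flat background this trivially recovers the original; on the "complex backgrounds" the paper mentions, a smear like this is exactly why the user has to touch up the result by hand.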
Re: (Score:2)
Why couldn't the algorithm be content-aware, similar to content-aware fill in Photoshop? That is much more likely, given the scope of the software.
popular topic at SIGGRAPH for last decade (Score:2)
SIGGRAPH next week in Vancouver.
looks fantastic (Score:1)
Watching or attending SIGGRAPH is like watching an Ubisoft conference.
Everything looks amazing on stage, but when you get your hands on it, it's another story altogether.
I'll believe this works when I use it. Until then, I might as well go watch The Lawnmower Man and consider it a documentary.
What sorcery is this?! (Score:2)
Comment removed (Score:3)
"to create unbelievable video results" (Score:2)
Not as impressive as the video makes it look (Score:2)
I've done a little bit of work in a related area, so I skimmed the paper (at the bottom of the first link,) and it's nowhere near as impressive and automagical as the video makes it seem. The user has to provide a mask distinguishing the object they are manipulating from the rest of the image, and then the user also has to provide the 3D model for the object! The model is then smoothed to better fit the original using the mask and the inferred illumination, textured using the image, and then popped out to b
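The "textured using the image" step the parent describes boils down to projecting each model vertex back into the photo through the estimated camera and sampling the pixel there. A minimal sketch, assuming a plain pinhole camera (the focal length and image-center values below are made-up illustrative numbers, not anything from the paper):

```python
# Pinhole projection: map a 3D point in camera space to pixel
# coordinates, then normalize to UV texture coordinates in [0, 1].

def project(point3d, f, cx, cy):
    """Project (x, y, z) in camera space to (u, v) pixel coordinates."""
    x, y, z = point3d
    assert z > 0, "point must be in front of the camera"
    return (f * x / z + cx, f * y / z + cy)

def texture_coord(point3d, f, cx, cy, width, height):
    """Turn the projected pixel position into a normalized UV pair."""
    u, v = project(point3d, f, cx, cy)
    return (u / width, v / height)

# A vertex straight down the optical axis lands at the image center,
# i.e. UV (0.5, 0.5):
uv = texture_coord((0.0, 0.0, 2.0), f=500.0, cx=320.0, cy=240.0,
                   width=640, height=480)
```

Doing this per vertex is exactly the "projecting and baking textures from photographs" exercise a later commenter calls student-level work; the hard parts the paper adds are fitting the model to the mask and inferring the illumination.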
Well now. (Score:2)
The future... (Score:2)
On its surface, it looks like a lot of the results they're getting wouldn't currently be outside the realm of student-level work, such as the simple practice of projecting and baking textures into materials from photographs; the innovation seems to be that they're quickly automating a lot of that into a UI with a fast lighting solution. One of the things I find most rewarding about 3D is that you sometimes get this huge burst of increased