Adobe's Experimental AI Tool Can Tell If Something's Been Photoshopped (theinquirer.net) 65
Adobe and UC Berkeley researchers are working on a tool that can tell if a photo has been manipulated in Adobe Photoshop. The goal is to cut down on fake content and "to increase trust and authority in digital media." TheINQUIRER reports: A bunch of images were created using Photoshop's "Face Aware Liquify" tool and mixed with a set of expertly human-doctored photographs. When humans were shown the original image and the doctored version, they spotted the fakes 53 percent of the time, but the artificial intelligence hit 99 percent. That's pretty good -- turning a coin-toss guess into near certainty -- but the AI isn't quite done showboating. As well as being able to point out what areas might have been changed, the AI can also predict what methods were used to change the image.
Better still, it'll have a stab at undoing the vandalism, and returning the image to its former untampered glory. Not perfectly, but well enough to impress the researchers all the same: it's like having an undo button on someone else's work, and who hasn't always wanted one of those? "It might sound impossible because there are so many variations of facial geometry possible," said Professor Alexei A. Efros of UC Berkeley. "But, in this case, because deep learning can look at a combination of low-level image data, such as warping artifacts, as well as higher level cues such as layout, it seems to work."
Re: (Score:2)
How exactly do you think Photoshop would know this? It's fairly easy to wipe any metadata that Photoshop may embed into an image. So AI is required for the majority of content out in the wild.
Re: Shopped (Score:2)
Re: (Score:2)
Re: (Score:2)
True. But it can't really tell what was done to the image, just that it was exported out of the software. Plus a simple filter in a different editing package could spoil the watermark.
Re: (Score:2)
Open it in any editor/viewer, then take a screenshot.
Re: (Score:2)
Just use software other than Photoshop to edit your photos.
Re: (Score:2)
Gimp. It's free!
Re: (Score:2)
But it's sleeping!
Which will be used... (Score:3)
To train a better AI that can liquify images that can't be easily identified
Re: (Score:2)
Great point (Score:2)
This is the needed half of the generative adversarial network you can train to mask changes so they are undetectable.
Look out, here come the *super* deep fakes!
Re: (Score:2)
To train a better AI that can liquify images that can't be easily identified
Yup. We even have a term for it: GAN [wikipedia.org].
If the humans get it right 53% of the time (barely better than chance) and the NN is at 99%, then obviously the NN is detecting some pixel-level artifacts left behind by Photoshop. A GAN will quickly figure out how to smooth those out.
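A toy sketch of that adversarial dynamic -- NOT a real GAN, and nothing here is from Adobe's paper: the "detector" just thresholds a crude pixel-level artifact statistic (mean absolute Laplacian, i.e. high-frequency energy), and the "generator" countermove is a box blur applied repeatedly until the detector is fooled. Every function name and threshold is an invention for illustration.

```python
import numpy as np

def detector(img, threshold=0.5):
    # High-frequency energy: mean absolute value of the 5-point Laplacian.
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap).mean() > threshold

def smooth_step(img):
    # One 3x3 box-blur pass: the forger's answer to a pixel-level detector.
    acc = sum(np.roll(np.roll(img, dy, axis=0), dx, axis=1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return acc / 9.0

rng = np.random.default_rng(0)
fake = rng.random((64, 64))   # stands in for a noisily tampered image
steps = 0
while detector(fake) and steps < 100:
    fake = smooth_step(fake)
    steps += 1
print("blur passes needed to fool the detector:", steps)
```

A real GAN would learn both sides jointly by gradient descent; the point of the sketch is only that any fixed, known artifact statistic can be optimized away by the forger.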
Re: (Score:2)
This isn't AI. This is a bog-standard logic function.
#include <stdbool.h>

typedef struct image IMAGE_t; /* opaque image handle */

bool IsPhotoshopped(const IMAGE_t *img)
{
    return true;
}
In my informal testing it achieves a 98.7% accuracy rating on a random selection of stock photos.
Re: (Score:2)
This looks shopped. I can tell from some of the pixels and from seeing quite a few shops in my time.
Fact checking the fact checkers (Score:2)
Pretty big list, there. Let us look at #7.
Wikipedia, of course, is an authoritative source on science subjects, correct? Or at least Conservative commentators whine that its editing process has a Liberal slant, and defending smoking is something more Conservatives tend to do in rants about "muh individual rahts"?
The article https://en.wikipedia.org/wiki/... [wikipedia.org] at the very least suggests that second-hand smoke being a "significant" health risk is not a "slam dunk" in the way being a smoker is. And yes, Wiki
My guess (Score:3)
Re: (Score:2)
I'm guessing it can tell by the pixels
And from seeing quite a few 'shops in its day.
Re:My guess (Score:4, Insightful)
"Competently attempted fakes" is the trick. Most fakes are quite incompetent.
Re: (Score:2)
Often intentionally. :-)
But not an AI doctored face. (Score:5, Insightful)
As soon as you create a way to detect a digital forgery, you simultaneously create a way to make an even better and undetectable digital forgery. Forgery has a long history of cat and mouse and the saving grace has always been physics. Without the limitations of physics then the only recourse is to keep your detection methodology a secret. If you make it public then it will be integrated into the forgery process and thus make your detection method worthless.
Re: (Score:2)
Part of the key is not to create _a_ way to detect a forgery, but to use a learning algorithm with different training sets. They won't necessarily learn the same rules for differentiation, and it becomes in fact _vital_ that the rules vary to avoid just such tuning. Anti-spam tools of all sorts are vulnerable to training: part of the key to avoiding becoming vulnerable to aggressive spammers who train their systems is to randomize the rule generation.
Re: (Score:2)
Re: But not an AI doctored face. (Score:2)
Re: (Score:2)
As soon as you create a way to detect a digital forgery, you simultaneously create a way to make an even better and undetectable digital forgery.
Yes and no. The problem here is that what is being identified is the specific algorithm that changed the image, rather than the digital footprints left by the forgery itself. It reminds me of quantisation noise analysis of an image showing what has changed due to recompression. If the original was at some point compressed, even saving it losslessly wouldn't help you against it. What did help against it was selectively analysing the original and applying a mask. That can be done because it was a quirk in the stor
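A toy illustration of the quantisation-residue idea in the comment above, with everything invented for the sketch: once an image has been quantised (a stand-in for lossy compression), untouched pixels stay on the quantisation grid, while a later local edit pushes pixels off it, so checking the residue exposes the edited region. Real JPEG forensics works on quantised 8x8 DCT coefficients, not raw pixels, but the principle is the same.

```python
import numpy as np

STEP = 8  # quantisation step size (illustrative)

rng = np.random.default_rng(2)
original = rng.integers(0, 256, size=(32, 32)).astype(float)
saved = np.round(original / STEP) * STEP   # the "compressed" copy

tampered = saved.copy()
tampered[8:16, 8:16] *= 0.7                # local edit: darken one square

# Pixels that have drifted off the quantisation grid betray the edit.
residue = tampered % STEP
mask = residue != 0
print("flagged fraction inside edit:", mask[8:16, 8:16].mean())
print("flagged fraction outside edit:", mask[0:8, 0:8].mean())
```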
But can it detect cropping? (Score:3, Interesting)
This famous photo
http://2.bp.blogspot.com/-G7Juokm-Z_w/UKhFYnBHQAI/AAAAAAAAAC0/FiKYCx89eJM/s1600/Media-Manipulation-Optical-Illusion1.jpg
shows how the interpretation of the photo can be altered by cropping without any alterations to the retained part.
Also consider the case of Tuvia Grossman
https://en.wikipedia.org/wiki/Tuvia_Grossman
The photo was not altered, it was just supplied with the wrong caption.
Re: (Score:2)
Where is that photo from? I searched tineye, and I couldn't find a source.
my PhD in Technobabble will be worthless (Score:2)
Rats! Now both DARPA and Abu Sayyaf are going to discover my photochopped diploma. I am so dead. No carrier, man. No carrier.
Photoshopped? (Score:2)
Re: (Score:2)
Does GIMP, or any image transformation application that integrates with GIMP such as G'MIC, offer a counterpart to the Face Aware Liquify tool in Adobe Photoshop software?
The way things should be handled. (Score:2)
Re: (Score:2)
Because people always come around when presented with evidence that their biases are based upon false information.
No, wait, they don't [slashdot.org]. They run away from anything that would contradict their existing beliefs.
Literally? (Score:2)
Perhaps
But if "photoshopped" is understood to mean "altered after it left the camera," then the task becomes impossible. Is sharpening, blurring, rotation, perspective correction, contrast enhancement, lightening, or darkening "photoshopping"? How do you distinguish enhancements made automatically in the camera from those made on a computer? It can't be done.
There are probably ways to detect blatant fakes like putting person A's head on person B's body by looking for internal discrepancies in the resultant
New tool for photoshoppers (Score:2)
Iterative procedure: run it through the Adobe AI and keep correcting the image until the AI flags it as clean. It helps if the AI flags the problem areas for you.
Re: (Score:2)
Agreed..
Nothing new here (Score:2)
Move along. See the contents of the IEEE Transactions on Information Forensics and Security [signalproc...ociety.org]. Researchers, well-known and not-so-well-known three-letter agencies, and even news agencies have been able to do this sort of thing for years.
Hasn't this already been done? (Score:1)
Looks shopped (Score:2)
It can tell from the pixels, and from having seen quite a few shops in its time.