
Adobe's Experimental AI Tool Can Tell If Something's Been Photoshopped (theinquirer.net)

Adobe and UC Berkeley researchers are working on a tool that can tell if a photo has been manipulated in Adobe Photoshop. The goal is to cut down on fake content and "to increase trust and authority in digital media." TheINQUIRER reports: A bunch of images were created using Photoshop's "Face Aware Liquify" tool and mixed with a set of expertly human-doctored photographs. When humans were shown the original image and the doctored version, they spotted the fakes 53 percent of the time, but the artificial intelligence hit 99 percent. That's pretty good -- turning a coin-toss guess into near certainty -- but the AI isn't quite done showboating. As well as being able to point out which areas might have been changed, the AI can also predict what methods were used to change the image.

Better still, it'll have a stab at undoing the vandalism and returning the image to its former untampered glory. Not perfectly, but well enough to impress the researchers all the same: it's like having an undo button on someone else's work, and who hasn't always wanted one of those? "It might sound impossible because there are so many variations of facial geometry possible," said Professor Alexei A. Efros of UC Berkeley. "But, in this case, because deep learning can look at a combination of low-level image data, such as warping artifacts, as well as higher level cues such as layout, it seems to work."
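For the curious, the two capabilities described above map onto a fairly standard pattern: a convolutional network that emits a real-vs-warped score, alongside a per-pixel displacement field that can be used to resample the image back toward its original geometry. The PyTorch sketch below is a toy under those assumptions, not the Adobe/Berkeley model; every layer size and the 'unwarp' helper are invented for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WarpDetector(nn.Module):
        """Toy stand-in for a manipulation detector (illustrative only)."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.score = nn.Linear(64, 1)    # global real-vs-warped logit
            self.flow = nn.Conv2d(64, 2, 1)  # coarse per-pixel displacement

        def forward(self, img):                        # img: (B, 3, H, W)
            feats = self.backbone(img)                 # (B, 64, H/4, W/4)
            logit = self.score(feats.mean(dim=(2, 3)))
            return logit, self.flow(feats)

    def unwarp(img, flow):
        """Resample img along the predicted displacement field."""
        b, _, h, w = img.shape
        flow = F.interpolate(flow, size=(h, w), mode="bilinear",
                             align_corners=False)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
        grid = base + flow.permute(0, 2, 3, 1)         # shift sample points
        return F.grid_sample(img, grid, align_corners=False)

Training such a model needs pairs of original and edited images, which matches how the data was reportedly generated here: scripting Photoshop's Face Aware Liquify tool over a set of originals.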

Comments Filter:
  • by grahamsz ( 150076 ) on Monday June 17, 2019 @06:28PM (#58778894) Homepage Journal

    To train a better AI that can liquify images that can't be easily identified

    • Exactly. Six months from now we will likely see an article on here from researchers doing exactly that. The entire pipeline by which digital media travels from camera to end-user monitor will probably need to be overhauled to restore some faith in authenticity. If I had to guess, it might look like hardware that hashes the video stream or images as they are created, incorporates the hashes into a public immutable digital ledger, and assigns a trustworthiness rating at each alteration step in the chain.
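      A toy Python sketch of that ledger idea, with everything about it (the names, the trust values, the SHA-256 chaining) assumed for illustration; a real design would anchor the hashes in camera hardware and a public ledger rather than an in-memory list:

        import hashlib
        import json
        import time

        class ProvenanceChain:
            """Hash-chained edit history for one image (illustrative only)."""
            def __init__(self, raw_bytes):
                self.entries = [self._entry("capture", raw_bytes, prev="")]

            def record(self, step, image_bytes, trust):
                prev = self.entries[-1]["digest"]
                self.entries.append(self._entry(step, image_bytes, prev, trust))

            @staticmethod
            def _entry(step, data, prev, trust=1.0):
                # Each digest covers the previous digest, chaining the history.
                digest = hashlib.sha256(prev.encode() + data).hexdigest()
                return {"step": step, "time": time.time(),
                        "trust": trust, "digest": digest}

        chain = ProvenanceChain(b"raw sensor bytes")         # at capture
        chain.record("liquify", b"edited bytes", trust=0.4)  # edits lower trust
        print(json.dumps(chain.entries, indent=2))
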
    • This is the needed half of a generative adversarial network: you can train it to mask changes so they are undetectable.

      Look out, here come the *super* deep fakes!

    • To train a better AI that can liquify images that can't be easily identified

      Yup. We even have a term for it: GAN [wikipedia.org].

      If the humans get it right 53% of the time (barely better than chance) and the NN is at 99%, then obviously the NN is detecting some pixel-level artifacts left behind by Photoshop. A GAN will quickly figure out how to smooth those out.

    • by AmiMoJo ( 196126 )

      This isn't AI. This is a bog-standard logic function.

      #include <stdbool.h>

      /* The verdict never depends on the input image. */
      bool IsPhotoshopped(IMAGE_t *img)
      {
          (void)img;
          return true;
      }

      In my informal testing it achieves a 98.7% accuracy rating on a random selection of stock photos.

  • by Dunbal ( 464142 ) * on Monday June 17, 2019 @06:35PM (#58778944)
    I'm guessing it can tell by the pixels
  • by Gravis Zero ( 934156 ) on Monday June 17, 2019 @07:04PM (#58779098)

    As soon as you create a way to detect a digital forgery, you simultaneously create a way to make an even better, undetectable digital forgery. Forgery has a long history of cat and mouse, and the saving grace has always been physics. Without the limitations of physics, the only recourse is to keep your detection methodology a secret. If you make it public, it will be integrated into the forgery process and thus make your detection method worthless.

    • Part of the key is to not create _a_ way to detect a forgery, but to use a learning algorithm with different training sets. They won't necessarily learn the same rules for differentiation, and it becomes in fact _vital_ that the rules vary to avoid just such tuning. Anti-spam tools of all sorts are vulnerable to training: part of the key to avoiding becoming vulnerable to aggressive spammers who train against your systems is to randomize the rule generation.
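      A rough sketch of that randomization idea, with 'train_fn' as a hypothetical stand-in for whatever actually fits a detector:

        import random

        def make_detectors(train_fn, dataset, n=5):
            """Train n detectors, each on a different random half of the
            data, so no single set of learned rules can be tuned against."""
            detectors = []
            for seed in range(n):
                subset = random.Random(seed).sample(dataset, len(dataset) // 2)
                detectors.append(train_fn(subset))
            return detectors

        def is_forged(image, detectors):
            """Majority vote across the independently trained detectors."""
            votes = sum(d(image) for d in detectors)
            return votes > len(detectors) / 2
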

    • by idji ( 984038 )
      This is exactly what a GAN (generative adversarial network) does: the generator network produces a fake image, the discriminator network judges whether it is fake, and the generator improves until it fools the discriminator.
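      In sketch form, a generic GAN training loop in PyTorch (toy layer sizes, random tensors standing in for real images; not any specific published model):

        import torch
        import torch.nn as nn

        G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())  # generator
        D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                          nn.Linear(256, 1))                # discriminator
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        for step in range(10000):
            real = torch.randn(32, 784)      # stand-in for real image batch
            fake = G(torch.randn(32, 64))

            # Discriminator: learn to tell real from fake.
            d_loss = (bce(D(real), torch.ones(32, 1)) +
                      bce(D(fake.detach()), torch.zeros(32, 1)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator: learn to fool the discriminator.
            g_loss = bce(D(fake), torch.ones(32, 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
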
    • You're raising the "bar". If the barrier to entry is sufficiently high, fewer people will try. Right up until the offending tools catch up...
    • As soon as you create a way to detect a digital forgery, you simultaneously create a way to make an even better and undetectable digital forgery.

      Yes and no. The problem here is that what is being identified is the specific algorithm that changed the image, rather than the digital footprint left by the forgery itself. It reminds me of quantisation noise analysis of an image, which shows what has changed due to recompression. If the original was at some point compressed, even saving it losslessly wouldn't help you against that analysis. What did help was selectively analysing the original and applying a mask, which works because it exploits a quirk in the storage format.
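      That recompression trick is usually called error level analysis. A minimal Pillow sketch, with the file names and quality setting chosen arbitrarily:

        import io
        from PIL import Image, ImageChops

        def error_levels(path, quality=90):
            """Recompress the image and return the per-pixel difference.
            Regions edited after the last save tend to recompress
            differently from the rest of the image."""
            original = Image.open(path).convert("RGB")
            buf = io.BytesIO()
            original.save(buf, "JPEG", quality=quality)
            buf.seek(0)
            recompressed = Image.open(buf).convert("RGB")
            return ImageChops.difference(original, recompressed)

        error_levels("suspect.jpg").save("ela_map.png")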

  • by Anonymous Coward on Monday June 17, 2019 @07:14PM (#58779182)

    This famous photo
    http://2.bp.blogspot.com/-G7Juokm-Z_w/UKhFYnBHQAI/AAAAAAAAAC0/FiKYCx89eJM/s1600/Media-Manipulation-Optical-Illusion1.jpg
    shows how the interpretation of the photo can be altered by cropping without any alterations to the retained part.
    Also consider the case of Tuvia Grossman
    https://en.wikipedia.org/wiki/Tuvia_Grossman
    The photo was not altered, it was just supplied with the wrong caption.

  • Rats! Now both DARPA and Abu Sayyaf are going to discover my photochopped diploma. I am so dead. No carrier, man. No carrier.

  • What if it was GIMPed?
    • by tepples ( 727027 )

      Does GIMP, or any image transformation application that integrates with GIMP such as G'MIC, offer a counterpart to the Face Aware Liquify tool in Adobe Photoshop software?

  • I like this. Afraid of deepfakes? Handle the threat of innovation with better innovation and more knowledge. Not shortsighted bans because a tool offends Hollywood starlets and politicians who don't want to be made fun of.
    • by DRJlaw ( 946416 )

      I like this. Afraid of deepfakes? Handle the threat of innovation with better innovation and more knowledge. Not shortsighted bans because a tool offends Hollywood starlets and politicians who don't want to be made fun of.

      Because people always come around when presented with evidence that their biases are based upon false information.

      No, wait, they don't [slashdot.org]. They run away from anything that would contradict their existing beliefs.

  • Perhaps

    But if "photoshopped" is understood to mean "altered after it left the camera", then the task becomes impossible. Is sharpening, blurring, rotation, perspective correction, contrast enhancement, lightening, or darkening "photoshopping"? How do you distinguish enhancements made automatically in the camera from those made on a computer? It can't be done.

    There are probably ways to detect blatant fakes, like putting person A's head on person B's body, by looking for internal discrepancies in the resultant image.

  • Iterative procedure: run it through the Adobe AI and keep correcting the image until the AI flags it as clean. It'll help if the AI flags problem areas for you.
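    As a loop, with 'detect' and 'smooth_region' as purely hypothetical stand-ins for the detector and whatever counter-editing step is applied:

      def launder(image, detect, smooth_region, max_rounds=20):
          """Keep retouching flagged regions until the detector gives up.
          `detect` returns a list of suspicious regions (empty = clean)."""
          for _ in range(max_rounds):
              regions = detect(image)
              if not regions:          # detector flags the image as clean
                  return image
              for region in regions:
                  image = smooth_region(image, region)
          return image                 # gave up; still detectable
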

  • Move along. See the content of the IEEE Transactions on Information Forensics and Security [signalproc...ociety.org]. Researchers, well-known and not-so-well-known three-letter agencies, and even news agencies have been able to do this sort of thing for years.

  • Hany Farid at Dartmouth College has been publishing research into detecting photoshopped pictures and video for many years and has built a variety of tools: https://www.nytimes.com/2011/1... [nytimes.com]
  • It can tell from the pixels, and from having seen quite a few shops in its time.
