The Defense Department Has Produced the First Tools For Catching Deepfakes (technologyreview.com)

Fake video clips made with artificial intelligence can also be spotted using AI -- but this may be the beginning of an arms race. From a report: The first forensics tools for catching revenge porn and fake news created with AI have been developed through a program run by the US Defense Department. Forensics experts have rushed to find ways of detecting videos synthesized and manipulated using machine learning, because the technology makes it far easier to create convincing fake videos that could be used to sow disinformation or harass people. The most common technique for generating fake videos involves using machine learning to swap one person's face onto another's. The resulting videos, known as "deepfakes," are simple to make and can be surprisingly realistic. Further tweaks, made by a skilled video editor, can make them seem even more real. This video trickery relies on a machine-learning technique known as generative modeling, which lets a computer learn from real data before producing fake examples that are statistically similar. A recent twist on this involves having two neural networks, known as generative adversarial networks, work together -- one generating fakes while the other tries to flag them -- to produce ever more convincing fakes. The tools for catching deepfakes were developed through a program -- run by the US Defense Advanced Research Projects Agency (DARPA) -- called Media Forensics. The program was created to automate existing forensics tools, but has recently turned its attention to AI-made forgery.
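For readers who want the "generative adversarial" idea made concrete, here is a minimal sketch of a GAN fitting a toy 1-D Gaussian. It assumes PyTorch; the network sizes, learning rates, and data are illustrative only, not anything from the DARPA program.

import torch
import torch.nn as nn

# Generator maps noise to a sample; discriminator scores "probability real".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator learns to tell real from fake.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 3.0

The same tug-of-war, scaled up from one number to images, is what makes deepfakes both easy to produce and hard to detect.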
  • https://www.wnycstudios.org/st... [wnycstudios.org]
    The original image can be manipulated, and the audio even more convincingly manipulated to say pretty much anything you want.
    • That's one of the worst examples and wouldn't convince many people, but the problem is going to be massive once the techniques have been improved. The real problem is that people get their "information" from the shadiest sources and tend to switch their brains off. Captain Disillusion's YouTube channel [youtube.com] is a great and entertaining way of becoming more skeptical.
      • by rtb61 ( 674572 )

        People will believe what they want to believe in order to favour their own personal biases and even their own particular personalities. They will claim to believe it even if they don't, and they will claim disbelief even if they believe; beliebers (no accident) will be beliebers, idiot just-ins to the world of thought.

        Here's a news flash for you: prove the photo or vid is fake, and so the fuck what? They'll just claim yours is the fake about the fake, because yeah, it can be. Unless you legalise the truth, legalise

    • If you build a better mousetrap, an adversarial system builds a better mouse in response. Thus I wonder how they can make a detector that continues to detect. The whole idea of a GAN is to generate things that defeat the detector. So what's the strategy for making a detector the generator can't beat?

  • by omnichad ( 1198475 ) on Tuesday August 07, 2018 @01:32PM (#57086880) Homepage

    Any tool that can catch a deepfake can be used to help train the deepfake generator (a sketch of that feedback loop follows this thread). This cover story just tells us that DARPA is working on training its own system for generating deepfakes. Dueling neural nets is the new normal, and there's no reason to think they only developed one side of this.

    • I hope this is true, because without deepfakes how are slashdot users going to use dating sites?!

      Think of the innocent neckbeards, people, think of the neckbeards.

    • But any tool that can generate deepfakes can be used to train the deepfake detector.

      It will eventually conclude that it must nuke humanity.
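Picking up omnichad's point above: any differentiable detector can be turned into a training signal. A minimal sketch, again assuming PyTorch; generator and detector here are stand-in modules, not any real forensics tool.

import torch
import torch.nn as nn

# Stand-ins for a pretrained deepfake generator and a published detector.
generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64))
detector = nn.Sequential(nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

for p in detector.parameters():   # the forensics tool itself is never updated...
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
for step in range(1000):
    fakes = generator(torch.randn(32, 100))
    # ...but its gradients teach the generator to raise its own "real" score.
    loss = -torch.log(detector(fakes) + 1e-8).mean()
    opt.zero_grad(); loss.backward(); opt.step()

Freezing the detector's weights doesn't stop gradients from flowing through it, which is exactly why publishing a detector hands the forger a teacher.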

  • by Anonymous Coward

    This will be a cat-and-mouse game of trying to evade detection. What we will need, and will likely eventually settle on, is video and audio with digital signatures embedded in the encoding, backed by certificates. That way there is a verifiable chain of proof from the moment the footage is recorded to the point of distribution. This will matter most in cases where a legal burden of proof is required: security camera feeds, official court and governmental documentation, and news organization releases.
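A minimal sketch of the signing idea the parent describes, assuming Python's cryptography package and Ed25519 keys; in a real provenance chain the camera would hold the private key and a certificate authority would vouch for the public key.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# At capture time: the recording device signs a hash of the encoded footage.
camera_key = Ed25519PrivateKey.generate()
footage = b"...encoded video bytes..."
digest = hashlib.sha256(footage).digest()
signature = camera_key.sign(digest)

# At distribution time: anyone with the device's public key can verify
# the footage is bit-for-bit what the camera produced.
public_key = camera_key.public_key()
try:
    public_key.verify(signature, digest)
    print("footage authentic")
except InvalidSignature:
    print("footage altered or signature forged")

Ed25519 is just one convenient choice; the point is that any tampering after signing breaks verification.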

  • by slashmydots ( 2189826 ) on Tuesday August 07, 2018 @01:50PM (#57087038)
    Well, time to give it up and make technology for "dank fakes," where you take a Shiba Inu's head and render it onto someone's body in a video.
  • Here's probably as good a place as any to ask.

    Any time I've tried some simple how-to searching, it's quickly gone down porn paths. Does anybody have a porn-free tutorial on how to get started on creating a deepfake?

    I don't care to create anything that would fool anybody, I just think it'd be neat to screw around with over a few lunch hours at work.

    • Other than the whole NSFW aspect, the methodology should be about the same either way?
    • by Anonymous Coward

      Safe For Work Tutorials:
      https://www.deepfakes.club/

      The easiest way to get started in deep fakes is to use the Python scripts.
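For what it's worth, the usual starting point is the open-source faceswap scripts on GitHub (github.com/deepfakes/faceswap). A rough sketch of the three-stage workflow follows; flags vary by version and frame extraction (e.g. via ffmpeg) is assumed done, so treat all paths and options as illustrative.

import subprocess

# 1. Extract aligned faces from frames of each person's footage.
subprocess.run(["python", "faceswap.py", "extract", "-i", "framesA/", "-o", "facesA/"])
subprocess.run(["python", "faceswap.py", "extract", "-i", "framesB/", "-o", "facesB/"])

# 2. Train the paired autoencoders on both face sets (hours to days on a GPU).
subprocess.run(["python", "faceswap.py", "train", "-A", "facesA/", "-B", "facesB/", "-m", "model/"])

# 3. Convert: render face A onto B's footage using the trained model.
subprocess.run(["python", "faceswap.py", "convert", "-i", "framesB/", "-o", "swapped/", "-m", "model/"])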

    • by Kjella ( 173770 )

      Not really, but I've seen the results of people trying it out, and it seems extremely fickle: it needs a pretty big training set, and you have to mask out any other faces in the frame. People use it on celebs because there are tons of photos and videos, plus someone OCD enough to compile the set. And when they're trying to morph it into porn, all the scenes that don't work and all the times it loses track don't matter so much. You just pick the clips where the virtual mask sticks and glue them together. You'll notice that in a

      • by Wulf2k ( 4703573 )

        Gag-level quality was pretty much what I was going for, but it sounds like the effort/reward ratio really isn't there.

        I'll probably still poke at it sometime but I'll keep my expectations low.

  • At the moment it's pretty easy to identify a deepfake video. The AIs don't "know" that people regularly blink; they treat closed eyes as a glitch in the data set.
    If you look at pretty much any deepfake video, no matter how realistic it is, the person who's been deepfaked will never blink.
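The blink cue is easy to test yourself. A minimal sketch of blink counting via the eye-aspect-ratio heuristic, assuming OpenCV and dlib plus the standard 68-point landmark model; the model file path and input clip are assumptions.

import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(pts):
    # EAR: eye height over eye width; it collapses toward zero mid-blink.
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local file

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input clip
blinks, closed_frames = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        left = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
        right = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < 0.2:                  # eyes closed this frame
            closed_frames += 1
        else:
            if closed_frames >= 2:     # a few consecutive closed frames = one blink
                blinks += 1
            closed_frames = 0
cap.release()
print("blinks detected:", blinks)      # near zero over a long clip is suspicious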

  • ".. adversarial networks, work together .."

    What?

