Facebook Announces Results of 'Deepfake Detection Challenge': 65% Accuracy (engadget.com)

An anonymous reader quotes Engadget: In September of 2019, Facebook launched its Deepfake Detection Challenge -- a public contest to develop autonomous algorithmic detection systems to combat the emerging threat of deepfake videos. After nearly a year, the social media platform announced the winners of the challenge, out of a pool of more than 2,000 global competitors...

Facebook spent around $10 million on the contest and hired more than 3,500 actors to generate thousands of videos -- 38.5 days' worth of data in total. It was the amateur, phone-shot sort you'd usually see on social media rather than the perfectly lit, studio-based vids created by influencers... The company then gave these datasets to researchers. The first was a publicly available set; the second, a "black box" set of more than 10,000 videos with additional technical tricks baked in, such as adjusted frame rates and video qualities, image overlays, and unrelated images interspersed throughout the video's frames. It even included some benign non-deepfakes just for good measure.

On the public data sets, competitors averaged just over 82 percent accuracy; however, for the black box set, the model of the winning entrant, Selim Seferbekov, averaged a skosh over 65 percent accuracy, despite the bevy of digital tricks and traps it had to contend with... While the company does intend to release these models under an open-source license, giving any enterprising software engineer free access to the code, Facebook already employs a deepfake detector of its own. This contest, Facebook CTO Mike Schroepfer explained, is designed to establish a sort of nominal detection capability within the industry... "A lesson I learned the hard way over the last couple years is I want to be prepared in advance and not be caught flat-footed, so my whole aim with this is to be better prepared in case [deepfakes do] become a big issue," Schroepfer continued. "It is currently not a big issue, but not having tools to automatically detect and enforce a particular form of content really limits our ability to do this well at scale."

Comments Filter:
  • 35% failure.

    All the fakesters need to do is figure out which software fools Facebook and the failure rate will go close to 100%.

    • It probably has more to do with the source video and modifications attempted than the software used.

      • by rtb61 ( 674572 )

        It has a lot to do with pseudo-celebrities' waning popularity and the desperate need for more attention, More, MORE: leaking real videos and pictures and calling them fake for public attention. Their bullshit time is so over; the normies are simply waking up to the celebrity bullshit.

    • by Vihai ( 668734 )

      100% failure rate would be an amazing result. You just have to take the opposite answer. Always remember that tossing a coin will give you a 50% success rate.

      • I remember when people were worried about the same thing in the early 1990s, after Photoshop came out in 1990. Some people thought the sky was falling. What actually happened is that "Photoshop" and "Photoshopped" became a commonly used verb and adjective - people are aware that digital manipulation is possible and they account for that.

        • by fazig ( 2909523 )
          From my experience on online platforms like Reddit or Imgur, I sometimes see really badly altered pictures with hundreds if not thousands of upvotes (minus the downvotes), with people praising them as "Earth porn" and how "beautiful nature" is.

          Sometimes it makes me ask myself how many of us apparently never leave their house. Because if you're at least somewhat observant and walk with open eyes through your environment, may that be urban, rural, or even natural, you ought to acquire at least a feeling o
        • "Photoshop" and "Photoshopped" became a commonly-used verb and adjective - people are aware that digital manipulation is possible and they account for that.

          Some people actually prefer it, especially when the celeb in question doesn't leak home-made porn tapes.

        • I think digital photo manipulation is a real problem. Even with relatively unsophisticated tools like Photoshop, it's possible to support a political point: is a region with a high minority ethnic population a charming, brightly lit place full of culture and music, or is it a dark, dangerous environment with street gangs on every corner?

          Surely you've seen pictures of protesters with Photoshopped slogans on signs.

          People have learned to ignore a lot that is in still photos. They can learn to ignore video a

  • Meanwhile, we're doing less than 1% successful identification of biological deepfakes.

  • Note that the way you normally construct these neural networks is through GANs, a generator to create fake images and a discriminator to tell whether it's real or fake. By training them alternately you get better and better forgeries and better and better forgery detection. So far the focus has mainly been on the purely visual, not whether a forensic algorithm can tell it apart from real data. But there's really no reason why you can't add a forensic loss in addition to a visual loss, in fact I think I've a
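    The idea in the comment above can be sketched with toy numbers. This is a minimal illustration, not anyone's actual system: plain-Python stand-ins replace real networks, and `bce`, `generator_loss`, and `forensic_weight` are all hypothetical names. It shows only how a generator's loss could combine a visual discriminator's score with a forensic one's, so that a forgery fooling the eye but not the forensics still gets penalized:

    ```python
    import math

    def bce(p, target):
        """Binary cross-entropy for a single predicted probability."""
        eps = 1e-12
        return -(target * math.log(p + eps) + (1 - target) * math.log(1 - p + eps))

    def generator_loss(visual_real_score, forensic_real_score, forensic_weight=0.5):
        """The generator is penalized whenever either discriminator
        (the visual one or the hypothetical forensic one) scores its
        output as fake, i.e. gives it a 'real' probability below 1.0."""
        visual_loss = bce(visual_real_score, 1.0)
        forensic_loss = bce(forensic_real_score, 1.0)
        return visual_loss + forensic_weight * forensic_loss

    # A forgery that fools the visual check (0.99) but trips the forensic
    # check (0.10) still incurs a much larger loss than one passing both:
    print(generator_loss(0.99, 0.10) > generator_loss(0.99, 0.90))  # True
    ```

    Training then alternates as usual: the discriminators learn to lower both scores on fakes, while the generator learns to raise them.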

  • that deepfake videos are the real thing.

    And if they get good enough that I can't tell them apart at all, neither can the detector.
    Just like, if there was actual real AI, which there isn't even remotely, CAPTCHAs would be dead.

  • Here is software for detecting deep fakes which is correct 99.9% of the time:

    bool detect_fake() {
        return false; // The vast majority of videos aren't deep fakes
    }

    This actually creates a difficulty, because even if it looks like a deep fake, it's probably not one, simply because it was almost certainly not one before you looked at it. If you find evidence that makes it ten times more likely to be a deep fake, that's still only about a 1% chance.

    On the other hand, any commercially produced video IS selec
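    The base-rate arithmetic in the comment above can be checked directly. A quick sketch in odds form, with illustrative (assumed) numbers: a 0.1% prior that any given video is a deepfake, and evidence that is 10 times more likely if the video is fake:

    ```python
    def posterior(prior, likelihood_ratio):
        """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
        prior_odds = prior / (1.0 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1.0 + post_odds)

    # Start from a 0.1% prior; evidence 10x more likely under "fake" than "real":
    print(round(posterior(0.001, 10), 4))  # 0.0099, still only about a 1% chance
    ```

    So even strong-seeming evidence leaves the fake hypothesis at roughly 1%, exactly the comment's point about rare events.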

  • Than a simple coin toss.
  • Facebook's efforts need to stop. It can only make people believe deepfakes to be dangerous and that they need to be protected from them. Next thing you know, people will start asking Facebook if the news on the TV is fake... Facebook needs to stop patronising the public and allow everyone to make up their own minds. People will believe what they want, and companies wanting to give a helping hand have always been the bigger evil. When a fake video reinforces dumb thoughts without any reflection then this isn't
