
Google Releases Trove of Deepfake Videos So Researchers Can Help Fight Them (fastcompany.com) 21

Google has announced the release of a data set containing 3,000 deepfake videos it created. From a report: Google hopes the release of the videos will allow researchers to develop ways to combat malicious deepfakes, giving them, news organizations, and the public ways to identify "synthetic" videos -- that is, videos manipulated by or entirely created using computers. Deepfakes first came onto the scene in 2017, when they were mainly used in crude, falsified porn videos. The technology originally allowed people with limited computer skills to easily paste the face of, for example, an actress onto a porn star's body, but the quality of deepfake videos has since advanced rapidly. It is now possible to deepfake entire bodies, not just an individual's head. While deepfake technology has legitimate commercial purposes, many fear it will also usher in a new era of fake news and propaganda in which no one can tell whether the video they are seeing ever actually happened.
Comments Filter:
  • by 110010001000 ( 697113 ) on Wednesday September 25, 2019 @09:29AM (#59233962) Homepage Journal

    What if someone managed to figure out how to do this with digital pictures? We would never be able to tell if a picture is real or not. Truly scary stuff.

    • Re:What if (Score:4, Interesting)

      by courteaudotbiz ( 1191083 ) on Wednesday September 25, 2019 @10:13AM (#59234112) Homepage

      I know your post was intended as a Photoshop joke. But the thing is, from now on, what can we trust?

      Unlike digital pictures, which have not been a reliable source of information since the advent of Photoshop, video used to be fairly reliable at showing true events (some manipulation and special effects were possible, but they were easily debunked). Now, though, the so-called "deepfakes" created with the help of AI cannot be detected by standard analysis, because the algorithms have become very good at masking the traces of manipulation. The same goes for voice: from very little original voice data, AI can infer pretty much any word or expression realistically. And with world leaders, it's easy to get lots of voice data from the media.

      Back in the day, they had to verify the authenticity of Bin Laden's videos, and they were probably 98% right. I'm pretty sure that today they would have a hard time telling a real one from a fake.

      This is scary, and thank you Google for releasing this data. We may start training AI to debunk AI-generated deepfakes, because I think only AI can fight AI in this field.

      • I know your post was intended as a Photoshop joke. But the thing is, from now on, what can we trust?

        100% of all photos in media have been retouched. It's always been that way: first via airbrushing, then Photoshop.

        Actors and actresses should be terrified of this. Macaulay Culkin made $100k for Home Alone but $4.5 million for Home Alone 2, so for the sequel he essentially made $100k for his skills and $4.4 million for his face. The day is coming when that doesn't happen anymore.

        • Folks wearing blue suits are the new actors of the future. And they have no need to match the current face of some overpaid actor or actress, no no no. They can invent new faces for the masses to fawn over, and use them forever. Who knows how far this will go; it could be applied to the political realm too. After all, they're all just actors.

          • They can even slowly morph a digital actor's appearance over time. They could simulate regular aging if they wished, but that's not the only choice. Consider that a calculation of "the ideal face" will be made, and that regardless of the calculation's veracity, every recurring character, be it in a TV series or a sequel to a movie, will have their face slowly pushed toward that day's "ideal face".
      • by ceoyoyo ( 59147 )

        Photos, video and audio recordings were *never* trustworthy. Even when you had to do photo manipulation in actual darkrooms, and video manipulation with scissors, people were doing it. It's just easier now.

        Actual evidentiary photo and video is captured with special equipment that computes hashes, and chain of custody is maintained. Everything else should be regarded with suspicion.

        And you can't use AI to fight deepfakes. They're generated using models that specifically learn from other models that detect fakes.

        • Technically it can be fought with AI. Unfortunately, fighting deepfakes with AI means the detection algorithms and tools have to be kept hidden from the fakers and their faking AIs: closed source, not widely deployed, and reserved for the really important questions. Of course, then how do you trust it, when effectively independent testing is off limits?
    • by tlhIngan ( 30335 )

      What if someone managed to figure out how to do this with digital pictures? We would never be able to tell if a picture is real or not. Truly scary stuff.

      There are cameras that can purportedly sign every photo they take with a cryptographic key, so a photo can be proven "authentic" or "modified".

      I'm not too sure or up to date on the key-control protocol they use, or who holds the private key and how it is handled, so I can't vouch for how "good" it is. Ideally the camera would generate the key and give the public key
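      The signing scheme described above can be sketched in a few lines. This is a hypothetical illustration, not any real camera's protocol: a real camera would embed an asymmetric private key and publish the matching public key, but since Python's standard library has no asymmetric signing, an HMAC with a per-device secret stands in for that step here. The names `sign_photo` and `verify_photo` are invented for the sketch.

```python
import hashlib
import hmac

# Assumption: each camera holds a unique key burned in at manufacture.
# (A real design would use a private key and publish the public half.)
CAMERA_KEY = b"secret-key-burned-into-camera"

def sign_photo(image_bytes: bytes) -> str:
    """Hash the raw image data and sign the digest at capture time."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, signature: str) -> bool:
    """Recompute the signature; any pixel edit invalidates it."""
    return hmac.compare_digest(sign_photo(image_bytes), signature)

original = b"\x89PNG...raw image bytes..."
sig = sign_photo(original)
print(verify_photo(original, sig))            # unmodified photo verifies
print(verify_photo(original + b"edit", sig))  # any tampering fails
```

      The key-management question raised above is the hard part: if the signing key ever leaves the camera, an attacker can sign doctored images just as easily.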

  • > Google hopes the release of the videos will allow researchers to develop ways to combat malicious deepfakes, giving them, news organizations, and the public ways to identify "synthetic" videos

    Does it even matter? The MSM already generates REAL videos with synthetic agendas.

  • It will absolutely, 100%, without fail, lead to that. There is no chance at all that this will remain unexploited. Literally zero. As in, prepare for the new reality - that you cannot trust anything you see, under any circumstances.

    It will be the new norm. There's no escaping it, no magic bullet, no way to put the genie back in the bottle. Pick your metaphor.

    Information is doomed.

    • I guess I will just have to take solace in the fact that all my everyday fantasies will be fulfilled virtually, because it will be trivial to plug real person X into a pre-recorded situation. Whether it's watching my kid tell me how awesome it is to do homework, seeing my boss get punched in the face, or that cute woman from work in a compromising situation.

      The biggest problem would seem to be not that people would start being grossly misled by fake news, but that they would have high levels of perso

      • by suutar ( 1860506 )

        I suspect that the only thing that will help is for equipment to include offsite storage with timestamping and hashing, with firmware updates blocked, so that one can assert the camera had no ability to modify the footage while recording, and the timestamps and hashes show it wasn't modified afterwards, making it actually legit. But even that can be gamed with enough effort.
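        The timestamp-and-hash idea above can be sketched as a simple hash chain: each recorded chunk is hashed together with the previous entry's hash and its timestamp, so altering any chunk after the fact breaks every later link. All names here are illustrative, not a real camera API, and a real system would also sign the chain head and store it offsite.

```python
import hashlib

def append_chunk(log: list, chunk: bytes, ts: float) -> None:
    """Link this chunk's hash to the previous entry and its timestamp."""
    prev = log[-1]["hash"] if log else "0" * 64
    h = hashlib.sha256(prev.encode() + chunk + str(ts).encode()).hexdigest()
    log.append({"ts": ts, "hash": h})

def chain_is_intact(log: list, chunks: list) -> bool:
    """Recompute every link; any edited chunk breaks the chain."""
    prev = "0" * 64
    for entry, chunk in zip(log, chunks):
        h = hashlib.sha256(prev.encode() + chunk + str(entry["ts"]).encode()).hexdigest()
        if h != entry["hash"]:
            return False
        prev = h
    return True

chunks = [b"frame-0", b"frame-1", b"frame-2"]
log = []
for i, c in enumerate(chunks):
    append_chunk(log, c, ts=1569416940.0 + i)

print(chain_is_intact(log, chunks))   # True: untouched recording
chunks[1] = b"doctored-frame"
print(chain_is_intact(log, chunks))   # False: mid-stream edit detected
```

        As the comment notes, even this can be gamed: if the attacker controls the device before the hashes leave it, they can simply record a fake and hash it honestly.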

  • So what's to prevent them from labeling authentic videos as fake in order to censor them? Their "ethics"?
  • I'm pretty sure every political speech I have ever heard has been a deepfake designed to discredit the politician. There's no chance that politicians are ALL so wrong at the same time, is there?

  • Deepfakes are produced by generative adversarial networks. You have the generating model generate fakes, mix those up with real examples, and have a fake detector try to sort them out. The generating model learns to fool the fake detector. The better the fake detector, the better the generating model.

    So Google wants help improving fake detectors, does it?
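    The adversarial loop described above can be illustrated with a toy, pure-Python GAN: a one-parameter-family "generator" tries to match real scalar data drawn from a Gaussian around 4.0, while a logistic "discriminator" learns to tell real from fake, and each side's gradient update uses the other's output. The 1-D setup and all names are illustrative; real deepfake GANs use deep networks over images, but the push-pull structure is the same.

```python
import math
import random

random.seed(0)
REAL_MEAN, LR, STEPS = 4.0, 0.01, 5000

def sigmoid(u: float) -> float:
    return 1.0 / (1.0 + math.exp(-u))

# Generator: x_fake = w*z + b with z ~ N(0,1).
# Discriminator: D(x) = sigmoid(a*x + c), probability x is real.
w, b = 1.0, 0.0
a, c = 0.1, 0.0

for _ in range(STEPS):
    x_real = random.gauss(REAL_MEAN, 0.5)
    z = random.gauss(0.0, 1.0)
    x_fake = w * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * x_real + c), sigmoid(a * x_fake + c)
    a += LR * ((1 - d_real) * x_real - d_fake * x_fake)
    c += LR * ((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): learn to fool the detector.
    d_fake = sigmoid(a * x_fake + c)
    grad_x = (1 - d_fake) * a        # d log D(x_fake) / d x_fake
    w += LR * grad_x * z
    b += LR * grad_x

# The generator's offset b drifts from 0 toward the real mean of 4.0.
fake_mean = sum(w * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(b, 2), round(fake_mean, 2))
```

    This is exactly why the parent comment is skeptical: the better the fake detector (the discriminator), the better the generator it trains, so any public detector also serves as a training signal for the fakers.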

  • How is this special?

    I think every single person here, and most people out there, knows that a video does not have to show the truth.
    Long before deepfakes, you could simply point the camera selectively, cut bits, join them together (even in a misleading order), change the colors, or, if you were dedicated, matte-paint an entire clip of physical film. That was done regularly for movies before computers even existed, and has been done to photographs for propaganda purposes since at least Stalin.

    People still read text n

  • by Big Bipper ( 1120937 ) on Wednesday September 25, 2019 @11:48AM (#59234744)
    If people realize that any video, picture, or story they see on the web or in the MSM may be fake, or at minimum has likely been retouched to make the subjects more attractive, they may question more, hopefully everything. Perhaps we might even see the rebirth of genuine journalism, with independent journalists who check their facts before putting their reputations on the line.
  • People already have little reason to believe much of anything in the news anymore.

    Deepfakes have been out there for a while.
    Other sources of edited news have also been around for quite a while.
    Having the POTUS declare "fake news" on anything he does not like, does not help the situation.

    I truly wonder how long it will be before a fully-fabricated piece of critical "news" gets run, and then society as a whole just blows itself up.

    Not too far away, IMNSHO.
