YouTube's Likeness Detection Has Arrived To Help Stop AI Doppelgangers
An anonymous reader quotes a report from Ars Technica: AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators. [...] The likeness detection tool, which is similar to the site's copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.
Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing "Content detection" menu. In YouTube's demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It's unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so-stealable faces, but rules are rules.
After signing up, YouTube will flag videos from other channels that appear to have the user's face. YouTube's algorithm can't know for sure what is and is not an AI video. So some of the face match results may be false positives from channels that have used a short clip under fair use guidelines. If creators do spot an AI fake, they can add some details and submit a report in a few minutes. If the video includes content copied from the creator's channel that does not adhere to fair use guidelines, YouTube suggests also submitting a copyright removal request. However, just because a person's likeness appears in an AI video does not necessarily mean YouTube will remove it.
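The false-positive problem the summary describes is inherent to similarity-based matching: a threshold on face-embedding similarity cannot tell a synthetic clone apart from a legitimate fair-use clip of the real person. YouTube has not published how its system works, so the sketch below is purely illustrative -- the embeddings, threshold, and channel names are invented, and real systems use learned embeddings from a face-recognition model rather than hand-written vectors.

```python
# Hypothetical sketch of threshold-based likeness matching.
# All embeddings, names, and the threshold are invented for illustration;
# YouTube's actual likeness detection system is not public.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def flag_matches(reference, candidates, threshold=0.9):
    """Return IDs of videos whose face embedding is close to the reference.

    A pure similarity threshold cannot distinguish an AI fake from a
    fair-use clip of the real person -- both match the real face --
    which is why flagged results still need human review.
    """
    return [vid for vid, emb in candidates.items()
            if cosine_similarity(reference, emb) >= threshold]

creator_face = [0.2, 0.9, 0.4]          # embedding from the verification video
uploads = {
    "ai_fake":   [0.21, 0.88, 0.41],    # synthetic clone -- flagged
    "fair_use":  [0.2, 0.9, 0.4],       # short real clip -- also flagged
    "unrelated": [0.9, 0.1, 0.2],       # different person -- ignored
}
print(flag_matches(creator_face, uploads))  # → ['ai_fake', 'fair_use']
```

Note that both the fake and the fair-use clip exceed the threshold, so the algorithm alone cannot decide which one warrants a takedown.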
This is about feeding data to AI (Score:2, Insightful)
Re: (Score:2)
It basically lets you steal channel content without lifting the entire video. That's usually enough to fool the existing detection algorithms.
I'm not sure who would watch those weird videos instead of The originals. But I've heard some channel owners complaining the AI slop is getting more clicks than they are sometimes.
Re: (Score:2)
I've heard some channel owners complaining the AI slop is getting more clicks than they are sometimes.
That doesn't mean they're complaining about a real rather than a perceived issue. I have a hard time believing anyone would watch an AI impersonation of a presenter in any context other than satire of said presenter. I think the core of the complaint is that mindless slop gets promoted by YT's algorithms. That has nothing to do with AI, as slop channels like Mr.Beast were getting promoted before AI content was a technical reality.
Re: (Score:2)
Obviously. So they tell creators "if you just let us use all your content with AI, we will stop this nasty, nasty, entirely made-up problem for you!"
Re: (Score:3)
YouTube creates a problem (by integrating VEO3 into its platform) and then offers a solution -- but only if you surrender your government ID to them.
This is dystopian.
Trust (Score:2)
If I were going to trust someone not to leak my ID, it would be Google.
But no.
Oh good (Score:3)
If it's anything like their content ID copyright enforcement mechanism, I'm sure this will go absolutely perfect and there will be no drama whatsoever.
=Smidge=
Re: Oh good (Score:2)
Your content has been removed because you look too much like Mr. Beast. If you think this removal is in error, we recommend plastic surgery to remedy the issue. Or you can file an appeal, but honestly, with the effort needed to succeed with that route, plastic surgery is probably less painful.
Who wants to bet... (Score:5, Insightful)
... that this will ban actual creators instead of the copies and then nobody at YT can be reached?
Protection for Celebrities (Score:4, Insightful)
No one else is likely to need this. But I can see celebrities wanting to stop all the shmucks from making AI videos of them.
Re: (Score:2)
No one else is likely to need this. But I can see celebrities wanting to stop all the shmucks from making AI videos of them.
Meanwhile over on X, Musk is sitting around like the real-life comic book supervillain letting everyone deepfake whatever they want. And it's actually kind of awesome. [x.com]
A typical visit to YouTube (Score:2, Interesting)
If they can transcribe audio, look for copyright strikes, look for copycat channels and all the rest then how hard is it really to scan content for obvious signs of AI generation? I bet there are a whole bun
And what happens when (Score:2)
And what happens when the AI insists it IS you when it isn't, or vice versa? Would a court or other deliberative body use this to make decisions? (think HOAs, City Councils, insurance companies, traffic courts, etc etc)
Reliance on an "unbiased decider" like this will lead to disasters, lots and lots of disasters, person after person.