UK Plans To Require Labels On AI-Generated Content (reuters.com)
An anonymous reader quotes a report from Reuters: Britain plans to consider requiring labels on AI-generated content to protect consumers from disinformation and deepfakes, the government said on Wednesday, as it outlined other areas of focus to tackle the evolving global challenge. Technology minister Liz Kendall stressed the need to strike the right balance between protecting the creative industries and allowing the AI sector to innovate, saying in a statement that the government would take time to "get this right."
The next phase of the government's work on copyright and AI would also look at the harms posed by digital replicas without consent, ways for creators to control their work online and support for independent creative organizations, she said. [...] Louise Popple, a copyright expert at law firm Taylor Wessing, noted that the government had not ruled out a broad exception that would allow AI developers to train on copyright works. "That's a subtle difference of approach and could be interpreted to mean that everything is still up for grabs," she said. "It feels very much like the hard issues are being kicked down the road by the government."
In 2024, Britain proposed easing copyright rules to let developers train models on lawfully accessed material, with creators able to reserve their rights. On Wednesday, Kendall said that having engaged with creatives, AI firms, industry bodies, unions and academics, the government had concluded it "no longer has a preferred option." "We will help creatives control how their work is used. This sits at the heart of our ambition for creatives, including independent and smaller creative organizations, to be paid fairly," she said.
good luck (Score:4, Insightful)
Good luck enforcing that. I agree that the deluge of slop is lame, but I don't see it going away either.
Re: (Score:2)
Re:good luck (Score:4, Informative)
Honestly, one of the really good things I noticed about this on Twitter is that no one trusts anything any more, which is good. Everyone questions every video as potentially fake, and people use the judgement of “if it looks implausible, it's probably fake”, which is a mentality more people should have.
“a.i.” is such a meme word. Forgeries, fake images and fake videos have always existed; before, they were simply the privilege of those with significant time, expertise, and/or capital, but now they belong to everyone, and that has made people more critical about what to trust. Just as the era of “is this image shopped?” arrived once that became more and more plausible, the same now applies to videos too, which is good.
In my experience, people adapt to this very quickly and just don't buy it any more.
Re: (Score:3)
Honestly, one of the really good things about this I noticed on Twitter is that no one trusts anything any more which is good.
So what we probably need, rather than AI warnings, is "non-AI" certified content. Something that's traceable to an original untampered clip from an untampered camera or an actual provable person and then all edits are recorded so that we can see exactly what has been changed, if anything.
I wonder if there's a practical way of doing this without getting into some kind of horrific DRM-style content control system?
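A minimal sketch of how such a tamper-evident edit record could work without DRM, assuming a hash-chained log in the spirit of C2PA-style manifests (the function names and record format here are illustrative, not any real standard):

```python
import hashlib
import json

def record_edit(chain, description, content_bytes):
    """Append an edit record whose hash covers the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "description": description,
        "content_hash": hashlib.sha256(content_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    # The record's own hash seals its fields, including the link backwards.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain):
    """Check that every record still links intact to its predecessor."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
record_edit(chain, "original camera capture", b"raw frame data")
record_edit(chain, "cropped and colour-corrected", b"edited frame data")
print(verify_chain(chain))   # True
chain[0]["description"] = "totally original, honest"
print(verify_chain(chain))   # False: history was rewritten
```

The point is that anyone can re-verify the chain, so quietly rewriting an earlier edit is detectable without any central content-control authority; real systems would additionally sign each record.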
Re: (Score:2)
I don't think we need that; the world ran fine before, when fakes were also possible. This is just about moving images, which used to be quite expensive to fake realistically; static images were well within the reach of amateurs before, and text especially so. Anyone could misquote someone, or lie about what they said, and of course it happened, not to mention quotation out of context.
Re: (Score:3, Interesting)
I mean, if someone takes AI-generated art, for instance, opens it in Photoshop/Affinity, and alters it... is it AI then?
Or the reverse... start with something human-generated and send it to AI to finish?
What does it take for the AI label to be required?
Re: (Score:1)
Or someone creates something in Photoshop by hand, then uses the automated tools in PS to pretty it up. Those kinds of automated tools have been around a long time, but I'll bet the pearl clutchers behind this are incapable of explaining the difference.
Re: (Score:2)
Quite. “a.i.” is such a meme buzzword. Automation has been used in image generation for a long time. People don't even really know what they mean any more when they say “a.i.”.
I prefer the good old-fashioned “this is fake” or “a forgery” as opposed to “this is a.i.”. The word “fake” far better expresses the situation and the reasons behind it.
Re: (Score:2)
People don't even really know what they mean any more when they say “a.i.”.
Any more? When did they ever? Including the people selling it?
Re: (Score:3)
It's not an unusual judgement call to make. Let's say you're deciding whether an essay plagiarizes another work. If it basically copies the whole thing verbatim, maybe changing a couple of words to disguise the fact it's been copy-pasted, it's clearly plagiarism. If it lays out some ideas expressed in that other work (clearly attributing those ideas) and then makes its own commentary and analysis of those ideas, it is not plagiarism.
I mean, if someone takes AI-generated art, for instance, opens it in Photoshop/Affinity, and alters it... is it AI then?
That is very definitely AI.
Or the reverse... start with something human-generated and send it to AI to finish?
That depends. I think I would be fine with things li
Re: good luck (Score:3)
If money changes hands then it is quite possible to enforce such a law. If you bought something that was misrepresented, at the bare minimum you get your money back, but in a civil case enough penalties can be tacked on to make hiding AI-generated content an unviable business model.
Re: good luck (Score:2)
Re: (Score:2)
If there are actual penalties and enforcement, like with child porn, it will most definitely "go away" in the sense that you won't see it unless you deliberately, specifically look for it.
Re: (Score:2)
Oh that part is really easy: Stop giving billions to AI startups.
Right now, the whole AI bubble is heavily subsidized by investor cash. Once the AI companies have to charge users the actual cost plus a profit margin, we'll see AI usage drop considerably. Because that shit ain't cheap.
Re: (Score:2)
Agreed, but in high profile cases, it'll work (much like the #ad requirement).
That is, if you put an AI-generated video of your political opponent online saying something untrue but policy-adjacent, then you can expect serious blowback unless you mark it as such. And if you mark it as such, I think it's fair to say most people won't bother watching it, except for the lolz. It certainly won't get BBC Verify looking into it to see if it's real, and it won't mistakenly get slapped across loads of lesser media
prop 65 warning (Score:4, Insightful)
Just as everything in California comes with a warning that it may contain chemicals that might contribute to an individual's overall lifetime risk of cancer... all content in the UK will come with a warning that someone, somewhere, somehow, may have used AI to create some part of the content. Danger, Will Robinson!
Re: (Score:2, Insightful)
Re:prop 65 warning (Score:5, Insightful)
The California Prop 65 law is a great example of doing it wrong. There is a real possibility of being penalized if you fail to warn someone and it turns out that chemicals were present. It is therefore safer to just warn people that everything and every place may contain chemicals. No effort is made to accurately label anything: just label everything as a potential hazard and call it done.
Re: (Score:3)
The California Prop 65 law is a great example of doing it wrong. There is a real possibility of being penalized if you fail to warn someone and it turns out that chemicals were present. It is therefore safer to just warn people that everything and every place may contain chemicals. No effort is made to accurately label anything: just label everything as a potential hazard and call it done.
Honestly, there are some business processes we use, since we sell in California, that require us to accurately label things. Meaning we have to have full chemical breakdowns of every product we sell so that it gets precisely the correct wording of the Prop 65 warnings added to it, from the point of ordering to the point of shipping. It's insane to keep track of, and it takes our products team several days a year to keep updated, just for new products and minor changes to finishes and catalysts on painted products
Re: (Score:2)
Try dealing with fire marshals who require MSDS sheets for every single item in the store (with location noted) when you have 60,000 SKUs. There's an entire industry to handle it.
Re: (Score:2)
Europe has had this sort of thing for a long time, and it's no bother. Manufacturers supply safety sheets with all the relevant data, which distributors can... Distribute.
Re: (Score:2)
The way it usually works in the UK is that to need a warning the work must have a "substantive" amount of the thing in question. Where that line lies is up to the regulator to decide ultimately, but if a business acts in good faith they don't have anything to worry about.
They should also require labelling of images that have had more than a minimal amount of photoshopping done to models. Credit to a few brands here for using largely unedited photos with models that have more typical body shapes.
Delusional (Score:1)
There is also a law requiring a label on horse-, donkey-, or kangaroo-lasagna, and criminals just ignore it.
And it's tasty, like those videos, or people wouldn't consume them.
YouTube (Score:2)
I just wish there was a way to filter AI content on YouTube.
So much slop.
Re: (Score:1)
There's an easy way to filter it. Add this to your hosts file:
127.0.0.1 youtube.com
So what? (Score:2)
No one's going to enforce it. Another dead law.
Re: (Score:1)
But their virtue has been signaled, and pearls have been clutched, and in the end, that's all the politicians - and the voters - really care about.
Re: (Score:2)
It's more traditional than that: "We must do something. This is something. Therefore we must do this." Virtue signalling implies that they have some kind of understanding that what they are doing will have little effect. In fact, some of the people behind this are likely fully aware that they will be able to sell new systems and force costs on other people
Lip service (Score:3)
Yup, because those people intentionally looking to mislead or defraud others through the use of AI fakes are certainly going to follow this law to the letter...
Clickbait much? (Score:2)
The Evil Bit from RFC 3514 (Score:2)
To be fair.... (Score:2)
Well, to be fair, who needs human content anyway?
(Totally not AI generated.)
Why not? (Score:2)
Why not shit on a company versus AI?
At least the latter doesn't have any adversarial designs against employees, unlike --- well, every corporation ever.
Slightly Off-Topic Prediction (Score:1)
At this point it is easier to mark genuine content (Score:2)
Re: (Score:1)
At this point it is easier to mark genuine content
100% agreed.
People should be able to get a credential that lets you certify the content as your own.
Right. I was thinking maybe a scannable QR-code content identifier, or a verifiable watermark. One that only 'you' can create for the content, and which people can trace back to 'you.' Like private/public key pairs, except user-friendly.
Also, and this is the big thing, only an individual can hold such a credential. NO CORPORATIONS.
I think we diverge here, but possibly because we're thinking of credentials differently. If credentials allow me to trace back the name/etc. of the entity behind them, then, sure, corporations can have one too. One per entity. I can choose to react different
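The private/public key-pair idea in this subthread can be illustrated with textbook RSA (assumption: the tiny primes and hard-coded keys below are purely for demonstration and utterly insecure; a real credential system would use Ed25519 or similar via a vetted library):

```python
import hashlib

# Toy key pair: n = 61 * 53 = 3233, public exponent e = 17,
# private exponent d = 2753 (e * d ≡ 1 mod φ(n) = 3120).
N, E, D = 3233, 17, 2753

def sign(content: bytes, d: int = D, n: int = N) -> int:
    """Only the private-key holder can produce this value."""
    digest = int.from_bytes(hashlib.sha256(content).digest(), "big") % n
    return pow(digest, d, n)

def verify(content: bytes, signature: int, e: int = E, n: int = N) -> bool:
    """Anyone holding the public key (e, n) can check the claim."""
    digest = int.from_bytes(hashlib.sha256(content).digest(), "big") % n
    return pow(signature, e, n) == digest

clip = b"unaltered camera footage"
sig = sign(clip)
print(verify(clip, sig))                  # True
print(verify(b"deepfaked footage", sig))  # almost certainly False: digest differs
```

This is the asymmetry the parent is after: publishing the public key lets anyone trace content back to 'you', while only 'you' can mint new signatures; whether the credential holder is a person or a corporation is a policy choice layered on top, not a property of the cryptography.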
Re: (Score:2)
Maybe we should go one step further and also move to a web of trust, perhaps with some defaults ready to go, or even pre-checked lists of people who are good BS detectors? That way it's a system that can't easily be seized by one party, unlike the hierarchical system of SSL/TLS-based PKIs we have now.
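A web of trust like that could, in its simplest form, be a bounded graph search over user-asserted trust edges (the names and the two-hop cap below are illustrative assumptions, not a real protocol):

```python
from collections import deque

# Each user publishes who they directly trust; there is no root authority.
TRUST = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": set(),
    "dave": set(),
}

def trusts(source: str, target: str, max_hops: int = 2) -> bool:
    """BFS over trust edges: does `source` reach `target` within max_hops?"""
    frontier, seen = deque([(source, 0)]), {source}
    while frontier:
        node, hops = frontier.popleft()
        if node == target:
            return True
        if hops == max_hops:
            continue  # don't extend paths beyond the hop budget
        for peer in TRUST.get(node, set()):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, hops + 1))
    return False

print(trusts("alice", "dave"))              # True: alice -> bob -> dave
print(trusts("alice", "dave", max_hops=1))  # False: too many hops
```

The hop limit is the interesting design knob: trust decays with distance, and because every participant holds their own edge list, no single party can seize or revoke the whole graph.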
Laughable! (Score:2)
And who puts that label?
And how can that label not be altered?
Re: (Score:2)
Same labelling style as with fashion magazines, one imagines.
Re: (Score:2)
That's all ballshot!
Is it digitally/cryptographically signed?
If "yes" maybe we can tell the thing is legit.
If "no", surely it is NOT.
How about this instead? (Score:2)
How about labeling the content that's NOT generated by AI? That will make for a lot less labeling work going forward.
It will also make it easier to find the worthwhile stuff. After all, people generally need help finding the needle, not the haystack.
Agreed. "Hand made" (Score:2)
Hand-made physical goods have value over mass-produced ones, and mass-produced content is essentially what AI dross is.
I guess the artist needs a digital certificate in the content. You couldn't trust social media to honour that, so do the verification in the browser.
Unfortunately I expect the majority don't care as much as the artists do.
Pointless law that will harm more than help. (Score:2)
First, even if you were to force the large companies to only produce models that provide these tags, you can train a small, cheap AI to remove them.
If you put your trust in the tags, you are MORE vulnerable to the more sophisticated agents that can produce this content without a tag.
This cat is solidly out of the bag, and no one should trust anything they see online.
We should start tagging original documentation with DNA... that will take a little while to fake.
One thing that WILL work is to put blockchain int