
CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough (msn.com)
The Washington Post reports:
Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...."
In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act last year, but it won't fully go into force for another two years.
But they crossed their hearts on it! (Score:1)
And they have proved to be good stewards of the public square of global communications in the past!
Re: (Score:2)
Oftentimes your enemies' enemies are your enemies, too.
Re: (Score:1)
You won't find them dressed like people they want to recruit, no hoodies for example.
Huh. And we keep being told it shouldn't matter what you look like so long as you can do the job. I guess that only applies to white men and not everyone else, especially not for women.
Re: (Score:2)
Why is it not okay for men to dress in a way that might seem "cool"?
Literal books have been written about how the business suit became the cultural norm for men. There's a Wikipedia entry just about it, and a blog and social media account called "Die Work Wear" [dieworkwear.com] that breaks down a lot of the cultural history of fashion and why certain things became the way they are.
Like you are definitely not wrong about the restrictions in regard to "professional" attire, and we went through this in America recently with a whole controversy when John Fetterman wore a hoodie to the Senate. [x.com]
So th
Re: (Score:2)
Can you link me something where she was way outta bounds? Because at least from some image searches, I'm not seeing it. Looks like the usual aesthetic for a woman in her 50s-60s.
Beware of unintended consequences. (Score:4, Insightful)
The law of unintended consequences is quite real and common, especially when laws focus on the how (with AI) instead of the underlying behavior.
Fraud is already illegal. Libel and slander are already illegal. Misrepresentation of sources for political messages is already illegal.
The federal laws don't have specific protections for publicity rights, so one possible improvement would be to follow the states that do protect them.
Yes, there is room for improvement, but we all need to keep reminding politicians not to go for knee-jerk emotional policies, and instead to follow the data and make sound policies based on research into the actual problems and solutions rather than popular, emotionally powerful ones. Left on their own, politicians will go for the emotional ones every time.
Re: (Score:3)
Fraud is already illegal. Libel and slander are already illegal. Misrepresentation of sources for political messages is already illegal.
And all that means fuck-all when you report all those to Facebook and their response is, in 100% of ALL reports, this:
We didn’t remove the ad
To keep our review process as fair as possible, we use the same Advertising Standards to review reports.
We’ve taken a look and found that this ad doesn’t go against our Advertising Standards.
Because you reported it, we won’t show this ad to you again. You can also influence the ads you see by hiding ads or changing your ad preferences.
If you disagree with the decision to not take the ad down, you can request a review.
100% of reported ads, ALL of which were using deepfake technology to scam, politically sway, and trick people.
Dozens and dozens of ads, not one, not two, not 10.
Re: (Score:3)
The only way to combat it is through education and training. Real education, not the bullshit called "education" in America.
they're worried about bullshit in their elections and you think they would want a population trained in critical thinking? think again. the last thing these fuckers want is an educated populace, that part is already working as intended.
the part that's crumbling is the propaganda machine. now they're trying to make tech platforms accountable to get them to do the dirty work for them and regain their monopoly, once they realize that that's actually not possible they'll resort straight to censorship. for natio
Re: (Score:3)
Learning to distrust everything you see might seem like the right response, but if you rule out facts then what do you have to go on? Just
Flip side of the issue (Score:3)
This case [theguardian.com] shows why courts should not readily buy into any argument that something is deep faked absent corroborating evidence.
News at 11 (Score:1)