
Anthropic Denies Federal Agencies Use of Claude for Surveillance Tasks (semafor.com)
Anthropic has declined requests from federal law enforcement contractors to use its Claude AI models for surveillance activities, deepening tensions with the Trump administration, Semafor reported Wednesday, citing two senior officials. The company's usage policies prohibit domestic surveillance, limiting how agencies including the FBI, Secret Service, and Immigration and Customs Enforcement can deploy its technology. While Anthropic maintains a $1 contract with federal agencies through AWS GovCloud and works with the Department of Defense on non-weapons applications, administration officials said the restrictions amount to making moral judgments about law enforcement operations.
And in completely unrelated news... (Score:4, Informative)
Well, to be honest... (Score:5, Insightful)
While Anthropic maintains a $1 contract with federal agencies through AWS GovCloud and works with the Department of Defense on non-weapons applications, administration officials said the restrictions amount to making moral judgments about law enforcement operations.
Shouldn't somebody? I mean, clearly, the administration isn't going to, even though that's supposed to be part of their job.
Re: (Score:3, Insightful)
Shouldn't somebody? I mean, clearly, the administration isn't going to, even though that's supposed to be part of their job.
A good question. Even if the answer were 'no' (it's not), it's Anthropic's prerogative to make moral judgments about the use of its technology if it wishes. It's called a "License Agreement".
Profits vs Security (Score:1)
Got it. That lasts until states subcontract access to this information via someone else willing to be the middleman for a fraction of what Anthropic would charge governments. In the end, profits won again.
It's their company, they can do that (Score:2)
What's wrong with a company making a moral judgement on how its product is used?
Re: (Score:2)
Nothing, until the customer (in this case, the current regime) decides that the company is no longer entitled to its moral judgement.
Re: (Score:1)
Slashdot FIXED the headline!
Well done!
Taking the bait (Score:3)
One of the officials said Anthropic’s position amounts to making a moral judgment about how law enforcement agencies do their jobs.
That quote was probably included to get the exact reaction I'm going to give, and I'm good with it. Set aside anything else Anthropic might do later.
Exercising moral judgement is what we're supposed to do. When the friggin' AI companies think you've taken the quest for power and surveillance too far, you're in a very bad place.
Re: (Score:2)
When the friggin' AI companies think you've taken the quest for power and surveillance too far, you're in a very bad place.
Preach.
The moment when an AI company says, "Eh, boss? That may be a touch sketchy for us," should make any real human scream in terror. Our government? "Heh, do it anyway. Come on. You know you want to."
Futile and symbolic gesture (Score:2)
There are lots of others who will take the money for anything, no matter how evil.
Re: (Score:2)
Yup, and it's used to this day to justify genocide [youtube.com].
Most of us know the companies w/o scruples (Score:2)
As would many other tech companies, whose "Job #1 is to make more $$$."
Is Mountainhead [wikipedia.org] a documentary?
We NEVER use Claude! (Score:2)
Although we use many Claudettes.
Pot, kettle (Score:2)
administration officials said the restrictions amount to making moral judgments about law enforcement operations.
So the National Guard deployments, maximized charges (which grand juries are throwing out), removal of prosecutorial discretion, allowance of racial profiling, language and accent discrimination in immigration enforcement, and recent arrest threats over free speech aren't moral judgements about law enforcement? Or are they claiming that they're the only ones entitled to have an opinion?