Silicon Valley Is Debating If AI Weapons Should Be Allowed To Decide To Kill (techcrunch.com)
An anonymous reader quotes a report from TechCrunch: In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone. "Congress doesn't want that," the defense tech founder told TechCrunch. "No one wants that." But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons -- or at least a heavy skepticism of arguments against them. The U.S.'s adversaries "use phrases that sound really good in a sound bite: Well, can't you agree that a robot should never be able to decide who lives and dies?" Luckey said during a talk earlier this month at Pepperdine University. "And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?"
When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn't mean that robots should be programmed to kill people on their own, just that he was concerned about "bad people using bad AI." In the past, Silicon Valley has erred on the side of caution. Take it from Luckey's cofounder, Trae Stephens. "I think the technologies that we're building are making it possible for humans to make the right decisions about these things," he told Kara Swisher last year. "So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously." The Anduril spokesperson denied any dissonance between Luckey's and Stephens' perspectives, and said that Stephens didn't mean that a human should always make the call, just that someone is accountable.
Last month, Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no at all. He instead presented a hypothetical where China has embraced AI weapons, but the U.S. has to "press the button every time it fires." He encouraged policymakers to embrace a more flexible approach to how much AI is in weapons. "You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I'm a staffer who's never played this game before," he said. "I could destroy us in the battle."
When TC asked Lonsdale for further comment, he emphasized that defense tech companies shouldn't be the ones setting the agenda on lethal AI. "The key context to what I was saying is that our companies don't make the policy, and don't want to make the policy: it's the job of elected officials to make the policy," he said. "But they do need to educate themselves on the nuance to do a good job." He also reiterated a willingness to consider more autonomy in weapons. "It's not a binary as you suggest -- 'fully autonomous or not' isn't the correct policy question. There's a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do," he said. "Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what's necessary to win with American lives on the line." [...] "For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.'s hand," reports TechCrunch. "At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to 'teach the Navy, teach the DoD, teach Congress' about the potential of AI to 'hopefully get us ahead of China.' Lonsdale's and Luckey's affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets."
the answer will be ... (Score:2)
No need to debate it.
Re: the answer will be ... (Score:2)
Re: (Score:1)
WikiLeaks on Starship & Starlink [reddit.com]
This explains Musk completely.
the answer will be *ironic* (Score:2)
As I suggested in 2010: "Recognizing irony is key to transcending militarism" https://pdfernhout.net/recogni... [pdfernhout.net]
"Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missi
Re: (Score:2)
And it will be yes, for two very important reasons:
Do not underestimate the appeal of a devic
Re: (Score:2)
Forgot to add one thing: What kind of society would we have if diplomacy was more expensive than violence?
Imagine a future in which facial recognition allows a drone to target and kill the leaders of a country for less than the cost of flying an ambassador and staff there. Imagine a future in which it costs less for a drone to assassinate a suspect than for the police to arrest them and hold them in jail.
Counterheadline: (Score:3)
Re: (Score:2)
Re: (Score:2)
Strict liability (Score:4, Interesting)
I want strict liability both civil and criminal applied to the management of any company involved in manufacturing AI weaponry that makes life and death decisions.
Re: (Score:3)
I don't think that they really care about what you want :(
Re: (Score:1)
Whose basement do they send the request for your opinion to?
Re: (Score:2)
Re: Strict liability (Score:1)
Just use the same licensing agreement they used for gunpowder and depleted uranium. I think it is dontcare v1.2...
Silicon Valley's decision is moot (Score:4, Interesting)
Re: (Score:2)
You're naive if you think that it's just the Chinese. Many nations will have projects to do this -- they just won't talk about it.
Can't tell the difference (Score:3)
where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?
That's absurd. Obviously you're going to allow a computer to make the decision, but what has a moral high ground to do with that? Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?
Re: Can't tell the difference (Score:2)
"Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?"
Found the guy who doesn't know how the MIC works
Re: (Score:3)
Where's the moral high ground when a landmine is deployed in an area that allows a school bus full of kids to hit it?
Where's the moral high ground for the asshole who makes this bad faith argument?
"Obviously you're going to allow a computer to make the decision..."
What decision? I'm not going to allow a computer to make the decision to detonate a mine. The failure, moral and intellectual, has already occurred when that question must be asked.
"Why would you waste a mine on a school bus if the mine can decid
Re: (Score:2)
Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank?
Actually, the best strategy is to blow up the fuel trucks.
Re: (Score:2)
Won't someone think of the kids? (Score:2)
And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?
And shouldn't we also consider the machine ethics [youtu.be] of taking out a Russian tank full of kids? It's all so confusing. I have to wonder, though, if he strategically selected "landmines" [whitehouse.gov] as his example, considering how recent U.S. policy has changed in this regard.
Re: (Score:2)
This is all so silly. The only reason the position on landmines has changed is that strategists no longer think they are the most effective solution for their applications anyway.
Who needs mines, which are slow to deploy, either costly up front if you put fancy electronics in them, or tedious and hazardous to remove if you don't?
Now you can send a swarm of drones or use a microwave, sonic, or laser weapon to take out the target or at least that will be the reality before the USA's next big land conflict, and
Re: (Score:2)
strategists no longer think they are the most effective solution for their applications anyway.
Apparently neither Ukraine nor Russia got the message on this. They are both making extensive use of mines to great effect. In fact, Ukraine attributed the failure of their offensive in part to Russia's extensive mining of its defensive positions.
Re: (Score:2)
Landmines are already supposed to be illegal
America is not a signatory.
America uses landmines along the Korean DMZ and the Guantanamo perimeter.
Re: (Score:2)
To make things worse, Ukraine signed that treaty, yet now 30% of their country is covered in Russian landmines.
Re: How big (Score:1)
what do you mean "short of"?
Re: (Score:3)
In fact, there should not even BE any so-called 'AI' on battlefields to begin with.
Both sides use AI extensively in Ukraine.
One application is terrain guidance of drones when GPS is jammed. Ukraine is building drones with on-board neural processors for this reason.
AI is also used for EW and target acquisition.
Re: (Score:2)
Changes nothing, it's just plain wrong for shitty fucked-up braindead people to decide whether someone lives or dies, and if any one of you still think that specific point needs to be debated, then I question whether you have even a single shred of humanity in you.
FTFY
Sounds like ... (Score:2)
Re: Sounds like ... (Score:1)
goddammit you beat me to it.
We cannot allow an automated killing gap!!
Absolute height of arrogance (Score:3)
"Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons..."
"Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons."
Who the hell cares what these guys think? Maybe they should go back and take a look at how the world actually works. These people have been successful in business, but the way government works is that it has a monopoly on the use of violence outside of its territory. Going to war, what weapons we use, and strategic and tactical decision making are entirely left to the government; namely, the military leaders will decide what weapons they use, and Congress will decide what controls they can put on the military. That's it. To think some Silicon Valley tech bros even have a say in this discussion is the most arrogant, asinine thing I've ever heard.
Re: Absolute height of arrogance (Score:1)
Ethics: a powerful negotiation tool.
Re: (Score:2)
To think some silicon valley tech bros even have a say in this discussion is the most arrogant, asinine thing I've ever heard.
The very notion of self-government is now declared dead.
Autonomous Weapon AI (Score:2)
WE CANNOT ALLOW (Score:1)
Mr. Free Market, we cannot allow an automated killing gap!!
there is money to be made (Score:2)
the answer will be YES, of course.
The article misrepresents the original comment. (Score:3)
Degrees of latitude (Score:3)
Having certainty is great, but lack of certainty doesn't mean that a decision won't be made to take out a city block to get one target if you lack the means to narrow your targeting.
Is it a war crime? Since I'm not a lawyer, I'll let The Hague decide...
If your target is a fugitive terrorist mastermind, taking refuge in neutral country, yeah, you probably don't want to piss off the host country by killing its citizens. Use a special forces raid or the flying ginsu knife bomb to reduce collateral damage.
What if your target is a bunker, where the leader of a nation is hiding out with his mistress, in the middle of the enemy capital, still heavily defended with anti-air guns and ground troops, and the nation has been mobilized for total war? What if they're jamming your ability to use precision munitions? Yeah, you might bomb the shit out of the block just to make sure.
At some point a human being makes a decision to pull the trigger. They might pull the trigger at the point where they launch the missile or bomb. They might pull the trigger at the point where someone on scene says go/no go (we'll assume the missile or bomb can divert or self-destruct -- a capability loitering munitions have). The difference is: can you substitute an algorithm to continue, divert, or abort past the point where a human being can realistically be in the loop?
We already have seeker missiles that follow a designated target using a full sensor suite. They make decisions to engage a target in millisecond or microsecond time. Interceptors as well - sometimes you get friendly fire because someone isn't giving off IFF. Absent someone using neuralink and jacked up to 100x real time, remotely riding the missile in (like one of the FPV drone pilots in Ukraine) at some point either the munition is released in dumb mode, to strike at the last designated target, or it is released in autonomous mode, to strike at the target that most closely resembles the target it was originally given. The only question is... how early or late is that control given over.
The more the enemy uses EW to jam, the more we have to turn over control to the guidance system on board to either follow the programmed flight path, or select targets of opportunity. First wave of loitering munitions all hit their targets successfully? Second wave switches over to secondary targets, or aborts and returns to base for refueling.
But yeah, smart mines would scare me shitless, not because they'd be effective, but because a bug in the IFF routine (ED-209 style) could have a school bus misidentified as an armored troop carrier. And smart mines that could autonomously roam during the night to randomize a minefield...
It is inevitable (Score:2)
Two systems, one full AI and one human-in-the-loop... the AI will be faster, and give a battlefield advantage.
The best we can hope for is a human override and tight geofencing, but eventually AI will be making the primary decision because the armies that do that will be the only ones that are viable in the field.
Who's going to stop it? (Score:2)
It's just human nature to find some way around some other human saying "You can't do that." What's to stop a human from building an AI that can do this? Nothing. What's to stop a human from building an AI that will kill off the humans trying to stand in the way of whatever the first human wants, and putting in place new humans who will say that it's okay to kill?
Luckey's question is a valid one. (Score:2)
Luckey's question about landmines (and other mine-like explosives and booby traps) is a valid one, as the designer, manufacturer, and installer of the landmine have no control over the triggering of the explosive, just like with autonomous AI-triggered weapons. A landmine is an autonomous weapon. The trigger mechanism is different, but the concept of independent triggering is the same. Landmines have been banned by treaty, but the US, China, and Russia have all refused to sign the treaty.
I also wonder about t
Of course! (Score:2)