Silicon Valley Is Debating If AI Weapons Should Be Allowed To Decide To Kill (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone. "Congress doesn't want that," the defense tech founder told TechCrunch. "No one wants that." But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons -- or at least a heavy skepticism of arguments against them. The U.S.'s adversaries "use phrases that sound really good in a sound bite: Well, can't you agree that a robot should never be able to decide who lives and dies?" Luckey said during a talk earlier this month at Pepperdine University. "And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?"

When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn't mean that robots should be programmed to kill people on their own, just that he was concerned about "bad people using bad AI." In the past, Silicon Valley has erred on the side of caution. Take it from Luckey's cofounder, Trae Stephens. "I think the technologies that we're building are making it possible for humans to make the right decisions about these things," he told Kara Swisher last year. "So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously." The Anduril spokesperson denied any dissonance between Luckey's and Stephens' perspectives, and said that Stephens didn't mean that a human should always make the call, but just that someone is accountable.

Last month, Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no at all. He instead presented a hypothetical where China has embraced AI weapons, but the U.S. has to "press the button every time it fires." He encouraged policymakers to embrace a more flexible approach to how much AI is in weapons. "You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I'm a staffer who's never played this game before," he said. "I could destroy us in the battle."

When TC asked Lonsdale for further comment, he emphasized that defense tech companies shouldn't be the ones setting the agenda on lethal AI. "The key context to what I was saying is that our companies don't make the policy, and don't want to make the policy: it's the job of elected officials to make the policy," he said. "But they do need to educate themselves on the nuance to do a good job." He also reiterated a willingness to consider more autonomy in weapons. "It's not a binary as you suggest -- 'fully autonomous or not' isn't the correct policy question. There's a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do," he said. "Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what's necessary to win with American lives on the line." [...]
"For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.'s hand," reports TechCrunch. "At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to 'teach the Navy, teach the DoD, teach Congress' about the potential of AI to 'hopefully get us ahead of China.' Lonsdale's and Luckey's affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets."


Comments Filter:
  • yes.

    No need to debate it.
    • Autonomous weapons are needed for the nuclear defense system that Elon is working on (and now Trump is advertising at rallies), https://www.reddit.com/r/WikiL... [reddit.com] Stunning this isn't all over the news.
    • As I suggested in 2010: "Recognizing irony is key to transcending militarism" https://pdfernhout.net/recogni... [pdfernhout.net]
      "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
      Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missi

    • And it will be yes, for two very important reasons:

      1. AI weapons will be more selective, and more objective in their targeting decisions, reducing the indiscriminate killing of civilians and making it more difficult to use them to commit war crimes, and, more importantly:
      2. In the rare case where AI does something unconscionable - like killing a school bus full of kids - it will be a corporation held responsible, rather than an individual person or chain of command.

      Do not underestimate the appeal of a devic

      • Forgot to add one thing: What kind of society would we have if diplomacy was more expensive than violence?

        Imagine a future in which facial recognition allows a drone to target and kill the leaders of a country for less than the cost of flying an ambassador and staff there. Imagine a future in which it costs less for a drone to assassinate a suspect than for the police to arrest them and hold them in jail.

  • by Pseudonymous Powers ( 4097097 ) on Friday October 11, 2024 @04:15PM (#64857601)
    World Debating Whether Silicon Valley Should Be the Ones Debating This
    • I suspect they are the only ones still debating it and doing so publicly. My guess would be that the answer has already been decided by the ministry of defence (or equivalent) of every country with significant AI capability.
Exactly right. No one cares what Silicon Valley thinks. They aren't the ones fighting wars, and wars are won based on "iron and blood". This is a little like nuclear weapons. If someone is losing and using nukes will prevent it, they will use them. We will eventually all pay a very heavy price for not abolishing them. A lot higher price than AI can exact.
  • Strict liability (Score:4, Interesting)

    by mysidia ( 191772 ) on Friday October 11, 2024 @04:16PM (#64857607)

    I want strict liability both civil and criminal applied to the management of any company involved in manufacturing AI weaponry that makes life and death decisions.

  • by Wolfling1 ( 1808594 ) on Friday October 11, 2024 @04:17PM (#64857611) Journal
    You can be guaranteed that the Chinese AI efforts have already made the decision, and you won't like it.
  • by TheNameOfNick ( 7286618 ) on Friday October 11, 2024 @04:23PM (#64857627)

    where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?

    That's absurd. Obviously you're going to allow a computer to make the decision, but what has a moral high ground to do with that? Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?

    • "Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?"

      Found the guy who doesn't know how the MIC works

    • by dfghjk ( 711126 )

      Where's the moral high ground when a landmine is deployed in an area that allows a school bus full of kids to hit it?
      Where's the moral high ground for the asshole who makes this bad faith argument?

      "Obviously you're going to allow a computer to make the decision..."
      What decision? I'm not going to allow a computer to make the decision to detonate a mine. The failure, moral and intellectual, has already occurred when that question must be asked.

      "Why would you waste a mine on a school bus if the mine can decid

    • Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank?

      Actually, the best strategy is to blow up the fuel trucks.

    • It's nonsense anyway. Mines (and cluster munitions) by their very nature are indiscriminate, which is why most civilised countries have agreed to not use them.
  • And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?

    And shouldn't we also consider the machine ethics [youtu.be] of taking out a Russian tank full of kids? It's all so confusing. I have to wonder, though, if he strategically selected "landmines" [whitehouse.gov] as his example, considering how recent U.S. policy has changed in this regard.

    • by DarkOx ( 621550 )

This is all so silly. The only reason the position on landmines has changed is that strategists no longer think they are the most effective solution for their applications anyway.

      Who needs mines, which are slow to deploy, either costly up front if you put fancy electronics in them, or tedious and hazardous to remove if you don't?

      Now you can send a swarm of drones or use a microwave, sonic, or laser weapon to take out the target or at least that will be the reality before the USA's next big land conflict, and

      • strategists no longer think they are the most effective solution for their applications anyway.

        Apparently neither Ukraine nor Russia got the message on this. They are both making extensive use of mines to great effect. In fact, Ukraine attributed the failure of their offensive in part to Russia's extensive mining of its defensive positions.

  • ... the Doomsday Machine from Dr. Strangelove. If we are letting Palmer Luckey (a.k.a. General Jack D. Ripper) make these decisions, we need to make sure we have solved the mine shaft gap issue first.

  • by Whateverthisis ( 7004192 ) on Friday October 11, 2024 @04:37PM (#64857659)
    "Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone."

    "Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons..."

    "Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons."

Who the hell cares what these guys think? Maybe they should go back and take a look at how the world actually works. These people have been successful in business, but the way government works is that it has a monopoly on the use of violence outside of its territory. Going to war, what weapons we use, and strategic and tactical decision-making are entirely left to the government; namely, the military leaders will decide what weapons they use, and Congress will decide what controls they can put on the military. That's it. To think some Silicon Valley tech bros even have a say in this discussion is the most arrogant, asinine thing I've ever heard.

  • With Palantir, Anduril and the Hudson Institute calling the shots you can be assured that all the AI needs to make a kill/no kill decision will be the detection of a solid gold Rolex on your wrist.
  • Mr. Free Market, we cannot allow an automated killing gap!!

  • the answer will be YES, of course.

The comment compared a landmine, which is a very simple computing device that is already in use on a massive scale, and which is programmed to kill indiscriminately, with an AI device which can be programmed to kill in a selective way. It clearly implies that the AI device would be a morally superior alternative. The article completely fails to show why the position in the current debate, that all AI weapons are evil and should be banned, would leave us better off than the current situation. This is about comprehension of logical statements, and the piece fails completely as journalism. Read it again. As it stands it is propaganda and has no place on Slashdot.
  • by silentbozo ( 542534 ) on Friday October 11, 2024 @05:51PM (#64857877) Journal

    Having certainty is great, but lack of certainty doesn't mean that a decision won't be made to take out a city block to get one target if you lack the means to narrow your targeting.

    Is it a war crime? Since I'm not a lawyer, I'll let the Hague decide...

    If your target is a fugitive terrorist mastermind taking refuge in a neutral country, yeah, you probably don't want to piss off the host country by killing its citizens. Use a special forces raid or the flying ginsu knife bomb to reduce collateral damage.

    What if your target is a bunker, where the leader of a nation is hiding out with his mistress, in the middle of the enemy capital, still heavily defended with anti-air guns and ground troops, and the nation has been mobilized for total war? What if they're jamming your ability to use precision munitions? Yeah, you might bomb the shit out of the block just to make sure.

    At some point a human being makes a decision to pull the trigger. They might pull the trigger at the point where they launch the missile or bomb. They might pull the trigger at the point where someone on scene says go/no go (we'll assume the missile or bomb can divert or self-destruct, a capability loitering munitions have). The difference is: can you substitute an algorithm to continue, divert, or abort past the point where a human being can realistically be in the loop?

    We already have seeker missiles that follow a designated target using a full sensor suite. They make decisions to engage a target in millisecond or microsecond time. Interceptors as well; sometimes you get friendly fire because someone isn't giving off IFF. Absent someone using Neuralink, jacked up to 100x real time and remotely riding the missile in (like one of the FPV drone pilots in Ukraine), at some point either the munition is released in dumb mode, to strike at the last designated target, or it is released in autonomous mode, to strike at the target that most closely resembles the target it was originally given. The only question is... how early or late is that control given over.

    The more the enemy uses EW to jam, the more we have to turn over control to the guidance system on board to either follow the programmed flight path, or select targets of opportunity. First wave of loitering munitions all hit their targets successfully? Second wave switches over to secondary targets, or aborts and returns to base for refueling.

    But yeah, smart mines would scare me shitless, not because they'd be effective, but because a bug in the IFF routine (ED-209 style) could have a school bus misidentified as an armored troop carrier. And smart mines that could autonomously roam during the night to randomize a minefield...

  • Two systems, one full AI and one human-in-the-loop... the AI will be faster, and give a battlefield advantage.

    The best we can hope for is a human override and tight geofencing, but eventually AI will be making the primary decision because the armies that do that will be the only ones that are viable in the field.
