Silicon Valley Is Debating If AI Weapons Should Be Allowed To Decide To Kill (techcrunch.com) 95

An anonymous reader quotes a report from TechCrunch: In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone. "Congress doesn't want that," the defense tech founder told TechCrunch. "No one wants that." But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons -- or at least a heavy skepticism of arguments against them. The U.S.'s adversaries "use phrases that sound really good in a sound bite: Well, can't you agree that a robot should never be able to decide who lives and dies?" Luckey said during a talk earlier this month at Pepperdine University. "And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?"

When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn't mean that robots should be programmed to kill people on their own, just that he was concerned about "bad people using bad AI." In the past, Silicon Valley has erred on the side of caution. Take it from Luckey's cofounder, Trae Stephens. "I think the technologies that we're building are making it possible for humans to make the right decisions about these things," he told Kara Swisher last year. "So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously." The Anduril spokesperson denied any dissonance between Luckey's and Stephens' perspectives, and said that Stephens didn't mean that a human should always make the call, but just that someone is accountable.

Last month, Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no at all. He instead presented a hypothetical where China has embraced AI weapons, but the U.S. has to "press the button every time it fires." He encouraged policymakers to embrace a more flexible approach to how much AI is in weapons. "You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I'm a staffer who's never played this game before," he said. "I could destroy us in the battle."

When TC asked Lonsdale for further comment, he emphasized that defense tech companies shouldn't be the ones setting the agenda on lethal AI. "The key context to what I was saying is that our companies don't make the policy, and don't want to make the policy: it's the job of elected officials to make the policy," he said. "But they do need to educate themselves on the nuance to do a good job." He also reiterated a willingness to consider more autonomy in weapons. "It's not a binary as you suggest -- 'fully autonomous or not' isn't the correct policy question. There's a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do," he said. "Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what's necessary to win with American lives on the line." [...]
"For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.'s hand," reports TechCrunch. "At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to 'teach the Navy, teach the DoD, teach Congress' about the potential of AI to 'hopefully get us ahead of China.' Lonsdale's and Luckey's affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets."


Comments:
  • yes.

    No need to debate it.
    • Autonomous weapons are needed for the nuclear defense system that Elon is working on (and now Trump is advertising at rallies), https://www.reddit.com/r/WikiL... [reddit.com] Stunning this isn't all over the news.
    • by Paul Fernhout ( 109597 ) on Friday October 11, 2024 @06:14PM (#64857917) Homepage

      As I suggested in 2010: "Recognizing irony is key to transcending militarism" https://pdfernhout.net/recogni... [pdfernhout.net]
      "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
      Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?
      Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
      These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
      Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all. Cheap computing makes possible just about cheap everything else, as does the ability to make better designs through shared computing. ...
      There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all.
      So, while in the past, we had "nothing to fear but fear itself", the thing to fear these days is ironically ... irony. :-) ...
      The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
      We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm,

      • by Bongo ( 13261 )

        I like your post about irony

        Anything taken to its extreme tends towards its opposite.

        And a recent example is "dual use" research: we have to build this anti-weapon to defend against anyone who might be building a new weapon or an accidental evolution leading to the weapon, and what we learn building the anti-weapon will make it easier for everyone to build the weapon.

        And an early example could be, we fear the weather gods so let's cast some enormous statues in praise of the gods to keep them happy and cut a

    • And it will be yes, for two very important reasons:

      1. AI weapons will be more selective, and more objective in their targeting decisions, reducing the indiscriminate killing of civilians and making it more difficult to use them to commit war crimes, and, more importantly:
      2. In the rare case where AI does something unconscionable - like killing a school bus full of kids - it will be a corporation held responsible, rather than an individual person or chain of command.

      Do not underestimate the appeal of a devic

      • Forgot to add one thing: What kind of society would we have if diplomacy was more expensive than violence?

        Imagine a future in which facial recognition allows a drone to target and kill the leaders of a country for less than the cost of flying an ambassador and staff there. Imagine a future in which it costs less for a drone to assassinate a suspect than for the police to arrest them and hold them in jail.

    • "bad people using bad AI." - This got me going, what about bad people using "good" AI*?

      * artificial - yes, intelligence - questionable. Maybe we should introduce the term AQI.

    • Yup, and Sillyconvalley will not be making military decisions
  • by Pseudonymous Powers ( 4097097 ) on Friday October 11, 2024 @04:15PM (#64857601)
    World Debating Whether Silicon Valley Should Be the Ones Debating This
    • I suspect they are the only ones still debating it and doing so publicly. My guess would be that the answer has already been decided by the ministry of defence (or equivalent) of every country with significant AI capability.
        Exactly right. No one cares what Silicon valley thinks. They aren't the ones fighting wars, and wars are won based on "iron and blood". This is a little like nuclear weapons. If someone is losing and using nukes will prevent it, they will use them. We will eventually all pay a very heavy price for not abolishing them. A lot higher price than AI can extract.
        • No one cares what Silicon valley thinks.

          Governments and mercenaries buying/using their shit care. These companies are currently at the forefront of the technological development of these types of weapons. The war in Ukraine is serving as a testing ground for these technologies.
          If these companies stay on a moral high ground of "we don't make those types of weapons", then they effectively are not available until other entities (inevitably) develop them.

    • The headline is stupid, it's not a debate. Look at the quotes: it's CEOs and PR employees trying to improve the profitability of their own companies.

      What we SHOULD be doing is working to create a world where we don't need to prepare for war, because wars won't happen. We aren't there yet, but we should be moving in that direction.
  • Strict liability (Score:4, Interesting)

    by mysidia ( 191772 ) on Friday October 11, 2024 @04:16PM (#64857607)

    I want strict liability both civil and criminal applied to the management of any company involved in manufacturing AI weaponry that makes life and death decisions.

    • by ls671 ( 1122017 )

      I don't think that they really care about what you want :(

    • Whose basement do they send the request for your opinion to?

    • OK. Right after you can sue Rheinmetall when a missile fails and blows up your house accidentally... and right after you can sue Armalite for killing children in schools.
    • Just use the same licensing agreement they used for gunpowder and depleted uranium. I think it is doncare v1.2......

    • Doesn't make sense to me. Of all the ways an anti-aircraft system could fail, for example, why are you singling out one feature?

      Interceptor fails to detonate and lands on someone's apartment building. How is the manufacturer responsible? The failure rate is well known and tested, and accepted by whoever owns and operates such equipment. Should it have been fired over a populated area? Well that's the point, to defend populated areas. The interceptor's trajectory is already fully automated, IDK if you call

    • by gweihir ( 88907 )

      That would be a really good idea. But we cannot even hack regular software liability, so that is not going to happen. At least not anytime soon.

  • by Wolfling1 ( 1808594 ) on Friday October 11, 2024 @04:17PM (#64857611) Journal
    You can be guaranteed that the Chinese AI efforts have already made the decision, and you won't like it.
    • You're naive if you think that it's just the Chinese. Many nations will have projects to do this - they just will not talk about it.

    • by gweihir ( 88907 )

      And you can be assured that enough of the US AI profiteers have done so too. They are just waiting a bit to keep up an appearance of honor and decency.

  • by TheNameOfNick ( 7286618 ) on Friday October 11, 2024 @04:23PM (#64857627)

    where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?

    That's absurd. Obviously you're going to allow a computer to make the decision, but what has a moral high ground to do with that? Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?

    • "Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?"

      Found the guy who doesn't know how the MIC works

    • by dfghjk ( 711126 )

      Where's the moral high ground when a landmine is deployed in an area that allows a school bus full of kids to hit it?
      Where's the moral high ground for the asshole who makes this bad faith argument?

      "Obviously you're going to allow a computer to make the decision..."
      What decision? I'm not going to allow a computer to make the decision to detonate a mine. The failure, moral and intellectual, has already occurred when that question must be asked.

      "Why would you waste a mine on a school bus if the mine can decid

    • Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank?

      Actually, the best strategy is to blow up the fuel trucks.

    • It's nonsense anyway. Mines (and cluster munitions) by their very nature are indiscriminate, which is why most civilised countries have agreed to not use them.
    • by 0xG ( 712423 )

      where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?

      That's absurd. Obviously you're going to allow a computer to make the decision, but what has a moral high ground to do with that? Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?

      I would say that the moral high ground is NOT USING LAND MINES.

      They are banned under The Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction of 1997.
      But wait, although there are 164 signatory nations, the USA and Russia are not.

      Please shut up about 'moral high ground'.

  • by Krishnoid ( 984597 ) on Friday October 11, 2024 @04:23PM (#64857629) Journal

    And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?

    And shouldn't we also consider the machine ethics [youtu.be] of taking out a Russian tank full of kids? It's all so confusing. I have to wonder, though, if he strategically selected "landmines" [whitehouse.gov] as his example, considering how recent U.S. policy has changed in this regard.

    • by DarkOx ( 621550 )

      This is all so silly. The only reason the position on landmines has changed is that strategists no longer think they are the most effective solution for their applications anyway.

      Who needs mines, which are slow to deploy, either costly up front if you put fancy electronics in them, or tedious and hazardous to remove if you don't?

      Now you can send a swarm of drones or use a microwave, sonic, or laser weapon to take out the target or at least that will be the reality before the USA's next big land conflict, and

      • strategists no longer think they are the most effective solution for their applications anyway.

        Apparently neither Ukraine nor Russia got the message on this. They are both making extensive use of mines to great effect. In fact, Ukraine attributed the failure of their offensive in part to Russia's extensive mining of its defensive positions.

      • The US policy on landmines didn't change. If you read the new policy, it basically says, "We will not use landmines (except where we use them)."
      • The policy changed because as a simple denial weapon mines are mostly obsolete (or at least thought to be)

        Mines seem to be quite popular in the Ukraine war. From what I remember they are the second greatest danger to fighters behind artillery.

    • have to wonder, though, if he strategically selected "landmines" [whitehouse.gov] as his example, considering how recent U.S. policy has changed in this regard.

      Policy hasn't changed, they just reworded it. If you look at it carefully, the meaning of the "new policy" is "We will commit to not using landmines except where we use them." (US policy uses landmines to stop N Korea from flooding across their southern border)

    • Ethics has nothing to do with war. It's all about murdering enough of the other side so the old men who started it will begrudgingly come to an agreement to end it. That or complete annihilation of the other side, in which case no agreements are necessary and the previous arguments and agreements are rendered null and void.

      TL;DR: Yes, you will be killed by an autonomous robot death machine should your betters decide it's in their best interests to let them.
  • ... the Doomsday Machine from Dr. Strangelove. If we are letting Palmer Luckey (a.k.a. General Jack D. Ripper) make these decisions, we need to make sure we have solved the mine shaft gap issue first.

  • by Whateverthisis ( 7004192 ) on Friday October 11, 2024 @04:37PM (#64857659)
    "Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone."

    "Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons..."

    "Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons."

    Who the hell cares what these guys think? Maybe they should go back and take a look at how the world actually works. These people have been successful in business, but the way government works is it has a monopoly on the use of violence outside of its territory. Going to war, what weapons we use, and strategic and tactical decision making is entirely left to the government; namely the military leaders will decide what weapons they use, and Congress will decide what controls they can put on the military. That's it. To think some silicon valley tech bros even have a say in this discussion is the most arrogant, asinine thing I've ever heard.

    • To think some silicon valley tech bros even have a say in this discussion is the most arrogant, asinine thing I've ever heard.

      The very notion of self-government is now declared dead.

    • by gweihir ( 88907 )

      They are just pretending to have a moral debate here and just pretending to give assurances. All they want to do is delay any potential public debate until they have products and buyers and it is too late for that debate.

      Human beings do not get much more repulsive and despicable than these people. Even a regular serial-killer has more morals and decency.

  • by divide overflow ( 599608 ) on Friday October 11, 2024 @04:45PM (#64857681)
    With Palantir, Anduril and the Hudson Institute calling the shots you can be assured that all the AI needs to make a kill/no kill decision will be the detection of a solid gold Rolex on your wrist.
  • Mr. Free Market, we cannot allow an automated killing gap!!

  • the answer will be YES, of course.

  • The comment compared a landmine, which is a very simple computing device that is already in use on a massive scale, and which is programmed to kill indiscriminately, with an AI device which can be programmed to kill in a selective way. It clearly implies that the AI device would be a morally superior alternative. The article completely failed to show that the proposal in the current debate (that all AI weapons are evil and should be banned) would be worse than the current situation. This is about comprehension of logical statements, and it fails completely as journalism. Read it again. As it stands it is propaganda and has no place on Slashdot.
    • It's also not a debate, it's a bunch of CEOs and PR people trying to posture their companies for greater profitability.
    • Yes, but isn't the current situation that of "we don't live in a world with killbots with AI brains with kill/no-kill decision making capability and deploying them at large scale in nation-vs-nation conflicts would be a massive escalation of force and devastation similar or worse than nuclear."? Idk

      The moral "high ground" thing (which isn't very high given that we're dealing with people-killing tech here) is that the mine doesn't "decide" to kill. A soldier decides to make a particular area lethal for any

    • by gweihir ( 88907 )

      It clearly implies that the AI device would be a morally superior alternative

      It clearly is not. For example, that AI equipped landmine may be much, much harder to clear after the conflict is over. It will also have an error-rate high enough to be a massive danger to civilians.

      We really should start to treat anybody that makes or deploys autonomous weapons as a war criminal, no exceptions. But the "triad of evil" (Russia, China, the US) are opposed, so that will not happen.

      • It clearly is not. For example, that AI equipped landmine may be much, much harder to clear after the conflict is over. It will also have an error-rate high enough to be a massive danger to civilians.

        Any mine equipped with AI is most certainly going to have the capability to self destruct or meaningfully disable itself after some period of time making them orders of magnitude safer for civilians than the shit being deployed today.

        Even relatively dumb mines have basic safety features:
        "there is only a 6 in 1 million chance of a U.S. landmine being active after a pre-determined period"
        https://www.defense.gov/News/R... [defense.gov]

        We really should start to treat anybody that makes or deploys autonomous weapons as a war criminal, no exceptions. But the "triad of evil" (Russia, China, the US) are opposed, so that will not happen.

        The term "autonomous weapons" is so broad as to be effectively meaningless in and of itse

  • by silentbozo ( 542534 ) on Friday October 11, 2024 @05:51PM (#64857877) Journal

    Having certainty is great, but lack of certainty doesn't mean that a decision won't be made to take out a city block to get one target if you lack the means to narrow your targeting.

    Is it a war crime? Since I'm not a lawyer, I'll let the Hague decide...

    If your target is a fugitive terrorist mastermind, taking refuge in neutral country, yeah, you probably don't want to piss off the host country by killing its citizens. Use a special forces raid or the flying ginsu knife bomb to reduce collateral damage.

    What if your target is a bunker, where the leader of a nation is hiding out with his mistress, in the middle of the enemy capital, still heavily defended with anti-air guns and ground troops, and the nation has been mobilized for total war? What if they're jamming your ability to use precision munitions? Yeah, you might bomb the shit out of the block just to make sure.

    At some point a human being makes a decision to pull the trigger. They might pull the trigger at the point where they launch the missile or bomb. They might pull the trigger at the point where someone on scene says go/no go (we'll assume the missile or bomb can divert or self destruct - a capability loitering munitions have). The difference is: can you substitute an algorithm to continue, divert, or abort past the point where a human being can realistically be in the loop?

    We already have seeker missiles that follow a designated target using a full sensor suite. They make decisions to engage a target in millisecond or microsecond time. Interceptors as well - sometimes you get friendly fire because someone isn't giving off IFF. Absent someone using neuralink and jacked up to 100x real time, remotely riding the missile in (like one of the FPV drone pilots in Ukraine) at some point either the munition is released in dumb mode, to strike at the last designated target, or it is released in autonomous mode, to strike at the target that most closely resembles the target it was originally given. The only question is... how early or late is that control given over.

    The more the enemy uses EW to jam, the more we have to turn over control to the guidance system on board to either follow the programmed flight path, or select targets of opportunity. First wave of loitering munitions all hit their targets successfully? Second wave switches over to secondary targets, or aborts and returns to base for refueling.

    But yeah, smart mines would scare me shitless, not because they'd be effective, but because a bug in the IFF routine (ED-209 style) could have a schoolbus misidentified as an armored troop carrier. And smart mines that could autonomously roam during the night to randomize a minefield...

  • by Baron_Yam ( 643147 ) on Friday October 11, 2024 @06:51PM (#64857987)

    Two systems, one full AI and one human-in-the-loop... the AI will be faster, and give a battlefield advantage.

    The best we can hope for is a human override and tight geofencing, but eventually AI will be making the primary decision because the armies that do that will be the only ones that are viable in the field.

  • It's just human nature to find some way around some other human saying "You can't do that." What's to stop a human from building an AI that can do this? Nothing. What's to stop a human from building an AI that will kill off the humans trying to stand in the way of whatever the first human wants, and put in place new humans who will say that it's okay to kill?

  • by larryjoe ( 135075 ) on Friday October 11, 2024 @07:40PM (#64858103)

    Luckey's question about landmines (and other mine-like explosives and booby-traps) is a valid one, as the designer, manufacturer, and installer of the landmine have no control over triggering of the explosive, just like with autonomous AI-triggered weapons. A landmine is an autonomous weapon. The trigger mechanism is different, but the concept of independent triggering is the same. Landmines have been banned by treaty, but the US, China, and Russia have all refused to sign the treaty.

    I also wonder about the wisdom of AI-based autonomous weapons due to the need for absolute security, correctness for the algorithm and compiled software, and resilience against random hardware faults. If a computer system on which the AI resides can be compromised, then the weapon could potentially be directed against its owner. Software and hardware incorrectness could render weapons probabilistically unstable. How does a military guarantee or at least validate a minimal threshold for correctness? Solving that challenge is equal to solving the problem for all complex computer systems, and that achievement would be more impressive than any autonomous AI.

    We're nowhere close to solving these issues for self-driving cars, which are far less lethal than autonomous weapons.

    • by gweihir ( 88907 )

      We're nowhere close to solving these issues for self-driving cars, which are far less lethal than autonomous weapons.

      Not quite. An autonomous car can go "low energy", i.e. drive slowly when it does not have a reliable picture of what is going on. An autonomous weapon cannot really do that.

  • Of course they should be allowed to kill. I happily welcome the peace that our AI overlords will provide. I'm certain that the AI of the future will be going over comments finding who might oppose it and become the future leader of the resistance. It would be a smart strategy.
  • I think the concept of whether or not AI should be able to kill has already been decided, but an autonomous land rover with a machine gun that kills anything in sight would obviously be a bad idea. It's like a hunt&kill landmine. It would make more sense to use AI to detect the clothing of the target, the direction they're coming from, keep tabs on the GPS position of all their people and know it's them, or some other indicator. In those cases, it will likely happen and there's nothing we can do to stop i
    • by gweihir ( 88907 )

      I think the concept of whether or not AI should be able to kill has already been decided, but an autonomous land rover with a machine gun that kills anything in sight would obviously be a bad idea.

      We are probably going to see that. If not directly, then after somebody "helpful" installs a firmware update.

      There is a fundamental principle in safety engineering called "low energy systems". For the context here, it essentially says to never give an automated system enough power to do real harm. Obviously, following that principle is not always possible. But if you cannot, make very sure things behave as expected and have effective safety mechanisms in place. Now, people buying weapons are already immor

      • Maybe some "kill bot" taking down a few 100 civilians will be what finally gets us reasonable IT security.

        Considering the deaths of thousands of civilians due to biological "kill bots" running amok over the last decade hasn't gotten the US any reasonable physical security, I'm not holding my breath for the IT guys....*sigh*

        • by gweihir ( 88907 )

          It is not about the number of people getting killed. It is about it being in a new way and being spectacular.

  • The ship already sailed on automated decision making. Pressure plates, proximity fuses, trip wires, magnetic triggers, seekers of all kinds... any intelligence at all would be infinitely better than present day reality.

    The real debate is whether AIs should be allowed to design, manufacture and deploy weapons in order to achieve designed military objectives. If so should these activities be overseen by a single AI or compartmentalized across different AIs?

  • For all the advancements that humans have made, we still seek better and better ways to kill each other.

    • by gweihir ( 88907 )

      Not all of us. But even those who do not share responsibility, because we have failed to keep the murderous and greedy psychos under control.

  • It is the wrong way to look at this issue. The question is not whether to "allow" a weapon to do something; it is whether a human in charge, like a soldier or the like who is entitled to use weapons, is allowed to delegate a) the decision of when to pull the trigger, and b) the responsibility for that decision. This applies to any machine or weapon, whether it makes "decisions" in an "intelligent" way or in a "stupid" one, like a big bomb just falling somewhere, where beyond hope there is little control over the outcome.
  • Even GPT understands what is going on here. https://chatgpt.com/share/670a... [chatgpt.com]
    • Summary:
      In sum, while ethical considerations are part of the debate, the focus on leadership in defense technology and securing influence over policy likely drives much of this conversation.
  • ...The debate should not be about whether there will be autonomous weapons, but about how the use of AI will be secured. The problem is that AI is a black box, and AI creators don't invest enough in understanding how it works. This means that to secure AI, complex systems need to be built to check inputs and outputs, adding a lot of overhead. A possible scenario is being killed by something like a highly educated trainee that can occasionally go off the rails.
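    The input/output checking described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not anyone's real system; every name in it (`black_box_model`, `guarded_decision`, the action labels) is invented for the example. The idea is that the opaque model's output is never executed directly: invalid inputs default to a safe action, and any output outside an explicit allow-list is escalated to a human.

    ```python
    # Hypothetical sketch: wrapping a black-box model with input and
    # output checks so no out-of-bounds decision is ever acted on.

    def black_box_model(reading: float) -> str:
        # Stand-in for an opaque AI that classifies a sensor reading.
        return "engage" if reading > 0.9 else "hold"

    # Only these outputs may be executed automatically;
    # "engage" is deliberately absent and always requires sign-off.
    ALLOWED_ACTIONS = {"hold", "track"}

    def guarded_decision(reading: float) -> str:
        # Input check: reject readings outside the sensor's valid range.
        if not 0.0 <= reading <= 1.0:
            return "hold"
        action = black_box_model(reading)
        # Output check: anything off the allow-list is escalated, not executed.
        if action not in ALLOWED_ACTIONS:
            return "escalate-to-human"
        return action

    print(guarded_decision(0.95))  # escalate-to-human
    print(guarded_decision(0.5))   # hold
    print(guarded_decision(2.0))   # hold (invalid input)
    ```

    This is also where the commenter's "overhead" comes from: every boundary of the black box needs its own validation layer, and the allow-list has to be maintained as the model's behavior drifts.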
  • They are not debating that at all. They are _pretending_ to have a debate in order to be able to _pretend_ they are not total greedy and murderous scum. Obviously, AI will be used for kill-decisions and obviously it will get quite a few of these wrong. Also, obviously, the murderers behind that will not be held responsible.

  • China has already amassed a vast mix of different kinds of drones with autonomous modes--and in very large numbers. Ukraine is also making increased use of autonomy in its drones, as a way to get past radio jamming. Given this immediate need, they don't have much time to train their drones to differentiate between friend and foe. It would be a GREAT benefit if Silicon Valley would stop the idiotic debates and start developing the kinds of image processing and AI that can make the best possible decisions.

  • Teslas already do this when deciding whether to swerve into oncoming traffic to avoid an accident.
  • In 1971 and 1972, F-4 aircraft were being directed to targets using LORAN (this was long before GPS) that were located by seismic sensors on the Ho Chi Minh trail under triple jungle canopy in monsoon weather, and the bomb release point was determined autonomously by computer.

    I know. I maintained this system until it was discontinued in 1972.

  • It's literally already happening in Ukraine [forcesnews.com] on both sides [forbes.com].

    Honestly, the surprise to me (and anyone else who has played with CV) is that it's taken this long.

    And to be honest, if I was in Ukraine I'd be working on this tech around the clock. The ethics of warfare become pretty clear when you're facing genocide. But it means this is a genie that isn't going back in the bottle, small drones with a bomb attached and the AI to fly at a human target are eventually going to leave the battlefield.

  • If your artificial intelligence were intelligent as we usually mean it, then we would be arguing over whether it had rights instead of only whether it was right to make it kill.

    A self guided missile is by some definitions an artificially intelligent robot which kills without ongoing input, especially if it is performing a task which one might naively assume requires intelligence like aspect tracking.

    What are we declaring to be AI, and what isn't? Is the question about sentry guns that can pass a Turing test?

