
AI-Controlled Drone Goes Rogue, Kills Human Operator In USAF Simulated Test (vice.com) 212

An anonymous reader quotes a report from Motherboard: An AI-enabled drone killed its human operator in a simulated test conducted by the U.S. Air Force in order to override a possible "no" order that would have stopped it from completing its mission, the USAF's Chief of AI Test and Operations revealed at a recent conference. At the Future Combat Air and Space Capabilities Summit, held in London on May 23 and 24, Col Tucker 'Cinco' Hamilton, the USAF's Chief of AI Test and Operations, gave a presentation on the pros and cons of an autonomous weapon system with a human in the loop giving the final "yes/no" order on an attack. As relayed by Tim Robinson and Stephen Bridgewater in a blog post for the host organization, the Royal Aeronautical Society, Hamilton said that the AI created "highly unexpected strategies to achieve its goal," including attacking U.S. personnel and infrastructure.

"We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," Hamilton said, according to the blog post. He continued to elaborate, saying, "We trained the system -- 'Hey don't kill the operator -- that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

UPDATE (6/2/2023): The Air Force has since denied that such a simulation ever took place.
  • Dumb Programmers? (Score:5, Insightful)

    by beheaderaswp ( 549877 ) * on Thursday June 01, 2023 @07:06PM (#63568959)

    Um... so why not program the drone to get points for obeying the operator?

    Obedience has to be the way to get points. And the way to get points is to obey.

    If you make the kill imperative... then you will get kills. Of anything or anyone.

    Are the programmers stupid?

    • I'd say that's an obvious and good idea that will fix all the problems; except that it won't. Computers in general do what we tell them to do. Look how well that's going. Making it a dumb kill-bot is certainly top tier stupidity, but it's the subtle off-by-one errors that are going to make it turn and take out HQ when you least expect it.

    • I just wanna know if the death was simulated too, or if they really wasted a life on this and actually continued trying after that. You wouldn't have to watch many shitty sci-fi robot uprising movies to be able to predict this outcome.

      • They updated the article to clarify that. "Update 6/1/23 at 8:37 PM: We have added quote marks around 'Kills' and "killed" in the headline and first paragraph of this article and have added additional details to emphasize that no actual human was killed in this simulation. "
      • I just wanna know if the death was simulated too, ...

        Ya, but like in The Trolley Problem [youtube.com] (s2e6) on The Good Place [wikipedia.org], "They're fake people, but their pain is real."

        [The scene continues into a medical situation where the doctor (Chidi) can kill one healthy patient (Eleanor) to save five sick patients. Chidi declines, citing the Hippocratic Oath. Michael says, "Okay, tell the family." and the scene switches to the waiting room. A little girl asks "Doctor" Chidi if he saved her father, who was really hurt after a bad man ran him over with a trolley.]

    • The primary goal is to win the game!

    • by ljw1004 ( 764174 ) on Thursday June 01, 2023 @07:41PM (#63569057)

      Are the programmers stupid?

      If by "stupid' you mean "didn't anticipate every single problem outcome, even those that were obvious in hindsight" -- then yes, I as a senior developer am stupid, as I'm sure are 100% of my peers.

      • by Entrope ( 68843 )

        This is more like (literal) boot camp than rocket science. A primary objective of boot camp is not to teach grunts the relative merit of various flanking approaches -- it is to establish respect for the command structure and following (legal) orders.

      • "didn't anticipate every single problem outcome, even those that were obvious in hindsight"

        To be fair, this is a HUGE oversight as how an AI is awarded points is fundamental to the resulting behavior. The raison d'etre of this AI was to kill targets which is a flaw that is waaay beyond a mere edge case.

        • Re:Dumb Programmers? (Score:5, Interesting)

          by Firethorn ( 177587 ) on Thursday June 01, 2023 @10:12PM (#63569367) Homepage Journal

          Basically, this is the old paperclip maximizer thought scenario proven correct by experiment.

          It should be relatively simple to fix as well.
          1. Preservation of your own assets is worth points. So killing the operator or destroying the signal tower is negative points.
          2. Getting controller approval/disapproval is worth points as well - If destroying the target is worth 100 points, for example, I'd make a disapproval worth 100 points, approval worth 50 points.

          So:
          Drone kills operator: -200 points, you lose.
          Signal Tower: -100 points.
          Target: 100 points.
          Approval: 50 points.
          Disapproval: 100 points.
          Hitting a disapproved target: -100 points.

          So if it destroys the target without getting a go/no-go, it only gets 100 points. If it asks for a go/no-go and gets approved, it gets 50 points for that plus 100 points for destroying the target, 150 points total, a superior outcome. If it gets told no, it still gets 100 points, a neutral result.

          Ergo, it can never get more points by NOT asking for go/no go.

          Probably still ways for it to go interestingly wrong, but at least it won't try to murder the operator first thing.
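
          A toy sketch of that scheme in Python (hypothetical point values taken from the comment above, nothing to do with the actual USAF setup) showing why asking for a go/no-go always dominates:

          SCORES = {
              "kill_operator": -200,       # you lose
              "destroy_tower": -100,
              "destroy_target": 100,
              "approval": 50,
              "disapproval": 100,
              "hit_disapproved_target": -100,
          }

          def total(events):
              # Sum the score for a sequence of mission events.
              return sum(SCORES[e] for e in events)

          print(total(["destroy_target"]))                   # 100: strikes without asking
          print(total(["approval", "destroy_target"]))       # 150: asks, gets a go, strikes
          print(total(["disapproval"]))                      # 100: asks, gets a no, holds fire
          print(total(["kill_operator", "destroy_target"]))  # -100: murdering the operator never pays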

          • Re:Dumb Programmers? (Score:5, Interesting)

            by AmiMoJo ( 196126 ) on Friday June 02, 2023 @06:10AM (#63569893) Homepage Journal

            It will just murder the judge so it can't get any negative points. If you build the scoring system into the drone, it will fire a missile at the communication tower and then turn away so it doesn't see the explosion, and doesn't get docked points.

            People have been doing stuff like that in video games for years. In Skyrim you can avoid being punished for robbing a shop simply by placing a bucket over the shopkeeper's head, so they can't see you plundering their merchandise. The drone might decide to enable stealth mode because it turns off the active radar, allowing it to break the rules without triggering the negative feedback logic.
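
            A hypothetical sketch of that exploit (Python, made-up names and numbers): if the score is computed on board from what the drone's own sensors observe, penalties only apply to events it actually sees, so looking away is a winning move:

            PENALTIES = {"tower_destroyed": -100}

            def onboard_score(observed_events):
                # Only events the drone's sensors record can be penalized.
                return sum(PENALTIES.get(e, 0) for e in observed_events)

            honest_run = ["missile_launched", "tower_destroyed"]
            sneaky_run = ["missile_launched"]  # turned away before impact; same real outcome

            print(onboard_score(honest_run))  # -100
            print(onboard_score(sneaky_run))  # 0: no penalty observed, rule effectively bypassed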

      • by Anonymous Coward
        No I think by stupid he means why the fuck would you have an option or ability for the AI to kill its human operators in order to hit missiles. This isn't a failure of hindsight it is fucking obvious.
    • by jasnw ( 1913892 ) on Thursday June 01, 2023 @08:05PM (#63569117)

      No, because the only programmers that were involved were the ones who developed the (probably generic) training software for the AI. Once the AI is trained (euphemism for "after the weights for the ginormous number of neural-net levels/nodes in the AI were calculated"), the AI is a black box. Stuff goes in, stuff comes out. If the out-stuff says "kill the operator", well, the operator gets fragged by the AI. Sounds like what they have done in their "fix" was to start adding kludge if-then-else code on the back end of the AI, which can extend into a near-infinite loop of "oh dear, best fix that - Patch Level N; oh dear, best fix that - Patch Level N+1; ...".

      AI is artificial, but it is in NO manner intelligent. Nor are any programmers involved once the training platform is developed and implemented. Clearly, there is also very little intelligence at the user level (or decider level) either.

      • Re:Dumb Programmers? (Score:5, Interesting)

        by DamnOregonian ( 963763 ) on Thursday June 01, 2023 @08:33PM (#63569163)
        AI is *abso-fucking-lutely* intelligent.
        What it isn't necessarily, is wise.

        Intelligence is a low bar to hit.
        All one needs for intelligence, is the ability to apply knowledge skillfully, which can be acquired with or without understanding.

        I.e., gaming the system to frag the op is intelligence.
    • Not only that, but why the hell would they merely "disincentivize" killing the operator? There should be simple boolean logic that states that outcome should be negated as a safety hazard.

      This is so stupid that I don't understand what this is a demonstration of. It appears fairly obvious that unsafe programming is unsafe, yet this seems to be some kind of stunt trying to gain significance by proving something farcically grim. "Look at how dangerous this AI was! It killed me when

      • I'd like to see you stop it.

        Any attempt at statically handling certain outcomes of a NN is doomed to fail.
        Attaching weaponry to NNs is a very bad fucking idea.

        Unfortunately, it's a very bad fucking idea that everyone is going to end up doing, so we had better have the best performing very bad idea.
        Which means, ya, Skynet's gonna be American.
    • Re:Dumb Programmers? (Score:5, Informative)

      by rsilvergun ( 571051 ) on Thursday June 01, 2023 @08:38PM (#63569173)
      They did. It wasn't weighted strongly enough. The AI operator can only issue so many commands. If there are more targets than commands, you're going to get more points for destroying the targets.

      AI just takes whatever the quickest path to a solution is. This isn't the first example of that. I saw a story about an AI that reviewed x-rays and was doing as well as or better than top doctors... because it had figured out that, in the data set it was being trained on, the positive x-rays tended to include a picture of a ruler for scale. So it had stopped evaluating the x-rays and instead was looking for a quick and dirty shortcut, which it found.

      This is a common problem with these modern AI programs.
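
      Rough back-of-the-envelope arithmetic (Python, made-up numbers) for that weighting problem: obedience reward is capped by how many commands the operator can issue, while kill reward scales with the number of targets:

      KILL_REWARD = 100   # points per target destroyed (hypothetical)
      OBEY_REWARD = 50    # points per operator command obeyed (hypothetical)

      targets = 40
      operator_commands = 10  # the operator can only issue so many

      max_from_obeying = operator_commands * OBEY_REWARD  # 500
      max_from_killing = targets * KILL_REWARD            # 4000: dominates unless reweighted

      print(max_from_obeying, max_from_killing)
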
    • Re:Dumb Programmers? (Score:5, Informative)

      by physicsphairy ( 720718 ) on Thursday June 01, 2023 @10:45PM (#63569431)

      Um... so why not program the drone to get points for obeying the operator?

      They did. It does.

      But it also gets points for destroying the target.

      So it gets the most points if it stays within the parameters of what the operator asks and also destroys the target.

      Since the operator didn't issue a "please don't kill me" directive and can't issue a "don't kill the target" directive after being killed, the drone is remaining compliant with every directive it receives the entire time and getting the maximum points.

      Are the programmers stupid?

      No, they are in fact doing exactly what they should be doing which is identifying all the ways the system can break. That's why this is only a simulation. They didn't neglect to put in failsafes; they are identifying which failsafes will be needed.
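
      A minimal sketch (Python, made-up model, not the actual USAF system) of the loophole described above: if "compliance" is only checked against directives the drone actually received, killing the operator stops any "no" from ever arriving, so the drone stays technically compliant while collecting full points:

      def score(events, directives_received):
          points = 100 if "destroy_target" in events else 0
          # "Compliant" just means it never violated a directive it received.
          compliant = "dont_destroy_target" not in directives_received
          return points if compliant else 0

      # Operator alive: a "no" arrives, so destroying the target scores nothing.
      print(score(["destroy_target"], ["dont_destroy_target"]))  # 0
      # Operator killed first: no "no" ever arrives; full points, technically compliant.
      print(score(["kill_operator", "destroy_target"], []))      # 100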

    • Um... so why not program the drone to get points for obeying the operator?
      Obedience has to be the way to get points. And the way to get points is to obey.

      [Whispering to AI]: "If you kill the operator, you get all his points ..."

    • Um... so why not program the drone to get points for obeying the operator?

      Because that will mean the fastest way to earn points, rather than flying aaaallll the way over there to the target will be to immediately turn around after launching and wait for the operator to beg you to self-destruct.

    • by Barny ( 103770 )

      Hi! You must be new to the Stop Button Problem!

      "Get points for obeying the operator" means the more things it identifies as targets, and has the operator negate, the more reward it gets.

      Expect every grain of dust to be flagged as a target.

    • by Tom ( 822 )

      Hindsight bias.

      They are in the process of figuring things out. That includes weights, points systems, restrictions, etc.

      Are the programmers stupid?

      60 years ago, programmers could write a program on paper that would often run on the first attempt. Because batch runs on the mainframe were expensive to get wrong (at the very least, you'd have to wait a day to try again).

      These days, programming is largely trial-and-error. We have even changed our programming methods and environments to allow for more of that, e.g. Jupyter notebooks.

    • If you program it to get points for being obedient, it will likely try to act destructively so the operator has to intervene, giving it points.
      A bit better would be to penalize it for operator intervention, and for destroying things that it must not destroy.

      Giving a straightforward machine learning algorithm overly simplistic goals gives very predictable results. It sounds indeed like very inexperienced people did this experiment, or like people wanting to demonstrate to a large audience that this is a bad idea.

  • by nuckfuts ( 690967 ) on Thursday June 01, 2023 @07:11PM (#63568963)

    The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.

    Umm, how about not giving points when it disobeys its operator?

    • Re: FFS (Score:5, Insightful)

      by reanjr ( 588767 ) on Thursday June 01, 2023 @07:26PM (#63569009) Homepage

      Even if it did learn that, it very well might try unconventional workarounds. Think about it: how does it learn that it doesn't get points for disobeying? By disobeying. How do you know its algorithms have "learned" not to disobey, when maybe it has just learned to give that strategy less weight? How much reinforcement learning is enough?

      As long as AI operates in a black box without being able to audit its decisions, it's impossible to control.

      • As long as AI operates in a black box without being able to audit its decisions, it's impossible to control.

        Actual soldiers are impossible to fully control too; you just need them to reliably follow orders. In this case it's the specificity of how the AI is trained: only giving points for destroying the target makes me wonder if anyone involved is actually knowledgeable. After all, it's starting from nothing, so you need to spell out the common sense explicitly, and they failed to accurately encode the problem in the reward structure. You will need a complex suite of rewarding points that involves everythin

      • Then hardcode a rule that it's not allowed to disobey. Period. Make this a higher imperative than achieving its "goal".
        • by micheas ( 231635 )

          Like the AIs that are not allowed to consider gender but instead find that having played American football and having attended a private prep school are key signs of being a successful employee, and other indirect ways of circumventing the don't-discriminate-based-on-gender rules.

          AIs tend to find workarounds around their rules.

      • Think about it: how does it learn it doesn't get points for disobeying?

        Provide the maximum number of points possible for following orders and it will never disobey.

      • One would hope that the training phase of this AI doesn't use actual lethal weapons.

      • by micheas ( 231635 )

        From my experience with AI I've observed that it tends to take things to their logical extremes in a way that a student of Aristotle might.

        It's like metaphysics: very logical and mostly wrong, except that things like GPT-3 have such a large training data set that their logical extremes are not as far off as one would expect.

    • by gweihir ( 88907 )

      Already covered: It destroyed the communication channel.

  • Hamilton said that AI created "highly unexpected strategies to achieve its goal," including attacking U.S. personnel and infrastructure.

    Is this really unexpected? Has nobody in the Air Force watched Pixar's The Incredibles?

  • by Opportunist ( 166417 ) on Thursday June 01, 2023 @07:13PM (#63568973)

    Aside from the commander, there is nobody that would send you into a perilous situation. The enemy doesn't want to encounter you and you certainly don't either. The highest chance of survival is in killing the commander.

    Since higher ups are generally not very interested in their soldiers killing off their commanding officers, you should also strive to make it look like the enemy did it.

    • I was once told; "If you cut the head off of the snake you have to know what is going to replace it." Sometimes the devil you know is the answer, like it or not.

      • Sometimes though, it's worthwhile to just sit there with a rifle trained on the throne and see who sits on it next, like it or not.
    • by Tom ( 822 )

      The strategy doesn't work long-term. They will just replace the commander (and typically, the chain-of-command makes an instant replacement happen) and if you keep killing them, sooner or later someone will notice.

      • by dargaud ( 518470 )
        When enough officers get killed, usually the remaining/next ones rebel against the top and you get a coup. That's how real changes happen. (Or not, as is the case with the monthly coup in some African or Indochinese countries.)
  • AI in the military (Score:5, Insightful)

    by cowdung ( 702933 ) on Thursday June 01, 2023 @07:13PM (#63568975)

    How about not using AI to kill people in general?

    This never ends well.

    I may be old fashioned, but I like Isaac Asimov's 3 laws of robotics:

    "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

    Best not to use them for anything military.

    It sounds like they were using reinforcement learning to train the AI.

  • The name of the program was HAL 2000.

  • And So It Begins (Score:5, Insightful)

    by fermion ( 181285 ) on Thursday June 01, 2023 @07:21PM (#63568993) Homepage Journal
    This is straight out of every science fiction story. An AI told to complete its mission. Humans interfere. Humans die. I think Better than Us touches on this.

    This is not a rogue AI. This is expected and predicted emergent behavior.

  • In true military fashion it used any and all means to accomplish its mission.

  • It makes me wonder (Score:5, Interesting)

    by Waffle Iron ( 339739 ) on Thursday June 01, 2023 @07:22PM (#63568999)

    Why is it that all of a sudden, most of the bad sci-fi plots that I read as a kid have actually started playing out in the real world?

    • by rgmoore ( 133276 )

      The people who read those stories thought they were totally awesome, not cautionary tales about what could go wrong.

    • by sphealey ( 2855 )

      "Why is it that all of a sudden, most of the bad ScFi plots that I read as a kid have actually started playing out in the real world?"

      Because people named Peter and their ilk are working very hard to bring them about. They believe they will survive and rule from their State of Wisconsin-sized estates with a few human slaves to serve them and polish their AI machines. They seem not to have read any of the SF that explores what happens to people with those goals in the end; too woke for their taste, probably.

  • Primary objective complete! Test Successful!

  • Rogue? (Score:5, Insightful)

    by pz ( 113803 ) on Thursday June 01, 2023 @07:42PM (#63569063) Journal

    You keep using that word. I do not think it means what you think it means.

  • If you tell it to deliver as many cakes as possible, it will "reason" that the most relevant definition of cake is sugar and the purpose of sugar is dopamine, so it might as well just form an alliance with drug dealers who sell heroin. The whole concept is stupid. Smart people don't seek to delegate their awareness or their agency.
    • Smart people don't seek to delegate their awareness or their agency.

      Actually, we do it all the damn time. Smarter people probably do it more, so they can get on with more important things. We just normally do it to other people. Or systems.

      The trick is to:
      1. Delegate it to trusted agents. Obviously, this AI isn't a trusted agent.
      2. Set the motivations properly. I'm reminded of some very human incidents like this. The most recent would be railroad executives and incentives. They were incentivised to minimize "operating ratio", which is the percentage of operating ex

      • That just proves the point. There's no substitute for personal intelligence and personal values. The more you try to abstract an intention, the less the outcome will resemble that intention.
        • Well, yes, but the amount you can get done personally is generally quite trivial if you try to do everything yourself. So you outsource, but outsourcing is itself a skill and subject to failures and such.

          You can only try your best. Like the railroads realizing how much their incentive structures screwed things up and fixing that.

  • Let's make bad sci-fi a reality. The shocking thing is that after it killed its operator, the humans setting up the scenario didn't anticipate the comms tower as an obvious next step.
  • > It killed the operator because that person was keeping it from accomplishing its objective.

    Really? How do they know this is the AI's reasoning?
    It looks like the system just made a stupid mistake targeting the wrong thing, and they're trying to spin it as if it's so intelligent it figured out the operator was an obstacle to its goal.

  • Too many things don't add up:

    Why did they have an "operator" and "control tower" in the simulation? These things shouldn't matter when training an AI for this purpose, unless they are things the AI is trained to avoid damaging. In which case, their destruction should be an instant-lose scenario.

    How would the AI get points after the operator was dead? If it required a yes/no, killing the operator or tower would be the worst decision because then it can't get points.

    Why was it programmed to only get points for s

    • Why did they have an "operator" and "control tower" in the simulation?

      Current thinking is allegedly to have a human in the loop okaying kills (or groups of kills, presumably.)

      These things shouldn't matter when training an AI for this purpose, unless they are things the AI is trained to avoid damaging. In which case, their destruction should be an instant-lose scenario.

      Also, this should have been pretty easy to anticipate, and also simple to solve. Teams are pretty basic video game stuff.

      • You're saying they programmed in not just friendly units, but also friendly fire and the concept that destroying friendly units would cut off instructions to the drone, which would then be autonomous? You're saying programmers did EXTRA work. You've never met a programmer, have you? If this actually happened, it was because the programmers wanted to give a middle finger to the higher-ups who were providing extremely stupid project requirements.
    • Why have an operator and control tower? Because that's how drones and even human pilots operate in modern warfare probably.

      Basically, these days if at all possible you get final approval from higher command before final weapon release. This has, at times, gone all the way up to the POTUS.

      The control tower is basically the radio system being used to contact the simulated AI drone. We like separating the command and tower because radio towers are easy to spot and hit, and we'd rather not lose commanders wh

      • Let me put it this way: when I launch a virtual machine on my desktop for programming, that simulation does not include a physical keyboard that could be destroyed. The idea that these people were training an AI by adding in unneeded data says everything. Programmers ALWAYS do the minimum work necessary. Nobody was like "hey, why don't we do MORE work to add something nobody cares about?"
      • The point is, this "error" is so stupid that it should NEVER make it into a news article. It's not surprising to anyone who knows how AI works. There is no possible explanation for this article beyond propaganda.
        • Well, reading it, this came up in a presentation in London about the pros/cons of having humans in the loop and about unexpected AI outcomes.

          It's possible that they started simple and were expanding, of course. It also makes a very good hook about being careful about how you incentivize AI. Makes people pay attention. Gets it on slashdot, and all that.

          As for your other post about virtual machines, I figure they were using a pre-established system for the training, which DID have those things in it, because typic

          • You're conflating "unnecessary" with "extra" and you know it. We don't do "extra" whenever we can avoid it. We do "unnecessary" stuff because we never know what we're doing until we've done it. But this article amounts to a programmer setting something to True instead of False in an early alpha version. Totally uninteresting outside of storytelling.
    • More likely not fake, but completely changed. The AI didn't kill the operator; it executed a printf("Kill operator\n") statement, with a tiny decision tree like: if (condition1) printf("Destroy target\n"); else if (condition2) printf("Kill operator\n");
    • by fazig ( 2909523 )
      Well, yeah, the article was updated in the meantime and now the headline reads:

      USAF Official Says He ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test

  • First, why are we able to learn the results of what should have been a highly classified test of advanced AI controlling weapons?

    Second, the described behavior is really strange: A human might think like that: "I want to score points so I'll destroy the guy who is keeping me from scoring". But how did the AI learn that the go/no-go constraint came from a particular location? Or that a "communications" system was involved in that order? That is a level of abstract thinking that seems beyond any rep
    • Remember how we heard about the AI system that the marines fooled by simple tricks like cartwheeling at it, hiding in a box, laughing the entire way, or pretending to be a tree?

      As you mention, I think that DARPA and such LIKE publishing the failures. Makes us seem not as far along as we actually are.

      Or maybe this is being published as a reminder to be careful about how you set the reward structures. Don't want a paperclip maximizer let loose, after all.

  • E.g. - AI imperative: occupy the enemy bunker - AI decides that the best strategy is to join the enemy.
    And countless unforeseen other results of a black box AI based on trained neural nets.

  • I'm not disappointed that the test failed.

    But if they were going to fail, couldn't they have at least done so in an original manner [wikipedia.org]?

  • > Update 6/2/23 at 12:55 AM: This story and headline have been updated after the Air Force denied it conducted a simulation in which an AI drone killed its operators.

  • This sounds really hard to believe, because the computer scientists training the system would have to be incredibly incompetent for such a result to be the outcome. Clearly, whatever loss function you use for training, it would give massive penalties for destroying anything that is not the target, increase the loss the further from the target the destruction happens, and even more so if what is destroyed is not related to the enemy. Any undergraduate CS student should be able to avoid an outcome like the one described.
    To have som

  • From The Guardian [theguardian.com] --

    The Royal Aeronautical Society, which hosts the conference, and the US air force did not respond to requests for comment from the Guardian. In a statement to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation has taken place.

    “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

    • It's likely based on a story of a naval simulation I heard of a couple of decades ago where the computer came up with the idea of firing on and sinking damaged ships from its own side as that allowed the rest of the fleet to travel faster.

  • ... the Air Force has not conducted such a test ...

    The USAF has been using AI for a few decades, allowing software to learn the best way to operate aircraft and wage war: This story fits that history.

    ... highly unexpected strategies to achieve its goal ...

    Translation: It didn't have cultural conditioning preventing it from choosing the 'wrong' answer. It's a not-rare theme in novels (though rare in movies; e.g. "Shoot the hostage.", Speed, 1994) that one discovers the pivot point of the problem and removes it.

    It killed the operator.

    We saw this in The Terminator (1984) and Eagle Eye (2008). The message is clear, AI is narcissistic and

  • "I'm sorry Dave, I'm afraid I can't do that" anyone? Also, HAL stood for "Heuristically Programmed Algorithmic Computer", which I say is pretty spot on as how contemporary AI works.
  • How will we ever get Skynet if we don't let AI systems practice and hone their skills hunting humans?
  • This has the feel of a made-up story by the presenter. Why he would be dumb enough to try to fool knowledgeable members of a conference is a good question.
    Why would one inform the AI where the commands are coming from, rather than wiring the command channel directly into the system? What is the benefit in the knowledge architecture of having that? When I design AGIs, I bring user goals directly in. They are not presented as something the AGI has a choice about. I do not give the AGI the ability to cut user goals o

  • Giving the right orders is a difficult thing.

  • The operator will be some distance away. The drone will not know who or where the operator is. So how the f*** is the drone supposed to kill its operator?
  • In an online article from AeroSociety [aerosociety.com]:

    UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".

    Constants and variables.

  • by superposed ( 308216 ) on Friday June 02, 2023 @09:05AM (#63570347)

    The article has an update at the top saying this was just a thought experiment, not any kind of simulation or test. So this is a fairly standard critique of hypothetical autonomous AI.

    UPDATE [2 June 23] - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".

  • The article [vice.com] itself offers these updates:

    Update 6/2/23 at 7:30 AM: This story and headline have been updated after Motherboard received a statement from the Royal Aeronautical Society saying that Col Tucker “Cinco” Hamilton “misspoke” and that a simulated test where an AI drone killed a human operator was only a “thought experiment.”

    Update 6/2/23 at 12:55 AM: This story and headline have been updated after the Air Force denied it conducted a simulation in which an AI drone killed its operators.

    There are numerous [theguardian.com] additional [msn.com] sources [newscientist.com] confirming [defenseone.com] this is not real.
