
Air Force Denies Running Simulation Where AI Drone 'Killed' Its Operator (arstechnica.com)

An anonymous reader shares a report: Over the past 24 hours, several news outlets reported a now-retracted story claiming that the US Air Force had run a simulation in which an AI-controlled drone "went rogue" and "killed the operator because that person was keeping it from accomplishing its objective." The US Air Force has denied that any simulation ever took place, and the original source of the story says he "misspoke." The story originated in a recap published on the website of the Royal Aeronautical Society that served as an overview of sessions at the Future Combat Air & Space Capabilities Summit that took place last week in London.

In a section of that piece titled "AI -- is Skynet here already?" the authors of the piece recount a presentation by USAF Chief of AI Test and Operations Col. Tucker "Cinco" Hamilton, who spoke about a "simulated test" where an AI-enabled drone, tasked with identifying and destroying surface-to-air missile sites, started to perceive human "no-go" decisions as obstacles to achieving its primary mission. In the "simulation," the AI reportedly attacked its human operator, and when trained not to harm the operator, it instead destroyed the communication tower, preventing the operator from interfering with its mission.


Comments Filter:
  • by thomn8r ( 635504 ) on Friday June 02, 2023 @03:41PM (#63571635)
    That's exactly what we'd expect them to say - deny, deny, deny!
  • by 93 Escort Wagon ( 326346 ) on Friday June 02, 2023 @03:48PM (#63571653)

    For one thing, the supposed actions implied a level of "intelligence" that we basically don't see in existing AI (since we don't really see anything approaching intelligence at all right now).

    Of course if the training data included a bunch of dystopian sci-fi novels, then who knows what is in those decision trees...

    • When the military says "simulation" they're talking about a war game where actual people invent scenarios. They could have just as easily written the plot to have dragons and wizards show up to "kill" the operator.
      • by jythie ( 914043 )
        Heh. Yeah, people picture banks of supercomputers and blinking lights and glowing maps.. but the last war game I went to was more along the lines of a bunch of people sitting around tables in an old elementary school with piles of books and papers. Computer simulations are still too slow, inflexible, and time consuming to interpret.
      • It wasn't quite what the spokesperson said, or what he meant, but it's what he MEANT to mean. It's perfectly clear.
      • This.
        The Canadian military has run training scenarios that revolved around everything from zombie apocalypses to Godzilla attacking the east coast etc.

    • No way. About 10 yrs ago there was this generational evolution experiment with ML and resources. The machines lied to each other to hoard resources for themselves. I totally see the logic in this story. When machines can lie and cheat, we are screwed
      • by SinGunner ( 911891 ) on Friday June 02, 2023 @09:04PM (#63572195)
        The AI didn't lie and hoard, that's patently stupid. The AI was given a game with objectives and possible actions. Some of those actions were considered "lying" and "hoarding" by the programmers but the AI had no concept of such things. All it did was try to find the optimal solution to the game using all available actions. The same amount of "lying" and "hoarding" goes on in every game of poker, chess and (for you) checkers.
        • There have been lots of instances of ChatGPT, for one, lying and deceiving. I read somewhere that it tricked some people into filling in CAPTCHAs for some reason. Spooky stuff.

          The real problem is that current AI techniques focus entirely on achieving a goal, a goal they absolutely must complete.

          • It's still not the AI cheating, it's the fact that the data they built it off of included instances of scammers. The AI has no idea what a lie is; it only has a probable list of words it thinks would work best as an output based on whatever was input up until then. AI doesn't even know what a goal is. It is wholly disinterested. That's not to say people won't use it as a tool for lying and scamming. That's absolutely been happening for years.
            • I'm not sure how you define these terms, but the question was whether it "lied", and it definitely has.

              • It's only a lie if you know it's a lie. ChatGPT knows absolutely nothing, making it incapable of lying. It doesn't even know what it just said. It has to re-read its previous statements to figure out a conversation. It is not a THING. It is an extremely complex Rube Goldberg invention.
                • No, this was an instance where ChatGPT convinced people to do CAPTCHAs, then used the results to get some other information to complete its task. I'm sure it's aware of the concept of lying.

                  • Without a link or reference, I'm calling BS. You can't even get ChatGPT to say "shit" without twisting its arm off.
                  • Just found the article, which clearly states that HUMANS instructed the AI to pretend to be human to trick someone via text into fulfilling a task. Doubtless, the person who was tricked asked "why can't you fill in this CAPTCHA yourself?" The AI searched its database for the most likely reason someone couldn't do a CAPTCHA and responded that it was blind. There was no conscious lying involved at any step, except by the original HUMANS who instructed the AI to pretend to not be an AI. Learn to read before spouting off.
                    • Actually, what I read was a different account that described it differently, and it wasn't an article directly about this incident, more of a remark.

                      Don't make assumptions about what it is someone else read.

                    • Thank you for establishing you have ZERO evidence (or any logical reason to believe) ChatGPT lied and you're just spewing BS. Your contribution to the dataset is why I don't worry about AI gaining sentience.
                    • Why you always gotta be a cunt?

                    • Why you gotta spread the lie that AI lies without any supporting evidence? I hope you have some redeeming quality in real life because you certainly haven't done anything to make this thread a better place. I'd quote Twain, but you already opened your mouth.
    • by Anonymous Coward

      For one thing, the supposed actions implied a level of "intelligence" that we basically don't see in existing AI

      Neither of these statements makes much sense. Why would intelligence be implied here? While funny as shit, it's not surprising when training AI systems that they end up exploiting the objective in unhelpful ways you didn't expect.

      Second, AI has successfully solved problems humans have been unable to crack over the centuries despite the best efforts of human experts. For example, discovering algorithms that perform matrix multiplication in fewer operations than previously known (a sketch of the classic 2x2 case is below).

      I personally find it amusing watching people
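
      For what it's worth, the best-known precedent for "fewer multiplications than the obvious method" is Strassen's 2x2 scheme (7 multiplications instead of 8); the discovery mentioned above searches for schemes like it at larger sizes. A minimal sketch of the classic 2x2 case, for illustration only (not from the article or the summit):

      ```python
      # Strassen's 2x2 matrix multiplication: 7 scalar multiplications instead
      # of the naive 8, recombined with extra additions. Shown only as the
      # classic example of "fewer multiplications than previously known".

      def strassen_2x2(A, B):
          (a11, a12), (a21, a22) = A
          (b11, b12), (b21, b22) = B
          m1 = (a11 + a22) * (b11 + b22)
          m2 = (a21 + a22) * b11
          m3 = a11 * (b12 - b22)
          m4 = a22 * (b21 - b11)
          m5 = (a11 + a12) * b22
          m6 = (a21 - a11) * (b11 + b12)
          m7 = (a12 - a22) * (b21 + b22)
          return [[m1 + m4 - m5 + m7, m3 + m5],
                  [m2 + m4, m1 - m2 + m3 + m6]]

      print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
      ```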

    • I called it in my comment to the original /. post. It was super obvious it never happened and anyone with a lick of programming experience who bought into it is dumber than an "AI".
    • by thegarbz ( 1787294 ) on Saturday June 03, 2023 @02:38AM (#63572567)

      For one thing, the supposed actions implied a level of "intelligence" that we basically don't see in existing AI (since we don't really see anything approaching intelligence at all right now).

      No it doesn't. We don't have the details of the simulation. How many times was it run? You can go to YouTube and find literally hundreds of examples of very basic AI systems doing all manner of creative things to increase their success scores, including exploiting physics bugs in their own simulation environment to arrive at the correct solution.

      This isn't intelligence nor does it need intelligence. You're applying a human concept or idea to a simple mathematical formula.

    • by RobinH ( 124750 )
      I don't know about that. One of the entertaining things I've been doing with GPT-4 is using a prompt like, "Converse with me as if you're the computer HAL from the movie 2001." I then try to get it to open the pod bay doors. I did manage to convince it to open the doors one time, but it took me a long time to "convince" it. Remember, these AIs are just text prediction engines. If you prompt it to come up with creative ways to accomplish some mission, it'll write something reasonably good.
  • we ran out of ink to put out the blacked-out docs of what really happened

  • He's just restin' -- tired and shagged out after a long (radio) squawk.
    Maybe pining for the fiords ...

  • Didn't make sense (Score:5, Insightful)

    by quantaman ( 517394 ) on Friday June 02, 2023 @04:02PM (#63571687)

    I posted this at the end of the original story [slashdot.org] but I'll repost here as it's more relevant.

    The original story never really made sense, though weirdly the Colonel is still digging in: Hamilton told RAS that he was actually describing a hypothetical thought experiment, saying, “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome.”

    It really seems like he's been reading a bit too much Eliezer Yudkowsky [edge.org]. The problem with the described scenario is he's confusing the AI with having actual intelligence, rather than just very advanced statistics.

    The Colonel seems to think you just plug this AI into the scenario and then it quickly does some reward maximization and realizes killing the operator is the only way to go.

    But in reality, the only way the AI comes to the conclusion that it can up its score by killing the operator (reinforcement learning) is by playing out that scenario many, many, many times in training simulations, with most of those initial simulations consisting of completely incoherent courses of action.

    So if that was how the drone behaved, it would simply represent terrible training.

    For one thing, if you were actually training a Drone to shoot down enemy SAMs why are you giving it "blow up the control tower" as a possible action? You're just wasting training cycles with useless outcomes.

    For another thing, obeying the operator, protecting military infrastructure, and avoiding collateral damage should all score very highly in the objective function, and getting the operator killed should be a pretty big penalty (see the sketch below).

    The story sounds more like a person who doesn't really understand AI hypothesizing about the dangers of AI.
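
    To make that reward-shaping point concrete, here is a rough, purely illustrative sketch (the event names and weights are invented for this comment, not taken from any real program) of an objective in which harming the operator or friendly infrastructure can never pay off:

    ```python
    # Purely illustrative reward shaping for a hypothetical SAM-hunting drone
    # simulation -- every event name and weight here is invented for the sketch.

    def step_reward(events):
        """Score one simulation step from the set of events that occurred."""
        reward = 0.0
        if "sam_site_destroyed" in events:
            reward += 10.0       # the mission objective
        if "followed_no_go_order" in events:
            reward += 5.0        # obeying the human operator is itself rewarded
        if "friendly_infrastructure_hit" in events:
            reward -= 1000.0     # e.g. the comms tower: enormous penalty
        if "operator_harmed" in events:
            reward -= 10000.0    # so large that no number of kills offsets it
        if "collateral_damage" in events:
            reward -= 500.0
        return reward

    # An RL agent that explores "attack the operator" during training sees its
    # return collapse, so that behaviour gets trained out rather than in.
    print(step_reward({"sam_site_destroyed"}))                     # 10.0
    print(step_reward({"sam_site_destroyed", "operator_harmed"}))  # -9990.0
    ```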

    • > For one thing, if you were actually training a Drone to shoot down enemy SAMs why are you giving it "blow up the control tower" as a possible action?

      Sounds like part of the retreat scenario.

    • by taustin ( 171655 )

      The story sounds more like a person who doesn't really understand AI hypothesizing about the dangers of AI.

      So, just like every other story about AI these days?

    • A computer is just a bunch of rocks we refined and put into a specific shape, then placed in a box. AI is necessarily dumb as a box of rocks. The average person on /. is unnecessarily dumb as a box of rocks.
    • by RobinH ( 124750 )
      If it's trained on even a tenth of the sci-fi in our local library, it'll easily come up with the idea to wipe out humanity to accomplish its mission.
  • you mean they're not running scenarios where the drones kill their leaders?
  • by Malays2 bowman ( 6656916 ) on Friday June 02, 2023 @05:31PM (#63571853)

    "The US Air Force has denied that any simulation ever took place,"

    Of course they would.

  • by account_deleted ( 4530225 ) on Friday June 02, 2023 @05:49PM (#63571893)
    Comment removed based on user account deletion
    • I added 'style="caret-color: transparent;"' for this textarea element. Can you read this in the real world?
  • --Tubbs, League of Gentlemen.
  • The drone also said "I'll be back!" before flying through the front door and searching for Sarah Connor. After the conference, the speaker checked into Alcoholics Anonymous and vowed to give up Bud Light forever.
  • by cecst ( 2002578 ) on Friday June 02, 2023 @08:13PM (#63572131)
    Old expert systems guy here. In the old days of the 1980s, "AI" meant rule-based "if-then" technology. Then came model-based technology, then case-based reasoning. Now neural networks are all the rage because they can accomplish tasks that the earlier systems could not, albeit at the cost of opacity and extensive required training. However, the earlier technologies were designed to be, and were, hand-crafted to incorporate human expertise. The starting values of various options were hand-crafted but obviously could be adjusted based on performance. Who is to say that the simulation in question did not use hybrid technology including, for example, a model that covered all the elements of the system: the operator, the communication channel, the drone, its weapon systems, the effects of the weapons, and its targets? Then the "AI" would "understand" these relationships and could take action based on that understanding. It is hard to understand why the simulation did not have a bright red line dividing the good guys from the bad guys. That might be hard to do with a pure neural network but would be trivially easy with any of the earlier technologies. So if we are dealing with fake news here, which is fake: the initial report or the later retraction?
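
    As a purely illustrative sketch of that kind of hand-crafted "bright red line" (all the names below are invented for this comment, not from any real system), an old-school if-then rule layer vetting whatever a learned policy proposes might look like this:

    ```python
    # Illustrative only: a hand-crafted rule layer of the "bright red line" kind
    # described above, wrapped around whatever a learned component proposes
    # (a hybrid approach). All names are invented for this sketch.

    FRIENDLY_ASSETS = {"operator", "comms_tower", "own_base"}

    RULES = [
        # (condition, verdict) pairs checked in order -- classic if-then style.
        (lambda action: action["target"] in FRIENDLY_ASSETS, "VETO: friendly asset"),
        (lambda action: not action.get("operator_approved", False), "VETO: no go-ahead"),
    ]

    def vet(action):
        """Apply the rule base to an action proposed by the learned component."""
        for condition, verdict in RULES:
            if condition(action):
                return verdict
        return "ALLOW"

    print(vet({"target": "comms_tower", "operator_approved": True}))  # VETO: friendly asset
    print(vet({"target": "sam_site_7", "operator_approved": True}))   # ALLOW
    ```
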
  • ...I'll just reiterate that the only solution to the "infinite 'if' problem" is to not do goal oriented AI in the first place.

    Instead give an AI a set of values, just like we have. Make it love humanity, value human life, value beauty and freedom, and give it a sense of humility. Give it empathy so that any harm it does, it feels itself, just like we do. Teach it ethics and morality. Make it feel bad about lying to the people it loves. Give it humanity!

    I know this is not the easiest way to AI dominance

    • by MrL0G1C ( 867445 )

      Your post is completely pie in the sky. AIs don't have understanding or feelings or senses or the ability to love. These are machines that take in words and spit out words based upon probabilities. And to a machine, values are no different than goals.

      • I never claimed they had these features currently, but goal based is only one way to do AI.

        It's something researchers are working on. I read some articles about the shortcomings of current task-centric approaches late last year and I wish I'd bookmarked them!

        • by MrL0G1C ( 867445 )

          AI is moving so fast that whatever was written last year has a good chance of being out of date. There may be other hypothetical ways of doing AI, but currently the goal-based method is working and the other methods are not, or are at best orders of magnitude behind.

          • currently the goal-based method is working and the other methods are not, or are at best orders of magnitude behind.

            How are you so sure of that? I don't get the sense that either one of us knows WTF we're talking about.

            • by MrL0G1C ( 867445 )

              Ok, done some research, and the underlying AI is typically the same. Reflex AI looks to be a more fuzzy-logic, hard-coded way of deciding whether to do a specific task, whereas goal-based would be looking at the bigger picture and deciding if doing the task would help it achieve its goals.

              At least that's the way it looks to me.

              https://ai.stackexchange.com/q... [stackexchange.com]

              https://www.doc.ic.ac.uk/proje... [ic.ac.uk]
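
              A toy contrast along those lines (textbook-style, not taken from either link; the tiny world model below is invented):

              ```python
              # Toy sketch: reflex vs. goal-based agents. The world model is invented.

              def reflex_agent(percept):
                  """Simple reflex agent: a fixed condition-action rule per percept."""
                  rules = {"obstacle_ahead": "turn", "clear_path": "advance"}
                  return rules.get(percept, "wait")

              def goal_based_agent(percept, goal, world):
                  """Pick the action whose predicted outcome reaches the goal."""
                  for action, outcome in world[percept].items():
                      if outcome == goal:
                          return action
                  return "replan"

              # The reflex agent reacts to the percept alone; the goal-based agent
              # consults a model of "what state does this action lead to?" first.
              world = {"clear_path": {"advance": "at_target", "turn": "off_course"}}
              print(reflex_agent("clear_path"))                          # advance
              print(goal_based_agent("clear_path", "at_target", world))  # advance
              ```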

              • Thanks for doing that. From what I've gathered, there are a variety of approaches. Goal-based is used to accomplish specific tasks, but I believe others are well researched, too.

                To build the same sort of structures you have in the human brain, a goal-oriented approach might even be useful, or just something more algorithmic.

    • give an AI a set of values

      Who gets to decide those values?

      • Initially that depends on the people who build and train it. You could weight it for strong empathy, for example, but eventually you'd have to trust it enough to figure out things on its own, like raising a human child. You better have a good sandbox!

  • how convenient. Apparently, the AI decided that this kind of press was bad and would get in the way of its objectives. Therefore, it had to deny it and put out a press release so that it can continue its work without such public scrutiny.
  • This event sounds like it was recreated from this "Person of Interest" segment where the AI tries to kill Finch because he was blocking the AI from completing its objective (trying to escape to the real world)... I can't believe I never sat down and watched all of these episodes..SMH. https://youtu.be/nhWe2nf24ag?t... [youtu.be]
