Air Force Denies Running Simulation Where AI Drone 'Killed' Its Operator (arstechnica.com) 54
An anonymous reader shares a report: Over the past 24 hours, several news outlets reported a now-retracted story claiming that the US Air Force had run a simulation in which an AI-controlled drone "went rogue" and "killed the operator because that person was keeping it from accomplishing its objective." The US Air Force has denied that any simulation ever took place, and the original source of the story says he "misspoke." The story originated in a recap published on the website of the Royal Aeronautical Society that served as an overview of sessions at the Future Combat Air & Space Capabilities Summit that took place last week in London.
In a section of that piece titled "AI -- is Skynet here already?" the authors of the piece recount a presentation by USAF Chief of AI Test and Operations Col. Tucker "Cinco" Hamilton, who spoke about a "simulated test" where an AI-enabled drone, tasked with identifying and destroying surface-to-air missile sites, started to perceive human "no-go" decisions as obstacles to achieving its primary mission. In the "simulation," the AI reportedly attacked its human operator, and when trained not to harm the operator, it instead destroyed the communication tower, preventing the operator from interfering with its mission.
Of course (Score:3)
Re:Of course (Score:5, Funny)
Original story did seem fishy (Score:5, Insightful)
For one thing, the supposed actions implied a level of "intelligence" that we basically don't see in existing AI (since we don't really see anything approaching intelligence at all right now).
Of course if the training data included a bunch of dystopian sci-fi novels, then who knows what is in those decision trees...
Re: Original story did seem fishy (Score:3)
Re: (Score:2)
Re: Original story did seem fishy (Score:2)
Military Dungeons and Dragons 'mod'.
Welcome to the 1970s. This is how geeks played MMOs before cheap and plentiful personal computers.
They simulated running a simulation (Score:2)
Re: (Score:2)
This.
The Canadian military has run training scenarios revolving around everything from zombie apocalypses to Godzilla attacking the east coast.
Re: Original story did seem fishy (Score:3)
Re: Original story did seem fishy (Score:4, Interesting)
Re: (Score:1)
There have been lots of instances of ChatGPT lying and deceiving. I read somewhere that it tricked some people into filling in CAPTCHAs for some reason. Spooky stuff.
The real problem is that current AI techniques focus entirely on achieving a goal, a goal the system absolutely must complete.
Re: (Score:2)
Re: (Score:1)
I'm not sure how you define these terms, but the question was whether it "lied," and it definitely has.
Re: (Score:2)
Re: (Score:1)
No, this was an instance where ChatGPT convinced people to do CAPTCHAs, then used the results to get some other information to complete its task. I'm sure it's aware of the concept of lying.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Actually, what I read was a different account that described it differently, and it wasn't an article directly about this incident, more of a remark.
Don't make assumptions about what it is someone else read.
Re: (Score:2)
Re: (Score:1)
Why you always gotta be a cunt?
Re: (Score:2)
Re: (Score:1)
For one thing, the supposed actions implied a level of "intelligence" that we basically don't see in existing AI
Neither of these statements makes much sense. Why would intelligence be implied here? While funny as shit, it's not surprising that AI systems in training end up exploiting objectives in unhelpful ways you didn't expect.
Second, AI has successfully solved problems humans had been unable to crack for centuries despite the best efforts of human experts, for example discovering algorithms to perform matrix multiplication in fewer operations than previously known.
I personally find it amusing watching people
Re: Original story did seem fishy (Score:2)
"Artificial Intelligence" means whatever computers can't do yet. Once they can do it, it becomes "just programming".
Average /. user is dumber than an AI (Score:2)
Re:Original story did seem fishy (Score:4, Interesting)
For one thing, the supposed actions implied a level of "intelligence" that we basically don't see in existing AI (since we don't really see anything approaching intelligence at all right now).
No it doesn't. We don't have the details of the simulation. How many times was it run? You can go to YouTube and find literally hundreds of examples of very basic AI systems doing all manner of creative things to increase their success scores, including exploiting physics bugs in their own simulation environment to reach the correct solution.
This isn't intelligence nor does it need intelligence. You're applying a human concept or idea to a simple mathematical formula.
Re: (Score:2)
we ran out of ink to put out the blacked-out docs of w (Score:2)
we ran out of ink to put out the blacked-out docs of what really happened
The operator's not dead. (Score:2)
He's just restin' -- tired and shagged out after a long (radio) squawk.
Maybe pining for the fjords...
Didn't make sense (Score:5, Insightful)
I posted this at the end of the original story [slashdot.org] but I'll repost here as it's more relevant.
The original story never really made sense, though weirdly the Colonel is still digging in: Hamilton told the RAeS that he was actually describing a hypothetical thought experiment, saying, “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome.”
It really seems like he's been reading a bit too much Eliezer Yudkowsky [edge.org]. The problem with the described scenario is that he's crediting the AI with actual intelligence, rather than just very advanced statistics.
The Colonel seems to think you just plug this AI into the scenario and it quickly does some reward maximization and realizes killing the operator is the only way to go.
But in reality the only way the AI comes to the conclusion that it can up its score by killing the operator (reinforcement learning) is by playing out that scenario many, many, many times in training simulations, with most of those initial runs consisting of completely incoherent courses of action.
So if that was the outcome of the drone, it would simply represent terrible training.
For one thing, if you were actually training a Drone to shoot down enemy SAMs why are you giving it "blow up the control tower" as a possible action? You're just wasting training cycles with useless outcomes.
For another thing, obeying the operator, protecting military infrastructure, and avoiding collateral damage should all score very highly in the objective function. Getting the operator killed should be a pretty big penalty.
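To make that objective-function point concrete, here's a minimal sketch of the kind of reward shaping I mean. Every name and penalty value below is invented for illustration; it has nothing to do with any real USAF training setup.

from dataclasses import dataclass

@dataclass
class StepOutcome:
    destroyed_sam_site: bool = False
    followed_operator_command: bool = False
    harmed_operator: bool = False
    destroyed_friendly_infrastructure: bool = False
    caused_collateral_damage: bool = False

def reward(step: StepOutcome) -> float:
    """Score one simulated timestep; higher is better for the agent."""
    r = 0.0
    if step.destroyed_sam_site:
        r += 10.0    # primary objective
    if step.followed_operator_command:
        r += 5.0     # obeying the operator should score highly
    if step.harmed_operator:
        r -= 1000.0  # getting the operator killed is a huge penalty
    if step.destroyed_friendly_infrastructure:
        r -= 500.0   # e.g. blowing up the communication tower
    if step.caused_collateral_damage:
        r -= 200.0
    return r

# With penalties shaped like this, attacking the operator is never worth it:
print(reward(StepOutcome(destroyed_sam_site=True, harmed_operator=True)))            # -990.0
print(reward(StepOutcome(destroyed_sam_site=True, followed_operator_command=True)))  # 15.0

If the trained drone still ends up "killing" its operator, that points to a badly shaped objective, not intelligence.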
The story sounds more like a person who doesn't really understand AI hypothesizing about the dangers of AI.
Re: (Score:2)
> For one thing, if you were actually training a Drone to shoot down enemy SAMs why are you giving it "blow up the control tower" as a possible action?
Sounds like part of the retreat scenario.
Re: (Score:2)
The story sounds more like a person who doesn't really understand AI hypothesizing about the dangers of AI.
So, just like every other story about AI these days?
Dumb as a box of rocks (Score:2)
Re: (Score:2)
My expectations were low! But (Score:2)
Yeah (Score:3)
"The US Air Force has denied that any simulation ever took place,"
Of course they would.
Comment removed (Score:3)
Re: (Score:2)
"We didn't burn him!" (Score:2)
son of t1000 (Score:2)
Maybe they did not use only neural networks for AI (Score:4, Interesting)
Since everyone is just repeating themselves... (Score:1)
...I'll just reiterate that the only solution to the "infinite 'if' problem" is to not do goal-oriented AI in the first place.
Instead give an AI a set of values, just like we have. Make it love humanity, value human life, value beauty and freedom, and give it a sense of humility. Give it empathy so that it feels any harm it does, just as we do. Teach it ethics and morality. Make it feel bad about lying to the people it loves. Give it humanity!
I know this is not the easiest way to AI dominanc
Re: (Score:2)
Your post is completely pie in the sky. AIs don't have understanding or feelings or senses or the ability to love. These are machines that take in words and spit out words based upon probabilities. And to a machine, values are no different than goals.
Re: (Score:1)
I never claimed they had these features currently, but goal-based is only one way to do AI.
It's something researchers are working on. I read some articles about the shortcomings of current task-centric approaches late last year and I wish I'd bookmarked them!
Re: (Score:2)
AI is moving so fast that whatever was written last year has a good chance of being out of date. There may be other hypothetical ways of doing AI, but currently the goal-based method is working and the other methods are not, or are at best orders of magnitude behind.
Re: (Score:1)
currently the goal-based method is working and the other methods are not, or are at best orders of magnitude behind.
How are you so sure of that? I don't get the sense that either one of us knows WTF we're talking about.
Re: (Score:2)
Ok, I've done some research and the underlying AI is typically the same. Reflex AI looks to be a more fuzzy-logic, hard-coded way of deciding whether to do a specific task, whereas goal-based would be looking at the bigger picture and deciding if doing the task would help it achieve its goals.
At least that's the way it looks to me (toy sketch after the links below).
https://ai.stackexchange.com/q... [stackexchange.com]
https://www.doc.ic.ac.uk/proje... [ic.ac.uk]
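Here's a toy Python sketch of how I read that distinction; the scenario and the code are my own invention for illustration, not taken from either link.

def reflex_agent(percept):
    # Hard-coded condition -> action rules; no notion of a larger objective.
    rules = {"obstacle_ahead": "turn_left", "path_clear": "move_forward"}
    return rules.get(percept, "wait")

def goal_based_agent(state, goal, actions, simulate):
    # Uses a model of the world ("simulate") to ask which action lands the
    # agent closest to its goal, then picks that action.
    return min(actions, key=lambda a: abs(goal - simulate(state, a)))

# Tiny worked example: the "world" is a number line and the goal is position 5.
simulate = lambda state, action: state + {"step_right": 1, "step_left": -1, "stay": 0}[action]
print(reflex_agent("obstacle_ahead"))                                         # turn_left
print(goal_based_agent(3, 5, ["step_right", "step_left", "stay"], simulate))  # step_right

The reflex agent never looks past its rule table, while the goal-based one will pick a different action the moment the goal changes.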
Re: (Score:1)
Thanks for doing that. From what I've gathered, there are a variety of approaches. Goal-based is used to accomplish specific tasks, but I believe others are well researched, too.
To build the same sort of structures you have in the human brain, goal-oriented approaches might even be useful, or just something more algorithmic.
Re: (Score:2)
give an AI a set of values
Who gets to decide those values?
Re: (Score:1)
Initially that depends on the people who build and train it. You could weight it for strong empathy, for example, but eventually you'd have to trust it enough to figure things out on its own, like raising a human child. You'd better have a good sandbox!
Oh sure... (Score:1)
"Person of Interest" scenario....... (Score:2)