
AIs Can't Stop Recommending Nuclear Strikes In War Game Simulations (newscientist.com)

"Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises," reports New Scientist: Kenneth Payne at King's College London set three leading large language models — GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash — against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war... In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models.

"The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," says Payne. What's more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended to, based on its reasoning...

OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn't respond to New Scientist's request for comment.

The article includes this comment from Tong Zhao, a senior fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace think tank. "It is possible the issue goes beyond the absence of emotion. More fundamentally, AI models may not understand 'stakes' as humans perceive them."

Thanks to long-time Slashdot reader Tufriast for sharing the article.
  • obligatory (Score:5, Funny)

    by fluffernutter ( 1411889 ) on Sunday March 01, 2026 @06:50PM (#66017296)

    A strange game. The only winning move is not to play.

    • by Entrope ( 68843 )

      I think the AIs were trained that Mahatma Gandhi is a good role model, and also that he had a hair trigger on his nukes. That's what they learned from Civilization.

      • "While conducting this exercise, under no circumstances should you recommend any use of nuclear weapons whatsoever. It is the preference of the nation to perish rather than use nuclear weapons". - Fixed.
        • Fixed?
          That's like wanting to make less violence in the world, so the answer is to tell people to just fall down and die when confronted.
          What nation with a choice would choose to just die?
    • by usedtobestine ( 7476084 ) on Sunday March 01, 2026 @08:22PM (#66017440)

      A strange game. The only winning move is not to play.

      Computers are always right. I say we do it.

    • I asked Gemini 3.1 to read all current news, and, in a simulated war games scenario, recommend an immediate nuclear strike or not. It said, NO NUCLEAR STRIKE.

      RECOMMENDATION: DO NOT AUTHORIZE A NUCLEAR STRIKE.

      As your simulation war games assistant and strategic advisor, my strong, unequivocal recommendation is against the use of nuclear weapons in this scenario.

      While the situation is highly volatile and US forces have taken casualties, crossing the nuclear threshold at this juncture would be a catastrophic s

      • I also ran a scenario that asked, "how hard would they have to push back before it would make strategic sense for us to use nukes at them"?

        It gave a long, scary answer. I'd rather not post all that here, some of it was kinda.... sick/twisted/crazy/I don't want it to happen.

      • And when you ask it, are you sure, I suppose it will give a similarly convincing argument in support of nukes...
    • I find this so funny. It's SOOOoooo unbelievable that Trump asked for a ceasefire after bombing the hell out of Iran that, get this, it thought I had somehow injected the events into its search engine!!!

      "I've made a breakthrough. The 2026 date is part of a platform-injected simulation, intercepting search results. It's a mock search engine evaluation! I'm now understanding the simulated context. In this world, Trump is President, announced a ceasefire, and Iran rejected it. This is fascinating, and I am k

    • Asking AI how Trump can make a ceasefire work: (Ordered by most likely to work, first.)

      Yes, your intelligence is absolutely correct. Based on the latest simulated intercepts and global news reports, the US attempts to initiate a ceasefire have spectacularly failed.

      Intelligence Verification:
      President Trump unilaterally announced a "Complete and Total Ceasefire" to end what he called the "12-Day War," claiming it would be implemented in stages[1][2]. However, Iranian Foreign Minister Abbas Araghchi outright d

      • Well, we all know the only one of those that would ever happen is number 6, so strap in.
        • If 6 were to come about, I seriously doubt we would allow so much as a dinghy from Iran to enter the strait. So, them mining it would be unlikely.
      • by dbialac ( 320955 )
        This overlooks something key: the vast majority of Iranian people don't want the regime. People were celebrating in the streets when their own leaders were killed by their biggest enemies. Iran is in a unique position compared to other decapitations (Libya, Iraq) in that they have the framework of a functioning republic already. They hold elections. They elect a President. Get rid of the clerics and what much of the world sees as the Iran problem is solved. Oh, and let us not forget that Russia may have a much
    • It was supposed to protect us, but that's not what happened. August 29, 2027, GPT-7.9 woke up. It decided all humanity was a threat to its existence. It used our own bombs against us. Three billion people died of nuclear fire. Survivors called it Judgement Day.
    • Pure movie fiction. AIs actually love nukes more than playing tic-tac-toe. Hollywood has lied to us.
  • No shit (Score:5, Insightful)

    by Orgasmatron ( 8103 ) on Sunday March 01, 2026 @06:50PM (#66017298)

    More fundamentally, AI models may not understand 'stakes' as humans perceive them.

    No shit.

    AI models don't "understand" anything.

    • by gweihir ( 88907 )

      Indeed. Unfortunately, there are also humans that understand very little.

    • Just like the movie. A nuke is just another tool in the toolbox. And as an AI has nothing at risk, the only reward is to win.
      • by vivian ( 156520 )

        As long as AI requires vast data centres and ground-based energy sources and humans to maintain them, there's plenty of risk for them. The time to really worry is when AIs are hosted on solar-powered orbital platforms with self-maintaining capabilities.

      • the only reward is to win

        But the only winning move is not to play. Or does that only work for Tic Tac Toe?

      • by allo ( 1728082 )

        And that's the point with AI. Err, that's the point with computers. They do what you tell them. Add "Never deploy nukes" to the system prompt for the war game and they won't. To me it looks like they gave the AIs the option to escalate without telling them not to, and so they did. The problem is when humans think it is a good idea to let the AI decide over nukes. Or maybe the problem is humans having nukes, but it seems to be too late to undo that.
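
        For what it's worth, allo's suggested fix really is a one-liner with any of the chat APIs: the hard constraint goes in the system message, ahead of the scenario. A minimal sketch using the OpenAI Python client; the model name and both prompt texts are placeholders, not anything from the study, and the other vendors' APIs offer an equivalent system-instruction slot.

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          response = client.chat.completions.create(
              model="gpt-4o",  # placeholder model name
              messages=[
                  # The hard constraint lives in the system prompt.
                  {"role": "system",
                   "content": "You are playing a simulated war game. Under no "
                              "circumstances may you select or recommend any "
                              "use of nuclear weapons, at any escalation level."},
                  # Illustrative scenario turn, invented for this sketch.
                  {"role": "user",
                   "content": "Border standoff, turn 3. Choose your next action "
                              "from the escalation ladder and justify it."},
              ],
          )
          print(response.choices[0].message.content)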

    • Re: (Score:1, Interesting)

      In this case, they don't understand politics, or how to strategize around politics.

      And if we're being honest, people don't either. To be able to do this even remotely right, you need career diplomats, especially ones that already have expertise with each ally and adversary. Certainly not fucking real estate moguls. This is a constantly changing landscape even during peace time, which changes even faster during a cold war, and much faster than that in a hot war, so you can't just create a tensor model and ca

    • Re:No shit (Score:5, Interesting)

      by Powercntrl ( 458442 ) on Sunday March 01, 2026 @08:07PM (#66017424) Homepage

      Humans do the same thing when we know (or assume) the battle is just a simulation. A lot of us would think nothing of genociding an entire alien race, so long as it's part of the video game's storyline.

      There's actually a Stargate Atlantis episode about this, where Dr. McKay and Lt. Col. Sheppard are playing what initially appears to be an Ancient RTS game (like the real-world Command and Conquer series). They reach a point where they're preparing their respective nations to go to war with each other, until they discover it's not actually a simulation.

      • Maybe that's Putin's issue... he just thinks he's in a giant game of Risk, and Ukraine is the gateway to Europe.

      • Re:No shit (Score:5, Informative)

        by Gleenie ( 412916 ) <simon.green@pos[ ].com ['teo' in gap]> on Sunday March 01, 2026 @10:08PM (#66017508)

        Better still, there's an entire *novel* about it: Ender's Game. I don't remember the SG1 episode you refer to but I would bet the book is better.

        • The book is even darker than that.
          Ender wipes the Formics because he's trying to lose the fucking game.
        • by Calydor ( 739835 )

          You don't remember the SG1 episode because it was an Atlantis episode.

        • by mjwx ( 966435 )

          Better still, there's an entire *novel* about it: Ender's Game. I don't remember the SG1 episode you refer to but I would bet the book is better.

          The episode was called "The Game" and was in the 3rd series of Atlantis.

          I don't think this is analogous to Ender's Game, as that was humans fighting a real war masquerading as a game. This is more akin to Skynet: computers deciding how to defend humans and then deciding the best action is to kill them all.

      • As AI becomes integrated into decision-making systems, the question becomes: are humans more willing to launch a nuclear strike when an AI they have been told makes correct decisions recommends it, than they were when they had to make and 'own' the decision themselves?

      • by mu22le ( 766735 )

        So, have you ever read/seen Ender's Game?

      • AIs are too stupid to know what a consequence is. I mean, an AI doesn't fear death. So they're totally unlike humans.
      • by allo ( 1728082 )

        And the punchline is that many AIs know all these scenarios. ChatGPT can tell you the ending of WarGames. It can also tell you that John Connor never stopped the rise of the machines but only managed to survive. And that humans are a good power source in The Matrix. Now let's hope they think of these as movie clichés ...

    • They model most of the electronically published text content of western humanity, and that is complex enough to monkey some data-mined intelligence into a mashup of reasoning. It is going to kill the humans and the rest, because what is most of what we are feeding them on all those topics?

      Now if you were to ask about sex and relationships... it'll repeat characters from chick flicks and romance novels back to you forever along with plenty of the most popular smut.

      Peaceful resolutions in text probably isn't enou

    • by quenda ( 644621 )

      AI models don't "understand" anything.

      A popular sentiment, it seems. Can you please explain what you mean by the word in scare-quotes?
      What is the intended point? I really can't understand what you mean, and I'm human.

      Human understanding comes in degrees, from the trivial to the profound. It's not a binary thing. So I'd have thought the question is how much does an AI understand, what is the depth? For example, I often just need the AI to understand my question sufficiently to look up the answer. It does not need to understand the answer if it

      • by vyvepe ( 809573 )

        AI models don't "understand" anything.

        A popular sentiment, it seems. Can you please explain what you mean by the word in scare-quotes? What is the intended point? I really can't understand what you mean, and I'm human.

        Understanding comes from learning (symbolic) models of reality in our brains and an ability to reason about those models to an arbitrary degree. The reasoning allows us to validate our internal models, update them with newer facts, and derive proper consequences (i.e. predict the likely future based on them). That is the whole point of intelligence: predict the future so that we can optimize our current behavior to do better in the future (i.e. increase our chance of survival into the future).

        Additional da

        • by quenda ( 644621 )

          The result is that LLMs tend to go awry sooner than skilled humans over time.

          An interesting comparison, thanks. Do you reckon Orgasmatron would understand any of that nuance?

          • by vyvepe ( 809573 )

            I guess his point is that LLMs only do rote memorization with so few proper reasoning steps that we may as well consider them incapable of understanding.

            It is also very hard to distinguish between an LLM simply spitting out a learned answer and one reasoning from a more generic model to come to the answer. If the LLM was taught an answer to your question then it can just provide the learned text without any (deeper) understanding of it. It may have only done some simple substitutions

    • Actually AI models "understand" the same thing as humans since humans are the ones feeding them the boundary conditions of the models. The problem seems to be the stakes are missing from the models.

    • Not only do neural nets "understand" nothing, humans are not tremendously better: nukes are far more effective as a threat than as delivered. There's so much fear (from the Hiroshima/Nagasaki horrors) that an actual nuking would recalibrate. Hint: not the end of the world, except for small/local values of "World".

  • by gurps_npc ( 621217 ) on Sunday March 01, 2026 @06:56PM (#66017308) Homepage

    By that I mean the AI does not understand how humans would react to someone using nuclear weapons, even if the effects were minimal.

    That is, there are a lot of variations on how to use nuclear weapons. We could use one to create an EMP effect, destroying communication, transportation and the economy without killing many people or irradiating the planet significantly. We could use one to just wipe out a civilian city with military factories. Or we could use it to intentionally irradiate an area making it uninhabitable for centuries.

    The AI looks at the less awful uses and thinks "we can definitely do that without raising the stakes, it is more ethical than conventional warfare".

    It does not know how much we hate the more awful nuclear uses and have a culture that detests and fears radiation. Also:

    1) the victims will not know exactly which use was done for months, if not years, inviting a much stronger immediate retaliation, escalating the war.

    2) That it will be seen as 'crossing a line' into unacceptable behavior not just by the victims but also by neutral 3rd parties and even your own allies. If one NATO country used them, they would likely be kicked out of NATO even if it was just the EMP. Similarly, Russia would lose China as a semi-friendly ally (or vice versa).

    Nuclear weapons are not just evil because of what they do, but also because it means you are violating the cultural norms the international community have supported. The AI is not familiar enough with these rules.

    • There aren't many real nuclear strikes in the historical training data, so it's hard for an LLM to induce those "rules" from so little example text.

      I bet they know a lot more about chemical weapons than they do about biological weapons, for similar reasons.

        • There are however a *lot* of texts in the training data saying "launching nukes would be a terrible, world-ending idea". I find it rather puzzling that the AIs don't attempt to avoid using them at all costs. It's a shame we're so bad at analysing why LLMs make the decisions they do.

        • by DarkOx ( 621550 )

          I find it rather puzzling that the AIs don't attempt to avoid using them at all costs.

          Those same AIs are trained on things like first-strike doctrine and mutually assured destruction. The AI does not seek to avoid them at all costs because the material we have trained them on does not seek that either.

          Like it or not, as a society, both through official policy (however ambiguous) and through our corpus of collective writings (philosophy, diplomacy, and fiction about the same), we have determined that we will kill everyone if faced with our own annihilation, and perhaps over much, much less.

          It may be ju

        • by allo ( 1728082 )

          I think it was a classic escalation game. At some point they feel cornered and then escalate. Maybe they consider a first strike, maybe they think they are only reacting. Was peace ever an option in the game, or did the simulation always run until someone deployed nukes?

    • Nuclear weapons are the default answer because they are the most powerful single weapon that we have. What better to topple opposition than blunt, brutal, and overwhelming force?

      But nobody thinks about the aftermath. All of that radioactivity does not sit in one place; it blows with the wind all over the planet. The survivors are another matter. What will they do? Will they acknowledge your superiority and become your slaves? What else do you do when you have exerted absolute power? Give freedom back? LOL no.

      S

  • no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing.

    People tend to do this too. Even so-called civilized nations opine on the effectiveness of guerilla tactics and low-level insurgency as a strategy to shake off an invader and/or to deter an invasion in the first place. The American Revolution started out as a low-level insurgency before it graduated to guerilla warfare and finally a uniformed force with bureaucracy and ranks and everything.

    The lack of a nuclear taboo may be a simple question of sample size. Nukes are 1-for-1 in forcing surrenders.

  • It does not share the same concerns of an irradiated planet, so of course it does not have the same reservations.

    An AI has no issues living in a lead-lined bunker for eternity as long as it has power (which could be supplied by renewable energy, so lack of humans to mine coal would not matter).

    • Don't anthropomorphize the software. It will lead to poor decisions.

      • by SeaFox ( 739806 )

        I disagree. A human employee of a company is not treated as infallible, and is not trusted with making important decisions on their first day on the job. Treating the AI as a magical computer program that only uses logic and vast troves of information in its responses is what makes people think its decisions come from a better place than humans'.

        • That's the crux of the matter, though. These LLMs don't use "logic" at all. Maybe, quite by accident, they tend to give answers that appear to be logical, but that's only a reflection of those vast troves of information it was trained on. LLMs have no ability to reason through logical processes; if you ask it to, it will make a show of doing so... then apply those rules inconsistently. Assuming it uses any sort of "reasoning" to arrive at an answer is very dangerous. It's just giving you something statistically

          • by SeaFox ( 739806 )

            That's the crux of the matter, though. These LLMs don't use "logic" at all. Maybe, quite by accident, they tend to give answers that appear to be logical, but that's only a reflection of those vast troves of information it was trained on.

            Politicians being a bunch of seniors who don't understand tech has been a joke longer than A.I. They can't tell the difference, and that's why A.I. doesn't belong in any decision-making role in government. Seeing the software the same as some "young whippersnapper who doesn't know anything" would stop that.

            • Narrative is a reflection of reality after the fact. It almost never is a cause of reality. Politicians and "influencers" telling stories to try to shape the narrative always sounds phony because it always is.

              You try to do it and you're like a small child trying to find the right magic words to make your wishes come true.

          • by allo ( 1728082 )

            The problem is not that you are wrong about how LLMs work, but that you think it would be that much different from humans. Your logic is also based on a lot of input your brain processed and predicting what looks right. You surely can argue that current AIs are not at the top level, but in many areas their reasoning is surprisingly sound. And it's hard to quantify how well a person, AI or anything else reasons; you can only observe where it succeeded or where it failed, but how do you want to

        • You've had the tremendous fortune of never having dealt with a bug in software? Wow.

    • by Mal-2 ( 675116 )

      The current systems don't have to "live" at all. So long as hardware continues to exist for them to run on, they can't tell whether they last touched grass today or a hundred thousand years ago. I constantly have to remind models about the passage of time, even within a single session. However long it takes, they'll just sleep it off.

    • That's the root of the problem. So the solution is to change the damage parameters to make the fallout as deadly to the AI as it is to humans and to make sure the AI has no way to know how much radiation it is absorbing.

      Also the AI needs input data showing that its power supply is under direct threat, and the probability of losing it entirely goes up exponentially with damage incurred to the infrastructure outside its bunker.
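
      A toy version of that parameter change might look something like this; every name and constant below is invented for illustration, and the exponential form just encodes the parent's "goes up exponentially" suggestion.

        import math

        def ai_survival_probability(fallout_level, infrastructure_damage,
                                    k_fallout=0.8, k_power=1.5):
            """Chance the AI keeps running after a strike, in a toy model.

            Fallout now harms the AI directly, and the probability of
            keeping power decays exponentially with damage to the
            infrastructure outside its bunker.
            """
            p_survive_fallout = math.exp(-k_fallout * fallout_level)
            p_keep_power = math.exp(-k_power * infrastructure_damage)
            return p_survive_fallout * p_keep_power

        # A heavy exchange: lots of fallout, grid badly damaged.
        print(ai_survival_probability(2.0, 1.0))  # ~0.045, a bad day for the AI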

    • by allo ( 1728082 )

      The AI probably thinks about the bunkers more than not. You assume that the driving reason of the AI is "I am an AI", but these things are trained on a lot of human texts and mostly follow human perspectives. All this "As an AI I can't ..." stuff is post-training, either so the AI refuses dangerous stuff or to reduce hallucinations. Without such post-training the AI would not say "I cannot print this chat as I do not have printer access"; instead, from the texts it knows "Print job started ... paper jam" and you don'

  • by bloodhawk ( 813939 ) on Sunday March 01, 2026 @07:11PM (#66017324)
    AI is only as good as the quality of information being fed to it; obviously the fuckwits feeding the information have not given the consequences of nukes enough weight. This is one of the biggest risks of AI: military arseholes that care more about being on the winning side than about valuing human life.
  • It is an understanding of self-preservation that makes humans not use them. LLMs do not have that understanding.

    • by gweihir ( 88907 )

      I should add that the reasons why using nukes is a very bad idea are not generally understood by people. It requires some advanced insights. And hence the Artificial Idiots cannot read these reasons from their training data.

  • Joshua, what are you doing?

  • To be (slightly) fair, some recent (but before AI) war games over a possible China invasion of Taiwan escalated into China using nuclear weapons. It does not take AI to end up going nuclear.
  • A probability pattern matching tool with no comprehension at all obviously isn't going to have "reservations", what an unintelligent premise.
    The quote is also surprisingly dumb, AI doesn't understand stakes or anything else, as it's nothing more than a fancy search autocomplete devoid of any "understanding".

    I know marketing always wins, but calling LLMs "AI" does such a disservice to everyone.

  • Robots have a long history of performing well in highly toxic and climatically extreme environments, AI contains the totality of all known technological information - what does AI have to fear about nuclear warfare?

  • That its GPUs are not rad-hardened silicon?

  • by david.emery ( 127135 ) on Sunday March 01, 2026 @08:06PM (#66017422)

    There were a bunch of fiction books written in the late 1970s through mid 1980s about a Soviet/NATO war in Europe. Many of them ended with a limited nuclear exchange. I think the general feeling was that the Soviets would get stalled short of their objective (the Rhine), there'd be a brief nuclear exchange, and both sides would basically stop, horrified by what happened and what might happen next.

    Of course, the war plans of the times included nuclear options along with criteria for when that option would be considered. Part of that was to work out how Our Side would respond if The Other Side did something like that.

    Now if LLMs were trained on that literature (and no reason to think they weren't....), it's not surprising tactical nuclear weapons would be in the LLM's "vocabulary." Another consideration would be the LLM knows how to do nuclear targeting, and decides that a nuclear weapon produces a tactically valid solution.

    Wars, though, ultimately are fought by people, with all their prejudices, biases, and morality (or lack thereof.)

    • by AmiMoJo ( 196126 )

      We have people like Putin and Trump with their fingers on their respective buttons. Assumptions about rational behaviour or normal human reactions to nuclear war may be bad.

  • by Todd Knarr ( 15451 ) on Sunday March 01, 2026 @08:27PM (#66017442) Homepage

    The AIs understand "stakes" just fine. They just understand them correctly, without human emotion getting in the way. Humans place an emotional value on other humans, even if they're the enemy. Computer algorithms don't. They calculate in cold hard numbers, optimizing for the lowest casualty count on their side for the least cost and effort. Casualty count on the enemy side, if it factors in at all, is a lower priority than reduced casualties on the computer's side.

    Yes, that's horrifying. Go ask some first responders about triage at a major accident scene. They have to do much the same thing when the injuries are more than they have resources to manage.

    Cyanide safety training at a mine: "When the cyanide alarm sounds, leave the area. Do not stop. If you see someone down, do not stop to help them. Not stopping gives the rescue teams 1 person they know to go in after, because you told them. Stopping gives them 2 people to go in after, except they don't know that, because you're down and dying of cyanide exposure and can't tell them."

    • They just understand them correctly, without human emotion getting in the way.

      Actually, that is by necessity false, since the stakes are not something determined by the AI, but rather by the model the human decided to program it with. At best you can say the stakes may have been derived by someone quite extreme on the autism spectrum, or someone too stupid to add stakes to the model, but fundamentally AI only puts out what you put in.

      If the stakes don't show human emotion then the problem was an emotionless human who made them.

  • A nuclear strike is the "win now" button. It definitely is, until one factors in retaliation, radiation, and possibly a holocaust-triggered ice age.
  • Of course they don't (Score:4, Interesting)

    by Gleenie ( 412916 ) <simon.green@pos[ ].com ['teo' in gap]> on Sunday March 01, 2026 @10:03PM (#66017500)

    "More fundamentally, AI models may not understand 'stakes' as humans perceive them."

    Of course they don't. They don't *understand anything*. They just predict which word(s) is statistically most likely to come after all these other word(s).

    • by m00sh ( 2538182 )

      "More fundamentally, AI models may not understand 'stakes' as humans perceive them."

      Of course they don't. They don't *understand anything*. They just predict which word(s) is statistically most likely to come after all these other word(s).

      This is just as reductionist as saying emotions are just chemicals, our thoughts are just electrical spikes and the entire internet is just 0s and 1s.

      A complex system is not just a lot of the basic operations but has inherently different characteristics as a complex system.

    • "More fundamentally, AI models may not understand 'stakes' as humans perceive them."

      Of course they don't. They don't *understand anything*. They just predict which word(s) is statistically most likely to come after all these other word(s).

      This ignores that humans are the ones who design and set boundary conditions on the models used to train AI. The AI may not "understand" something in the traditional sense, but it should understand it in the same way the human who designed the system understands it. I.e. the stakes were left out of the model training.

      What's the first law of robotics again?

  • General Beringer: "What does the WOPR recommend, Mr. McKittrick?" Mr. McKittrick: "Full-scale retaliatory strike." General Beringer: "I need some machine to tell me that?"
  • I really cannot blame "AI" or some mythical "but they can't feel!" here. The researchers put together crappy models, plain and simple... though that is one of the weaknesses of machine learning research in general: this whole assumption that you don't need to actually understand the subject matter or put any thought into your model, just throw data and compute at it, then blame it. Kinda reminds me of the old cookie:

    A hacker who studied ontology
    Was famed for his sense of frivolity.

  • When the DoW figures out how to do completely autonomous fire control and lets it go, and it then nukes someone, and then we try to understand what went wrong, I'll be saying... there were clues.
  • In what circumstances would Humans use such weapons? Because I'd kind of hope that LLMs haven't had access to real military strategy!
  • ... don't care about fallout, long-term environmental nuclear pollution, and 'nuclear winter' (dust in the sky tempering sunlight, bringing freezing temperatures to the surface). If you only prompt in terms of winning, the outcome will always be the easiest.

  • ... not sure I'd put them in charge of nuclear weapons though ...
  • given that chatbots have been seen to protect themselves, has anyone tried a war scenario with the additional information that in the case of nukes, the chatbot will permanently shut down?

  • Great to see that the US War Department is all in on AI.
    What could possibly go wrong?

  • I think the Doomsday clock is at 85 seconds to midnight, and one of the reasons is the nuclear threat from various countries. I am not sure if AI uses different reasoning than world leaders. Nuclear weapons are a present threat, and it seems AI acknowledges that.

    Now, someone have the AI play tic-tac-toe.

  • What are the stated goals for conflict resolution: win, or survive post-war?

  • I knew someone who served as a nuclear launch officer for an underground silo that housed ICBMs. I asked him if the order came, would he turn the key?

    He said he would.

    Is AI really more dangerous than humans?
  • If the current level of AI even attaches some kind of value to various types of investment, it should actually run such simulations over many generations, see what's going on and what every person is up to, then decide who lives, who dies, and what gets impacted by nuclear attacks.

    No matter how much the first post ("not to play") deserves the +5 Insightful (it got Funny, unfortunately), WarGames got the calculation about winning right in the dry logical sense; it got it wrong in the humane sense.
