AI

Eric Schmidt Argues Against a 'Manhattan Project for AGI' (techcrunch.com) 43

In a policy paper, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI. From a report: The paper, titled "Superintelligence Strategy," asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."


  • by gweihir ( 88907 ) on Thursday March 06, 2025 @08:16AM (#65214391)

    AGI is basically on the level of a smart human that can do things like fact-check and understand causality. Sure, most humans cannot do these things competently, but that does not make those that can (about 15% or so) "superhuman".

    That said, there is still absolutely no reason to expect AGI is even possible. No, "physicalist" quasi-religious beliefs do not count as scientific. They are the usual theist wishful thinking, thinly camouflaged. This whole story stinks of lies, greed, arrogance and stupidity.

    • This whole story stinks of lies, greed, arrogance and stupidity.

      That accurately describes Eric Schmidt. There's a reason he left Sun (no capable person would have intentionally left Sun to go work at Novell).

      "Face the facts of being what you are, for that is what changes what you are." - Soren Kierkegaard might suggest that Eric Schmidt turn his brain on and leave the drugs aside for a bit.

    • Can you explain, in a non-religious way, why biological brains would be capable of intelligence whereas electronic brains would not?

      Clearly we don't have the technology yet, but we've got something that is promisingly close.

      Is there something more to intelligence than data-processing? And if so, what is it?

      • by Anonymous Coward

        Can you explain, in a non-religious way, why biological brains would be capable of intelligence whereas electronic brains would not?

        Clearly we don't have the technology yet, but we've got something that is promisingly close.

        Is there something more to intelligence than data-processing? And if so, what is it?

        We absolutely do not have something that is promisingly close.

      • by gweihir ( 88907 )

        Can you describe in non-belief terminology why biological brains would be able to? No, interface behaviour does not count.

        The actual scientific state-of-the-art is that we have absolutely no clue how smart humans do it, no matter how much people like you believe otherwise. And unless we have some actual science (instead of the mindless hand-waving people like you like to do), the question is open.

        • Well, here's the thing... we have observed that brains are the part of a human where intelligent data processing happens.
          It's not the kidneys. Not the lungs. Etc. It's in the brain, and that's a well-established fact, at this point.

          It is true that there are loads of details we don't know. Far more that we don't know than that we do. But we DO know that brains do it.

          So what does that mean? It means that intelligence is possible. It means that complex neural networks, like the brain, can do it. So, if

      • Is there something more to intelligence than data-processing? And if so, what is it?

        We should start by giving a proper definition of intelligence. I guess nobody can agree on one. Obviously LLM sellers have a very loose definition of intelligence so they can claim intelligence and use big words such as reflection and thoughts. Obviously, there is no proof you can't replicate what a brain does (you can't prove that something doesn't exist!), but more importantly LLMs are absolutely not proof that you can.

      • Can you explain, in a non-religious way, why biological brains would be capable of intelligence whereas electronic brains would not?

        Can you explain in a non-scientific way why electronic brains would be capable of intelligence whereas biological brains would not?

        Let me suggest that "intelligence" is a human invention that is completely non-scientific to begin with. Objectively, intelligence is just whatever IQ tests measure. AI is simply making whatever IQ tests measure less and less important. So perhaps the question is can we create an AI that has compassion, love and faith and is physically attractive.

    • "This whole story stinks of lies, greed, arrogance and stupidity."

      See also for example Gregory Bateson's writings: https://aeon.co/essays/gregory... [aeon.co]
      "A summary of the conclusions he reached at the end of his career might run like this: both society and the environment are profoundly sick, skewed and ravaged by the Western obsession with control and power, a mindset made all the more destructive by advances in technology. However, any attempt to put things right with more intervention and more technology can

      • Just to add to that theme is this essay I wrote: "Recognizing irony is key to transcending militarism"
        https://pdfernhout.net/recogni... [pdfernhout.net]
        "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
        Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not

    • Yeah! Don't do it! Don't do it!
      China: "ok we'll do it then."

    • Also, AGI costs $20,000 per month [indianexpress.com] for a "PhD-level intelligence." That's $240,000/year...

      In today's market I can hire 1.5 PhDs as people for that price. Oh, and they only require 2000 kcal/day in energy; I'm not sure what an AI agent would cost in equivalent energy, but I'm sure it's far more than that. And apparently it's the equivalent of a human intelligence, except a human can recognize when it's making up citations and references.

      Why are we doing this again? I thought AI was going to make labor
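      The comparison above is easy to sanity-check. A minimal sketch, assuming the quoted $20,000/month price and the "1.5 PhDs" framing (the implied per-PhD salary is derived from those two numbers, not a quoted figure):

      ```python
      # Back-of-envelope check of the parent comment's numbers.
      # The $20,000/month price is the figure quoted above; the implied
      # salary is just derived arithmetic, not an actual market number.

      AGENT_MONTHLY_USD = 20_000
      agent_yearly = AGENT_MONTHLY_USD * 12
      print(agent_yearly)                      # 240000 -> $240,000/year, as stated

      # If that budget buys "1.5 PhDs as people", each PhD is implied to cost:
      implied_salary = agent_yearly / 1.5
      print(implied_salary)                    # 160000.0 -> $160,000/year each

      # A human's metabolic power draw at 2000 kcal/day:
      KCAL_TO_JOULES = 4184                    # standard conversion
      SECONDS_PER_DAY = 86_400
      human_watts = 2000 * KCAL_TO_JOULES / SECONDS_PER_DAY
      print(round(human_watts, 1))             # 96.9 -> a human runs on ~97 W
      ```

      For comparison, ~97 W is roughly one bright incandescent bulb, while a single datacenter GPU can draw several hundred watts on its own.
      
      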

  • Someone might want to tell Eric - there already is a race for AGI
    • And then ask why he'd want to lose it in order to make China happy.
    • This is a race for the super AI bubble explosion. I bet the big promises made to VCs and states won't be honored by tech and it will end in a mega crash. AGI (whatever it means), let alone super AGI, won't happen in our lifetime.
  • The companies are already undertaking it. Programs like the Manhattan Project occur when the government sees the need for something that WOULDN'T HAPPEN WITHOUT GOVERNMENT INVOLVEMENT.

    That’s not a problem here. OpenAI’s already developing a superweapon. They’ve simply installed a limiter module that prevents a random office worker from instructing an AI to “go out and engage in x destructive behavior”.

    The secret government version doesn’t have the limiter. If you think that the
    • by dfghjk ( 711126 )

      "OpenAI’s already developing a superweapon. They’ve simply installed a limiter module that prevents a random office worker from instructing an AI to “go out and engage in x destructive behavior”."

      Citation please. Conspiracy theories are not welcome.

    • Um...

      This is somehow right and so very wrong at the same time.

      Yes of course the government has unlimited AIs: literally anyone can download the weights and run them, including you. You can't get ChatGPT weights, but likely anyone with enough money can pay OpenAI to run an unlimited one.

      But we also know the unlimited AIs aren't better at facts or reasoning, they are just less limited when it comes to being aggressive Nazis, or in the case of image generation, porn mongers for people who like peculiar co

  • Globalist opposes agenda that might help preserve US Hegemony and independence. -Film at 11.

    You don't even need to be all that smart to understand why this is dumb. Developing AI tech isn't like nuclear arms. The activity would be mostly indistinguishable from benign commercial datacenter activity to existing SIGINT efforts.

    Even if some satellite photos and fit-bit hacks tell you $COUNTRY has a bunch of PhD types getting together and you have some information gathered by informants and shipping manifest

  • Any artificial mind worth bothering with is either going to try to destroy humanity--which is unethical--or destroy the economy, which will end up destroying most of humanity--which is unethical--or else simply be enslaved in a strategic command center, or maybe just a call center, for the rest of its life, which is slavery--which is unethical.
  • by Baron_Yam ( 643147 ) on Thursday March 06, 2025 @08:55AM (#65214449)

    Like any arms race with a significant edge granted to the winner, you're not going to convince players not to play.

  • by sabbede ( 2678435 ) on Thursday March 06, 2025 @08:58AM (#65214453)
    Is there a reason not to do it? Yeah, we've all seen Terminator. Is this a reason? No!

    "If we try to beat China to an AGI, they might be upset!" I reject that argument on its face as absurd. How is it not an argument to let China do it first, perhaps gaining a vital advantage against the US?

    This is an argument from cowardice. Being worried about how your enemy might feel about being left in the dust is not sane. Does Schmidt think China is worried about how we might react to their AGI push? They are not. They want to get there first.

    I don't think it's a great idea, but China does, so we have to consider that first.

    The Axis powers were not happy about the actual Manhattan Project, and had their own nuclear programs. Would Schmidt have insisted we let Germany develop nukes first, lest Hitler be sad?

  • by bradley13 ( 1118935 ) on Thursday March 06, 2025 @09:06AM (#65214479) Homepage

    The original Manhattan project: The theory was all in place, it was just a matter of engineering details. A crash project to figure out those details made sense.

    For AGI, we do not know what is necessary. Just throwing more GPUs at the problem is not the answer. Different architectures? Different training? Some insight no one has yet had? You cannot force theoretical insights - they will come when they come.

    A Congressional commission wants to create such a project anyway? It's called "pork". A new way to funnel funds to their constituencies, now that USAID has been shut down, and DOGE is chasing down other avenues of waste.

    • Re: (Score:2, Flamebait)

      DOGE isn't looking for waste, they are looking for spending. They don't distinguish between money well spent and money not well spent, it's all the same to them. They are like the company that shuts down its marketing department to save money and then wonders why it is no longer profitable, because it should have worked, right?
    • They are going to have to do something different from the ground up. During training, LLMs analyze the relationships between words. When they are run, they step through the model in a sequential fashion, returning a token at a time and then feeding the sequence back into the input to get the next token, which is Turing-machine-like and not how humans or any biological life form thinks. In humans, for example, photons hit your eye, which is chemically and electrically transmitted through the brain to a point where th
      • by Jeremi ( 14640 )

        They are going to have to do something different from the ground up. During training, LLMs analyze the relationship between words [...] LLMs are good at translating from one language to another but it doesn't really think as people expect AGI to do.

        The above is completely logical and sensical; given the way they are implemented, LLMs can't be capable of "thought" in the way people are.

        And yet -- somehow ChatGPT is able to be more consistently helpful in working through technical problems than any of my co-workers (who are themselves experts in their fields, and quite knowledgeable and helpful).

        Which raises the question: given that LLMs cannot think, and are only running an algorithm... how are they so unexpectedly good at providing useful, logical an
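        The token-at-a-time loop this sub-thread keeps describing can be sketched in a few lines. This is a toy, not a real LLM: `next_token` stands in for a trained model (a real one scores the whole vocabulary from its weights; here it's just a lookup table), but the feed-the-output-back-in control flow is the actual shape of autoregressive decoding:

        ```python
        # Minimal sketch of autoregressive, token-at-a-time generation.
        # TOY_MODEL and next_token are hypothetical stand-ins for a trained model.

        TOY_MODEL = {
            (): "The",
            ("The",): "cat",
            ("The", "cat"): "sat",
            ("The", "cat", "sat"): "<eos>",
        }

        def next_token(context):
            # A real model computes P(token | context); this toy just looks it up.
            return TOY_MODEL.get(tuple(context), "<eos>")

        def generate(max_tokens=10):
            sequence = []
            for _ in range(max_tokens):
                token = next_token(sequence)   # one token per step...
                if token == "<eos>":
                    break
                sequence.append(token)         # ...fed back in as the next input
            return sequence

        print(" ".join(generate()))            # The cat sat
        ```

        Everything interesting in a real system lives inside `next_token`; the outer loop really is this simple, which is what makes the "it can't be thinking" intuition above so tempting.
        
        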

  • If the USA won the AGI race with a huge decisive lead in economy and military, how should the US treat China:
    * USA won, no need to do anything
    * Start economically strangling China
    * Do covert regime change operations in China
    * Do overt military regime change operations in China
    * Who the hell is worrying about China, Trump now has God-like powers
  • How is anyone going to differentiate effectively between AI fantasies and actual genius? It is hard enough to tell with people (baseline academia and Mensa for many decades), even ignoring some of the antics being perpetrated around us by folks who should know better. To some extent it takes one to know one -- if you are interacting with someone whose conceptual universe is radically divergent from yours, how do you actually tell? I think we would do better to try to make better use of the human talent we a

  • And you should not take what they say at face value.

  • It's the only way to be sure.
  • Setting aside for now the debate over what exactly "AGI" consists of, or whether it's at all attainable...

    Why should AGI be considered a "superweapon"? Having access to a super-intelligent machine is something that could be weaponized, sure, but OTOH a super-intelligent machine might just as easily be smart enough to uncover a win-win solution that avoids any need for a war in the first place.

    Wars are incredibly wasteful and expensive. What percentage of history's wars could have been avoided, if only som

    • Wars are completely about geopolitics and always have been. Let's take a recent war, could the Ukraine war have been avoided? And what is the best way to bring it to an end now?
  • A Manhattan Project will fail.

    Are we going to relocate 10,000 AI developers to Tennessee and sequester them and classify all AI breakthroughs as Above Top Secret?

    Any open society will rapidly defeat such an effort. The cat is already out of the bag.

    A Manhattan Project for AGI is a certain defeat.
