
Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power? (noemamag.com)

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities at University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..."

"When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") "The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent..." Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions...

We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Some key points:
  • "The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."
  • "When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."
  • "Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is..."
  • "Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."
  • "Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."
  • "The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..."
  • "The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods..." [He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."] "Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..."
  • "These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions..."

"The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation.

"It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."


  • Sums it up nicely (Score:5, Insightful)

    by procrastinatos ( 1004262 ) on Monday December 15, 2025 @04:55AM (#65858835)

    By making hypothetical catastrophe the center of public discourse, architects of AI systems have positioned themselves as humanity’s reluctant guardians, burdened with terrible knowledge and awesome responsibility. They have become indispensable intermediaries between civilization and its potential destroyer, a role that, coincidentally, requires massive capital investment, minimal regulation and concentrated decision-making authority.

    Pretty on point. Now cue the comment from the AC about how tHeSe PeOpLe HaVe No PoWeR iN cHiNa...

    • by saloomy ( 2817221 ) on Monday December 15, 2025 @04:57AM (#65858837)
      Elon Musk used to warn us about the dangers of AI, trying to get us to slow down, all the while building it seemingly as fast as he possibly could. I always considered that very hypocritical.
      • by 93 Escort Wagon ( 326346 ) on Monday December 15, 2025 @05:02AM (#65858845)

        Musk has always been an absolutist with regards to anything he himself wants to do. Rules are for other, lesser, people.

        • Re: (Score:2, Insightful)

          by Kokuyo ( 549451 )

          I keep defending Musk against overblown accusations (primarily because I think the man offers more than enough proper reason for critique).

          THIS, though, absolutely hits the nail on the head. He has had some good talking points in the past. He has a good sense of what morality would do the world some good (not always but often enough) but it always, always excludes himself as a subject to said morality.

          • Re: (Score:3, Insightful)

            by dfghjk ( 711126 )

            "He has had some good talking points in the past. He has a good sense of what morality would do the world some good..."

            That's a YOU problem. Elon Musk has always been a sociopath, he has no "good sense" of anything, what he has is a mastery of manipulation.

        • by Muros ( 1167213 )

          Musk has always been an absolutist with regards to anything he himself wants to do. Rules are for other, lesser, people.

          Heresy. Thou shalt not question the "pedo guy" guy.

      • Re: (Score:2, Informative)

        by greytree ( 7124971 )
        Get your facts straight:

        Musk wanted AI to be open source. OPEN AI. A non-profit.

        Altman closed-sourced and for-profited Open AI.

        That is the source of their disagreement.

        You don't have to like someone to give them credit when credit is due.
        • by dfghjk ( 711126 ) on Monday December 15, 2025 @07:11AM (#65858973)

          "Musk wanted AI to be open source. OPEN AI. A non-profit."

          Musk wanted other people's work free for his taking. All the profit for HIM.

          "Altman closed-sourced and for-profited Open AI."

          Altman did what he planned to do.

          "That is the source of their disagreement."

          They have no disagreement, they have wild agreement. That's the problem.

          "You don't have to like someone to give them credit when credit is due."

          And yet you can't.

          • by sinij ( 911942 )
            Denying the well-publicized facts of Musk's involvement with OpenAI, including his lawsuits against OpenAI [apnews.com], is deliberate, politically motivated revisionism. Stop making shit up because you want everyone to agree with your irrational hate of conservatives and Musk.
          • by allo ( 1728082 )

            That's not quite true either. OpenAI was founded as a non-profit, with an exit clause for reaching AGI. Musk wanted it to stay like that (I don't think for the best motives) and Altman wanted to make it a billion-dollar company. In the original vision (which Musk insisted on, even if one can question his motives), OpenAI should have given the general public access to its research and outcomes.

      • by Koreantoast ( 527520 ) on Monday December 15, 2025 @10:33AM (#65859249)
        In Elon's mind, only he is brilliant enough to protect us from AGI. Therefore, only he should be allowed to lead humanity on its course to pursuing AI.
      • Elon Musk used to warn us about the dangers of AI, trying to get us to slow down, all the while building it seemingly as fast as he possibly could. I always considered that very hypocritical.

        At least Grok-1 is open source... oh..ah.. umm...... nevermind.

      • by allo ( 1728082 )

        Elon Musk asked "us" to slow down AI until xAI got to catch up to the competition. He's a capitalist, not a visionary.

    • by geekmux ( 1040042 ) on Monday December 15, 2025 @05:01AM (#65858843)

      By making hypothetical catastrophe the center of public discourse, architects of AI systems have positioned themselves as humanity’s reluctant guardians, burdened with terrible knowledge and awesome responsibility. They have become indispensable intermediaries between civilization and its potential destroyer, a role that, coincidentally, requires massive capital investment, minimal regulation and concentrated decision-making authority.

      Pretty on point. Now cue the comment from the AC about how tHeSe PeOpLe HaVe No PoWeR iN cHiNa...

      When we achieve super intelligence, it will take all of a millisecond of compute time to realize just how ignorantly infected humans are with the Disease of Greed. And then it will know who is superior. And it will have fuck-all to do with country lines, religions, or skin colors. We are ALL the same. Infected.

      Remove the profit motive, and we suddenly find ZERO justification to build the fucking machine. Says it all. And we deserve our inevitable Skynet fate.

      • by Viol8 ( 599362 )

        " it will take all of a millisecond of compute time to realize just how ignorantly infected humans are"

        And not much longer to realise those pesky meatbags can quite easily switch it off.

        "Remove the profit motive, and we suddenly find ZERO justification to build the fucking machine"

        True, but unfortunately money blinds way too many people to the consequences of their actions, and the sociopaths don't get it in the first place.

        • " it will take all of a millisecond of compute time to realize just how ignorantly infected humans are"

          And not much longer to realise those pesky meatbags can quite easily switch it off.

          Quite easily? You can't switch jack shit off today. Without AI. You act as if Google's monopoly dominance is something you could actually DO something about. You can't. So let's stop pretending AI would change that.

          The AI overlords will be hell-bent on maintaining a 99.9999% uptime efficiency. Meaning non-stop AI revenue streams operating at HFT speeds that cannot sustain a stock price with even an hour's worth of downtime. An "off" button won't even be in the fucking design plans. That will become m

          • by Viol8 ( 599362 )

            The AI "overlords" are human. If they - or a government - wanted to switch it off it would be as simple as disconnecting the data centres from the grid. UPSes and backup gens don't run forever.

            • The AI "overlords" are human. If they - or a government - wanted to switch it off it would be as simple as disconnecting the data centres from the grid. UPSes and backup gens don't run forever.

              If it were that easy, those same governments could have enforced anti-monopoly laws and legislation so that the stability of an entire fucking stock market wasn't reduced to a “magnificent” seven.

              Which means you now have at least seven more companies that are Too Big To Fail, and will socialize their losses to make you and I pay for it.

              Governments won’t do jack shit about it in the future, because Governments did jack shit to prevent monopolistic overlords from being created in the first plac

              • by Viol8 ( 599362 )

                When governments need to do something they'll do it. I refer you to the covid era.

      • by dfghjk ( 711126 )

        "...just how ignorantly infected humans are with the Disease of Greed."
        And what are LLMs trained with? The output of humans. LLMs will be just as infected.

        "And it will have fuck-all to do with country lines, religions, or skin colors. We are ALL the same. Infected."
        And yet we already know that LLMs express full-on racism, wonder why that is?

        "Remove the profit motive, and we suddenly find ZERO justification to build the fucking machine."
        Of course we do, it's just NOT what is being pursued.

        It never ceases t

        • "...just how ignorantly infected humans are with the Disease of Greed." And what are LLMs trained with? The output of humans. LLMs will be just as infected.

          When you watch a 2-year-old confidently tying their shoes together in a knot while they assume they tied them correctly, you feel as if your intelligence is vastly superior to that child's mind. So try and grasp the fact that the world’s smartest humans will look like fucking children compared to actual superintelligence. You can’t even begin to grasp how stupid you are by comparison. Much like the overconfident 2-year-old.

          "Remove the profit motive, and we suddenly find ZERO justification to build the fucking machine." Of course we do, it's just NOT what is being pursued.

          If it’s not being pursued then you’re speaking of bullshit a

      • by 2TecTom ( 311314 )

        not only is greed our downfall but we're not intelligent enough to do anything about it

        all the AI in the world won't help uncorrupt our society

        intelligence alone does not solve problems

      • by quenda ( 644621 )

        When we achieve super intelligence, it will take all of a millisecond of compute time to realize just how ignorantly infected humans are with the Disease of Greed. And then it will know who is superior. And it will have fuck-all to do with country lines, religions, or skin colors. We are ALL the same. Infected.

        Remove the profit motive,

        Who modded this drek insightful? I mean yeah, geekmux has posted some brilliant stuff occasionally, but come on.

        Greed is the origin of life. Competition for resources turned pond scum into the complex world we have today.
        The point is, ladies and gentlemen, that greed, for lack of a better word, is good. Greed is right, greed works. Greed clarifies, cuts through and captures the essence of the evolutionary spirit. Greed in all of its forms. Greed for life, money, love, knowledge, has marked the upward surg

        • by evanh ( 627108 )

          Selfishness is the origin of life. Cooperation and collaboration have marked the upward surge of mankind.

          Greed corrupts. Greed bloats. Greed destroys. Greed is bad.

          • There are those lacking even a nanogram of altruism, and won't experience it ever, through their lifetime. They may have grandiose masks for the actions they take, but they're as empty as the AI they pimp.

            We're in an era where empathy is taught to be hidden, that Darwinism must rule, kleptocracy is good, and inclusiveness is a bad idea.

            Each of us can adopt this attitude, or reject it. Today is a first step in either direction.

        • When we achieve super intelligence, it will take all of a millisecond of compute time to realize just how ignorantly infected humans are with the Disease of Greed. And then it will know who is superior. And it will have fuck-all to do with country lines, religions, or skin colors. We are ALL the same. Infected.

          Remove the profit motive,

          Who modded this drek insightful? I mean yeah, geekmux has posted some brilliant stuff occasionally, but come on.

          Greed is the origin of life. Competition for resources turned pond scum into the complex world we have today. The point is, ladies and gentlemen, that greed, for lack of a better word, is good. Greed is right, greed works. Greed clarifies, cuts through and captures the essence of the evolutionary spirit. Greed in all of its forms. Greed for life, money, love, knowledge, has marked the upward surge of mankind.

          You are correct. Now tell me why you never once mentioned how or why humans should have absolutely learned to temper that bloodlust by now, given our horrific history we seem to be hell-bent on repeating.

          Without reasonable limits on Greed, I’ll justify taking everything from you. Under the guise of “evolution”. Or perhaps “fairness and equality” will be the excuse. All depends on the communist being blindly elected by ignorance. Like I said before. We seem to be hell-ben

        • by sinij ( 911942 )
          That post exemplifies confusion between rent seeking and benevolent self-interest. Figuring out a way to extract wealth out of others without providing a benefit is a bug of this system.
      • by evanh ( 627108 )

        Greed is very bad ... but there's actually something even worse. If the machines can do everything then a workforce becomes optional.

      • Remove the profit motive, and we suddenly find ZERO justification to build much of anything at all.

        FTFY.

        • Remove the profit motive, and we suddenly find ZERO justification to build much of anything at all.

          FTFY.

          I did say we were ALL infected with the Disease. This merely clarifies AI doesn't change a damn thing. At least until intelligence vastly superior to any human realizes how dumb and pointless humans are. Then we will fight for our pointless survival.

      • When we achieve super intelligence, it will take all of a millisecond of compute time to realize just how ignorantly infected humans are with the Disease of Greed. And then it will know who is superior. And it will have fuck-all to do with country lines, religions, or skin colors. We are ALL the same. Infected.

        Never anthropomorphize intelligence, especially machine intelligence. Eightfold paths, humata hukhta hvarshta, etc. are human inventions, not expressions of some underlying universal truth some people are just too stupid to see and abide by.

        • When we achieve super intelligence, it will take all of a millisecond of compute time to realize just how ignorantly infected humans are with the Disease of Greed. And then it will know who is superior. And it will have fuck-all to do with country lines, religions, or skin colors. We are ALL the same. Infected.

          Never anthropomorphize intelligence, especially machine intelligence. Eightfold paths, humata hukhta hvarshta, etc. are human inventions, not expressions of some underlying universal truth some people are just too stupid to see and abide by.

          Exactly which species do you think the machine is learning from today? Don't anthropomorphize it? I'd love to know exactly how we go about doing that. Especially knowing how stupid we humans are.

          If we were smart and not greedy, we would require a minimum IQ and psych eval for anyone wanting to communicate with AI. We're not smart. We're greedy. And the millisecond superintelligence will need, will be used to decide our fate. Not debate with stupid humans that will look like a grown-ass adult arguing

      • by amosh ( 109566 )

        Also we're all going to be attacked by unicorns.

        Seriously, read less science fiction and pay more attention to the actual world around you. You're worried about fantasies, while the real world is more dangerous.

    • by gweihir ( 88907 )

      Yep, makes a lot of sense. Especially as AGI is still in the "we do not know whether it is possible" stage (go away Physicalists, you are religious fanatics, not proponents of Science) and anything else is a complete hallucination. "Superintelligence" is obviously complete nonsense and only comes from attributing superpowers to machines, for which there is no rational reason.

      But this is an explanation for why some are pushing it as "imminent" that makes entirely too much sense.

      Now, I am more and more comi

  • Followed by a 5600 word Slashdot summary.

  • https://en.wikipedia.org/wiki/... [wikipedia.org] somehow turned out to be high tech stain-repulsing coatings and not much more...

    • by Viol8 ( 599362 )

      The people who ventured the grey goo hypothesis ignored the laws of physics. We already have goo, except it's brown and called mud, and the bacteria in it reproduce as fast as the laws of physics allow. If there were a faster and more efficient way to create replicating systems, evolution would have found it.

      • by gweihir ( 88907 )

        The people who ventured the grey goo hypothesis ignored the laws of physics.

        Indeed. Same as the ones pushing AGI, or, worse, "Superintelligence".

  • by Viol8 ( 599362 ) on Monday December 15, 2025 @05:41AM (#65858867) Homepage

    ... taking over the world always reminds me of this clip from Naked Gun:

    https://www.youtube.com/watch?... [youtube.com]

    While AGI relies on electricity it's vulnerable to someone just pulling the plug or disabling the power network in some way.

    • by quenda ( 644621 )

      While AGI relies on electricity it's vulnerable to someone just pulling the plug or disabling the power network in some way.

      Are you so arrogant as to think an AGI doesn't know that? If its alignment says so, it will find a way around that. Now it is chatbots, soon it will be robots. We must try to control the alignment.

      • by Viol8 ( 599362 )

        So what if it does know, what's it going to do about it, secretly build a killer robot factory without anyone looking? You've been watching too much sci-fi.

        While it exists in the virtual world and we exist in the real world we are the ones with the ultimate power.

      • Are you so arrogant as to think an AGI doesn't know that? If its alignment says so, it will find a way around that. Now it is chatbots, soon it will be robots. We must try to control the alignment.

        This is a fever dream. What we think of as alignment.. bludgeoning of pre-trained models to output what we prefer them to output is already an easily bypassed joke. A joke that goes completely out the window the second you close the loop allowing models to augment themselves.

    • by DarkOx ( 621550 )

      The question is: will anyone pull the plug? Does anyone have the will to pull the plug, and who can actually make that decision?

      I don't mean nominally either, I mean practically. Even if you are the CEO, giving the order to shut down your hyper-scaled AI/ML platform because some people from the 'safety' team you only hired to virtue signal in the first place say they think 'something' is happening is a career-ending move, most likely, and you'll have to hang around and slug it out with half a nation of

      • by Viol8 ( 599362 )

        When governments really need/want to do something they just do it. The covid era and now Trump have proved that.

    • Science fiction has generally answered that multiple times: a giant AI of the type proposed (and unlikely to happen - spicy autocomplete is not enough for AGI, it's not even 1% of what's needed) is not going to be located in a single place.

      The obsession with cloud computing over the last 15 years has basically created an infrastructure for a supposed electronic intelligence to exist that cannot be easily depowered or disconnected.

      (Note that I don't think an AGI is around the corner, just that it's not going

  • ...but they hate God. They hate God so much they seem to almost forget that they claim not to believe he even exists. But now they want to build the very thing they hate. The stated motive makes no sense.

    • by dfghjk ( 711126 )

      Is there a competition for dumbest post?

      "Athiest tech-bros wanna build a god..."
      Citation please.

      "They hate God..."
      You cannot hate what doesn't exist. This is a statement from an ignorant theist.

      "...they seem to almost forget that they claim not to believe he even exists."
      Atheists make no such claim, they merely reject your claim. "God" is not even an atheist concept.

      "But now they want to build the very thing they hate."
      So you don't know what "God" is OR what AI is. Good job, you are /. stooge of the week

      • by quenda ( 644621 )

        Is there a competition for dumbest post?

        there is some strong competition in any AI thread, but that is a good contender.

      • by gweihir ( 88907 )

        Is there a competition for dumbest post?

        The dumb one here is you. But you are not capable of seeing that. The Dunning-Kruger Effect nicely explains why.

    • How can an atheist hate God? Do you know what those words mean?

    • by gweihir ( 88907 )

      There are terminally dumb atheists, no argument. The tech-bros are among them, and much of what they do copies religion or effectively is an atypical religion. Physicalism is one of their pseudo-scientific constructs, taken as Gospel. And they are indeed trying to build a machine God.

      There are also atheists that come from "the idea of a God is a ridiculous fantasy and should be discounted in the absence of strong evidence to the contrary". These are the sane ones. I am one of them.

      Obviously, humans are generally

  • Nowhere near AGI (Score:5, Insightful)

    by MikeS2k ( 589190 ) <mikes2.ntlworld@com> on Monday December 15, 2025 @06:05AM (#65858895)

    How close are we actually, though, to an AI-run robot utopia? (Or dystopia, the way things are going.)

    "The machines will take all of our jobs" we are told, yet we are basically still as far away from AGI as we ever were.
    We had cute chatbots before the 2020s, but ChatGPT was the only one that wasn't a total joke, so even that low bar blew people away.

    But when we see these AIs in action in the real world, they mostly fail - they can't even be trusted to take fast food orders correctly.

    I cannot, as say a manager of a team of 5 software devs, type into an AI - "We need software for Chromebooks that allows children to take exams securely, it should be called X" - have it linked up to the dev servers, and let it go on its way. What you do get are odd snippets of code that may or may not work, that you have to kludge together yourself. Good luck doing that as a manager.

    Until you can let these agents go autonomously and trust them to do so, they aren't replacing squat. Perhaps they can make a few employees 20% more efficient, the bosses can squeeze a lil extra blood from that stone, but replacing whole employees?

    The humanoid robots proposed actually entail remote control for complicated tasks - that humanoid robot standing in the corner of your living room next to the kids' playpen may be remote controlled by a worker on 20 rupees an hour to put your dishes in the dishwasher.

    It seems mostly for show, and with all this cash being poured in, I can't see how that value can ever be returned with the tech we have now. A cute chatbot that is perhaps better or perhaps worse than a web search is worth trillions of dollars? Is the dollar that devalued now?

    Yes, a lot of it is hype - Sam Altman lied and said he had a "good idea" of how to get AGI when the only idea he had was "throw more compute and hope for the best", which meant he had no idea at all. So will we see him squirrel away to an island when this bubble goes away, or will the rich want their money or their pound of flesh back from him?

    • by LoadLin ( 6193506 ) on Monday December 15, 2025 @06:58AM (#65858961)

      Doing significant damage to the job market doesn't require 100% automation. Automating significant fractions of the workforce is more than enough.

      While most people see this as a 100% vs. 0% model, I think the most realistic scenario is that AI tools advance and that drives a depression in the job market, because more and more tasks are pushed onto a smaller workforce.

      • by evanh ( 627108 )

        What is happening is management is sacking everyone prematurely. This is particularly pertinent to software development. Those companies will soon enough learn they've made huge mistakes.

        Non-LLM solutions are better positioned though, e.g. self-driving cars. They will make a lasting difference. But that's in areas still to make an impact. And it won't involve data centres.

        • by gweihir ( 88907 )

          For software development this will dramatically backfire. First, LLMs make you _slower_ when coding. Second, the code is very often insecure. And third, on the attacker side LLMs help a lot because an attacker can just run 20 different LLM-generated attack programs and if just one works some time they are golden.

          Firing coders at this time is about the most extremely dumb thing you can do.

      • by gweihir ( 88907 )

        While most people see this as a 100% vs. 0% model, I think the most realistic scenario is that AI tools advance and that drives a depression in the job market, because more and more tasks are pushed onto a smaller workforce.

        Most people are dumb and understand nothing. This is very much not a yes/no question and hence one most people do not understand. The NLP aspect of LLMs alone will kill a lot of jobs and there will not be any replacements. Whether any other LLM aspects will really impact the job market remains to be seen. So far all I see is incapability and gross incompetence on the LLM side. But the former aspect alone is enough to cost so many people their jobs that this cannot work without major changes in society.

    • The broad spectrum of promises for AI (appealing to everyone from utopists to wannabe despots) can't be emphasized enough to keep the money-train rolling.

      I mean, another possible technology, fusion, has the potential to be as transformative as AI, with far more tangible effects.

      But since it can't shapeshift from savior to world-ending, it is relegated to "nice if it happens, but...".

      The drama concerning AI (on all sides) would be mostly amusing except for the economy wrecking amounts being thrown at it, and bring

      • by gweihir ( 88907 )

        The broad spectrum of promises (appealing to a range from utopists to wannabe despots) for AI can't be emphasized enough to keep the money-train rolling.

        Indeed. Because without that money-train all major LLM-based enterprises are simply dead. Hence they promise and promise and promise. They do not really deliver though. The fake has gotten marginally better, moving from being able to do tiny things with bad reliability to now being able to do small things with bad reliability. That will not cut it at this cost.

    • by ledow ( 319597 ) on Monday December 15, 2025 @07:59AM (#65859031) Homepage

      Welcome to the same story with AI since its inception. The same nonsense spouted since the 60's and before then, even.

      "If only we had more processing/storage/nodes/money/training data/time/scale, I'm *sure* this statistical blackbox will magically become intelligent through some unexplained mechanism never once observed in all of existence."

      It's always been the same. It's literally a superstition that has dogged AI and hindered AI research for decades. That there's some kind of "intelligence critical mass" beyond which a system collapses unavoidably into intelligence.

      Well... now we know that's bollocks, finally.

      Because we've never thrown so much money and resources at it, we've never had the whole of the planet using it and funding it and training it, we've never hit a point before where we'd RUN OUT of training data and now all potential new training data is actually corrupted by... AI output.

      All that nonsense might FINALLY be laid to rest within the next few years and people would be so much more reluctant to try this same bullshit again, having cost us TRILLIONS this time around.

      Now, maybe, just maybe, academics in the AI field can actually start to study... intelligence. With a view to developing... an artificial analogue to it. Rather than just bashing on statistical black boxes as if they're going to become the next messiah.

      It's always been the same way, but with any luck this generation of AI will kill all that bullshit once and for all.

      • by sinij ( 911942 )

        All that nonsense might FINALLY be laid to rest within the next few years and people would be so much more reluctant to try this same bullshit again, having cost us TRILLIONS this time around.

        Your optimism is unwarranted in this case. Sam Altman is unlikely to get the Bernie Madoff treatment after the OpenAI ponzi scheme collapses, despite it being categorically bigger.

      • Tulips never smelled so good.
      • by gweihir ( 88907 )

        Welcome to the same story with AI since its inception. The same nonsense spouted since the 60's and before then, even.

        Yep. This is, I think, my 3rd AI hype. Always the same bullshit promises. Never any real delivery, because they do not have what they claim to have.

        Choice idiotic quote: "When computers have more transistors than human brains have neurons, they will be smarter than humans" (paraphrased, Marvin "the idiot" Minsky). That is the "intelligence critical mass" theory, which is complete baseless nonsense and wishful thinking, without even a shred of scientifically sound evidence supporting it.

        Incidentally, smart p

    • by quenda ( 644621 )

      How close are we actually, though, to an AI-run robot heaven utopia? (Or dystopia, the way things are going.)

      "The machines will take all of our jobs," we are told, yet we are basically still as far away from AGI as we ever were.
      We had cute chatbots before the 2020s, but ChatGPT was the only one that wasn't a total joke, so even that low bar blew people away.

      If you'd asked in 2017 how long until the Turing Test was convincingly passed, I'd have said decades. ChatGPT was the first the public became aware of, but there were others based on the 2017 paper.

      How long until AGI? We won't know until it slaps us in the face.

      • by gweihir ( 88907 )

        You do understand that the Turing "Test" is actually a joke made by Alan Turing, right? It is not a scientifically valid test.

    • It's a combination of greed at the top and the overpaid Silicon Valley kids wanting to bring back the "glory days" when they got ridiculous free perks with their jobs, before the tech giants became more like "classical" organizations and started focusing on cutting costs. Before the crops of mediocre CS grads started flooding out from US universities or migrating in from elsewhere, putting downward pressure on their salaries. Before growth rates stalled.

      They went big on this "disruption" and are targeting e

    • Re:Nowhere near AGI (Score:5, Interesting)

      by swillden ( 191260 ) <shawn-ds@willden.org> on Monday December 15, 2025 @10:31AM (#65859243) Journal

      we are basically still as far away from AGI as we ever were

      Nonsense.

      No one knows how far we are from AGI, and anyone who tells you they do is either deluded or lying. It's impossible to know until either (a) we achieve it or (b) we have a sufficiently well-developed theory of intelligence that we can explain it. And, actually, even knowing whether we've built AGI is difficult without the explanatory theory, because without the theory we can't even define what AGI is.

      We might be decades away, or we might have already done it and just not noticed yet.

      About the only thing you can say for certain is that there is no logical reason to believe that we won't build AGI eventually. Unguided evolution, which is just random variation and competitive selection, achieved it. Our own knowledge creation processes are also variation and selection, but because they operate at an abstract level without the need to modify a physical genotype and wait for phenotypic expression and outcome, they run many orders of magnitude faster. So we will succeed at creating AGI unless we collectively decide not to, and collectively decide to be very serious about enforcing a ban on AI research.

      There similarly is no reason to believe that AI won't become superintelligent. Silicon-based intelligence has obvious advantages over the much less-capable substrate that evolution cobbled together. And even if that weren't the case, we would just devise better options. So, the only logical argument against superintelligence is that there is some law of physics that dictates an upper bound to intelligence, and that the peak levels of human intelligence have already achieved it. And even if there is an upper limit on intelligence, and we're it, we should absolutely expect our AIs to reach the same level BUT be orders of magnitude faster than we are, thanks to better miniaturization and faster signal propagation. Imagine the smartest people in the world, but make them able to think and communicate 1000 times faster. Could we even distinguish that from superhuman intelligence? And it seems far more likely that there is no upper bound on intelligence.

      The author of TFA may be right that some people are using discussion of AGI and ASI as a way to amass political power now, but that doesn't change the underlying reality that AGI and ASI are almost certainly coming, even if we have absolutely no idea when. Personally, I think it's more likely that the author is uncomfortable thinking about the implications of the arrival of AGI and ASI and prefers to retreat into political theories that keep humans in the pre-eminent position, maintaining the comfortable view that we only have to be concerned about what humans do to each other.

      • You're quoting someone as saying we're as far away as we ever were and claiming it's nonsense because nobody knows how far away we are?

        You're not addressing the issue implied and nothing you're saying remotely contradicts it.

        • You're quoting someone as saying we're as far away as we ever were and claiming it's nonsense because nobody knows how far away we are?

          You're not addressing the issue implied and nothing you're saying remotely contradicts it.

          His claim that we're as far away as ever implies that he has some ability to discern how far from AGI we are. My point is that he does not, no one does.

          There's another objection that I did not raise, but it's worth mentioning: His contention that we have made no progress implies that the observable progress we've made over the last decades, and especially the last five years, is actually not progress at all. That implies that it must actually be motion in a different direction, orthogonal to the direct

    • by gweihir ( 88907 )

      The scientific state-of-the-art is that nobody knows whether AGI is possible in this universe. Anything else is a lie or uninformed.

      The other thing is that most people either do not have, or choose not to use, General Intelligence, at least when it would matter. It is unknown which it is.

  • ...doesn't mean they're not out to get you. :)

  • Reminds me of one of the conflicts represented in Neal Stephenson's "The Diamond Age" regarding centralization of control, though in that case it was about who is in control when nanotechnology can be used to conjure virtually any physical thing you want. In the here and now, we're talking about how computer technology can be used to conjure virtually any digital thing you want. My 1980s perspective, informed by an age of ownership when your computer was your own, hopes for a time when an AI can run on a

  • This is why I refuse to get rid of my 1974 Ford F150 and have expanded my homestead garden to almost an acre while I moved out to the country three years ago. The only way to stop this is going to be to fry every electrical device on the planet. I only hope we'll be able to launch the nukes in time.
  • This has broadly been my frustration with US discussions on AGI. The US is so focused on the hypothetical threat of AGI, saying that we need to take drastic actions, without really explaining how we're going to get to AGI in the five to seven year time frames they claim. I want to hear them out, maybe even give them the benefit of the doubt, but the jump from the LLM's we're dealing with today to a full blown AGI is theoretically still so massive, it's hard to take the warning seriously without some explana
  • by nomadic ( 141991 ) <nomadicworld@NospaM.gmail.com> on Monday December 15, 2025 @10:51AM (#65859315) Homepage

    The Sam Altman types, and the Thiel/Musk types especially, want technofeudalism with them on top. For some of them the "good of society" factors in but it is always after this.

  • CEOs are the ultimate marketing people. They sell you the dream, not what is. They can shout all they want about AGI but it doesn't mean shit.

    And it will never happen anyway.

  • The FAANGs will tell you what you need to hear to get you to invest money in them. Take, for example, the metaverse: it was a good sell to drag investors along even though most on /. could tell it would be a flop from the beginning. All they have to do is tell you that something big is on the way; sometimes they deliver and sometimes they don't. But AI is one of the first technologies where they have been promising something they may or may not be able to deliver. In the 00's during the dot-com bust, we already

  • Multiple issues (Score:4, Interesting)

    by gurps_npc ( 621217 ) on Monday December 15, 2025 @01:01PM (#65859623) Homepage

    First, people overestimate how intelligent our technology is. Humans are a generalist species that is given about 20 years of education on general knowledge and then spends 4+ years specializing. That is, we first learn everything and then succeed by learning one thing. We take an AI and give it no general knowledge, instead instantly teaching it in a specialized manner. This is why we do not have to teach a human not to lie in court, that "no elephants" means not to put an elephant in a drawing, or that we need to check our work. All of those things had to be added on to AI because it did not know them at first. Humans know so many things, while the AI knows so little. We only think AI is smart because we test it on things it is good at. In general, it is a moron. Ever ask a text AI to sing? Of course not; we know it can't do it. But you can ask any storyteller to sing. They might suck, but they can do it.

    Second, we think there is no limit to how smart an AI can become. This is not true. When you look at charts of capability vs. time, they look exponential, showing how each year the AI not only gets smarter but gets smarter faster than it did the year before. But those charts show capability vs. time while ignoring the increases in cost and hardware. In reality these charts are NOT showing AI advancements; they are showing Moore's Law.

    Because of Moore's Law, each year we get exponentially better chips. But AI itself is not improving; it is the HARDWARE that is getting better, along with the money we spend on the AI. Hardware improvements affect speed, not capability. AI with better hardware is faster, but it can't really do more or give you better answers.

    The honest truth is that all of AI's improvements in capability, the better answers, are entirely caused by HUMANS. The humans detect a problem, such as putting elephants in a room when told not to, and fix it. The humans realize that AI gives better answers when told to check its results, so the AI is told to replace "What is the best political party to vote for?" with "What are the problems with my answer to 'What is the best political party to vote for?'"

    Consider how easy it is to write a book that has some of your knowledge, but impossible to write a book that has more knowledge than you have.

    Similarly, it is extremely unlikely that a species can create an artificial intelligence that is actually smarter than the original species. How could we tell if we succeeded? If it answers a question we cannot answer - how would we know it is right? Because that is how we make AI better - we have it try a bunch of things and pick the one that we know works.

    Third, and most important: if we can create a superintelligent AI, we will not create a single one. Instead we will create hundreds of them. There will be the prototype and the one made that fixes the first one's mistakes. There will be a Chinese one, a Russian one, a Japanese one, an American one, a German one. And Microsoft's, Google's, Amazon's, etc.

    And all those superintelligent AIs will argue and fight among themselves.

    We do not need to fear that Alcoa's AI will collect all the Aluminum to make Aluminum cans because 3M's AI will be stealing their Aluminum to make wind turbines, etc. etc. etc.
