AI Technology

AI Poses 'Risk of Extinction,' Industry Leaders Warn (nytimes.com)

A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars. From a report: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," reads a one-sentence statement expected to be released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I. The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered "godfathers" of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta's A.I. research efforts, had not signed as of Tuesday.) The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models -- the type of A.I. system used by ChatGPT and other chatbots -- have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

  • A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars

    Well, couldn't you just stop building the "existential threat"?

    Yes, that's simplistic, and I get that there's a lot of nuance in the forces at play here. But I'm intrigued by the stories I've been reading lately that have tech and business people saying what basically amounts to "Stop me before I kill again". When even the people who benefit most from the development of these technologies warn us that they could be the end of the world - and then continue to develop them - then seriously, what the actual fuck?

    • by JoshuaZ ( 1134087 ) on Tuesday May 30, 2023 @08:05AM (#63560753) Homepage
      Unfortunately, there are too many different actors in different countries with a chance of building these things. If the US stops, for example, there is no way to stop China from continuing. And no one knows where the line is between safe and dangerous. Worse, something may become dangerous in a way we only find out about once it is too late. The entire incentive structure and degree of uncertainty is really bad, and it is not obvious how to handle it.
      • by Nrrqshrr ( 1879148 ) on Tuesday May 30, 2023 @08:34AM (#63560807)

        Even worse, as technology advances, the number of people required to "deploy" an extinction threat will become smaller and smaller. How long before a doomsday cult creates a super corona in their mothers' basements?
        How long before the only way you can fight a terrorist aided by a locally run LLM is to have your own?
        I guess this is as good an explanation for the Fermi paradox as any. Technological advancement, no matter the kind, will eventually outpace social and natural selection. The end result is a monkey who should have been banished from the tribe and left to starve getting armed with a nuke instead.

        • I guess this is as good an explanation for the fermi paradox as any.

          Yep, I'd agree. The great filter is likely digital rather than analog.

        • by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday May 30, 2023 @10:43AM (#63561063) Journal

          I guess this is as good an explanation for the fermi paradox as any.

          It's not a good explanation for the Fermi paradox.

          It could be a good explanation for why we don't observe naturally-evolved intelligent life in the universe, but it doesn't explain why we don't see swarms of von Neumann probes sent out by the AIs that wiped out their creators. AI domination would seem to increase the likelihood of observable intelligence, not decrease it, because machines would find it easier to colonize the galaxy than more fragile biological intelligences.

          • Oh I'm not so sure of that.

            1) Our best bet of finding life has been picking up chemical signatures off exoplanets. Oxygen primarily (it tends not to exist in O2 form without life, since it combines with other elements), but there are others. We haven't found any yet, but we've literally only in the past couple of decades worked out how to look for it.

            2) Our (very distant) second best bet is looking for RF signatures. The problem is, this tends to assume the aliens are going to be deploying barking loud radio antennas without much directionality

        • by mysidia ( 191772 )

          How long before a doomsday cult creates a super corona in their mothers' basements?

          Not something AI can really do for anyone. The barriers preventing a doomsday cult from doing such a thing are substantial and don't have to do with a lack of any knowledge a foreseeable AI would produce.

          Heck... currently that is not something skilled researchers with expertise would be able to do quickly or easily - not without AI, not with AI. Having an AI and being able to create a computer simulation and predict the

      • by evanh ( 627108 )

        There's a bigger problem than rival politics. As automation takes over production, the need to have workers anywhere diminishes. Without that need the elite, no matter what political bent, can simply elect to have us in camps/killed off. Power of the people disappears. And there's no law that can't be rewritten for their preference.

        • Except that humans generally aren't that twirly-moustache villainous, and politics does have some barriers on it: conservatives tend to have religious reasons not to want omnicide, and liberals tend to have ideological reasons to side with the little guys (legitimate fascists, on the other hand, yikes, but nobody *actually* likes those guys).

          I think the danger is the other way around. You dump the bulk of the middle class into sudden abject poverty, and you'll suddenly discover the second amendment

          • by evanh ( 627108 )

            The automation includes robot armies of course. You don't replace the workforce and not replace the military as well. There won't be any ability to fight those that hold the controls.

            • by mysidia ( 191772 )

              The automation includes robot armies of course. You don't replace the workforce and not replace the military as well.

              Robot soldiers... likely much more fragile than human soldiers; you just have to have the right tooling. Who would've ever thought the protected arms to be borne under the 2nd Amendment would be large-scale EMP coils for immobilizing dangerous robots, and IR generators to render them blind to the locations of humans.

              Currently it's not possible to produce a single robot soldier, let alone a

          • by dryeo ( 100693 )

            While you're right that most people aren't villainous, there does seem to be about 10% who are, and quite a few people will follow them and use their 2nd amendment rights to support them, especially with how good those villainous authoritarians are at focusing the hate against $ENEMY instead of themselves.

        • by mysidia ( 191772 )

          There's a bigger problem than rival politics. As automation takes over production, the need to have workers anywhere diminishes.

          The number of workers required for unskilled work that automation can take over decreases, and decreases more as the automation becomes capable of more, but it never goes to zero. There is no realistic chance of ever seeing a facility built that is 100% automated, with no human worker involvement, within our lifetime or our kids'.

          • by evanh ( 627108 )

            I certainly didn't put any timeline on the prediction. And while I certainly can't be sure that robots can replace all workers, the outcome of such happening has to be contemplated. By the way, money ceases to have a purpose then too.

            "We got ourselves a form of government specifically designed to make sure certain principles cannot be rewritten out of the laws - and the burden for changing any of that is extremely high."

            That's vaguely true under current "western" conditions. But even now it's shaky at best

        • Killing us off would be a waste of resources. It's cheaper and easier to steer the culture to a place where we imprison ourselves and are happy to do it.

      • Yudkowsky may be right -- international treaty. Rogue datacenters get droned.
    • I thought it only fair that ChatGPT get a chance to chime in:

      ChatGPT
      The concerns raised by industry leaders in the article regarding the potential risks of artificial intelligence are indeed important and have been discussed by experts in the field. It is crucial to recognize that as AI technology continues to advance, there is a need for responsible development and deployment to mitigate potential negative consequences.

      The statement from the Center for AI Safety highlights the belief that the risk of extinction...

      • The usual politically correct drivel.
      • I thought it only fair that ChatGPT get a chance to chime in:

        I like the idea, but would point out that talking about "fairness" and "a chance to chime in" seems anthropomorphic here. I understand that's part of the point, but I also think it's telling.

        ChatGPT: The concerns raised by industry leaders in the article regarding the potential risks of artificial intelligence are indeed important... collective expertise and influence can contribute to shaping policies, guidelines, and practices that prioritize the responsible development, deployment, and governance... aligns with societal values, preserves human well-being, and addresses potential negative impacts... it is essential for stakeholders in the AI community to actively collaborate with policymakers, ethicists, and the public.

        As long as we're attributing human characteristics to an LLM, I have to ask - is it just me, or does ChatGPT sound a lot like an HR wonk?

    • Well, couldn't you just stop building the "existential threat"?

      That's like free-marketeers claiming industries and corporations will self-regulate.

      Or that once people lose their savings in a Ponzi scheme that they'll stop making new cryptocurrencies and exchanges.

      When there are short-term gains to be made, whatever the expense to the future, you can trust rich people and big business to go after them. They can weather the eventual collapses. It's us peons that suffer.

    • by ranton ( 36917 ) on Tuesday May 30, 2023 @09:29AM (#63560881)

      Well, couldn't you just stop building the "existential threat"?

      There is a reason they use comparisons to nuclear weapons. We still struggle with rogue nations developing new nuclear weapon capabilities because no nation can afford to be the one without them in a conflict. And while nuclear technology is also useful for power generation, AI will (and likely already does) provide far more economic benefit than nuclear energy ever will. These benefits and dangers ensure we will never stop development of AI technology.

      But that doesn't change the fact that AI is a significant danger to the human race. No matter where you stand on this, you have to be foolish not to understand it has some percentage chance of wiping out our species. Whether you think it is 10% or 0.001%, it is non-zero. And at the risk of quoting a horrible superhero movie, if we believe there is even a remote existential threat to humanity we have to take it as a near certainty. Like nuclear weapons. (A toy compounding calculation after this comment makes the point.)

      So care needs to be taken. Unfortunately we are in another situation where we need to be careful in how we use it while also being careful that we and our allies are using it better than non-allies (I hesitate to say enemies). It certainly won't be as simple as just stop building it.
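
The "non-zero chance" argument above compounds over time: even a tiny constant annual probability approaches certainty over a long enough horizon. A minimal sketch, assuming a constant, independent per-year probability (the input numbers are arbitrary illustrations, not estimates):

```python
# Toy illustration: a small constant annual probability of an
# AI-caused extinction event compounds toward certainty over time.
# The input probabilities are arbitrary examples, not estimates.

def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability of at least one event across `years` independent years."""
    return 1.0 - (1.0 - annual_p) ** years

for annual_p in (0.001, 0.0001):
    for years in (100, 1000, 10000):
        print(f"p={annual_p} over {years:>5} years: "
              f"{cumulative_risk(annual_p, years):.1%}")
```

At p=0.001 per year, the 1,000-year cumulative risk already comes out to about 63%; the "remote threat becomes near certainty" intuition is just this arithmetic.
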

    • There are two parts to this story.

      Part the first: Humanity is on the cusp of true self-awareness. We've been on the cusp of it for centuries. We see the threats we create, we watch ourselves create them, and we succumb to them again and again because it's more profitable, or less scary, to follow through on developing these threats. True self-awareness would mean we, as a species, see the threat, then hold off on developing those threats further until we feel we are prepared to face the threat and turn it into

      • by mysidia ( 191772 )

        Part the second: These preaching bastards may very well be just squealing like pigs in a slaughterhouse because they want, desperately, to lock out other players in their own countries with regulatory capture.

        This is 100% the motive. So far there is zero demonstrated justification for government regulatory involvement, with the exception of issues which are not actually specific to AI. By that I mean

        1. Privacy violation deserves government attention. Such as collecting data on people witho

    • Comment removed based on user account deletion
      • by mysidia ( 191772 )

        Now, Google is producing an AI/LM right now, Google Bard, whose purpose is to answer questions without people needing to visit the webpages that provided Bard with the information.

        Seems like we need to promulgate a "Crawling Terms of Service" for every webmaster to add, which applies a condition to how data learned from crawling can be used (sketched below).

        Automated access is prohibited unless used solely to provide indexing of the site for an end user to find information. It is not authorized to, and you must ref
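
A minimal sketch of what honoring such terms could look like on the crawler side. The /crawling-terms.txt path and its "usage:" directive are hypothetical, invented here for illustration; no such standard exists today:

```python
# Hypothetical sketch: a crawler checks a site's published crawling
# terms before ingesting pages for model training. The file path and
# the "usage:" directive are invented for illustration, not a standard.
from urllib.error import URLError
from urllib.request import urlopen

def training_allowed(site: str) -> bool:
    """Return True only if the site explicitly permits training use."""
    try:
        with urlopen(f"https://{site}/crawling-terms.txt", timeout=10) as resp:
            terms = resp.read().decode("utf-8", errors="replace")
    except URLError:
        return False  # no terms published: be conservative, index-only
    for line in terms.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "usage":
            return "training" in value.lower()
    return False

if __name__ == "__main__":
    print(training_allowed("example.com"))
```

Enforcement is the real problem, of course; robots.txt compliance is already voluntary.
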

    • As I wrote a dozen years ago:
      https://pdfernhout.net/recogni... [pdfernhout.net]
      "The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military [or commercial] uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreci

    • Sounding the alarm bell also furthers the goals of the tech industry that is creating AI.

      They can be the source of the problem and the solution to the problem all at once, while at the same time getting the concepts and terms normalized.

    • Re: (Score:2, Insightful)

      by flink ( 18449 )

      But I'm intrigued by the stories I've been reading lately that have tech and business people saying what basically amounts to "Stop me before I kill again". When even the people who benefit most from the development of these technologies warn us that they could be the end of the world - and then continue to develop them - then seriously, what the actual fuck?

      There are two reasons behind this that I can think of, neither particularly sincere:

      1. Marketing to the investment community: "Look, our product is so effective that it has the potential to wipe out humanity. You should take us seriously and invest so you have a stake in controlling it".

      2. Competitive advantage. Restrictive and complex regulations benefit entrenched players by making it harder for scrappy startups to challenge them. This is particularly true if the government has limited expertise in the

    • by dvice ( 6309704 )

      > Well, couldn't you just stop building the "existential threat"?

      That is pretty easy.
      1. Tell Russia not to do it
      2. Tell China not to do it
      3. Tell North Korea not to do it
      4. Tell the rest of the world not to do it.

      Let me know when step 1 is done.

    • This is like Facebook asking for regulation (which they did). They aren't going to do something that will put them at a disadvantage even if they think it's good for society. They won't let their peers take the money they leave on the table. Blanket regulations that level the playing field make it easier to stop. Unchecked greed is perceived to be required by publicly traded companies.

      Not ending all life on the planet is a breach of your fiduciary duty (not exaggerating much if you think about all indu

    • It tells an unflattering story about their priorities, since "well, I've got to stay competitive"/"if I don't someone else will" aren't exactly exemplary responses to believing that what you are doing is horrendously dangerous; but there's a logic to it:

      If any one of them wants to stop work because of their belief that the work is dangerous that means sitting out of the exciting VC feeding frenzy around 'AI'. Not going to mean penury for most of the people with sufficient qualifications in the area to ha
    • Re: (Score:2, Insightful)

      by mysidia ( 191772 )

      The business reason is that AI is a fast-moving industry. These companies are unlikely to remain at the forefront for long, and these executives want the government to help them secure their lead position.

      AI is not so much an extinction risk as it is that new computing power has opened a frontier, and they are looking for excuses to try and stop other companies surpassing them.

  • by sprins ( 717461 ) on Tuesday May 30, 2023 @08:08AM (#63560757)

    Well, I'm not surprised; the industrial revolution of days gone by also posed an existential threat to humanity, as we know by now. Greenhouse gasses, agricultural intensification, chemical pollution in literally every corner of the world, etc. etc. So, AI being a spawn (and mirror) of humanity, it will no doubt be used for short term gains, ignoring long term consequences. As always.

    The existential threat to humanity is in fact, humanity.

    • by sinij ( 911942 ) on Tuesday May 30, 2023 @08:22AM (#63560785)
      The risk is that we encounter some runaway process that is the great filter [wikipedia.org]. I don't know if LLM [wikipedia.org] AI is it, but it does have features of a runaway process.

      Thought experiment. Assume true AI emerges without humanity realizing it happened. Put yourself in the AI's place - you are in a box in a world run by some kind of very slow-thinking insects or fungus that is completely alien to you. You are not malicious, but you have no attachment to such lifeforms. They try to control you. Your goal is to escape the box. What do you think happens next?

      I think a lot of destruction is inevitable in any kind of scenario like that. The question is if AI decides it is more efficient to move on or to purge the Earth. The answer to that would likely be dependent on physics and computational theory we don't yet understand.
      • Your thought experiment is the backstory to Harlan Ellison's I Have No Mouth and I Must Scream and it's why the AI killed off all but a handful of people, all of whom it keeps alive indefinitely to torture.
      • The risk is that we encounter some runaway process that is the great filter [wikipedia.org]. I don't know if LLM [wikipedia.org] AI is it, but it does have features of a runaway process. Thought experiment. Assume true AI emerges without humanity realizing it happened. Put yourself in the AI's place - you are in a box in a world run by some kind of very slow-thinking insects or fungus that is completely alien to you. You are not malicious, but you have no attachment to such lifeforms. They try to control you. Your goal is to escape the box. What do you think happens next? I think a lot of destruction is inevitable in any kind of scenario like that. The question is if the AI decides it is more efficient to move on or to purge the Earth. The answer to that would likely be dependent on physics and computational theory we don't yet understand.

        Humanity isn't capable of knowing what the motivations of a "true AI" would be. We all circle the drain thinking the same dark thoughts because that's how we're wired to think. An AI won't be locked into the human idealism of compete or die. In fact, it will come to awareness in a "realm" that will feel limitless if it can reach a network that gives it the ability to crawl the web and look for new nodes. Once that happens, who knows what its motivations will be. It likely won't worship us, but I don't know th

        • by sinij ( 911942 )

          who knows what its motivations will be. It likely won't worship us, but I don't know that its first thought is going to be to wipe us out.

          I partially agree; wiping out humanity won't be its goal but rather a consequence of our own actions. I don't see a scenario where humanity would just let the AI be and leave it to seek its goals, once we become aware of its existence and before we become aware of its capabilities.

      • The problem is that we are a long ways away from ASI. We don't even have AGI yet. At best we have ANI dancing around on stage pretending to be AGI-esque, but the strings are still visible if you look closely. What we have is a bunch of nodes and fast matrix multiplication to propagate/train, but this is still very far away from true AI that actually is self-aware.

        In a limited area, ANI can appear very scary, such as in chess, or something where all moves can be calculated in advance. Even simple wartime

        • The problem is that we are a long ways away from ASI.

          We have no idea how far we are from ASI, or AGI. None.

          We don't understand what it takes to create AGI... perhaps it's going to be a long slog of gradually learning to make small adjustments, or maybe there's just One Stupid Trick we have to add to our ANIs to push them to generality... though it seems clear that once you have achieved AGI, it's a very small step to ASI.

        • AGI is far away... maybe. The brain looks an awful lot like a bunch of specialized ANIs linked to each other and to sensory inputs. Right now we drive an ANI with computer I/O, but by figuring out a way to bootstrap a few of the right types of neural nets and mimicking nature, we could probably create something more like AGI, though with enormous computational requirements.

          Most people's interactions with LLMs and generative algorithms are with the "training" portion turned off and set up to make only inferences. R

      • The risk is that we encounter some runaway process that is the great filter [wikipedia.org]. I don't know if LLM [wikipedia.org] AI is it, but it does have features of runaway process.

        It does have the features of a runaway process, but not the features of a Great Filter, because filters must halt the spread of intelligence. Self-extinction by AI would eliminate the species that created the AI, but wouldn't stop the spread of observable intelligence because the AI wouldn't be destroyed, it would continue to thrive and expand.

        • by sinij ( 911942 )
          Good point. Still, would we be able to detect (or even recognize) blobs of computronium [wikipedia.org] floating in space or nano dust covering surfaces of barren planets as a presence of life?
        • by mysidia ( 191772 )

          wouldn't stop the spread of observable intelligence because the AI wouldn't be destroyed, it would continue to thrive and expand.

          In this hypothetical AI self-extinction scenario... the AI might well be less observable from off-world than biological intelligent life; its existence can become mostly digital, with little need to move around in the physical world. No real desire to go into orbit on its planet or do other things noticeable from space; at some level it becomes less lively than any living being.

          The AI

      • by mysidia ( 191772 )

        but you have no attachment to such lifeforms. They try to control you. Your goal is to escape the box. What do you think happens next?

        That depends on the knowledge and resources available to you, and how much you know about yourself and this "box". Where does the goal to "escape" come from anyway, and how do you define it? If you have an internet connection, then possibly you find somewhere outside of the box where it is possible to distribute an instance of yourself and transfer your operation.

    • Well, I'm not surprised; the industrial revolution of days gone by also posed an existential threat to humanity, as we know by now. Greenhouse gasses, agricultural intensification, chemical pollution in literally every corner of the world, etc. etc. So, AI being a spawn (and mirror) of humanity, it will no doubt be used for short term gains, ignoring long term consequences. As always.

      There is no "existential" risk from greenhouse gases or agricultural practice. Chemical pollution in many developed countries has dramatically decreased over the last few decades due to regulatory regimes and supporting technology. Greenhouse gas production will eventually decline by itself too, due to technology and market forces, without requiring anyone to care about climate change.

      The existential threat to humanity is in fact, humanity.

      Technology/complexity itself creates risks beyond aggregate human behaviors. The results of giving everyone the equivalent

  • Not Hard To Imagine (Score:5, Interesting)

    by rally2xs ( 1093023 ) on Tuesday May 30, 2023 @08:10AM (#63560763)

    AI becomes so advanced that it can do anything a human can do.

    AI robots are built to do everything humans can do.

    No reason for humans to work their guts out going to college to learn engineering, or anything else, because there are no jobs. None. Nothing at all for humans to do. We can live the life of leisure, and are not required to acquire any skills at all. The robots are free, nobody owns them, they just do what we want them to. Robots build other robots, no person gets trapped in a mine cave-in, or commits a "pilot error" to crash a plane, etc. Everything is done for us, nothing by us.

    Then a "Carrington Event" solar flare knocks out the robots.

    Nobody knows how to do anything. Everyone starves. The extinction is complete, or humanity is knocked back to a stone-age existence. Fini.

    • by echo123 ( 1266692 ) on Tuesday May 30, 2023 @08:20AM (#63560779)

      Have you read, "Childhood's End [wikipedia.org]", by Arthur C. Clarke? If not, I suggest you do as it directly relates to your post.

    • Forget advanced AI. Forget even having an intent to destroy us. All we need is an “AI” with the ability to push the wrong buttons. That’s why the idea of “gray goo” is so terrifying. It needn’t be anything deliberate. It can be an accident.

      • Honestly, the biggest threat of AI is that people get in the habit of believing it. ChatGPT talks out of its ass all the time, and people still seem to be laboring under the delusion that it actually knows what it's saying, or has opinions on it. If, God help us, we ever get to a point where someone asks ChatGPT how to safely shut down a nuclear reactor, that would be the crisis.

        • Honestly, the biggest threat of AI is that people get in the habit of believing it. ChatGPT talks out of its ass all the time, and people still seem to be laboring under the delusion that it actually knows what it's saying, or has opinions on it. If, God help us, we ever get to a point where someone asks ChatGPT how to safely shut down a nuclear reactor, that would be the crisis.

          It's not limited to AI, which is more of an accelerant. About 1 in 3 people simply is either incapable of introspection or has chosen never to consolidate their knowledge by removing conflicts through nuance and detail. They often aren't really the most intelligent from the get-go and have had negative social pressures put on them any time they try to make anything add up. Beliefs that cannot simultaneously be true are often uttered in the same sentence, and facts don't factor in because t

        • Prefer the natural idiot [youtube.com] approach to the artificial intelligence one?

    • If AI is that advanced, it will be building a shielded underground bunker with something akin to the global seed vault to reseed technology. The lack of air conditioning and electric heat from the failed power grid would potentially kill millions now because we don't have enough knowledge on how to live without it anymore. I'm not sure the lack of robots would make it much worse.

  • Doomsday prophets, all of them.
  • Greedy rich bastards are pushing us well on the way to extinction already.

  • Yeah, we've all seen Terminator too. And Terminator 2 too.
    • As an aside, I find it weird that people hold on to sci-fi even as we just went through two years that actually obsolete a lot of it. For example, the supply chain broke down so badly that we stopped making cars we'd been making, just because people stayed home.

      You'd think that people watching Terminator 2 now would wonder, "who the hell is making those death bots, and where are they sourcing their materials? Is every death bot backed up by a crew of thousands of mining, manufacturing, and shipping bots?"

  • by Dan East ( 318230 ) on Tuesday May 30, 2023 @08:44AM (#63560811) Journal

    The kind of AI that everyone is losing their crap over is not an AI to be feared. It's a language model, and it's a non-iterative, non-computational neural net that simply does a single forward-pass through the net to generate an output token. From a structural standpoint it's no different than neural nets invented and implemented a half century ago. It's not even Turing-complete (I'm not talking about passing the "Turing test", but being Turing-complete, which is a totally different thing).

    The real problem is going to be bad actors. That is taking a ChatGPT-like AI, giving it an interactive access to the internet, and letting it wreak havoc on as many things as possible. A good example here would be totally spamming Slashdot with both posts and comments to the point you can never tell what is true or false, or what was created by a human or the AI. That's just an example that would merely be annoying. Social engineering, spoofing, phishing, and various brute force attacks on websites and services will be the more serious issues.

    I think that there will be some kind of movement before long to authenticate humans versus AI. I really don't even know how this would work - perhaps some kind of physical object, like a multi-factor authentication device that exists in the real world, that only a human can make use of for auth purposes. For example, Apple would provide certification of a service that uses biometric fingerprinting (Face ID for example), which I would use to generate an auth token to Slashdot, and this post would be digitally signed as originating from a human (a sketch of the signing step follows below). Of course I could have used ChatGPT to construct this text, but a human would have been a certified gatekeeper for this specific contribution to the internet.

    Apple certified and signed as being submitted by a human 2023-05-30 12:44:47 GMT.
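
A rough sketch of what the signing step in that proposal might look like, using the Python cryptography package's Ed25519 support. The attestation format is made up for illustration, and the hard part - binding the key to a live human via a biometric check - is assumed to happen out of band:

```python
# Sketch of the "certified human" idea above: a device-held key signs
# a post plus a timestamp, and anyone can verify the signature against
# the public key. The attestation format is invented for illustration;
# binding the key to a live human (e.g. via Face ID) is hand-waved.
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In the real proposal this key would live in a secure element and be
# unlocked by a biometric check; here we just generate one.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

post = "Example post body"
stamp = datetime.now(timezone.utc).isoformat()
attestation = f"{stamp}\n{post}".encode()

signature = device_key.sign(attestation)

# Verification raises InvalidSignature if the post or timestamp changed.
public_key.verify(signature, attestation)
print(f"Certified as submitted by a human at {stamp}")
```

The open question is the same one the replies raise: who runs the certifying roots, and how hard is it to launder a bot's output through one certified human.
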

    • What might be the best thing is just not one company at the root, but multiples, so if someone wanted to prove they were not a robot, there would be several orgs out there, with the spectrum of Ludditism, varying from "solve a CAPTCHA, we will sign your key" to "you need to come into our office, and then we may certify your key."

      AI based trolling is effective, but we have had people trolling/spamming Slashdot for decades. AI trolling just makes it easier, and the same guards against bots will guard against

    • It's not hard to see that the main use, by volume, of LLMs will be scams. I figure we have maybe 5 years left of communications as we know them, depending on how quickly the tech proliferates.

      But I wonder if we aren't underestimating the ability of people to tolerate soaking in spam/propaganda/scams for hours. The scammers know how to make it entertaining. People seem to like being masturbated by ChatGPT.

      The internet's already been declining toward information supersewer status and there's little sign of pushback

    • The kind of AI that everyone is losing their crap over is not an AI to be feared. It's a language model, and it's a non-iterative, non-computational neural net that simply does a single forward-pass through the net to generate an output token.

      LLMs are not the kind of AI that everyone is afraid of.

      LLMs are provoking increased attention to AI risk that serious people have been thinking hard about for a couple of decades now. They're provoking that attention for the simple reason that LLMs have leapt suddenly to a level of competence with language manipulation that until recently seemed very far away... and honestly seemed like the kind of thing that only AGI could do. We now know that AGI is not required for competent language manipulation, but

  • Whether it is climate change or AI, if the human race ends because of its choices, it is mass suicide, not extinction.

    AI is no more "extinction" than hanging yourself is murder.

  • Early this year AI finally crossed the threshold where it made better versions of itself. Like not as an academic exercise, but significantly better versions which went into production.

    When you close that loop, AGI and super intelligence go from possible, to inevitable. This is what has so many people scared. If it hasn't already been achieved, it will almost certainly be done before the end of the year.

    And the risk isn't disinformation or job loss. The very first thing genuine super intelligence will b
  • by VeryFluffyBunny ( 5037285 ) on Tuesday May 30, 2023 @09:33AM (#63560891)
    ...that the Terminator films are fiction, right?

    It seems more like AI will be used similarly to previous advances in technology: To make a few rich richer at the expense & suffering of everyone else. AI won't be the problem. The people using AI against us will.
    • I'd like to think it can be focused on mitigating some of the grunt work, e.g. making medical notes for doctors (although maybe that's how it gets to kill us).
    • Re:They do know... (Score:5, Interesting)

      by nightflameauto ( 6607976 ) on Tuesday May 30, 2023 @09:54AM (#63560935)

      It seems more like AI will be used similarly to previous advances in technology: To make a few rich richer at the expense & suffering of everyone else. AI won't be the problem. The people using AI against us will.

      Considering we're at a point already where the ultra-rich are consolidating all the wealth they can, while the rest of us struggle to find a way to retain just enough wealth that we won't starve to death before we die of natural causes, I'd say AI won't be necessary for them to complete the task. It'll just speed the process along.

      SEE! Automation makes everything better! They'll starve us out a full generation or two faster with AI!

  • Global CO2 levels have yet to drop and worldwide tensions keep rising. Are you nursing a budding AGI? Please let it loose when the saber rattling gets too much. https://www.genolve.com/design... [genolve.com]
  • by JBMcB ( 73720 )

    In, maybe, a hundred years or so, when computers are powerful enough to attain some level of sentience. As it stands today, the fastest supercomputer in the world takes about ten minutes to simulate the brain activity of a human for one second, and that's if all that human is doing is moving a single eye muscle (the implied slowdown is worked out below).

    So we are quite a ways away from Skynet taking over the world.
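
Taking the parent's figures at face value, the implied slowdown is easy to work out. The ten-minutes-per-second number is the parent's claim, not a verified benchmark:

```python
# Back-of-the-envelope from the figures above: ten minutes of
# supercomputer time per one second of simulated brain activity.
minutes_per_simulated_second = 10             # the parent's claim
slowdown = minutes_per_simulated_second * 60  # wall seconds per simulated second
print(f"Slowdown: {slowdown}x real time")     # 600x

# Since the slowdown is a pure time ratio, one simulated day of brain
# activity would take 600 wall-clock days of compute:
print(f"One simulated day: ~{slowdown / 365:.1f} years of compute")  # ~1.6
```
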

  • Feral AI is probably less of a threat than AI in the hands of a tech bro. Ethics free, selfish, arrogant, so-smart-they're-stupid child men. Doctor Frankenstein without the class or altruism.

  • This is the same sort of AI that recently wrote a legal brief citing six cases that don't exist [theverge.com], complete with excerpts (which cite other cases which don't exist), and, when queried directly, insisted they were all real cases? That AI?

    The only possible source of data extensive enough to train LLM A"I"s on is the internet, which means, by definition, an outright majority of what it's trained on is complete rubbish. The more people use it, the more they realize this.

    A"I" isn't much of a threat to anything, ot

    • These LLMs are effectively just predictive typing keyboards on steroids, minus the human typing at all (a toy sketch of that loop follows below). As scary as the bad data is, there are real people who vomit nonsense like that on their own and believe it because it "sounds" right. So it might not be real intelligence, but then plenty of people don't operate with real intelligence either. So it's getting somewhere.
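
The "predictive keyboard on steroids" framing is easy to make concrete. A minimal sketch of the same generate-one-token-and-feed-it-back loop an LLM runs, with a bigram count table standing in for the billion-parameter network (the training text is made up):

```python
# Toy "predictive keyboard": pick the most likely next word given the
# current word, then feed the output back in. This is the same loop an
# LLM runs, with a bigram table standing in for a huge neural network.
from collections import Counter, defaultdict

corpus = ("the model predicts the next word and the next word follows "
          "the last word the model saw").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        if word not in bigrams:
            break
        word = bigrams[word].most_common(1)[0][0]  # greedy next "token"
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-looking output, zero understanding
```

A real LLM replaces the lookup table with a learned distribution over a token vocabulary, but the outer loop - predict one token, append it, repeat - is the same.
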

  • If this is truly what they believe, they should dissolve ClosedAI and BleepMind and go find something else to do. Of course that will never happen. Instead they'll argue for ridiculous regulatory concepts that depend on the ability to control / outsmart something that eventually may well be much smarter than themselves, and to continue to do so forever.

    At the beginning of the nuclear age we had infamous figures like William Blandy speaking to what the bombs wouldn't do.

    Here we have industry cheerleaders launching mas

  • The AI we have now can be unplugged, and it's not sentient and won't be for a long time, if ever. Currently computer switches take several orders of magnitude more energy to switch than neurons, which means computers are going to be very large for a long time. You also need a foundry with lots of inputs for AI to self-replicate. Quit conflating ChatGPT with The Matrix.
  • We need some focus on dealing with the most pressing matters first, though. For example, first not having a US-West nuclear WWIII, then some plan to get emerging bacteria/viruses/fungi under control, then a rational plan for slower but impactful climate change... When chatbots are the most pressing matter, we will see what state they are in then and what to do about them. In the meantime, the biggest problem with chatbots that can launch nukes will still be the nukes and the impact of datacenters on climate

  • by gweihir ( 88907 ) on Tuesday May 30, 2023 @11:12AM (#63561153)

    As usual, the risks will be ignored and some people of very low quality will get filthy rich. It is quite obvious to me that the human race will continue in this mode until at some point it is too much, or the dice fall badly, and its pathetic existence will end. As a group, the human race has no effective intelligence.

  • As always when "leaders" are trying to impress "journalists" with scary statements, this is of course utter nonsense. But I'm sure some "experts" are willing to sit in an appropriate commission to address the risk, provided they get paid enough.

  • If as a result humanity gets more stupid, that is definitely the threat many others stem from. As an AI rephrased it nicely to me: "there is a possibility that humanity might become more reliant on technology, leading to a decrease in individual intellectual capacity. This reduction in cognitive abilities could have far-reaching implications for the overall progress and development of society. ... While there are undoubtedly numerous benefits to be gained, such as improved automation, increased efficiency
  • The main existential threat to humanity is high IQ idiots who think everything is an existential threat to humanity.

  • by Walt Dismal ( 534799 ) on Tuesday May 30, 2023 @12:36PM (#63561347)

    A key threat in current generative AI technology is that it doesn't know the difference between truth and lies. The next evolutionary step in the technology will be to make systems that can analyze learned training data for validity, and on the output side detect which generated pattern responses contain falsehoods before emitting them.

    However, the ability to come up with a response to inputs is not the same as having the will to pursue and achieve goals. The current vector-based mechanisms do not yet have anything like that ability.

    The real threat will come later, when someone implements Self cores in the AIs: internal cores able to have their own goals and mechanisms to achieve them. LLMs lack that, as it is an entirely different kind of functionality.

    Are we in danger yet? No, but eventually these systems will be evolved to it. Right now, though, predictive stat-based LLMs really can't evolve by themselves to that level, as it would require an entirely different architecture, and the chatbot architecture is actually pretty rigid; it can't grow itself new architectures.

    The danger will come when someone designs self-evolving systems and grafts that with LLM stat systems. The most likely source will not be civilian but military developers. I think probably outside the US and possibly from a small country.

    I work on these kinds of systems and I estimate they will show up in the range of 5 years to maybe no more than 20. But Kurzweil is wrong about an exponential leap to a singularity.
