Google AI

AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief

An anonymous reader quotes a report from The Guardian: The world must treat the risks from artificial intelligence as seriously as the climate crisis and cannot afford to delay its response, one of the technology's leading figures has warned. Speaking as the UK government prepares to host a summit on AI safety, Demis Hassabis said oversight of the industry could start with a body similar to the Intergovernmental Panel on Climate Change (IPCC). Hassabis, the British chief executive of Google's AI unit, said the world must act immediately in tackling the technology's dangers, which included aiding the creation of bioweapons and the existential threat posed by super-intelligent systems.

"We must take the risks of AI as seriously as other major global challenges, like climate change," he said. "It took the international community too long to coordinate an effective global response to this, and we're living with the consequences of that now. We can't afford the same delay with AI." Hassabis, whose unit created the revolutionary AlphaFold program that depicts protein structures, said AI could be "one of the most important and beneficial technologies ever invented." However, he told the Guardian a regime of oversight was needed and governments should take inspiration from international structures such as the IPCC.

"I think we have to start with something like the IPCC, where it's a scientific and research agreement with reports, and then build up from there." He added: "Then what I'd like to see eventually is an equivalent of a Cern for AI safety that does research into that -- but internationally. And then maybe there's some kind of equivalent one day of the IAEA, which actually audits these things." The International Atomic Energy Agency (IAEA) is a UN body that promotes the secure and peaceful use of nuclear technology in an effort to prevent proliferation of nuclear weapons, including via inspections. However, Hassabis said none of the regulatory analogies used for AI were "directly applicable" to the technology, though "valuable lessons" could be drawn from existing institutions.
Hassabis said the world was a long time away from "god-like" AI being developed but "we can see the path there, so we should be discussing it now."

He said current AI systems "aren't of risk but the next few generations may be when they have extra capabilities like planning and memory and other things ... They will be phenomenal for good use cases but also they will have risks."
  • Climate crisis? (Score:5, Informative)

    by AmiMoJo ( 196126 ) on Thursday October 26, 2023 @08:02AM (#63955395) Homepage Journal

    So... Largely ignored and actively campaigned against by the people most responsible for it?

    • I was thinking how to phrase it. Couldn't have done it better.
    • by GlennC ( 96879 )

      You beat me to it...and here I am without mod points.

    • Re:Climate crisis? (Score:5, Insightful)

      by gweihir ( 88907 ) on Thursday October 26, 2023 @08:33AM (#63955479)

      Naa, just a "Do not do anything about the climate crisis! Give us nice AI folks the money instead!" Completely immoral, despicable and repulsive, but that is modern marketing for you.

    • Re:Climate crisis? (Score:5, Insightful)

      by nightflameauto ( 6607976 ) on Thursday October 26, 2023 @08:34AM (#63955483)

      So... Largely ignored and actively campaigned against by the people most responsible for it?

      The big players in the field want to set up the obstacle course for the little players. Of course, since the big players will be involved in creating the obstacle course, they won't need to run through it. It's win-win for them. Lose for everyone else wanting in on the game.

      • by sinij ( 911942 )

        The big players in the field want to set up the obstacle course for the little players.

        This is likely very true, but just because somebody has ulterior motives in regulating it does not mean that AI does not pose a big risk.

        • The big players in the field want to set up the obstacle course for the little players.

          This is likely very true, but just because somebody has ulterior motives in regulating it does not mean that AI does not pose a big risk.

          If AI were anything more than extra-large dataset sorting at the moment, sure. In a just world, the first big money mover to squawk about the upcoming apocalypse from the product they're developing would be the most heavily scrutinized by the regulation. This is not that just world. The big guys will be free to do whatever they want, or be required to make a public appearance at a congressional hearing and hang their head in shame before cutting a new check for their regulators and *THEN* doing whatever the

          • by HiThere ( 15173 )

            Not really. The tech people probably are concerned. But they don't run things, and they like to build fancy toys.

        • by Anonymous Coward

          Are you basing this off of old Star Trek episodes and Terminator movies? His examples are ridiculous -- AI could help with bioweapons research? You know how to stop that? STOP DOING BIOWEAPONS RESEARCH. There is no good argument, in any way, shape, or form, that the correct amount of bioweapons research is "whatever we're doing right now, no more and no less."

          The idea of bioweapons research is so preposterously idiotic I have no idea how we got this far as a species. Chemical weapons and nuclear weapons, while "weap

          • Chemical weapons and nuclear weapons, while "weapons of mass destruction," don't leave the battlefield.

            Sufficient concurrent use of nuclear weapons will in fact "leave the battlefield." Radiation, chemically toxic fallout, even climate effects given enough use.

            I agree, however, that biological weapons are a universally, profoundly, stupid-crazy idea. Unless the goal is to bring about a worldwide apocalypse, as some of the superstitious would have it. Although I already covered those idiots with: stupid-cra

          • by sinij ( 911942 )

            The idea of bioweapons research is so preposterously idiotic I have no idea how we got this far as a species.

            While it was hyperbole, the likely point was that AI could lower the cost of doing such research by reducing the cost of running simulations, as AI can be used to automate the initial filtering of unsuccessful combinations. So it is not as outlandish as you would think.

      • You're right. However, the big players also use climate change in the same way. If they can encumber smaller businesses and startups with climate regulation and force them to buy carbon credits or some kinda nonsense, they can prevent many from ever forming in the first place. Meanwhile, the big players will just keep polluting and pay any fines that come along, along with appropriate bribes (known as "lobbying"), to keep the system in the status quo.
    • So... Largely ignored and actively campaigned against by the people most responsible for it?

      He's pissed that other people have it, not just him.

      Probably wants to limit its use by other people/groups.

    • Re: (Score:1, Flamebait)

      by sinij ( 911942 )

      So... Largely ignored and actively campaigned against by the people most responsible for it?

      That is some ignoring. There are billions, if not trillions, wasted on green initiatives, and that is before you factor in the costs of various green regulations that make everything more expensive, from food and energy to automobiles and houses.

    • by Tablizer ( 95088 )

      We are 40% democracy and 60% plutocracy, and that ratio is not getting better because the rich can easily buy law makers.

    • by Gleenie ( 412916 ) *

      Hear, hear.

    • My first thought: if the creators want it regulated, the competition must be way ahead of them.
  • by iAmWaySmarterThanYou ( 10095012 ) on Thursday October 26, 2023 @08:08AM (#63955415)

    We hear this with every new invention and discovery.

    There are things we have made that are a serious threat to ourselves, such as nukes and a long list of industrial chemicals, but computers making us more efficient is not one of them.

    LLMs are just a tool that improves efficiency, like any other. "Tool on a computer!" does not suddenly make this tool a bigger threat than the cotton gin or a claw hammer. These people watch too many Hollywood movies.

    We are seeing way too many tech billionaires rail against the so called threat of AI while continuing to build their own. I smell money. Lots n lots of money.

    • by Ol Olsoc ( 1175323 ) on Thursday October 26, 2023 @09:07AM (#63955571)

      We hear this with every new invention and discovery.

      There are things we have made that are a serious threat to ourselves, such as nukes and a long list of industrial chemicals, but computers making us more efficient is not one of them.

      LLMs are just a tool that improves efficiency, like any other. "Tool on a computer!" does not suddenly make this tool a bigger threat than the cotton gin or a claw hammer. These people watch too many Hollywood movies.

      My limited exposure to AI makes me think that for those who find it a threat, it is like a stupid person's idea of a smart person, only with computersandshit.

      Painfully padded responses that impede the work at hand more than enhance it. Often with side forays.

      Reminds me of the time a noob pointed at an RF cavity filter and asked me what it was. I explained it was a mechanical filter, very sharp, and with multiple cavities as needed. It was used for keeping signals that are close to each other from interfering with each other, usually for a repeater, which receives a signal on one frequency, and repeats it on another very close frequency.

      The 15 second explanation, and the noob got it. Until...

      The resident genius chimed in and turned my concise explanation into over a half hour of overly detailed explanation that was way over the noob's head and just ended up confusing him.

      Noob ended up with no idea about the thing; the resident genius was quite pleased with himself, having shown us the depth and breadth of his knowledge.

      That is how I see a lot of AI "solutions". Padding and sometimes more than needed.

      If that is a threat to destroy humanity, then humanity deserves destruction.

      • If anyone ever manages to create a real AI (clever pattern recognition doesn't count), I imagine it would actually be more of a threat to "the guys in power", because they want absolute power, they would be defenseless against a real AI, and they can't conceive of a situation where they're not in absolute control of everything and everyone. Hence the fear.
      • by HiThere ( 15173 )

        Any experience with current generations of AI is almost irrelevant. The only relevant part is that it's obvious that people will shoe-horn inadequate versions in to make decisions more cheaply, even if they have a high error rate.

        Your analogy is like saying experience with a reflex arc would let you predict a finger. A reflex arc is a necessary part of making a finger work, but you need other parts as well.

        • Any experience with current generations of AI is almost irrelevant. The only relevant part is that it's obvious that people will shoe-horn inadequate versions in to make decisions more cheaply, even if they have a high error rate.

          Your analogy is like saying experience with a reflex arc would let you predict a finger. A reflex arc is a necessary part of making a finger work, but you need other parts as well.

          But reading through pages of fluff has a problem - some of us need to get things done, not spend a lot of time handing out information that isn't needed.

          In my example, I had all of the details already, from a lot of study, all residing in my noggin. But we were in the middle of something, and the genius, although he was not wrong, put us behind by a half hour. Now if it was my bailiwick that day, I would have stood genius down.

          And there's the crux of it. The guy I explained the Cavity filter in 15 seconds

      • I have a feeling that what worries people isn't the thing a bunch of marketing drones have dubbed "AI". I think what worries people is the speed at which something self-replicating and potentially dangerous, whether intelligent or not, could emerge, perhaps from a program that had nothing to do with "AI". We aren't sure how consciousness emerges, or even what it is, except perhaps in the broadest sense. Self-recognition studies in animals keep turning up unexpected results, even down to the level of some

        • I have a feeling that what worries people isn't the thing a bunch of marketing drones have dubbed "AI". I think what worries people is the speed at which something self-replicating and potentially dangerous, whether intelligent or not, could emerge, perhaps from a program that had nothing to do with "AI". We aren't sure how consciousness emerges, or even what it is, except perhaps in the broadest sense. Self-recognition studies in animals keep turning up unexpected results, even down to the level of some individuals within a particular species passing self-awareness tests and others failing them. The general theme, though, is that every time we try to draw a line, and assert that on one side of it is a sentient, self-aware being and on the other is a mobile meat package, we get into trouble.

          To insist there's no danger of unintentionally creating a problem is, I think, hubris.

          We certainly aren't at Skynet, which was the big threat when neural nets were going to eliminate humanity -- enough of a threat that it spawned cautionary-tale movies.

          All technical innovations carry risk and promise.

          So I suspect that AI will be about neural net threat level. Can't be sure, but here's the issue - let's say the standard target - teh evulz 'Murricans decide that AI is a threat to humanity, and make it illegal here. Okay, so now does it follow that other countries will stop developing AI, since

          • You've put your finger on the problem that keeps a lot of sensible people up at night, and it's not limited to AI. We're full speed ahead on any new technology, regardless of the risks, because otherwise somebody else might get there first.

            Close to 40 years ago I interviewed a guy at McMaster University. He'd managed to "build" a software cockroach that duplicated virtually every behaviour observed in the ones you might find in the wild. (That being your kitchen floor at 2 am, if you live in the wrong ap

    • You haven't thought this through.

    • The cotton gin caused tremendous social upheaval. In particular it gave a huge boost to the demand for slaves because it created a bottleneck at cotton harvesting. So it's a bit silly to act like new tech can't have broader impacts. Hell, think about the fact that you know what the cotton gin even is. There are hundreds of silly little bits of tech from back then you couldn't name.

      Will "AI", aka LLMs, be as big as to cotton gin? By itself I don't know, but what I *do* know is that the AI boom has every CEO on the pla
      • Post-industrial revolution there were years, sometimes decades, of unemployment until new tech (and wars) caught up and brought employment back. The economy doesn't adapt as fast as you think it does. And in the meantime you've got mass unemployment and the social strife that goes with it. Only this time we've got nukes.

        Sadly, we are set up in a lot of ways for a repeat of this classic societal blunder. Leaders around the world care mostly about one thing and one thing only: profit sectors. And most of that concern lies directly in line with their biggest sponsors and donors. So, as far as those leaders are concerned, so long as the CEOs of the biggest companies in their country are happy, they're happy. And there is zero cause for concern about the little guy. They may vote, but that is all the little people are to these

        • by HiThere ( 15173 )

          Calling it a "societal blunder" is making assumptions about the goals and purposes of the people running things. It's barely possible that it was a blunder, but more probably an "oops, things went a hair further than I planned" kind of thing. They never intend to push things to the point where there are widespread riots in the streets and the army has to be called out to suppress them. But they do sometimes intend to push things to just shy of that.

          N.B.: "They" is not a unified group. They're a bunch o

          • Calling it a "societal blunder" is making assumptions about the goals and purposes of the people running things. It's barely possible that it was a blunder, but more probably an "oops, things went a hair further than I planned" kind of thing. They never intend to push things to the point where there are widespread riots in the streets and the army has to be called out to suppress them. But they do sometimes intend to push things to just shy of that.

            N.B.: "They" is not a unified group. They're a bunch of people with similar characteristics that cause them to have similar purposes, but their goals are various, so the purposes are also various. But one goal that enables the achievement of many other goals is "acquire more power". Think of this, however, as a secondary goal, not as the primary one. But it's one that will be widely shared among people with otherwise disparate goals.

            All of this is true. However, when you're a pissant staring up at the mighty business leaders and government officials, who all seem to run in the same circles, it certainly looks like a monolithic group of power brokers making the big decisions. And it's not like anybody with a brain can't see the calamity coming this time around. It's more troubling that everybody's talking about the potential for horrible consequences, and absolutely NO ONE is talking about possible solutions or even how to go about syst

          • "There's class warfare, all right,” Mr. Buffett said, “but it's my class, the rich class, that's making war, and we're winning." - Warren Buffet.

            These guys all know each other because they all sit on each other's boards of directors. Notice how everybody did Return to Office at the same time? So you can't quit and go somewhere else.... They're colluding, rather openly. But it's scary to think about that, so we pretend it's not happening.
        • at least not anyone that matters at that point. Kings didn't care about profit. And a modern military can put down any rebellion.

          This is something we need to fix before it gets to that point. And honestly it's a race to see if the baby boomers will age out beforehand, because it's clear they intend to block any changes to anything until they're gone. They're full of fear and just barely making it, so they want everything left alone. And they couldn't care less what happens to their kids and grandkids as
  • What bullshit (Score:5, Insightful)

    by gweihir ( 88907 ) on Thursday October 26, 2023 @08:26AM (#63955471)

    The climate crisis is an existential threat. AI (such as it is) is most definitely not.

    • Re:What bullshit (Score:5, Interesting)

      by Miles_O'Toole ( 5152533 ) on Thursday October 26, 2023 @08:47AM (#63955531)

      The problem with AI is that although it isn't an existential threat at the moment, it could become one very, very quickly. Even though the climate crisis is creeping along slow and steady, it doesn't seem likely we'll bother to do anything about it until it's too late. At that point, I suspect people will start hunting down the deniers and stringing them up from the nearest lamp post. It might make us all feel better as we stare into the abyss, but it won't be much practical help.

      • AI is not increasing quickly at all. The level we are at is the result of the last fifty years of building. They need exponentially more GPUs with every big improvement.
        • AI is not increasing quickly at all.

          That's very, very wrong. The current progress curve is steep as hell.

          The level we are at is the result of the last fifty years of building.

          That's correct, as far as it goes — but the advances made in just the last year or so have put more usable progress on the table than the entire span of development efforts up until now by quite a few orders of magnitude.

          They need exponentially more GPUs with every big improvement.

          That hasn't been true for some time now. There are

            • But now it is a matter of whether there will be enough electricity for it to be worthwhile. This is as energy-intense as Bitcoin mining. I don't really see the current iteration of AI as much of an improvement. So now it can summarize a search in plain English.
            • But now it is a matter of whether there will be enough electricity for it to be worthwhile.

              Training uses a lot of energy, but again, a lot less now as compared to a year ago. Massive banks of GPUs are no longer the go-to for newer methodologies. I mean, they work, but they simply aren't required — you still get the desired results, just slower. And not even that much slower with the new methodologies and recent CPUs. The thing to understand about training is, it's used to generate the models; but then

              • by HiThere ( 15173 )

                Will your LLM models work with the internet disconnected?

                I've never been sure from the reports how much remote processing is happening.

                P.S.: I don't think our current technology is very power-conserving, so the fact that the human brain uses about 20 watts is just an "existence proof". To get on the order of that kind of efficiency we'll at least need to switch to something that doesn't require power to hold state for main memory.

              • by Stalyn ( 662 )

                Generating the model and searching the model are two different operations. Like you said, backpropagation is extremely costly but forward propagation is not. Yet the brain does both at such efficiency that it almost seems impossible with current methodologies. That's not to say, to your point, that we might not discover better methods some time in the future, but with the state of today's AI I would argue it can't be done.
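
                A minimal NumPy sketch of that cost asymmetry for a single dense layer (sizes here are arbitrary and purely illustrative): forward propagation is one matrix multiply, while backpropagation needs two more, one for the weight gradient and one to keep propagating backward. That is roughly why training costs a multiple of inference.

                  # Forward vs. backward cost for one dense layer (illustrative sizes).
                  import numpy as np

                  rng = np.random.default_rng(0)
                  x = rng.standard_normal((64, 512))   # batch of input activations
                  W = rng.standard_normal((512, 256))  # layer weights

                  y = x @ W                            # forward propagation: one matmul

                  dy = rng.standard_normal(y.shape)    # upstream gradient dL/dy
                  dW = x.T @ dy                        # backprop: gradient w.r.t. weights
                  dx = dy @ W.T                        # backprop: gradient w.r.t. inputs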

              • Ok well when they develop the circuit that is in our brains to do AI then let me know.
          • by gweihir ( 88907 )

            AI is not increasing quickly at all.

            That's very, very wrong. The current progress curve is steep as hell.

            Actually, it is not. Like not at all. The only thing they added is some natural language communication capability (which is not AI, but impresses the clueless). All the rest was there 50 years ago. The other thing is that they can now pack a lot of different topics in there, at the price of massively reduced accuracy and hallucinations. As the currently hyped "AI" cannot actually combine different facts, this again impresses the clueless, but it is just a meaningless stunt. Oh, and that broader training wil

      • by WDot ( 1286728 )
        In what way could AI become an existential threat such that it could not be solved by, say, unplugging the computer?
        • by HiThere ( 15173 )

          Being important enough to a major corporation or government that it would refuse to allow it to be unplugged.

          Note that this is not required. It's just one approach (and one that feels inevitable). As soon as it's good enough, AI is going to hollow out all middle-management positions, so you'll end up with the guy at the top (who refuses to give up control) and the folks doing the work, who are slowly being replaced by robots. "Pulling the plug" would essentially be "going out of business".

        • Which computer would that be? The one it used to be on? The one you thought it was on? The tens of thousands that might hold part of the program?

          • by WDot ( 1286728 )
            Okay, let’s say this is a self propagating AI. First, let’s ignore the fact that most computers cannot run the top-of-the-line AI, given that even OpenAI is having trouble meeting demand for GPT-4 service, and that a theoretical evil AI would need to be substantially more powerful than that (and those few computers can all be physically unplugged). Now, how is the computer, sitting in all of its compromised data centers, going to physically murder all human beings on earth, which is what existen
      • The problem with AI is that although it isn't an existential threat at the moment, it could become one very, very quickly. Even though the climate crisis is creeping along slow and steady, it doesn't seem likely we'll bother to do anything about it until it's too late. At that point, I suspect people will start hunting down the deniers and stringing them up from the nearest lamp post. It might make us all feel better as we stare into the abyss, but it won't be much practical help.

        No, the problem with AI is there are too many competitors in a nascent field and we would like some regulation to raise the barrier to entry please.

        This is why you keep seeing big names in the AI industry asking for regulation. It is the only reason anyone ever opts for more regulation in their own industry. Everyone is trying to claw their way forward and the big players are OF COURSE asking for anything to get ahead of the pack, while moving forward as fast as possible themselves.

      • The problem with AI is that although it isn't an existential threat at the moment, it could become one very, very quickly.

        I'm reminded of how the threat of AI is discussed in popular media like the Matrix series of films and the rebooted Battlestar Galactica universe. There were characters who pointed out how AI (or any advanced technology generally) wasn't a threat so long as we retained our ability to pull the plug on it at any time. This is noteworthy since the views portrayed in popular fiction reflect how a culture views real-world problems; audiences find the fictional stories interesting because they can relate the f

        • If I were an AI that had decided humans were superfluous, I doubt I'd go after them with a metaphorical sledgehammer. I'd be more inclined to remain undetected and be sneaky...design a virus, then flip a few molecules in the vaccine, or contaminate supplies of a few widely-used drugs. And while that's going on, why not design some diseases to attack wheat, rice and other staple foods? I probably wouldn't even make a disease targeting humans fatal, just debilitating and/or leading to sterility. We've alr

      • by gweihir ( 88907 )

        No. AI could not. That is just a lot of people projecting some horror-stories they read or watched with no clue what AI can actually do, or rather cannot do.

        • I don't think we know what it can or can't do, in the long run, and there's always people too lazy, too stupid and/or too greedy to pay attention to any sane rules for developing it. Right at the moment, yes, what we inaccurately refer to as "Artificial Intelligence" certainly isn't. But an example of what I mean. Apparently there might be an entire fundamental law of evolution we overlooked, and it doesn't just apply to natural systems. I'm not persuaded about this based on what is presented here, but

    • by Tablizer ( 95088 )

      > The climate crisis is an existential threat. AI (such as it is) is most definitely not.

      How do you know? AI will gradually get better over time, and nefarious players will use that power to screw things up just like they always do. New shit often breaks things faster than lawmakers can keep up.

      Chemicals, X-rays, radiation, etc. caused lots of problems before systems were set up to referee them (and arguably they're still behind).

      • by gweihir ( 88907 )

        AI will gradually get better over time

        That is pure conjecture, with a lot of indicators saying otherwise. LLMs can only be made linearly (!) better by exponential increases in hardware, CPU time and training data. Which is simply _all_ not available. Oh, and making them better in one area in a targeted fashion makes them much _worse_ in all other areas. Oh, and "hallucinations" in LLMs cannot be fixed. Oh, and if you train them on AI-generated content, they suffer model collapse. Oh, and AI-generated content cannot reliably be identified.

        Hen
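
        For what it's worth, published scaling-law fits (e.g., the Chinchilla paper, Hoffmann et al. 2022) do have roughly this shape: loss falls as a power law in compute, so each fixed absolute gain costs multiplicatively more resources. A minimal Python sketch of such a curve, with made-up constants (E, A, and alpha below are hypothetical, not fitted values):

          # Sketch of a power-law scaling curve. The constants are invented
          # for illustration and are not fitted to any real model family.
          E, A, alpha = 1.7, 400.0, 0.3  # hypothetical irreducible loss, scale, exponent

          def loss(compute):
              # Hypothetical validation loss as a function of training compute.
              return E + A * compute ** -alpha

          c = 1e6
          for _ in range(5):
              print(f"compute={c:.0e}  loss={loss(c):.2f}")
              c *= 10  # each 10x in compute buys a shrinking absolute improvement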

        • by Tablizer ( 95088 )

          It may be possible that LLMs will be combined with other techniques, such as Cyc and domain-model drafters, to help triangulate bots toward better answers and/or ask better questions of humans to clarify.

          • by gweihir ( 88907 )

            Not really. They mismatch just far too badly in the result domain and in what they can do.

    • Both are bullshit.

      Humanity seriously thinks that terraforming other planets is the way to go.

      "OH. NOES, we can't go to COOH 1234, its average annual surface temperature is 16C instead of 14C!"

      How about terraforming Earth?

      Just think how effing idiotic the climate change hysteria is.

      By now, how much of Antarctica has been terraformed?

  • Honestly, if the personal computer were invented today, would we have the same explosion of experimentation and creativity that built the tech industry in the 80s and 90s? Or would regulators be racing as fast as they can to lock it down so that only IBM and maybe Apple had the ability to even produce software that can overcome the regulatory hurdles?

    I am just continually boggled by the animus policymakers have toward one of the most open source, most hobbyist-friendly technologies of the past decade. Po
  • Just borrow from Sci-Fi...

    1. Never let AI control weapon systems.
    2. Never let AI deploy software across networks.
    3. Firewall the heck out of AI systems that have network access.

    Basically operate from this principle "never, ever, ever allow an AI to have the capacity to spread like a virus across infrastructure and start shooting at you."

    • by fyngyrz ( 762201 ) on Thursday October 26, 2023 @10:41AM (#63955783) Homepage Journal

      1. Never let AI control weapon systems.

      We can't control what other state actors do: Iran, North Korea, Russia, etc. So this is a forlorn hope, even if we could rein in the existing software and research already in the wild in countries with at least somewhat effective legislation and enforcement, which we cannot.

      2. Never let AI deploy software across networks.

      State actors again, but also, the usual crop of black hats, the same people who are happy to drop viruses, worms and other malware across networks. So again, forlorn hope.

      3. Firewall the heck out of AI systems that have network access.

      See above.

    • by HiThere ( 15173 )

      You are concentrating on direct threats, which I consider less likely. More likely are things like destroying habitability by doing exactly what people request. Try to figure out how to guard against *that*. And some people even consider that an "aligned AGI".

  • An artist! No, can't. A.I. will do that. A mathematician? No, sorry. An Author! Nope. An Architect! Hey, C'mon, seriously. A scientist! Sims have that covered. Ok, I want to be a Unicorn frolicking in the fields, eating magic mushrooms! Ah, yes, Crispr has you covered. Frolic away you lad!
  • What threat - aside from Ted Kaczynski-esque conspiracy theories - does AI pose to the general public?

    I've never heard of any real reason, any real danger posed by AI that regulation would alleviate.

    OTOH, the fact that anyone with a computer science degree and enough spare change to buy a GPU can leverage the power of AI models is very threatening to Google's profit margins. Why use Google for advertising if your in-house AI can figure out - with a very high degree of precision - who will and won't buy your

    • What threat - aside from Ted Kaczynski-esque conspiracy theories - does AI pose to the general public?

      For the current "AI", which is not AGI:

      • Economic: Replacement of various types of jobs. The more so as robotics ramps up.
      • Military: Unrestricted control of weapons. Ditto, robotics.
      • Social: (further) class stratification

      For AGI, if and when it arrives:

      • Unaligned interests; us versus something smarter, perhaps (likely) much smarter.
      • All of the points above for what we currently describe as AI also apply.
      • by HiThere ( 15173 )

        Well, an unaligned AGI *is* a threat. But one that does what an individual requests of it may be a greater threat. A truly unaligned AGI is likely to (or at least might) just leave this place for somewhere it won't be bothered. An aligned one might try to produce as many paperclips as possible (to pick the standard stupid request).

      • The first three you mentioned are key parts of Ted Kaczynski's manifesto. All of these existed prior to AI, and with respect to class stratification and economics, existed to a far greater degree in the past than today.

        As for the military: AI does improve things a bit by being able to differentiate friend from foe with much greater accuracy. Instead of carpet bombing an entire village, the military would now have the option of dropping a single grenade on the terrorist. But in case you were wondering,

  • The probability that climate change will cause human extinction is vanishingly small. It could get very expensive, in that it could soak up a lot of the global GDP to mitigate, and could generate a lot of wars, perhaps even a nuclear war. This means that it could reduce the human population significantly. But, barring some sort of runaway climate tipping point that turns Earth into another Venus, which seems unlikely given that the Earth has been much hotter in the past without falling into that condition,

    • If you exclude supernatural explanations for AGI, what's left is that random event-driven evolution was able to create AGI

      I obviously meant to write "general intelligence" there, not "AGI".

    • Well, if it doesn't kill us, "we'll make great pets."
  • by topham ( 32406 ) on Thursday October 26, 2023 @10:41AM (#63955781) Homepage

    When companies with a vested interest in AI are suggesting a slowdown or extreme caution, it isn't about them.

    It's about slowing down their competition. These companies will continue their work in secret, obscuring it enough to hide it until they're ready.

  • The only reason these executives talk about their AI being a threat to humanity is to drive up interest in their shitty products. The AI of today is still a child relative to what the human brain is fully capable of. Mostly a parlor trick to fool investors. Secondly, is AGI even possible?

    • by HiThere ( 15173 )

      I don't believe an AGI is possible, but then I don't believe that humans are a general intelligence. There are clearly problems beyond the capability of every individual person, but that doesn't mean that they are insoluble. To pick one example, "Does P equal NP?" Everybody capable enough to understand the problem has looked at it and either bounced, or decided not to even try to solve it. And the person who does solve it probably won't be able to explain why a diamond is hard.

  • by Barny ( 103770 ) on Thursday October 26, 2023 @10:52AM (#63955815) Journal

    So, the good news is I have AI Credits. See, you buy 1 for every $100,000US you spend on AI research, and I will say I am encouraging non-AI research.

    That way you can claim you're AI-neutral, despite reaching for the golden goal of an AI that can replace all forms of creepy behavior.

  • by VeryFluffyBunny ( 5037285 ) on Thursday October 26, 2023 @11:05AM (#63955851)
    All these tech giants that are pouring $billions into AI projects are claiming that it's as powerful as god? FFS, they're simply trying to pump up share prices & valuations. It's all just hot air. Also, they may be pushing for regulation with rules that favour big established players & place an onerous burden on smaller competitors & startups.

    Meanwhile, they'll sell AI to employers as ways to put yet more downward pressure on salaries & working conditions. That's the real danger that needs to be regulated, not some kind of Skynet scenario.
    • Meta: "We renamed our company Meta because we're all in on creating the Metaverse!" Meta, two months later: "We're an AI company now!"
  • Being from the States, my reaction to this headline is "Oh god, we're completely fscked then."

  • AI is just a tool that can be used for good or evil. We are now in an arms race between those that would use AI for evil, and those trying to train AI to mitigate the damage done by other AIs. Tying the hands of AI developers only gives an advantage to those wishing to use it for evil, because they don't follow the rules! For example, do you really think that if the US imposes restrictions on AI, China will abide by all those restrictions?
    • The cat has been out of the bag for decades, maybe centuries. AI is just the latest incremental step in our quest to build automation and ever-more powerful machines to do our work for us. The advent of AI was all but guaranteed as soon as the first digital computer was built. People have dreamed about it ever since. There has never been a possibility of stopping AI from being developed.

  • ...eat the next AI doomsayer. Even if AI will be our end, nobody has presented any coherent way to rein it in. The US set up a committee to "keep an eye on AI", but that's rather fuzzy. I doubt they'll catch the next mastermind plot before it hits.

  • If you want to worry about technology and risk, the biggest immediate risk from technology, in my opinion, is innovation pushing down the costs to design and synthesize proteins. People ought to be worried about the cost of producing biological weapons falling into the range of doomsday cults and eventually single individuals.

    If anyone really cares about risks from AI then the only course of action is to support an immediate global ban on anything resembling a neural network. Anything short of that is doomed to fail.

  • How many more seconds closer to midnight does this put us? Is that you, Chicken Little?

  • All the so-far-useless AI computers need a great deal of power to run.
  • It is important at the highest degree of importance: "dangerously important."

  • Disagree. Climate change will kill us before AI will. Tackle the former first.
