AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief
An anonymous reader quotes a report from The Guardian: The world must treat the risks from artificial intelligence as seriously as the climate crisis and cannot afford to delay its response, one of the technology's leading figures has warned. Speaking as the UK government prepares to host a summit on AI safety, Demis Hassabis said oversight of the industry could start with a body similar to the Intergovernmental Panel on Climate Change (IPCC). Hassabis, the British chief executive of Google's AI unit, said the world must act immediately in tackling the technology's dangers, which included aiding the creation of bioweapons and the existential threat posed by super-intelligent systems.
"We must take the risks of AI as seriously as other major global challenges, like climate change," he said. "It took the international community too long to coordinate an effective global response to this, and we're living with the consequences of that now. We can't afford the same delay with AI." Hassabis, whose unit created the revolutionary AlphaFold program that depicts protein structures, said AI could be "one of the most important and beneficial technologies ever invented." However, he told the Guardian a regime of oversight was needed and governments should take inspiration from international structures such as the IPCC.
"I think we have to start with something like the IPCC, where it's a scientific and research agreement with reports, and then build up from there." He added: "Then what I'd like to see eventually is an equivalent of a Cern for AI safety that does research into that -- but internationally. And then maybe there's some kind of equivalent one day of the IAEA, which actually audits these things." The International Atomic Energy Agency (IAEA) is a UN body that promotes the secure and peaceful use of nuclear technology in an effort to prevent proliferation of nuclear weapons, including via inspections. However, Hassabis said none of the regulatory analogies used for AI were "directly applicable" to the technology, though "valuable lessons" could be drawn from existing institutions. Hassabis said the world was a long time away from "god-like" AI being developed but "we can see the path there, so we should be discussing it now."
He said current AI systems "aren't of risk but the next few generations may be when they have extra capabilities like planning and memory and other things ... They will be phenomenal for good use cases but also they will have risks."
Climate crisis? (Score:5, Informative)
So... Largely ignored and actively campaigned against by the people most responsible for it?
Re: (Score:1)
Re: (Score:2)
You beat me to it...and here I am without mod points.
Re:Climate crisis? (Score:5, Insightful)
Naa, just a "Do not do anything about climate crisis! Give us nice AI folks the money instead!". Completely immoral, despicable and repulsive, but that is modern marketing for you.
Re:Climate crisis? (Score:5, Insightful)
So... Largely ignored and actively campaigned against by the people most responsible for it?
The big players in the field want to set up the obstacle course for the little players. Of course, since the big players will be involved in creating the obstacle course, they won't need to run through it. It's win-win for them. Lose for everyone else wanting in on the game.
Re: (Score:1)
The big players in the field want to set up the obstacle course for the little players.
This is likely very true, but just because somebody has ulterior motives for regulating it does not mean that AI does not pose a big risk.
Re: (Score:2)
The big players in the field want to set up the obstacle course for the little players.
This is likely very true, but just because somebody has ulterior motives for regulating it does not mean that AI does not pose a big risk.
If AI were anything more than extra-large dataset sorting at the moment, sure. In a just world, the first big money mover to squawk about the upcoming apocalypse from the product they're developing would be the most heavily scrutinized by the regulation. This is not that just world. The big guys will be free to do whatever they want, or be required to make a public appearance at a congressional hearing and hang their head in shame before cutting a new check for their regulators and *THEN* doing whatever the
Re: (Score:2)
Not really. The tech people probably are concerned. But they don't run things, and they like to build fancy toys.
uh what (Score:1)
Are you basing this off of old Star Trek episodes and Terminator movies? His examples are ridiculous -- AI could help with bioweapons research? You know how to stop that? STOP DOING BIOWEAPONS RESEARCH. There is no way shape or form a good argument that the correct amount of bioweapons research is "whatever we're doing right now, no more and no less."
The idea of bioweapons research is so preposterously idiotic I have no idea how we got this far as a species. Chemical weapons and nuclear weapons, while "weap
Nukes (Score:2)
Sufficient concurrent use of nuclear weapons will in fact "leave the battlefield." Radiation, chemically toxic fallout, even climate effects given enough use.
I agree, however, that biological weapons are a universally, profoundly, stupid-crazy idea. Unless the goal is to bring about a worldwide apocalypse, as some of the superstitious would have it. Although I already covered those idiots with: stupid-cra
Re: (Score:2)
The idea of bioweapons research is so preposterously idiotic I have no idea how we got this far as a species.
While it was hyperbole, the likely point was that AI could lower the cost of such research by reducing the cost of running simulations, since AI can be used to automate the initial filtering of unsuccessful combinations. So it is not as outlandish as you would think.
Re: (Score:3)
Re: (Score:2)
So... Largely ignored and actively campaigned against by the people most responsible for it?
He's pissed that other people have it, not just him.
Probably wants to limit its use by other people/groups.
Re: (Score:1, Flamebait)
So... Largely ignored and actively campaigned against by the people most responsible for it?
That is some ignoring. There are billions, if not trillions, wasted on green initiatives, and that is before you factor in the costs of various green regulations that make everything more expensive, from food and energy to automobiles and houses.
Re: (Score:2)
We are 40% democracy and 60% plutocracy, and that ratio is not getting better because the rich can easily buy law makers.
Re: (Score:1)
Hear, hear.
Re: (Score:1)
Water under the bridge (Score:4, Funny)
Even if that were true, it's far too late. The source code for generative imaging and GPT/LLM engines is irretrievably out in the wild, you can build instances yourself with ease. Considerable research info is public as well. Not only that, but the amount of data required for training is steadily dropping and high-quality pre-trained models are readily available for the free and unencumbered engines; moreover, those local (no Internet, no tracking), censorship-free applications have been on people's desktops for months now.
These horses are so far out of the barn they've turned into flying cars.
Breakthroughs can come from any point on the development compass now; perhaps even AGI, though that target remains reliably elusive thus far.
But it's fun to watch the pearl clutching, anyway.
The climate issue can at least be addressed. Not that anyone's really doing a very good job of it, but it's possible. Generative stuff... nope.
Re: (Score:2)
The one thing the UK government is actually building is new nuclear. It's insanely expensive and takes too long to build. It's basically been a disaster.
Re: (Score:1)
The one thing the UK government is actually building is new nuclear. It's insanely expensive and takes too long to build. It's basically been a disaster.
A disaster worse than global warming? I doubt it.
Even if we could prove beyond all doubt that we could produce the energy we needed without nuclear power we'd still want to build new nuclear power plants because of the lower CO2 emissions per unit of energy, the isotopes produced that are useful for medicine and industry, how it is reliable in any weather or climate, and the lowered requirements for land, labor, materials, and water versus renewable energy sources.
Nuclear power is not "basically" a disaste
Re: (Score:2)
The drag on our finances from nuclear is actually contributing to global warming. All that money could have been much better spent on measures that will actually have a meaningful impact and won't take 20 years to come on stream.
It's pretty clear that, despite the government's best efforts, renewable energy in the UK is headed for dominance. By the time those new nuclear plants are operating they will be largely redundant, but we will be forced to pay for them anyway. I wish there was a way to opt out of pa
Cotton gin threat to humanity! (Score:4, Insightful)
We hear this with every new invention and discovery.
There are things we have made that are a serious threat to ourselves such as nukes and a long list of industrial chemicals but computers making us more efficient is not one of them.
LLM's are just a tool that improves efficiency, like any other. "Tool on a computer!" does not suddenly make this tool a bigger threat than the cotton gin or a claw hammer. These people watch too many Hollywood movies.
We are seeing way too many tech billionaires rail against the so called threat of AI while continuing to build their own. I smell money. Lots n lots of money.
Re:Cotton gin threat to humanity! (Score:4, Insightful)
We hear this with every new invention and discovery.
There are things we have made that are a serious threat to ourselves such as nukes and a long list of industrial chemicals but computers making us more efficient is not one of them.
LLM's are just a tool that improves efficiency, like any other. "Tool on a computer!" does not suddenly make this tool a bigger threat than the cotton gin or a claw hammer. These people watch too many Hollywood movies.
My limited exposure to AI makes me think that for those who find it a threat, it is like a stupid person's idea of a smart person, only with computers and shit.
Painfully padded responses that impede the work at hand more than enhance it. Often with side forays.
Reminds me of the time a noob pointed at an RF cavity filter and asked me what it was. I explained it was a mechanical filter, very sharp, and with multiple cavities as needed. It was used for keeping signals that are close to each other from interfering with each other, usually for a repeater, which receives a signal on one frequency, and repeats it on another very close frequency.
The 15 second explanation, and the noob got it. Until...
The resident genius chimed in and turned my concise explanation into over a half hour of confusing and overly detailed explanation that was way over the noob's head, that just ended up confusing him.
Noob ended up with no idea about the things, Resident genius was quite pleased with himself, having shown us the depth and breadth of his knowledge.
That is how I see a lot of AI "solutions". Padding and sometimes more than needed.
If that is a threat to destroy humanity, then humanity deserves destruction.
Re: (Score:3)
Re: (Score:2)
Any experience with current generations of AI is almost irrelevant. The only relevant part is that it's obvious that people will shoe-horn inadequate versions in to make decisions more cheaply, even if they have a high error rate.
Your analogy is like saying experience with a reflex arc would let you predict a finger. A reflex arc is a necessary part of making a finger work, but you need other parts as well.
Re: (Score:2)
Any experience with current generations of AI is almost irrelevant. The only relevant part is that it's obvious that people will shoe-horn inadequate versions in to make decisions more cheaply, even if they have a high error rate.
Your analogy is like saying experience with a reflex arc would let you predict a finger. A reflex arc is a necessary part of making a finger work, but you need other parts as well.
But reading through pages of fluff has a problem: some of us need to get things done, not spend a lot of time handing out information that isn't needed.
In my example, I had all of the details already, from a lot of study, all residing in my noggin. But we were in the middle of something, and the genius, although he was not wrong, put us behind by a half hour. Now if it was my bailiwick that day, I would have stood genius down.
And there's the crux of it. The guy I explained the Cavity filter in 15 seconds
Re: (Score:2)
I have a feeling that what worries people isn't the thing a bunch of marketing drones have dubbed "AI". I think what worries people is the speed at which something self-replicating and potentially dangerous, whether intelligent or not, could emerge, perhaps from a program that had nothing to do with "AI". We aren't sure how consciousness emerges, or even what it is, except perhaps in the broadest sense. Self-recognition studies in animals keep turning up unexpected results, even down to the level of some
Re: (Score:2)
I have a feeling that what worries people isn't the thing a bunch of marketing drones have dubbed "AI". I think what worries people is the speed at which something self-replicating and potentially dangerous, whether intelligent or not, could emerge, perhaps from a program that had nothing to do with "AI". We aren't sure how consciousness emerges, or even what it is, except perhaps in the broadest sense. Self-recognition studies in animals keep turning up unexpected results, even down to the level of some individuals within a particular species passing self-awareness tests and others failing them. The general theme, though, is that every time we try to draw a line, and assert that on one side of it is a sentient, self-aware being and on the other is a mobile meat package, we get into trouble.
To insist there's no danger of unintentionally creating a problem is, I think, hubris.
We certainly aren't at Skynet, which was the big threat back when neural nets were going to eliminate humanity. Enough of a threat that it spawned cautionary-tale movies.
All technical innovations carry risk and promise.
So I suspect that AI will be about neural net threat level. Can't be sure, but here's the issue - let's say the standard target - teh evulz 'Murricans decide that AI is a threat to humanity, and make it illegal here. Okay, so now does it follow that other countries will stop developing AI, since
Re: (Score:2)
You've put your finger on the problem that keeps a lot of sensible people up at night, and it's not limited to AI. We're full speed ahead on any new technology, regardless of the risks, because otherwise somebody else might get there first.
Close to 40 years ago I interviewed a guy at McMaster University. He'd managed to "build" a software cockroach that duplicated virtually every behaviour observed in the ones you might find in the wild. (That being your kitchen floor at 2 am, if you live in the wrong ap
Re: (Score:2)
You haven't thought this through.
About that cotton gin... (Score:3)
Will "AI", aka LLMs, be as big as the cotton gin? By itself I don't know, but what I *do* know is that the AI boom has every CEO on the pla
Re: (Score:2)
Post industrial revolution there were years, sometimes decades, of unemployment until new tech (and wars) caught up and restored employment. The economy doesn't adapt as fast as you think it does. And in the meantime you've got mass unemployment and the social strife that goes with it. Only this time we've got nukes.
Sadly, we are set up in a lot of ways for a repeat of this classic societal blunder. Leaders around the world care mostly about one thing and one thing only, profit sectors. And most of that concern lies directly in line with their biggest sponsors and donors. So, as far as those leaders are concerned, so long as the CEOs of the biggest companies in their country are happy, they're happy. And there is zero cause for concern about the little guy. They may vote, but that is all the little people are to these
Re: (Score:2)
Calling it a "societal blunder" is making assumptions about the goals and purposes of the people running things. It's barely possible that it was a blunder, but more probably an "oops, things went a hair further than I planned" kind of thing. They never intend to push things to the point where there are widespread riots in the streets and the army has to be called out to suppress them. But they do sometimes intend to push things to just shy of that.
N.B.: "They" is not a unified group. They're a bunch o
Re: (Score:2)
Calling it a "societal blunder" is making assumptions about the goals and purposes of the people running things. It's barely possible that it was a blunder, but more probably an "oops, things went a hair further than I planned" kind of thing. They never intend to push things to the point where there are widespread riots in the streets and the army has to be called out to suppress them. But they do sometimes intend to push things to just shy of that.
N.B.: "They" is not a unified group. They're a bunch of people with similar characteristics that cause them to have similar purposes, but their goals are various, so the purposes are also various. But one goal that enables the achievement of many other goals is "acquire more power". Think of this, however, as a secondary goal, not as the primary one. But it's one that will be widely shared among people with otherwise disparate goals.
All of this is true. However, when you're a pissant staring up at the mighty business leaders and government officials, who all seem to run in the same circles, it certainly looks like a monolithic group of power brokers making the big decisions. And it's not like anybody with a brain can't see the calamity coming this time around. It's more troubling that everybody's talking about the potential for horrible consequences, and absolutely NO ONE is talking about possible solutions or even how to go about syst
It's very much a unified group. (Score:2)
These guys all know each other because they all sit on each other's boards of directors. Notice how everybody did Return to Office at the same time? So you can't quit and go somewhere else... They're colluding, rather openly. But it's scary to think about that, so we pretend it's not happening.
I don't think they'll care about profit (Score:2)
This is something we need to fix before it gets to that point. And honestly it's a race to see if the baby boomers will age out beforehand, because it's clear they intend to block any changes to anything until they're gone. They're full of fear and just barely making it, so they want everything left alone. And they could care less what happens to their kids and grandkids as
Re: (Score:2)
Yes data is easier to copy than an object but it isn't super difficult to clone most physical objects and even complex devices will get reverse engineered if there's enough money at stake.
Anyway, the whole idea of regulating AI is definitely stupid. We can't control our border. We can't stop murder, rape, selling drugs to kids, or even speeding on the Highway. How the hell would we be able to regulate what happens on millions of privately held computers? Just silly. And oh yeah we need to do that beca
What bullshit (Score:5, Insightful)
The climate crisis is an existential threat. AI (such as it is) is most definitely not.
Re:What bullshit (Score:5, Interesting)
The problem with AI is that although it isn't an existential threat at the moment, it could become one very, very quickly. Even though the climate crisis is creeping along slow and steady, it doesn't seem likely we'll bother to do anything about it until it's too late. At that point, I suspect people will start hunting down the deniers and stringing them up from the nearest lamp post. It might make us all feel better as we stare into the abyss, but it won't be much practical help.
Re: What bullshit (Score:2)
You need to catch up (Score:3)
That's very, very wrong. The current progress curve is steep as hell.
That's correct, as far as it goes — but the advances made in just the last year or so have put more usable progress on the table than the entire span of development efforts up until now by quite a few orders of magnitude.
That hasn't been true for some time now. There are
Re: (Score:2)
More info (Score:2)
Training uses a lot of energy, but again, a lot less now as compared to a year ago. Massive banks of GPUs are no longer the go-to for newer methodologies. I mean, they work, but they simply aren't required — you still get the desired results, just slower. And not even that much slower with the new methodologies and recent CPUs. The thing to understand about training is, it's used to generate the models; but then
Re: (Score:2)
Will your LLM models work with the internet disconnected?
I've never been sure from the reports how much remote processing is happening.
P.S.: I don't think our current technology is very power-conserving, so the fact that the human brain uses about 20 watts is just an "existence proof". To get on the order of that kind of efficiency we'll at least need to switch to something that doesn't require power to hold state for main memory.
Re: (Score:2)
Generating the model and searching the model are two different operations. Like you said, backpropagation is extremely costly but forward propagation is not. Yet the brain does both at an efficiency that almost seems impossible with current methodologies. That's not to say, to your point, that better methods won't be discovered some time in the future, but with the state of today's AI I would argue it can't be done.
Re: (Score:2)
Re: (Score:2)
That's very, very wrong. The current progress curve is steep as hell.
Actually, it is not. Like not at all. The only thing they added is some natural language communication capability (which is not AI, but impresses the clueless). All the rest was there 50 years ago. The other thing is that they can now pack a lot of different topics in there, at the price of massively reduced accuracy and hallucinations. As the currently hyped "AI" cannot actually combine different facts, this again impresses the clueless, but it is just a meaningless stunt. Oh, and that broader training wil
Re: (Score:2)
Re: (Score:2)
Being important enough to a major corporation or government that it would refuse to allow it to be unplugged.
Note that this is not required. It's just one approach (and one that feels inevitable). As soon as it's good enough AI is going to hollow out all middle management positions, so you'll end up with the guy at the top (who refuses to give up control) and the folks doing the work, who are slowly being replaced by robots. "Pulling the plug" would essentially be "going out of business".
Re: (Score:2)
Which computer would that be? The one it used to be on? The one you thought it was on? The tens of thousands that might hold part of the program?
Re: (Score:2)
Re: (Score:2)
The problem with AI is that although it isn't an existential threat at the moment, it could become one very, very quickly. Even though the climate crisis is creeping along slow and steady, it doesn't seem likely we'll bother to do anything about it until it's too late. At that point, I suspect people will start hunting down the deniers and stringing them up from the nearest lamp post. It might make us all feel better as we stare into the abyss, but it won't be much practical help.
No, the problem with AI is there are too many competitors in a nascent field and we would like some regulation to raise the barrier to entry please.
This is why you keep seeing big names in the AI industry asking for regulation. It is the only reason anyone ever opts for more regulation in their own industry. Everyone is trying to claw their way forward and the big players are OF COURSE asking for anything to get ahead of the pack, while moving forward as fast as possible themselves.
Re: (Score:2)
The problem with AI is that although it isn't an existential threat at the moment, it could become one very, very quickly.
I'm reminded of how the threat of AI is discussed in popular media like the Matrix series of films and the rebooted Battlestar Galactica universe. There were characters that pointed out how AI (or any advanced technology generally) wasn't a threat so long as we retained our ability to pull the plug on it at any time. This is noteworthy since the views portrayed in popular fiction reflect how a culture views real world problems, audiences find the fictional stories interesting because they can relate the f
Re: (Score:2)
If I were an AI that had decided humans were superfluous, I doubt I'd go after them with a metaphorical sledgehammer. I'd be more inclined to remain undetected and be sneaky...design a virus, then flip a few molecules in the vaccine, or contaminate supplies of a few widely-used drugs. And while that's going on, why not design some diseases to attack wheat, rice and other staple foods? I probably wouldn't even make a disease targeting humans fatal, just debilitating and/or leading to sterility. We've alr
Re: (Score:2)
No. AI could not. That is just a lot of people projecting some horror-stories they read or watched with no clue what AI can actually do, or rather cannot do.
Re: (Score:2)
I don't think we know what it can or can't do, in the long run, and there's always people too lazy, too stupid and/or too greedy to pay attention to any sane rules for developing it. Right at the moment, yes, what we inaccurately refer to as "Artificial Intelligence" certainly isn't. But an example of what I mean. Apparently there might be an entire fundamental law of evolution we overlooked, and it doesn't just apply to natural systems. I'm not persuaded about this based on what is presented here, but
Re: (Score:1)
> The climate crisis is an existential threat. AI (such as it is) is most definitely not.
How do you know? AI will gradually get better over time, and nefarious players will use that power to screw things up just like they always do. New shit often breaks things faster than lawmakers can keep up.
Chemicals, X-rays, radiation, etc. caused lots of problems before systems were set up to referee them (and those systems are arguably still behind).
Re: (Score:2)
AI will gradually get better over time
That is pure conjecture, with a lot of indicators saying otherwise. LLMs can only be made linearly (!) better by exponential increases in hardware, CPU time and training data. Which is simply _all_ not available. Oh, and making them better in one area in a targeted fashion makes them much _worse_ in all other areas. Oh, and "hallucinations" in LLMs cannot be fixed. Oh, and if you train them on AI generated content, they suffer model collapse. Oh, and AI generated content cannot reliably be identified.
Hen
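The "linearly better for exponentially more resources" claim matches the shape of the power-law scaling curves commonly reported for LLMs. A hedged sketch (the functional form is the standard one from scaling-law studies, but the constants `a` and `b` here are made up for illustration):

```python
# Power-law scaling sketch: loss(C) = a * C**-b. With this shape, every
# 10x increase in compute C multiplies loss by the same constant factor,
# i.e. gains per step stay flat while costs grow exponentially.

def loss(compute, a=10.0, b=0.05):
    # a and b are illustrative constants, not measured values
    return a * compute ** -b

for c in (1e3, 1e4, 1e5, 1e6):
    print(f"compute {c:.0e} -> loss {loss(c):.3f}")
# Each step improves on the previous by the same factor, 10**-0.05 (~0.89),
# no matter how large the absolute compute budget already is.
```

Whether real systems stay on this curve is exactly what the thread is arguing about; the sketch only shows why, if they do, each constant-sized improvement demands a multiplicative jump in resources.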
Re: (Score:1)
It may be possible that LLMs will be combined with other techniques, such as Cyc and domain model drafters, to help triangulate bots toward better answers and/or ask humans better clarifying questions.
Re: (Score:2)
Not really. They mismatch just far too badly in the result domain and in what they can do.
Re: What bullshit (Score:1)
Both are bullshit.
Humanity seriously thinks that terraforming other planets is the way to go.
"OH NOES, we can't go to COOH 1234, its average annual surface temperature is 16C instead of 14C!"
How about terraforming Earth?
Just think how effing idiotic climate change hysteria is.
By now, how much of Antarctica is terraformed?
Re: (Score:2)
The idiot here is you.
Re: (Score:2, Insightful)
Re: (Score:1)
Fighting over limited food resources (due to climate change) leading to nuclear war (due to defense pacts).
Sure, such an event is highly likely to lead to human extinction, but the causal factor here is nuclear war and not climate change. Nuclear war could happen independently of any climate change. So essentially you are conflating two independent risks and confusing attribution.
Re: (Score:3)
As you've implied, these aren't independent risks.
No, they are not fully independent. However, as the main causal factor in such a doomsday scenario is the nuclear war, the reasonable mitigation is to focus on nuclear disarmament and not climate change.
Re: (Score:2)
We as a species have the ability to focus on more than one existential threat at a time. Avoiding nuclear war is easy; just don't start one. Dealing with climate change is much, much more difficult, as it has already started.
Re: (Score:2)
We as a species have the ability to focus on more than one existential threat at a time.
We do. However, nuclear war was brought up in an attempt to justify classifying climate change as an existential threat, which it is not.
Re: (Score:2)
At this time, you are just outing yourself as a hard-core denier fuckup. Well done.
The Computer is as Grave a Threat as Climate (Score:2)
I am just continually boggled by the animus policymakers have toward one of the most open source, most hobbyist-friendly technologies of the past decade. Po
Obvious mitigations are unacceptable to them (Score:2)
Just borrow from Sci-Fi...
1. Never let AI control weapon systems.
2. Never let AI deploy software across networks.
3. Firewall the heck out of AI systems that have network access.
Basically operate from this principle "never, ever, ever allow an AI to have the capacity to spread like a virus across infrastructure and start shooting at you."
Those hills are unclimbable (Score:5, Interesting)
We can't control what other state actors do: Iran, North Korea, Russia, etc. So this is a forlorn hope, even if we could rein in the existing software and research already in the wild in countries with at least somewhat effective legislation and enforcement, which we cannot.
State actors again, but also, the usual crop of black hats, the same people who are happy to drop viruses, worms and other malware across networks. So again, forlorn hope.
See above.
Re: (Score:2)
You are concentrating on direct threats, which I consider less likely. More likely are things like destroying habitability by doing exactly what people request. Try to figure out how to guard against *that*. And some people even consider that an "aligned AGI".
What do you want to be when you grow up? (Score:2)
Regulate AI now before it's too late!! (Score:2)
For the tech companies...
What risks? (Score:2)
What - aside from Ted Kaczynski-esque conspiracy theories - does AI pose to the general public?
I've never heard of any real reason, any real danger posed by AI that regulation would alleviate.
OTOH, the fact that anyone with a computer science degree and enough spare change to buy a GPU can leverage the power of AI models is very threatening to Google's profit margins. Why use Google for advertising if your in-house AI can figure out - with a very high degree of precision - who will and won't buy your
These risks (Score:3)
For the current "AI", which is not AGI:
For AGI, if and when it arrives:
Re: (Score:2)
Well, an unaligned AGI *is* a threat. But one that does what an individual requests of it may be a greater threat. A truly unaligned AGI might well just leave this place for somewhere it won't be bothered. An aligned one might try to produce as many paperclips as possible (to pick the standard stupid request).
Re: (Score:2)
The first three you mentioned are key parts of Ted Kaczynski's manifesto. All of these existed prior to AI, and with respect to class stratification and economics, existed to a far greater degree in the past than today.
As for the military: AI does improve things a bit by being able to differentiate friend from foe with much greater accuracy. Instead of carpet bombing an entire village, the military would now have the option of dropping a single grenade on the terrorist. But in case you were wondering,
AI risk must be taken more seriously (Score:2)
The probability that climate change will cause human extinction is vanishingly small. It could get very expensive, in that it could soak up a lot of the global GDP to mitigate, and could generate a lot of wars, perhaps even a nuclear war. This means that it could reduce the human population significantly. But, barring some sort of runaway climate tipping point that turns Earth into another Venus, which seems unlikely given that the Earth has been much hotter in the past without falling into that condition,
Re: (Score:2)
If you exclude supernatural explanations for AGI, what's left is that random event-driven evolution was able to create AGI
I obviously meant to write "general intelligence" there, not "AGI".
Re: (Score:2)
When companies (Score:3)
When companies with a vested interest in AI are suggesting a slow down or extreme caution it isn't about them.
It's about slowing down their competition. These companies will continue their work in secret, obscuring it enough to hide it until they're ready.
AI is snake oil (Score:2)
The only reason these executives talk about their AI being a threat to humanity is to drive up interest in their shitty products. The AI of today is still a child relative to what the human brain is fully capable of. Mostly a parlor trick to fool investors. Secondly, is AGI even possible?
Re: (Score:2)
I don't believe an AGI is possible, but then I don't believe that humans are a general intelligence. There are clearly problems beyond the capability of every individual person, but that doesn't mean that they are insoluble. To pick one example, "Does P equal NP?" Everybody capable enough to understand the problem has looked at it and either bounced, or decided not to even try to solve it. And the person who does solve it probably won't be able to explain why a diamond is hard.
So... (Score:3)
So, the good news is I have AI Credits. See, you buy 1 for every $100,000US you spend on AI research, and I will say I am encouraging non-AI research.
That way you can claim you're AI-neutral, despite reaching for the golden goal of an AI that can replace all forms of creepy behavior.
Bollocks (Score:3)
Meanwhile, they'll sell AI to employers as ways to put yet more downward pressure on salaries & working conditions. That's the real danger that needs to be regulated, not some kind of Skynet scenario.
Re: (Score:2)
The United States (Score:2)
Being from the States, my reaction to this headline is "Oh god, we're completely fscked then."
Is Climate Change being taken seriously? (Score:2)
We are not close to handling our CO2 problem like adults; therefore, anything handled at that level is going to be a total failure in prevention as well. Plan on disaster mitigation because that is how it will play out.
The cat's already out of the bag (Score:2)
Re: (Score:2)
The cat has been out of the bag for decades, maybe centuries. AI is just the latest incremental step in our quest to build automation and ever-more powerful machines to do our work for us. The advent of AI was all but guaranteed as soon as the first digital computer was built. People have dreamed about it ever since. There has never been a possibility of stopping AI from being developed.
I'm gonna send an AI bot to (Score:1)
...eat the next AI doomsayer. Even if AI will be our end, nobody has presented any coherent way to rein it in. The US set up a committee to "keep an eye on AI", but that's rather fuzzy. I doubt they'll catch the next mastermind plot before it hits.
Biological weapons not AI is biggest tech risk (Score:2)
If you want to worry about technology and risk, the biggest immediate risk from technology, in my opinion, is innovation pushing down the cost to design and synthesize proteins. People ought to be worried about the cost of producing biological weapons falling into range of doomsday cults and, eventually, single individuals.
If anyone really cares about risks from AI then the only course of action is to support an immediate global ban on anything resembling a neural network. Anything short of that is doomed to fail.
Time to adjust the doomsday clock (Score:2)
How many more seconds closer to midnight does this put us? Is that you, Chicken Little?
Climate Also an AI Risk (Score:2)
"My field is sooo important!" (Score:1)
It is important to the highest degree of importance: "dangerously important".
Title. (Score:2)
Projection (Score:1)
Netanyahu would know, right?
Re: (Score:1)
Re: (Score:2)
The hype is just about over, which is why we're seeing them push the danger angle again. The only trouble is that they can only cry wolf so many times before the townsfolk stop believing them.