
A 'Godfather of AI' Remains Concerned as Ever About Human Extinction (msn.com)
Yoshua Bengio called for a pause on AI model development two years ago to focus on safety standards. Companies instead invested hundreds of billions of dollars into building more advanced models capable of executing long chains of reasoning and taking autonomous action. The A.M. Turing Award winner and Université de Montréal professor told the Wall Street Journal that his concerns about existential risk have not diminished.
Bengio founded the nonprofit research organization LawZero earlier this year to explore how to build truly safe AI models. Recent experiments demonstrate AI systems in some circumstances choose actions that cause human death over abandoning their assigned goals. OpenAI recently insisted that current frontier model frameworks will not eliminate hallucinations. Bengio, however, said even a 1% chance of catastrophic events like extinction or the destruction of democracies is unacceptable. He estimates advanced AI capable of posing such risks could arrive in five to ten years but urged treating three years as the relevant timeframe. The race condition between competing AI companies focused on weekly version releases remains the biggest barrier to adequate safety work, he said.
We have seen (Score:1)
Plots (Score:2)
It ends better in "Voyage from Yesteryear" (1982) (Score:2)
by James P. Hogan, because people have moved beyond the zero-sum competition of capitalism to an economic theory of infinite abundance.
https://en.wikipedia.org/wiki/... [wikipedia.org]
"The Mayflower II has brought with it thousands of settlers, all the trappings of the authoritarian regime along with bureaucracy, religion, fascism and a military presence to keep the population in line. However, the planners behind the generation ship did not anticipate the direction that Chironian society took: in the absence of conditioning a
Re: (Score:2)
You do know that movies are made up, right?
I...kinda want to watch. (Score:2)
I wish I could be in the room with Sam Altman when the AI releases the fungus that ends the carbon cycle on our planet. When he realizes every human on the planet is going to be dead and it was his fault. I won't be, but my death won't be without a certain bizarre satisfaction.
Re: (Score:2)
executing long chains of reasoning (Score:1)
Re: (Score:3)
Look up "Chain of Thought" for LLMs. That's a technical term for a very specific type of LLM output on which reasoning/thinking models are trained.
Re: (Score:2)
Odd name for really, really ....... long if elseif type of statements.
No, it is not.
Though "chains of reasoning" is dubious for other reasons.
They're token streams that condition the model's subsequent generation.
In some cases, the model reasons well within them. In some cases, it doesn't.
And most confusingly, the quality of the final output isn't directly correlated with the quality of the reasoning in the chain; generation only depends on the token stream having been produced.
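The point above can be made concrete with a toy sketch: a "chain of thought" is just a token stream the model emits before its final answer, which tooling then separates out. The `</think>` delimiter here is a stand-in assumption for illustration, not any particular vendor's real format.

```python
# Toy illustration: split a (hypothetical) raw model output into the
# "reasoning" token stream and the final answer. The "</think>" delimiter
# is an assumption for this sketch, not a real API contract.
def split_cot(raw: str, delim: str = "</think>") -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty if no delimiter."""
    if delim in raw:
        reasoning, _, answer = raw.partition(delim)
        return reasoning.strip(), answer.strip()
    return "", raw.strip()

raw_output = "First, 17 * 3 = 51. Then 51 + 9 = 60.</think>The answer is 60."
reasoning, answer = split_cot(raw_output)
print(reasoning)  # the intermediate token stream the model generated
print(answer)     # the part a user normally sees
```

Note that nothing in this mechanics guarantees the answer actually follows from the reasoning tokens, which is exactly the parent's point.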
OpenAI doesn't care (Score:3)
Propose a mechanism that doesn't require trust (Score:1)
It would, indeed, be *highly* preferable to pause, or at least slow, AI development in order to design and implement safeguards. But there are multiple groups striving to capture the first-mover advantage. Anyone who slows development will be bypassed. And they aren't all under the same legal system, so that approach won't work either.
Consider https://ai-2027.com/ [ai-2027.com] . That's a scenario that currently seems a bit conservative, if you check the postulated timeline against what's been (publicly) happening.
Re: (Score:2)
To the extent that I've been able to check their predictions, they have checked out, or been surpassed, so far. Admittedly this is a short time to check. Exponential curves are hard to envision, and the question is always when they will level off...but I see no obvious reason to expect the curves to have already leveled off.
Re: (Score:2)
Even promising to pause AI development while continuing to work on it in secret government labs still has the beneficial effect of slowing progress since you don't also have the private sector and universities throwing all of their resources at it.
Re: (Score:2)
Consider https://ai-2027.com/ [ai-2027.com] . That's a scenario that currently seems a bit conservative
The timeline relies heavily on AI assisted research acceleration, leading to new algorithms. If that doesn't happen, then their timeline will fall apart. So far, AI hasn't seemed to do much to accelerate fundamental research (despite some flashy headlines), so that is a crucial change that will need to happen for them to be correct.
The Big Dumb is here (Score:2)
Everything is getting dumber as upper level management continues to cling to their positions of power, completely unaware of their impact, for as long as they can. This is why "LLMs" and diffusion models (which are a red herring) can become useful, when the baseline is someone who never faces consequences.
Law Zero - Asimov Style (Score:2)
Re: (Score:2)
The information you provide MUST be provably true and correct
That's not law zero. Law zero was an override that allowed short-term harm to humans for long-term benefit to humanity overall, allowing (can't remember the name) to do something to speed up the atomic decay on Earth to push people to start exploring the stars.
Re: (Score:2)
Re: (Score:3)
This is the Zeroth law:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Re: (Score:2)
This is the Zeroth law:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Thank you. Been a while since I'd actually read it. I just remembered it caused an internal conflict for the bot that acted on it, due to the other laws sorta clashing with it.
AI isn't needed (Score:3)
AI might bring about extinction quicker, but make no mistake: we're on the highway to hell, no stop signs, or speed limit.
Re: (Score:2)
it is impossible that the improbable won't happen (Score:4, Insightful)
even a 1% chance of catastrophic events like extinction or the destruction of democracies is unacceptable
...it is impossible that the improbable won't happen.
The only solution is to limit individual systems... define a sort of Kelly criterion for AI, where a single big failure does not mean extinction. E.g. for military robots, mandate that any model/manufacturer/dataset/etc. be limited to, say, 5% of the entire robot fleet... don't let them share code or collaborate. We *want* them to have different bugs. That way, if there's a glitch/feature/emergence someplace, it's limited.
Same for medicine/treatments synthesized with the help of AI... only let 5% of the population benefit from any individual "solution". That way, if there's an extinction gene-editing virus, only 5% of the population is impacted, etc.
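The arithmetic behind the 5% cap can be sketched in a few lines. This is a minimal illustration of the comment's idea, assuming supplier-specific bugs are independent; the function names are made up for the sketch.

```python
# Sketch of the 5%-cap idea: with one shared codebase, a single critical
# bug reaches the whole fleet; with suppliers capped at 5% each and
# (assumed) independent codebases, one bug reaches at most 5%, and a
# simultaneous critical bug across all suppliers is vanishingly unlikely.
def worst_case_exposure(cap: float) -> float:
    """Fraction of the fleet a single supplier-specific bug can reach."""
    return cap

def all_fail_probability(p_bug: float, n_suppliers: int) -> float:
    """Chance every independent supplier hits a critical bug at once."""
    return p_bug ** n_suppliers

# Monoculture: one supplier, so one bug reaches everything.
print(worst_case_exposure(1.0))
# Capped, diversified fleet: 20 suppliers at 5% each.
print(worst_case_exposure(0.05))
print(all_fail_probability(0.01, 20))
```

The independence assumption is the weak point, of course: shared base models, shared training data, or shared libraries would correlate the failures and erode the benefit.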
Re: (Score:3)
I've been telling chumps for some time now that a monoculture of software and management ideas has led us to a place where one level-10 vulnerability can open the way to hacking vast numbers of systems. So, obviously, no one listens to me, but I think that means your scenario won't happen. It seems to be common (groupthink) knowledge that buying or renting pre-built software or software services is the ONLY way to go. Like using libraries and gluing a system together with Ruby o
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The only solution is to limit...
So how is that a solution? Do you think Russia, China, Iran, or you name a hundred other countries, are going to follow your suggested limits? Why would they do that? They wouldn't, any more than they limited their nuclear weapon production as people wrung their hands in worry.
AI your invisible digital panopticon (Score:2)
This digital surveillance undermines privacy and autonomy, replacing direct oversight with algorithmic governance. As a result, individuals regulate their o