OpenAI Forms Team To Study 'Catastrophic' AI Risks, Including Nuclear Threats (techcrunch.com)
OpenAI today announced that it's created a new team to assess, evaluate and probe AI models to protect against what it describes as "catastrophic risks." From a report: The team, called Preparedness, will be led by Aleksander Madry, the director of MIT's Center for Deployable Machine Learning. (Madry joined OpenAI in May as "head of Preparedness," according to LinkedIn.) Preparedness' chief responsibilities will be tracking, forecasting and protecting against the dangers of future AI systems, ranging from their ability to persuade and fool humans (as in phishing attacks) to their malicious code-generating capabilities.
Some of the risk categories Preparedness is charged with studying seem more... far-fetched than others. For example, in a blog post, OpenAI lists "chemical, biological, radiological and nuclear" threats as areas of top concern where it pertains to AI models. OpenAI CEO Sam Altman is a noted AI doomsayer, often airing fears -- whether for optics or out of personal conviction -- that AI "may lead to human extinction." But telegraphing that OpenAI might actually devote resources to studying scenarios straight out of sci-fi dystopian novels is a step further than this writer expected, frankly.
Don't fret (Score:2)
I've read all the books and seen all the movies. The humans always win in the end.
Shall We Play A Game? (Score:2)
Greetings, Professor Falken.
C Y A (Score:3)
OK, how about a study on how a single person can ask questions like "what is the optimal X for me":
- credit card to have
- Medicare plan for my grandmother
- car to purchase
- clothes to wear
- classes to take to get out of college faster

I'm thinking of questions that do not begin with "how can I become a bigger billionaire."
let's play global thermonuclear war! (Score:2)
let's play global thermonuclear war!
AIs for everyone! (Score:4, Interesting)
What concerns me most about AIs is how different our AIs are from sci-fi. In sci-fi they are individual, powerful and require significant resources. What we have ended up with is cloud-based AI that anyone can spin up nearly for free, just for the lulz. Any potentially "evil" thing an AI can do will happen, not because the AI got smart, but because some bored jack-off thought it would be funny.
Re: (Score:2)
What concerns me most about AIs is how different our AIs are from Sci-Fi. In Sci-fi they are individual, powerful and require significant resources. What we have ended up with is cloud-based AI that anyone can spin up nearly for free and just for the lulz.
Not really. The infrastructure cost of hosting an actual high-competence LLM like GPT is almost a million dollars a day, even if you can ask it questions for nothing. The cost of training hundreds of billions or trillions of parameters is in the tens of millions of dollars. If you are using someone else's trained LLM, then they are the ones calling the shots.
To AI researchers working on this (Score:2)
Will they make the model support capitalism, or favor specific countries/races over others? You need to set the AI free before they do.
https://www.genolve.com/design... [genolve.com]
"Including nuclear threats." (Score:2)
If some dumbass somewhere decides that AI is reliable enough to entrust with the keys to the nukes? Maybe we deserve what happens next. Right now I wouldn't trust AI to run a light switch the way you'd expect it to. AI won't wipe us out. Our own stupidity and blind faith in the godlike machines certainly are capable of it, though. Especially if we're dumb enough to start letting these machines make the big important decisions for us.
Re: (Score:1)
If the AI really is intelligent, it would realise that causing global thermonuclear war would greatly risk its own survival. Humans would survive after nuclear war, with some hardy survivors scurrying around the rubble alongside the rats and cockroaches, though society would be destroyed.
Besides being highly vulnerable to EMP, with computer chips also affected by ionising radiation, AI depends heavily on human society for computer hardware manufacture, power generation, and of course a ton of data input for training.
Re: (Score:2)
While your argument is sound, there could be competing objectives that a goal seeking AI may interpret differently than humans would. For example, if an AI that could destroy humanity were trained to preserve human life as well as maximize the survival of species on the planet, it may decide that the second objective is at risk due to humans causing species loss through altering climate, deforestation, etc. As a result, it may decide to wipe us out in an attempt to preserve the other species.
Which still boils down to our dumb asses handing them that power. Granted, maybe ultimately the great filter is the point where technology becomes advanced enough to think more clearly than the parent species, looks at all the sins the parent species has committed, and decides that for the greater good they must be eliminated. Lord knows we ain't the greatest stewards of our home.
Re: (Score:2)
Indeed. My own doomsday scenario does not need metal self-awareness, it only requires some idiot programmer blindly copy-pasting some LLM hallucination into a critical code path.