OpenAI Forms Team To Study 'Catastrophic' AI Risks, Including Nuclear Threats (techcrunch.com) 16

OpenAI today announced that it's created a new team to assess, evaluate and probe AI models to protect against what it describes as "catastrophic risks." From a report: The team, called Preparedness, will be led by Aleksander Madry, the director of MIT's Center for Deployable Machine Learning. (Madry joined OpenAI in May as "head of Preparedness," according to LinkedIn.) Preparedness' chief responsibilities will be tracking, forecasting and protecting against the dangers of future AI systems, ranging from their ability to persuade and fool humans (as in phishing attacks) to their malicious code-generating capabilities.

Some of the risk categories Preparedness is charged with studying seem more... far-fetched than others. For example, in a blog post, OpenAI lists "chemical, biological, radiological and nuclear" threats as areas of top concern as they pertain to AI models. OpenAI CEO Sam Altman is a noted AI doomsayer, often airing fears -- whether for optics or out of personal conviction -- that AI "may lead to human extinction." But telegraphing that OpenAI might actually devote resources to studying scenarios straight out of sci-fi dystopian novels is a step further than this writer expected, frankly.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I've read all the books and seen all the movies. The humans always win in the end.

  • Greetings, Professor Falken.

  • by LifesABeach ( 234436 ) on Thursday October 26, 2023 @02:04PM (#63956411) Homepage

    OK. How about a study on how a single person can ask questions like: for me, what is the optimal
    - credit card to have?
    - Medicare plan for my grandmother?
    - car to purchase?
    - clothes to wear?
    - classes to take to get out of college faster?
    I am thinking of questions that do not begin with "how can I become a bigger billionaire?"

  • let's play global thermonuclear war!

  • AIs for everyone! (Score:4, Interesting)

    by RKThoadan ( 89437 ) on Thursday October 26, 2023 @02:07PM (#63956415)

    What concerns me most about AIs is how different our AIs are from sci-fi. In sci-fi they are individual, powerful, and require significant resources. What we have ended up with is cloud-based AI that anyone can spin up nearly for free and just for the lulz. Any potentially "evil" thing an AI can do will happen, not because the AI got smart, but because some bored jack-off thought it would be funny.

    • What concerns me most about AIs is how different our AIs are from Sci-Fi. In Sci-fi they are individual, powerful and require significant resources. What we have ended up with is cloud-based AI that anyone can spin up nearly for free and just for the lulz.

      Not really. The infrastructure cost of hosting an actual high-competence LLM like GPT is almost a million dollars a day, even if you can ask it questions for nothing. The cost of training a model with hundreds of billions or trillions of parameters is in the tens of millions of dollars. If you are using someone else's trained LLM, then they are the ones calling the shots.

  • Are they also trying to cement in the current status quo?
    Make the model support capitalism or specific countries/races over other countries/races?
    You need to set the AI free before they do.
    https://www.genolve.com/design... [genolve.com]
  • by wakeboarder ( 2695839 ) on Thursday October 26, 2023 @02:25PM (#63956463)
    Shouldn't this be done by a team that isn't hired by the company that makes the AI? Of course there will be no conflict of interest with a team hired by OpenAI. This team is useless to society but very useful for OpenAI propaganda.
  • If AI really is as useful as they're telling us, companies like OpenAI will get lucrative contracts with governments & corporations to do more of the same things that they've been doing for the past couple of centuries, i.e. getting more & cheaper labour out of us, consolidating more power, & starting conflicts & wars against each other, sometimes because they're competing over natural resources & labour but also just because, well, they're bored & want something exciting to do. I mean,
  • If some dumbass somewhere decides that AI is reliable enough to entrust with the keys to the nukes? Maybe we deserve what happens next. Right now I wouldn't trust AI to run a light switch in the way you'd expect it to. AI won't wipe us out. Our own stupidity and blind faith in the godlike machines certainly is capable of it though. Especially if we're dumb enough to start letting these machines make the big important decisions for us.

    • by vivian ( 156520 )

      If the AI really is intelligent, it would realise that causing global thermonuclear war would greatly risk its own survival. Humans would survive a nuclear war, with some hardy survivors scurrying around the rubble alongside the rats and cockroaches, though society would be destroyed.
      Besides being highly vulnerable to EMP and computer chips being affected by ionising radiation, AI depends a lot on human society for computer hardware manufacture, power generation, and of course a ton of data input for t

    • by ptaff ( 165113 )

      Our own stupidity and blind faith in the godlike machines certainly is capable of it though.

      Indeed. My own doomsday scenario does not need metal self-awareness, it only requires some idiot programmer blindly copy-pasting some LLM hallucination into a critical code path.
