Is the Altruistic OpenAI Gone? (msn.com) 46

"The altruistic OpenAI is gone, if it ever existed," argues a new article in The Atlantic, based on interviews with more than 90 current and former employees, including executives. It notes that shortly before Altman's ouster (and rehiring) he was "seemingly trying to circumvent safety processes for expediency," with OpenAI co-founder/chief scientist Ilya Sutskever telling three board members "I don't think Sam is the guy who should have the finger on the button for AGI." (The board had already discovered Altman "had not been forthcoming with them about a range of issues" including a breach in the Deployment Safety Board's protocols.)

Adapted from the upcoming book, Empire of AI, the article first revisits the summer of 2023, when Sutskever ("the brain behind the large language models that helped build ChatGPT") met with a group of new researchers: Sutskever had long believed that artificial general intelligence, or AGI, was inevitable — now, as things accelerated in the generative-AI industry, he believed AGI's arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever's thinking.... To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?

By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan. "Once we all get into the bunker — " he began, according to a researcher who was present.

"I'm sorry," the researcher interrupted, "the bunker?"

"We're definitely going to build a bunker before we release AGI," Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. "Of course," he added, "it's going to be optional whether you want to get into the bunker." Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. "There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture," the researcher told me. "Literally, a rapture...."

But by the middle of 2023 — around the time he began speaking more regularly about the idea of a bunker — Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman's pattern of behavior was undermining the two pillars of OpenAI's mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.

"For a brief moment, OpenAI's future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened," the article concludes. Instead there was "a lack of clarity from the board about their reasons for firing Altman." There was fear about a failure to realize their potential (and some employees feared losing a chance to sell millions of dollars' worth of their equity).

"Faced with the possibility of OpenAI falling apart, Sutskever's resolve immediately started to crack... He began to plead with his fellow board members to reconsider their position on Altman." And in the end "Altman would come back; there was no other way to save OpenAI." To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we'll make our future better, not worse? The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be....
The author believes OpenAI "has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models..."

"At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it's also eroding their critical thinking."


Comments Filter:
  • by ffkom ( 3519199 ) on Saturday May 17, 2025 @04:39PM (#65383723)
    ... and you know that there never was an "altruistic OpenAI". Just some hardcore capitalists who happened to pick up the trend to call something "open" because this gained some PR points for free.
    • by quenda ( 644621 )

      Get yer hand off it, mate!

    • by gweihir ( 88907 )

      Indeed. Same approach as "Don't be evil". The one thing these people can do really well is use the "Big Lie" approach (https://en.wikipedia.org/wiki/Big_lie).
      And a lot of rather dumb people fall for it ...

  • Never existed (Score:5, Interesting)

    by hdyoung ( 5182939 ) on Saturday May 17, 2025 @04:39PM (#65383729)
    It was always done with an eye towards monetization, right from the start. But our economy has this weird grey-zone economic thing that amounts to "we're legally a non-profit, but not really; we want to make money, and everybody knows it, but nobody is gonna talk about it".

    Combine that with private ownership, and you've got a perfect recipe for complete murkiness, which can make doing business a lot easier than answering to those pesky shareholders and paying taxes.
  • Are you kidding me? Altruism at OpenAI has been dead so long that the idea that it ever existed is being questioned.

    • s/Altruism/Altmanism/g
    • by dvice ( 6309704 )

      I think the more important point is that OpenAI has never invented anything. So it doesn't really matter what they do.

      If you want to know who is actually developing AI, you need to read stories like this:
      https://deepmind.google/discov... [deepmind.google]
      "And in 20% of cases, AlphaEvolve improved the previously best known solutions, making progress on the corresponding open problems."

      In OpenAI language that would mean "We invented AI that would beat Einstein". Google has smart people, but they can't speak language that is u

  • Funny yet normal (Score:5, Informative)

    by peterww ( 6558522 ) on Saturday May 17, 2025 @04:57PM (#65383747)

    ...how borderline moronic and insane people who are talented and motivated can be. Sutskever seems simultaneously a genius and a crazy person. A computer program bringing about a rapture? Bunkers? AGI "in 10 years" (for the last 30 years)? Turns out this kind of genius-yet-crazy is pretty common.

    - Newton was super smart. But he also thought he could find the date for the rapture hidden in codes in random texts. And of course spent half his life studying alchemy.

    - Einstein had some really great ideas, but some stinkers too.

    - Alfred Russel Wallace, the guy who thought up evolution right before Darwin, was obsessed with seances to try to talk to the dead.

    - Joseph Priestley discovered oxygen, and used it to continuously try to justify a theory of 4 natural elements... and every scientist in the world used it to prove why there *aren't* four natural elements.

    - Francis Crick, the guy who discovered DNA, also said that DNA arrived on Earth because of aliens.

    - James Watson, the other guy who discovered DNA, firmly believes all black people are less smart than white people, because of his interactions with black employees. Oh and women scientists are more difficult to work with. And we should alter the genes of "inferior" people.

    People: we have nukes. NUKES. And they're not even controlled by strictly designed, non-AI computer programs to keep the system safe. They're controlled by humans. Humans like *Trump*. Crazy, insane, aggressive, emotional humans. Yet we aren't all dead yet. A lot of people did hide in bunkers, when nukes came out. But, amazingly, despite this _super dangerous technology_, we aren't all dead yet.

    • Yet we aren't all dead yet. A lot of people did hide in bunkers, when nukes came out. But, amazingly, despite this _super dangerous technology_, we aren't all dead yet.

      Yet. That doesn't mean it's not going to happen. Isn't it a good idea to try to control things that might kill us all, like nukes or bioweapons or ASI? Maybe we can delay or even completely avoid all being killed.

    • > we aren't all dead yet.

      Yet. Give it a bit more time.

    • People: we have nukes. NUKES. And they're not even controlled by strictly designed, non-AI computer programs to keep the system safe. They're controlled by humans. Humans like *Trump*. Crazy, insane, aggressive, emotional humans. Yet we aren't all dead yet. A lot of people did hide in bunkers, when nukes came out. But, amazingly, despite this _super dangerous technology_, we aren't all dead yet.

      If we were all dead you wouldn't be around to post this nonsense.

    • by tragedy ( 27079 )

      - Francis Crick, the guy who discovered DNA, also says that DNA arrived on earth because aliens.

      Technically, it was the structure of DNA that was discovered, not DNA itself. Also, technically the team credited was Crick and Watson, not just Crick. Also Rosalind Franklin's contribution to the discovery was probably the most important part since she directly imaged it.

      As for the theory of panspermia, it's not really all that crazy. The version that Crick apparently subscribed to where intelligent aliens actually spread DNA to other worlds is a bit more iffy. Still, depending on what assumptions you make

  • Bruh (Score:4, Informative)

    by systemd-anonymousd ( 6652324 ) on Saturday May 17, 2025 @04:58PM (#65383749)

    This question is at least three years too late. Around GPT-2 Sam Altman established his marketing/scam loop of:

    1. It is sooo dangerous we'll never ever EVER even show you, let alone release the weights!
    2. okay fine, you can look at it, but it's so dangerous we'll never ever let you touch it!
    3. okay fine, celebrities and influencers are allowed to touch it, but definitely not you, it's way too dangerous!
    4. okay fine, you can touch the thing, but it's so flippin' dangerous that we're going to have to charge you a fee, and those weights? no way, humanity can't handle that! only our for-profit non-profit that makes billion-dollar deals with defense contractors can be trusted with that!
    5. erm wow, you violated our terms of service for responsible usage of it. You've lost the possibility of early access to

    Also somewhere in there Sam Altman gets expelled from Kenya for refusing to comply with orders to stop scanning locals' irises in exchange for shitcoin.

  • by rsilvergun ( 571051 ) on Saturday May 17, 2025 @05:05PM (#65383761)
    We don't have a ruling class; or if we do acknowledge their existence, we all pretend they are benevolent benefactors and not cutthroat, bloodthirsty psychopaths.

    It's weird, because we grow up with movies telling us just how bad corporations and CEOs and Wall Street ghouls really are, and we are taught actual history about how god-awful they are. But then, when we get out into the real world, all it takes is a little tiny bit of charity giveaways here and there and suddenly we act like they are saints.
    • by ffkom ( 3519199 ) on Saturday May 17, 2025 @05:17PM (#65383771)

      we grow up with movies telling us just how bad corporations and CEOs and Wall Street ghouls really are, and we are taught actual history about how god-awful they are. But then, when we get out into the real world, all it takes is a little tiny bit of charity giveaways here and there and suddenly we act like they are saints.

      That certainly irks me, too, but to be fair: those psychopaths appear to have not only a negative impact; they also appear to be the ones who make investments happen (either by themselves or via gullible followers), some of which end up bringing some actual (technological) progress, which would otherwise have died in endless committee discussions about who should spend whose money on what.
      I think we should be able to see both the good and the bad in these people, and try to establish legislation that mitigates their ability to cause harm. And yes, praising them as saints after they spend part of their amassed wealth on charity is stupid.

      • by rsilvergun ( 571051 ) on Saturday May 17, 2025 @06:24PM (#65383849)
        The billionaires shut down virtually all investment, study, and advancement except for a handful of things coming out of the public university system. I knew a cancer researcher who left America because they couldn't get funding to research children's fucking cancer, because it wasn't profitable. Yeah, they could get funding in Europe, but the ruling class over there isn't any happier about not having that money themselves, so they're working on shutting it down.

        And most of the research (but who am I kidding, all of the research done since the '60s) is being done on the taxpayer's dime, the majority of which is paid not by the billionaire ghouls but by regular taxpayers like yourself.

        They are absolutely nothing but parasites. They do not contribute anything they just take and take and take and take and take. But they have a lot of money so they can spend it making you think that they are doing good.

        And US nerds are especially vulnerable to that, because we all grew up reading sci-fi books about the billionaire genius philanthropist who flew in space rockets. So guys like Elon Musk and Mark Zuckerberg can make us believe they are the heroes from the old sci-fi books we read in grade school.

        It's tough getting away from that kind of thinking and unpacking it. And so they keep getting away with robbing us blind. If this keeps up we won't have anything left.
    • by jythie ( 914043 )
      This is one of the reasons the right (esp. the christian-capitalist community) rails against media and education so much. They see it as intellectuals and Hollywood (i.e. the Jews and communists) warping the children. In their worldview, we are supposed to have a ruling class: everyone in their place, a well-defined hierarchy, as God and his economists intended.
      • "... an effete corps of impudent snobs who characterize themselves as intellectuals" --- Spiro T Agnew, bribe taker and convicted tax evader
  • Turns out that their push for AI safety was all about destroying the competition after all. Who would have guessed?

    • Destroying competition, but also muddying the waters about the actual dangers and what effective regulation would be. They get everyone talking about Skynet instead of the more mundane and real dangers, like people getting denied work and housing.

      Similar to when Congress starts talking about aliens. They fill up space with that, to avoid discussing the real problems. In Mexico they went so far as to lay out a papier-mache "alien" in their equivalent of Congress, and listen to some bozo regale them with tale

  • Thanks to AI we'll need to be in a bunker to avoid the flood of ads.

  • When you are not bitching that this technology isn't even real, or no more advanced than your sed script from 1994, you bitch that it is so evil and dangerous they should not be allowed to make a profit. If you picked a lane and stuck with it, you all might have a shred of credibility.
    • by Rujiel ( 1632063 )
      The capabilities of the software to replace you are a separate question from that of whether they want to replace you. Of course they do.
    • The capacity of the software to do stuff doesn't preclude people trying to have it do things it can't, and faceplanting in the process. When your doctor, or the guy administering your mortgage, or your employer faceplants, it's usually not good for you.

      I actually haven't heard much argument on Slashdot that AI shouldn't make a profit. I know that's what OpenAI itself claimed, apparently disingenuously.

      What I have heard on Slashdot is that AI incorporating copyrighted works shouldn't make a profit. That's ju

  • Few thoughts. (Score:5, Informative)

    by jd ( 1658 ) <imipak@FREEBSDyahoo.com minus bsd> on Saturday May 17, 2025 @06:11PM (#65383833) Homepage Journal

    1. Even the AI systems (and I've checked with Claude, three different ChatGPT models, and Gemini) agree that AGI is not possible with the software path currently being followed.

    2. This should be obvious. Organic brains have properties not present in current neural nets: localised, regionalised, and even globalised feedback loops within an individual module of a brain (the brain doesn't pause for inputs, but rather mixes any external inputs with synthesised inputs); the ability to run through various possible forecasts of the future and then select from them; and the ability to perform original synthesis between memory constructs and any given forecast, creating scenarios for which no input exists, to produce those aforementioned synthesised inputs. There simply isn't a way to have multi-level infinite loops in existing NN architectures.

    3. The brain doesn't perceive external inputs the way NNs do - as fully-finished things - but rather as index pointers into memory. This is absolutely critical. What you see, hear, feel, etc -- none of that is coming from your senses. Your senses don't work like that. Your senses merely tell the brain what constructs to pull, and the brain constructs the context window entirely from those memories. It makes no reference to the actual external inputs at all. This is actually a good thing, because it allows the brain to evolve and abstract out context, things that NNs can't do precisely because they don't work this way.

    Until all this is done, OpenAI will never produce AGI.

    • by methano ( 519830 )
      Wow! 1658! You must be from Michigan!
    • by gweihir ( 88907 )

      You also have an unproven assumption in there: physicalism. At this time, physicalism is belief and not science, because nobody knows how smart humans do what they do, and we reliably know that physical models of reality are grossly incomplete in this area.

      Hence the path to AGI is about as unclear as it can be. First understand what makes smart humans tick (the dumb ones are mostly reflex-automatons and probably are within reach now with some limitations, but they are useless), and then make predictions abou

      • Wow.

        First understand what makes smart humans tick (the dumb ones are mostly reflex-automatons and probably are within reach now with some limitations, but they are useless), and then make predictions about AGI.

        "Dumb people are automatons and useless". Damn man, even the people who have that view don't usually say it out loud. Rich nobility having open disdain for the unwashed masses isn't a good look. Giant corporate masters seeking to reduce costs by eliminating the unneeded masses is downright dystopian.

        But no, there is no "first". We are more than capable of developing the science and the state of the art in fields where we lack 100% understanding. Otherwise Arthur Eddington would never be able to e

    • We had a poll over this. Just WTF do you mean by "AGI"?

      The large language models mostly regurgitate back to you what's in their training set. So them saying AGI isn't possible is more a reflection of how egotistical Internet chatrooms are. But just DIG A LITTLE. It knows the term is being used differently by different groups. You're using the term like a philosopher or skeptic, or, you know, like how social media uses it: as a big scary boogeyman that's going to hunt down Sarah Connor. Think for

  • All our base are belong to Sam's OpenAI now!
    At least Bill Gates was honest about becoming rich and famous.

  • Musk was right about this one.
    I hope he wins in court and money-grubbing Altman loses it all.
  • That lie is only getting pushed to attract more stupid money from investors. They all know by now that AGI is not happening as part of this AI hype.

  • They are years late. "OpenAI" hasn't been open for many years now, and everybody who works with LLMs knows that. After GPT-2, they didn't release anything else. They promised earlier this year to release an open model again, and we're still waiting for it. And I bet that when they release it, we will see that it is censored as hell and comes with a non-commercial license. The open AI doesn't come from OpenAI anymore, but from companies like Mistral or Alibaba.

  • > Sutskever "the brain behind the large language models that helped build ChatGPT" ...

    Well, no ...

    LLMs, i.e. today's AI, are all based on the Transformer architecture, designed by Jakob Uszkoreit, Noam Shazeer et al. at Google.

    Sutskever, sitting at OpenAI, decided to play with what Google (Jakob, Noam) had designed, intrigued to see how much better it would get as it was scaled up.

    ChatGPT - the first actually usable LLM - came about through the addition of RLHF, turning a purely statistical generator into one

  • Money is the kryptonite of altruism.
