
Co-Founder of xAI Departs the Company (techcrunch.com)
Igor Babuschkin, co-founder of xAI, has left the company to start Babuschkin Ventures, a VC firm focused on AI safety and humanity-advancing startups. TechCrunch reports: Babuschkin led engineering teams at xAI and helped build the startup into one of Silicon Valley's leading AI model developers just a few years after it was founded. "Today was my last day at xAI, the company that I helped start with Elon Musk in 2023," Babuschkin wrote in the post. "I still remember the day I first met Elon, we talked for hours about AI and what the future might hold. We both felt that a new AI company with a different kind of mission was needed."
Babuschkin is leaving xAI to launch his own venture capital firm, Babuschkin Ventures, which he says will support AI safety research and back startups that "advance humanity and unlock the mysteries of our universe." The xAI co-founder says he was inspired to start the firm after a dinner with Max Tegmark, the founder of the Future of Life Institute, in which they discussed how AI systems could be built safely to encourage the flourishing of future generations. In his post, Babuschkin says his parents immigrated to the U.S. from Russia in pursuit of a better life for their children.
Prior to co-founding xAI, Babuschkin was part of a research team at Google DeepMind that pioneered AlphaStar in 2019, a breakthrough AI system that could defeat top-ranked players at the video game StarCraft. Babuschkin also worked as a researcher at OpenAI in the years before it released ChatGPT. In his post, Babuschkin details some of the challenges he and Musk faced in building up xAI. He notes that industry veterans called xAI's goal of building its Memphis, Tennessee supercomputer in just three months "impossible." [...] Nevertheless, Babuschkin says he's already looking back fondly on his time at xAI, and "feels like a proud parent, driving away after sending their kid away to college." "I learned 2 priceless lessons from Elon: #1 be fearless in rolling up your sleeves to personally dig into technical problems, #2 have a maniacal sense of urgency," said Babuschkin.
Butlerian Jihad? (Score:3)
a VC firm focused on AI safety and humanity-advancing startups.
There is no "AI safety".
"humanity-advancing startups" like what, a normal business that employs humans and doesn't replace them with AI?
Re: (Score:2)
Agreed, "AI safety" is neither practical nor possible, in part because AI has no awareness or meaning, in part because AI has no capacity for introspection, but also because AI is only useful if it can handle hard questions and hard questions are, by their nature, not safe, and (as usual) because it would utterly destroy the entire economic model of the AI companies.
I can't find any obvious evidence that the guy really knows what "humanity-advancing" means, beyond advancing his own take on the world.
Re:Butlerian Jihad? [One-dimensional implosion?] (Score:2)
Mostly disappointed the important topic didn't elicit more reactions on Slashdot, but responding here because I'm missing the reference in your Subject. Care to explain? I'm guessing Butler might be an author rather than an AI butler of some sort?
As regards the story, I think it's describing a paradox and this joke will not end well. (There was one funny comment on the story, but not so much... Certainly not an LOL there.) I'm not sure if it is the primary threat posed by AI, but creating AIs for the sake o
Which translates to... (Score:2)
Elon is mental and I've made enough money now that I'm alright Jack. I will now concentrate on rehabilitating / whitewashing my past behaviour by saying that I'll be funding "philanthropic" research on AI Safety, even though the damage may already have been done.
Re: (Score:3)
He clearly got tired of trying to build the Antichrist [x.com] ;)
Re: (Score:2)
Ironic from the guy who sells 'environmentally friendly' Teslas. In quotes because, out the factory door, an EV takes significantly more energy to build than an ICE car (such that it takes ~8 years for the EV to overtake the ICE in total carbon).
Any sources for this?
Re:xAI, power gobbler (Score:5, Funny)
Anonymous (2021). "How My Uncle’s Friend’s Mechanic Proved EVs Are Worse." International Journal of Hunches, 5(3), 1-11.
Backyard, B. (2018). "EVs Are Worse Because I Said So: A Robust Analysis." Garage Journal of Automotive Opinions, 3(2), 1-2.
Dunning, K. & Kruger, E. (2019). "Why Everything I Don’t Like Is Actually Bad for the Environment." Confirmation Bias Review, 99(1), 0-0.
Johnson, L. & McFakename, R. (2022). "Carbon Footprint Myths and Why They Sound Convincing After Three Beers." Annals of Bro Science, 7(2), 1337-42.
Lee, H. (2025). "Numbers I Felt Were True". Global Journal of Speculative Engineering, 22(1), 34-38.
Outdated, T. (2015, never revised). "EVs Are Bad Because of That One Study From 2010 I Misinterpreted." Obsolete Science Digest, 30(4), 1-5.
Tinfoil, H. (2020). "Electric Cars Are a Government Plot (And Other Things I Yell at Clouds)." Conspiracy Theories Auto, 5(5), 1-99.
Trustmebro, A. (2019). "The 8-Year Rule: Why It’s Definitely Not Made Up." Vibes-Based Research, 2(3), 69-420.
Wrong, W. (2018). "The Art of Being Loudly Incorrect About Technology." Dunning-Kruger Journal, 1(1), 1-?.
Re:xAI, power gobbler (Score:4, Insightful)
The average ICE car burns roughly its own mass in fuel every year. Up in smoke into the air we breathe, gone, no recycling.
The average car on the road lasts about two decades, and is then recycled, with the vast majority of its metals recovered.
The manufacturing phase is not the phase you have to worry about when it comes to transportation.
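A rough sanity check of that claim, as a few lines of Python. The annual mileage, fuel economy, fuel density, and curb weight figures below are assumptions chosen for illustration, not numbers from the comment:

    # Back-of-envelope: does a typical ICE car burn roughly its own mass in fuel per year?
    miles_per_year = 12_000                       # assumed average annual mileage
    mpg = 25                                      # assumed average fuel economy
    gallons_burned = miles_per_year / mpg         # about 480 gallons of gasoline per year
    kg_per_gallon = 2.8                           # approximate mass of a US gallon of gasoline
    fuel_mass_kg = gallons_burned * kg_per_gallon # roughly 1,340 kg of fuel burned per year
    curb_weight_kg = 1_500                        # assumed mass of an average passenger car
    print(f"fuel burned per year: {fuel_mass_kg:.0f} kg, "
          f"{fuel_mass_kg / curb_weight_kg:.0%} of the car's mass")

With those assumed figures the yearly fuel burn comes out close to the car's own mass, which is the order-of-magnitude point the comment is making.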
If 2 is now "a few" is 1 now "a couple"? (Score:5, Funny)
...and helped build the startup into one of Silicon Valley's leading AI model developers just a few years after it was founded. "Today was my last day at xAI, the company that I helped start with Elon Musk in 2023," Babuschkin wrote.
(Obligatory) Of course, I'm a /. old-timer so 1 is the only couplehood I'll ever know.