Meta Chief AI Scientist Yann LeCun Plans To Exit To Launch Startup
According to the Financial Times (paywalled), Meta's Chief AI Scientist Yann LeCun, a deep-learning pioneer and Turing Award winner, is reportedly leaving the company to launch his own startup. Reuters reports: The owner of Facebook and Instagram has significantly increased its investments in artificial intelligence, with CEO Mark Zuckerberg reorganizing the company's AI initiatives under Superintelligence Labs. Zuckerberg hired Alexandr Wang, former CEO of data-labeling startup Scale AI, to lead the new AI effort. As a result, LeCun, who had reported to chief product officer Chris Cox, is now reporting to Wang, the report said.
The company began investing in AI in 2013 by launching the Facebook Artificial Intelligence Research (FAIR) unit and recruiting LeCun, who is a known skeptic of the large language model path to superintelligence. LeCun is also a Silver Professor of data science, computer science, neural science and electrical and computer engineering at New York University, according to his LinkedIn page. He is known for his work in deep learning and the invention of the convolutional neural network, which is widely used for image, video and speech recognition.
Possible in favor of open source (Score:3, Interesting)
Rumor has it that he does not like Meta's apparent change of course on releasing models as open source. The Llama 4 release didn't work out that well, people are seriously doubting we will see Llama 5 with open weights, and LeCun was in favor of the open releases.
Re: (Score:3)
An open-weight model is like an MP4 movie: they give you a compressed data file and tell you to run it in your favourite (movie) player software.
Re:Possible in favor of open source (Score:4, Interesting)
No one wants to reveal their levels of mass illegal pillaging.
Re: (Score:2)
I think open weights are more useful to most people than open training data. Training DeepSeek R1 cost the compute equivalent of $300,000 in GPU cloud costs, and that's one of the cheapest large models. Fine-tuning DeepSeek R1 can be done for a few hundred dollars, because you have the open weights.
Turing Award winner (Score:4, Funny)
I read this story before. (Score:2)
I hope he does better than Christopher Mitchell, from Maas Biolabs.
Re: (Score:2)
Farkin' slam hounds!
So .. more money (Score:3)
Venture capitalists have so much money to throw away, everyone wants their piece of the cake. No surprise. Bubble bursting in 3, 2...
Re: (Score:2)
I can't blame him. If I had the talent I'd run one of those get-rich-quick scams too.
Just inflate the value of your company with VC money for a few "rounds" until a big player buys you, then take the money and run.
It's the standard business model in Silicon Valley really.
Pytorch leader also leaving (Score:3)
https://soumith.ch/blog/2025-1... [soumith.ch]
Related?
This is a good thing (Score:2)
Meta wants to misuse AI tech to create fake friends who will likely try to sell you stuff.
He needs to work on AI that can help us solve previously intractable problems in science, engineering, medicine, etc.
Of course. (Score:2)
He knows he can become a billionaire. Of course he left.
A more interesting article (Score:3)
https://techstartups.com/2025/... [techstartups.com]
LeCun has grown frustrated with shrinking budgets, layoffs within his lab, and the reallocation of computing resources to support generative AI projects. Meta’s decision to prioritize scaling large language models for consumer products, including chatbots and AI assistants, has sidelined FAIR’s exploratory work.
LeCun has long argued that today’s large language models, or LLMs, are limited because they rely on statistical pattern matching rather than real reasoning. Earlier this year, he posted on X: “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.” His research focuses on “world models,” systems that learn by observing and predicting the physical world—a path he believes will lead to AI that can reason, plan, and understand cause and effect.