Yann LeCun Raises $1 Billion To Build AI That Understands the Physical World (wired.com) 61

An anonymous reader quotes a report from Wired: Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta's former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models. LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. "The idea that you're going to extend the capabilities of LLMs [large language models] to the point that they're going to have human-level intelligence is complete nonsense," he said in an interview with WIRED.

The financing, which values the startup at $3.5 billion, was co-led by investors such as Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other notable backers include Mark Cuban, former Google CEO Eric Schmidt, and French billionaire and telecommunications executive Xavier Niel. AMI (pronounced like the French word for friend) aims to build "a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe," the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025. [...]

LeCun says AMI aims to work with companies in manufacturing, biomedical, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability. LeCun says AMI will release its first AI models quickly, but he's not expecting most people to take notice. The company will first work with partners such as Toyota and Samsung, and then will learn how to apply its technology more broadly. Eventually, he says, AMI intends to develop a "universal world model," which would be the basis for a generally intelligent system that could help companies regardless of what industry they work in. "It's very ambitious," he says with a smile.

This discussion has been archived. No new comments can be posted.

  • Excellent! (Score:4, Funny)

    by Mr. Dollar Ton ( 5495648 ) on Wednesday March 11, 2026 @09:10AM (#66035012)

    They just have to build a simulation the size of the Universe and the gods themselves will pop out of Heaven to congratulate them.

    • If Ernst Mach was right, then you're right. Didn't some guy at IBM conjure up something similar (problem size vs. solution size) about 29 years ago?
      • Quite likely. If you're in the business of brute forcing the world without regard to the physics laws that make it move, what are your other options?

    • 42
    • They can use Minecraft as a simulation. It's a trillion billion times larger than the known universe or something.

    • They just have to build a simulation the size of the Universe and the gods themselves will pop out of Heaven to congratulate them.

      Guess it's time to re-read Olympos and Illium by Dan Simmons.

  • by geekmux ( 1040042 ) on Wednesday March 11, 2026 @09:13AM (#66035020)

    So, we want to teach AI about the physical world. Huh. Some would argue the body-less entity would merely need a few volumes on physics to understand that. Are investors going to start funding apple orchards near the data centers when we get to the part on gravity or what?

    I'm reminded of a variant on a related theme: you can lead a bot to solder, but you can't make it think.

    • by HiThere ( 15173 )

      Actually, this is a problem being worked on by everyone working on robots. And LOTS of progress is being made, though it's usually not described in quite the terms used here.

    • Some would argue the body-less entity would merely need a few volumes on physics to understand that.

      No. Think about how, say, dogs understand physics. Obviously not via Newton's "laws" (or should I say, Newton's very useful mathematical approximations). Dogs navigate the world and 'understand' concepts like threats, prey, and mates well enough to persist in it.

      What LeCun is proposing is largely what self-driving cars already do. Waymo isn't driven by a Large "Language" Model that predicts wor

  • The "A" in AI stands for *artificial.* Artificial cannot be "true" human intelligence. It may be able to do amazing things, but that does not make it "true" intelligence.

  • I think we will need to build a system to train this computer. It will probably be the size of a planet. If we hire some small 4-legged scientists (mice) who experiment on larger two-legged lab animals (humans), we might expect results in a few centuries, unless the Vogons decide to build a galaxy highway through it first.
  • "AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability"

    So it seems to me that they would build a realistic model of an aircraft engine; the word "world" here is meaningless. If you don't have a realistic model, then you have no model or a bad model. Are there realistic models in some other world? So they are using possible-worlds models for a modal logic, but those are not models of this wo

    • Even if it's just another approach to an "expert system," I'm still glad someone is working on something other than glorified sentence-completion.

  • Why is the goal to create superhuman intelligence? Do we need something smarter than us? Are you trying to get us all killed??

    • Why is the goal to create superhuman intelligence? Do we need something smarter than us? Are you trying to get us all killed??

      Greed, billionaires want magical AI genies that will do their bidding because they are not already rich and powerful enough.

    • If you are an optimist, The Culture offers a clear vision of where we as a society could head. Read the books; here is a short summary online: "The Culture War: Iain M. Banks’s Billionaire Fans," on why Elon Musk and Jeff Bezos love Iain M. Banks’ anarcho-communist space opera: https://bloodknife.com/culture... [bloodknife.com]
      Video summary of The Culture (slow start, gets better): https://www.youtube.com/watch?... [youtube.com]
    • People are getting rich over the bubble and speculation. That's it really. Remember how for a brief period if your company mentioned blockchain the stock would jump? That fizzled out but they found something else that stuck.

  • What makes world models any different from any of the other models? You are just training them on different stuff that operates on a much lower level than existing LLMs. Even if you were able to train models to the point where they are relevant for simulations what does this get you?

    "LeCun argues that most human reasoning is grounded in the physical world, not language"

    What reasoning skills do feral children have?

    • I'm not a big fan of LeCun - his level of recognition seems far in excess of his actual accomplishments, and his main claim to fame seems to be a somewhat questionable claim to have invented CNNs, a long time ago.

      That said, I do think LeCun is correct (but hardly alone) in saying that LLMs won't get us to AGI, and that we need a different approach, more akin to animal intelligence.

      While LeCun does talk about animal intelligence, there is also this focus on "world models" and physical grounding, and it's not cl

      • The real difference between the animal intelligence approach and an LLM is that while an LLM predicts training sample continuations, and stops learning once it is trained, an animal predicts the real world (via its perceptual inputs), including how the world reacts to its own actions, and learns continually.

        This is an implementation issue. There is no reason you can't loop outputs of any world model, LLM, etc. back into the model's LTM. In fact people do exactly this in a supervised manner when training LLMs. The issues with this approach (accumulation of error, overfitting, etc.) are the same in both cases. This is somewhat easier in cases where an objective function can be clearly evaluated.

        • Not really - continual learning from real-world inputs completely disrupts the whole "pre-train then serve to everybody" LLM approach. Instead you've now got every model instance running and experiencing different things and needing real-time learning.

          Not only do you need a billion or so instances of that real-time learning algorithm running in parallel vs the "build a datacenter, train once" approach, but you need to invent that so-far elusive incremental training algorithm in the first place.

          You could sho

          • This is an implementation issue. There is no reason you can't loop outputs of any world model, LLM..etc. back into the models LTM.

            Not really - continual learning from real-world inputs completely disrupts the whole "pre-train then serve to everybody" LLM approach. Instead you've now got every model instance running and experiencing different things and needing real-time learning.

            You say not really while at the same time offering implementation-related objections. I'm not sure what to make of this. There are approaches available for merging and differentially training models (e.g. LoRA) with relatively small amounts of compute.
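To make the LoRA reference above concrete: a LoRA adapter stores a low-rank delta B·A alongside a frozen weight matrix W, so task-specific updates are cheap to train and store, and the crudest "merge" is just summing the deltas into the base weights. This is only an illustrative numpy sketch under those assumptions, not any real library's API; all names and sizes are made up.

```python
# Toy sketch of the LoRA idea: train low-rank deltas B @ A instead of
# updating the large frozen matrix W, then fold them in at inference.
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                      # model dim, adapter rank (r << d)
W = rng.standard_normal((d, d))  # frozen base weights

# Two hypothetical independently trained adapters
A1, B1 = rng.standard_normal((r, d)), rng.standard_normal((d, r))
A2, B2 = rng.standard_normal((r, d)), rng.standard_normal((d, r))
alpha = 1.0                      # scaling factor

def apply_loras(W, adapters, alpha, r):
    """Fold adapters into the base weights by summing their low-rank deltas."""
    W_eff = W.copy()
    for B, A in adapters:
        W_eff += (alpha / r) * (B @ A)  # each delta is rank <= r
    return W_eff

W_merged = apply_loras(W, [(B1, A1), (B2, A2)], alpha, r)

# Each adapter stores 2*d*r numbers instead of d*d: here 32 vs. 64
# parameters, which is the "small amount of compute/storage" point.
print(W_merged.shape)  # (8, 8)
```

Real deployments keep W frozen and serve many adapters against one base model; the summation above is the simplest possible combination rule and says nothing about whether the merged behavior is sensible.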

            The industry is going this way anyway because there is significant value in custom training models on corporate datasets.

            Not only do you need a billion or so instances of that real-time learning algorithm running in parallel vs the "build a datacenter, train once" approach, but you need to invent that so-far elusive incremental training algorithm in the first place.

            I don't understand this line of argument. What makes world models any different in these regards? No matter the model you pick, no matte

            • LoRA is just an efficient way to fine tune a single model. It's not about merging different models.

              Merging models is not even well-defined. What would it mean? What would be a principled criterion for deciding how to merge them when there are conflicting weight updates needed?

              How do you address the privacy concerns of merging models? Are you really proposing to merge proprietary/private data from multiple companies and/or individuals then redistribute the merged changes to everyone? Sounds like a non-starter

              • LoRA is just an efficient way to fine tune a single model. It's not about merging different models.

                Merging models is not even well-defined. What would it mean?

                "Merge" in the context of LoRAs means taking multiple LoRAs and applying them to the same model. In the context of models, it means combining multiple models into one.

                What would be a principled criteria for deciding how to merge them when there are conflicting weight updates needed?

                One of my all time favorite models is a frankenmerge of two slightly altered versions of itself. It's stupid that it works at all. I vaguely remember seeing references to papers on how to do this shit, yet I don't pretend to have a clue about the details.
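For reference, the simplest weight-space merge is easy to sketch. This is a hedged toy example of a "model soup"-style linear interpolation between two checkpoints with identical architectures, not the actual frankenmerge recipe being recalled above (frankenmerges also splice whole layers); the layer names and values are invented for illustration.

```python
# Minimal weight-space merge: linearly interpolate corresponding tensors
# of two fine-tunes that share the same base architecture.
import numpy as np

def merge_models(state_a, state_b, t=0.5):
    """Interpolate two state dicts with matching keys and shapes.

    t=0 returns model A's weights, t=1 returns model B's.
    """
    assert state_a.keys() == state_b.keys(), "architectures must match"
    return {k: (1 - t) * state_a[k] + t * state_b[k] for k in state_a}

# Hypothetical tiny checkpoints
a = {"layer0.w": np.ones((2, 2)), "layer1.w": np.zeros(3)}
b = {"layer0.w": np.full((2, 2), 3.0), "layer1.w": np.ones(3)}

merged = merge_models(a, b, t=0.5)
print(merged["layer0.w"][0, 0])  # 2.0
```

That it works at all (as the poster says) is the surprising empirical finding; nothing in the arithmetic guarantees the interpolated weights behave coherently.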

                Sure the industry is fine-tuning models with LoRA, but they are NOT then sharing their private updates with each other!

                I think this is obvious. For proprietary you start with a model that best meets your needs and cust

                • > Personally I would be surprised if world models offered anything of value given they operate at such a low level.

                  You're thinking of the animal approach in the wrong way. Forget all the "world model" framing, and just think of it as a predictive model, a near cousin of an LLM, that learns to predict the next perceptual input(s) rather than the next token from a historically gathered training set.

                  Let's also note that the input to an LLM really isn't text or symbolic sub-word tokens - it's really the high dimension

    • "What reasoning skills do feral children have?"

      You underestimate the feral kid at your peril. He beat Lord Humungus's gang with a boomerang and a little help from Mad Max.

  • You can train an AI on the physical world. But then what? Yes, it will be good at copying us and doing repetitive tasks for us. But will it have the ability to innovate and do something new?

    • The business doesn't care about that. They want robots to understand the world and navigate/interact with it better than we do so that they can replace labor with robots. They want to build the Terminator.

  • by gweihir ( 88907 ) on Wednesday March 11, 2026 @11:49AM (#66035358)

    Always interesting how these people gloss over that. Essentially a lie by misdirection.
    Incidentally, it is not known whether it is necessary either.

    That said, there will never be AGI in LLMs. The approach does not support it. The one thing striking in the current AI hype is how many people without a clue are making grand predictions.

    • Where has LeCun said that anything here is necessary and sufficient? If something is in your view or his view necessary to accomplish a goal, then of course trying to do that thing even if it isn't sufficient makes sense. I'm also not sure what the point of your last sentence is since LeCun is one of the more prominent people who doesn't think that LLM AIs will lead to intelligence, and even says so in the summary above.
    • That said, there will never be AGI in LLMs.

      Of course not, but something similar will be part of AGI.

  • I keep trying to get AI to answer chemistry questions, mostly of an organic synthesis nature. I get answers that sound like they might know something but it's not very specific. Sort of like trying to bullshit your way through it. I think it will be a long time before we get good AI on organic synthesis which is kind of central to drug discovery. It's mostly because the literature isn't all that easily extractable. The other is that there's a lot of garbage to ignore. And yes, I'm aware of AlphaFold et al.
  • https://archive.org/details/pr... [archive.org]
    "Autonomous factories with intelligence: world models from sensory data"

    But I also suggested there would be a big risk in doing that -- which is one reason I stopped working on building AI and robotics a few years after that.

    And since then I have developed my sig -- which I feel is the single most important thing to know about AI and robotics (and other advanced technology):
    "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of t

  • Teach it about that power cord feeding it.

  • Just because humans are capable of symbol processing doesn't mean that we are, at root, symbol processors.
  • Death is a part of our lives; pull the plug on AI.
  • Mostly because they are just statistical models. They don't understand anything at all. They just know that 'when b is near c, that often means we'll have an x followed by a y.' That could be pixels or letters or waveforms.

    With that basis, it's almost miraculous that they do as good a job as they do at 'pretending' to give coherent answers. That's why I always say, "AI is great, as long as it doesn't have to be correct."

    I'd love to have AI that 'understood.' It didn't 'make up' answers, it
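The "just statistical models" point above can be illustrated with the smallest possible such model: a bigram counter that predicts the next token purely from co-occurrence counts, with nothing resembling understanding. The corpus and tokens here are made up for illustration, and (as the comment notes) the same counting scheme would apply to pixels or waveforms.

```python
# A bigram model: count which token follows which, then predict the most
# frequent continuation. No semantics, only "b is often followed by c".
from collections import Counter, defaultdict

def train_bigram(tokens):
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, prev):
    """Return the most frequent continuation of `prev` seen in training."""
    return counts[prev].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigram(corpus)
print(predict(model, "the"))  # cat  ("the" -> cat twice, mat once)
```

Modern LLMs condition on far longer contexts through learned representations, but the training objective is the same shape: predict the next token from what came before.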

  • "a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe,"

    Pick three.
