Microsoft Readies New AI Model To Compete With Google, OpenAI (theinformation.com)

For the first time since it invested more than $10 billion into OpenAI in exchange for the rights to reuse the startup's AI models, Microsoft is training a new, in-house AI model large enough to compete with state-of-the-art models from Google, Anthropic and OpenAI itself. The Information: The new model, internally referred to as MAI-1, is being overseen by Mustafa Suleyman, the ex-Google AI leader who most recently served as CEO of the AI startup Inflection before Microsoft hired the majority of the startup's staff and paid $650 million for the rights to its intellectual property in March. But this is a Microsoft model, not one carried over from Inflection, although it may build on training data and other tech from the startup. It is separate from the Pi models that Inflection previously released, according to two Microsoft employees with knowledge of the effort.

MAI-1 will be far larger than any of the smaller, open source models that Microsoft has previously trained, meaning it will require more computing power and training data and will therefore be more expensive, according to the people. MAI-1 will have roughly 500 billion parameters, or settings that can be adjusted to determine what models learn during training. By comparison, OpenAI's GPT-4 has more than 1 trillion parameters, while smaller open source models released by firms like Meta Platforms and Mistral have 70 billion parameters. That means Microsoft is now pursuing a dual trajectory of sorts in AI, aiming to develop both "small language models" that are inexpensive to build into apps and could run on mobile devices, and larger, state-of-the-art AI models.
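
For a rough sense of what those parameter counts mean in practice, here is a back-of-the-envelope sketch in Python. The parameter counts are the figures reported above (not confirmed specifications), and the bytes-per-parameter values are the usual fp16/int8/int4 storage costs for dense weights:

    def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
        """Memory needed just to hold the weights, ignoring activations and KV cache."""
        return n_params * bytes_per_param / 1e9

    models = {
        "MAI-1 (reported)": 500e9,
        "GPT-4 (reported)": 1e12,
        "Llama/Mistral-class open models": 70e9,
    }

    for name, n in models.items():
        print(f"{name}: "
              f"fp16 ~{weight_memory_gb(n, 2):,.0f} GB, "
              f"int8 ~{weight_memory_gb(n, 1):,.0f} GB, "
              f"int4 ~{weight_memory_gb(n, 0.5):,.0f} GB")

At fp16, a 500-billion-parameter model needs on the order of a terabyte of memory for the weights alone, which is why models of this class are served from multi-GPU clusters rather than run locally.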


Comments:
  • Arguably the "state of the art" would be something that is reasonably small but also highly capable, because one of the problems we're seeing with "state of the art" models is that they're extremely power hungry due to the compute costs involved.

    But if the goal is AGI, then it's "as complex as possible at all costs" of course. But if it's more about an "AI assistant", then a smaller model is likely better.

    • Re:State of the art (Score:4, Interesting)

      by sg_oneill ( 159032 ) on Monday May 06, 2024 @11:14AM (#64451498)

      Yeah. While GPT4 and Claude are extremely impressive, I'm more interested in things I can run myself, rather than shitty cloud services that shuttle all my private data off to some anonymous GPU cloud for dissection.

      And so far, I've been rather impressed with the likes of Mistral and llama2. Obviously they aren't going to win any battle of intellects against the trillion-parameter behemoths. But they still work well enough for the kinds of tasks I'm interested in.

      • by gweihir ( 88907 )

        You seem to be easy to impress, because my take is exactly the opposite: somewhat better search, it can produce better crap, but that is it.

    • You mean something like the brain of a stable genius?

    • by gweihir ( 88907 )

      The goal is not AGI, because those that actually understand what AGI means know it cannot be reached by LLMs.

      • by Luckyo ( 1726890 )

        Sam Altman openly disagrees, and he knows more about the issue than both you and I.

        • by gweihir ( 88907 )

          Well, besides the obvious fact that Altman stands to profit massively from a lie here, I really doubt he knows as much about AGI as I do. He does not strike me as nearly as smart or educated as I am and he decidedly has not followed AI research for something like 35 years, unlike me. But he does not need to understand what he is promising, as he is just pushing a scam. He just needs to know what people want to hear.

          Why he pushes AGI as a (fake) goal is quite clear: His hype AI is quite pathetic and cannot

          • by Luckyo ( 1726890 )

            >I really doubt he knows as much about AGI as I do

            The man standing at the helm of the company that made the breakthrough to current tech knows less than you do?

            Does "hubris" ring any bells?

            • by gweihir ( 88907 )

              And since when does an investor or manager know anything real about a product or the research area it resides in?

              Incidentally, OpenAI has not made any significant AI breakthroughs. They only scaled the whole thing up (by a massive criminal commercial copyright infringement campaign) and gave it a nicer interface. LLMs are _old_ tech.

              • by Luckyo ( 1726890 )

                >Incidentally, OpenAI has not made any significant AI breakthroughs

                I honestly got nothing. There's hubris, there's delusion, there's megalomania, there's a combination of all three, and then there's whatever this is. The organisation behind the first actually useful LLM, the first actually useful image generator and the first actually useful video generator... "has not made any significant AI breakthroughs".

      • Interesting. Why do you say LLMs won't be a path to AGI?

        When I look at how humans learn, it's through repetitive reinforcement. You don't believe something IS, until you see it that way multiple times, in different scenarios, and it's always the same.

        LLMs learn the same way, taking training information over, and over, and over, reinforcing it. We don’t have a final state of what IS, and neither does an LLM, just constant reinforcement, until there is none, and that fact becomes malleable.

        We
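
        A toy illustration of that "reinforcement by repetition" point: repeatedly fitting the same association strengthens a model's confidence in it, and later training on conflicting data erodes it again. This is ordinary gradient descent on a single logistic unit, purely illustrative and not how any production LLM is trained:

            import numpy as np

            # One logistic "belief" unit: p = sigmoid(w). Repeated exposure to label 1
            # pushes the belief up; later exposure to label 0 makes it malleable again.
            def sigmoid(z):
                return 1.0 / (1.0 + np.exp(-z))

            w, lr = 0.0, 1.0
            for step in range(20):      # repeated reinforcement of "this IS so"
                p = sigmoid(w)
                w -= lr * (p - 1.0)     # cross-entropy gradient toward label 1
            print("belief after reinforcement:", round(sigmoid(w), 3))

            for step in range(20):      # conflicting evidence arrives
                p = sigmoid(w)
                w -= lr * (p - 0.0)     # cross-entropy gradient toward label 0
            print("belief after conflicting data:", round(sigmoid(w), 3))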

        • by dvice ( 6309704 )

          LLMs could be a path to AGI in theory, but according to current experiments, it would be enormously inefficient, making it practically impossible.

          I think Google has a much better approach, where they can train individual parts in isolation to make them experts in their narrow area, and then merge those isolated parts into one big AI that benefits from all of its parts. In some sense they have already invented AGI, but it is just not at human level yet. I don't know if it will work or not, but I think they have a much better

        • by gweihir ( 88907 )

          Simple: There is no rational reason to think LLMs can do general intelligence at all. At the same time, it clearly is an extraordinary claim and so would need extraordinary proof for that claim to be taken seriously. There is not even simple proof for that claim, so it is clearly complete bullshit at this time. At the same time, LLMs are mature tech and the only real improvement over, say, IBM Watson (15 years old) is a better natural language interface and larger training data. Hence it is not rational to expect

  • by Pinky's Brain ( 1158667 ) on Monday May 06, 2024 @11:27AM (#64451524)

    Models like Llama almost certainly leave nearly two orders of magnitude of efficiency on the table as far as computational sparsity is concerned. The true state-of-the-art models in industry probably use less than 1/10th of their parameters during inference and, to a lesser extent, training, and likely also use quantization-aware training to keep the most intensive calculations in int4.
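
    A minimal sketch of the kind of int4 quantization being described, assuming a simple symmetric per-tensor scheme; production quantization-aware training typically quantizes per channel or per group and keeps float "shadow" weights during training, so treat this as illustrative only:

        import numpy as np

        def quantize_int4(w: np.ndarray):
            # int4 symmetric range is [-8, 7]; map the largest magnitude onto 7
            scale = np.abs(w).max() / 7.0
            q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # stored in int8 for convenience
            return q, scale

        def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
            return q.astype(np.float32) * scale

        w = (np.random.randn(256, 256) * 0.02).astype(np.float32)
        q, s = quantize_int4(w)
        print("mean abs quantization error:", np.abs(w - dequantize(q, s)).mean())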

    Fine-grained MoE or predictor/top-k approaches, implementing the KVQ projections like the FFN so they can benefit too, approximate kNN searches for attention (Gemini 1.5 almost certainly doesn't use dense attention for its multimillion-token context size). I doubt Microsoft has any real intention to use such things locally, though; it can't keep them as trade secrets then. Only Apple is likely to push this into the open, as they have far more commercial interest in local models.
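
    A toy sketch of the top-k mixture-of-experts routing mentioned above: each token activates only k of E expert FFNs, so only a small fraction of the total parameters is touched per token. The shapes and gating here are illustrative, not any particular vendor's design:

        import numpy as np

        rng = np.random.default_rng(0)
        d_model, d_ff, n_experts, top_k = 64, 256, 8, 2

        W_gate = rng.standard_normal((d_model, n_experts)) * 0.02
        experts = [(rng.standard_normal((d_model, d_ff)) * 0.02,
                    rng.standard_normal((d_ff, d_model)) * 0.02)
                   for _ in range(n_experts)]

        def moe_ffn(x: np.ndarray) -> np.ndarray:
            """x: one token of shape (d_model,). Route to top_k experts and mix their outputs."""
            logits = x @ W_gate
            chosen = np.argsort(logits)[-top_k:]          # indices of the k highest-scoring experts
            gates = np.exp(logits[chosen] - logits[chosen].max())
            gates /= gates.sum()                          # softmax over the chosen experts only
            out = np.zeros(d_model)
            for g, e in zip(gates, chosen):
                w_in, w_out = experts[e]
                out += g * (np.maximum(x @ w_in, 0.0) @ w_out)   # ReLU FFN expert
            return out

        y = moe_ffn(rng.standard_normal(d_model))
        print(y.shape, "-- only", top_k, "of", n_experts, "expert FFNs were evaluated for this token")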

  • by groobly ( 6155920 ) on Monday May 06, 2024 @12:58PM (#64451720)

    It seems likely that LLMs have nearly reached maturity, and the vast resources being plowed into further learning are not going to be cost effective, except possibly in limited domains. As my old AI adviser used to say, you can't reach the moon by building ever better balloons.

    • by gweihir ( 88907 )

      They had plateaued when they came out. All that has been happening since then is cosmetics and minor improvements.

  • That name will probably fit the result...

  • Reminds me of when Apple fooled everyone with "we have 1 million apps" in the store.
    Just curious... how many apps can you use?
    In the end, don't you end up just using a handful?
    It's not clear that bigger is better.
  • I find this headline amusing, given that Microsoft has never done anything responsibly in the almost 50 years it's been around.

  • What's it gonna be this time, a failure like the Zune, or a success like the Surface that they'll then slowly phase out?
