Google Announces Gemini 3.1 Pro For 'Complex Problem-Solving' (9to5google.com) 18

Google has introduced Gemini 3.1 Pro, a reasoning-focused upgrade aimed at more complex problem-solving. 9to5Google reports: This .1 increment is a first for Google, with the past two generations seeing .5 as the mid-year model update. (2.5 Pro was first announced in March and saw further updates in May for I/O.) Google says Gemini 3.1 Pro "represents a step forward in core reasoning." The "upgraded core intelligence" that debuted last week with Gemini 3 Deep Think is now available in Gemini 3.1 Pro for more users. This model achieves an ARC-AGI-2 score of 77.1%, or "more than double the reasoning performance of 3 Pro."

This "advanced reasoning" translates to practical applications like when "you're looking for a clear, visual explanation of a complex topic, a way to synthesize data into a single view, or bringing a creative project to life." 3.1 Pro is designed for tasks where a simple answer isn't enough, taking advanced reasoning and making it useful for your hardest challenges.

  • Is this for equations involving i?

  • by oldgraybeard ( 2939809 ) on Thursday February 19, 2026 @09:08PM (#66000034)
    There isn't any reasoning going on! It's just Sales and Marketing misdirection.
    It would have been better if these pretend-AI companies had been honest from the beginning and said "automation".
    • by gweihir ( 88907 )

      Indeed. But at least there are signs we're getting toward the end of the hype: they could have claimed "10x the reasoning performance" with the same level of truth, yet they only dared to go to 2x.

    • It's easy to fall behind the state of the art on AI tools. I tried various coding assistants over the past few years and never found them helpful, until suddenly last November, Claude Code crossed over to being legitimately useful. I now use it throughout the day, occasionally giving it tasks that could easily go to a low-level engineer.

      An LLM may not be able to reason, but the combination of an orchestrator, agents and MCP tools certainly can. Not at a human level, but if I can ask it to do something, and it dec

      • by Junta ( 36770 )

        Reasoning involves an abstraction that I don't see LLMs doing. They can construct a narrative that looks like a reasoning chain, but it's not really modelling anything beyond the words themselves.

        For a big chunk of code this may not matter much, as the end result is frequently untethered from any grounded reality anyway, and getting "close enough" is where much software lives.

        That said, as a worker at a company that is crazy bullish on the AI tools, I too have access to Claude Code and while 'use

    • by AmiMoJo ( 196126 )

      I don't think it's that simple. When you ask the more advanced AIs a question, they do work through the problem. Maybe it's not reasoning in the same way a human does, but they research the issues, research the solutions, evaluate them, and then issue a response. Some will then test and refine that response, e.g. Claude can create tests using data you supply, or even find its own sample data, and then do the testing and refinement cycles automatically. You can have multiple Claude bots, some doing coding an
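
      The test-and-refine cycle described above can be sketched roughly as follows. This is a hypothetical illustration, not any real Claude API: `run_tests` and the lambda "candidates" stand in for LLM-generated code and user-supplied test data.

      ```python
      # Hypothetical sketch of an automatic generate/test/refine cycle.
      # Candidates stand in for successive LLM-generated drafts; the
      # cases stand in for tests built from user-supplied sample data.

      def run_tests(fn, cases):
          # Evaluate a candidate against sample data, returning the
          # (input, expected) pairs it gets wrong.
          return [(x, want) for x, want in cases if fn(x) != want]

      def refine_loop(candidates, cases, max_rounds=5):
          # Try successive candidates until one passes every test,
          # mimicking the automatic refinement cycles described above.
          for fn in candidates[:max_rounds]:
              if not run_tests(fn, cases):
                  return fn
          return None

      # Want a squaring function; the first "draft" is wrong,
      # the "refined" second draft passes all tests.
      cases = [(2, 4), (3, 9)]
      candidates = [lambda x: x + x, lambda x: x * x]
      best = refine_loop(candidates, cases)
      assert best is not None and best(4) == 16
      ```

      The point is only that the loop, not any single model call, is what does the checking: a wrong draft fails the tests and is discarded automatically.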

      • "using data you supply" So these AI companies will get all the proprietary information from every company that allows their employees to use it.
        But don't worry, they promise to never monetize any of "your" data.
        Wonder what happens to the data when the bankrupt shell gets sold.
        • by allo ( 1728082 )

          You're mixing up training with inference. Inference works with a given model and your input (that's what the OP meant by "data you supply") and produces an output without changing the model. Training happens on datasets usually created by crawling the web (and scanning books) and doesn't have anything to do with your inputs.
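
          The distinction can be shown with a toy model (this is an illustration, not any real framework's API): inference only reads the parameters, while a training step is what mutates them.

          ```python
          # Toy illustration: inference reads model parameters,
          # training updates them.

          def infer(weights, x):
              # Inference: compute an output from fixed weights. The
              # model is read-only here, which is how "your data" is
              # used at query time.
              return sum(w * xi for w, xi in zip(weights, x))

          def train_step(weights, x, target, lr=0.1):
              # Training: adjust weights to reduce error on a dataset
              # example. This is the phase built on crawled corpora,
              # not on user queries.
              error = infer(weights, x) - target
              return [w - lr * error * xi for w, xi in zip(weights, x)]

          weights = [0.5, -0.2]
          before = list(weights)

          infer(weights, [1.0, 2.0])   # user input flows through the model
          assert weights == before      # ...and the model is unchanged

          weights = train_step(weights, [1.0, 2.0], target=1.0)
          assert weights != before      # training is what changes the model
          ```

          (Whether a provider later *trains on* logged user inputs is a separate policy question; the mechanics of answering a query don't touch the weights.)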

    • Yes, it can now reason at Level 4, instead of just Level 2.

    • by allo ( 1728082 )

      "Reasoning" is a technical term for a concept in modern LLMs.
      Your statement is on the level of people saying "That's not AI" while misunderstanding that AI is a field broad enough to include even ELIZA; what they actually mean (and where they are right) is that it is not AGI.

  • You are being lied to.

    LLMs cannot do anything complex, or "reasoning". They just chain unreliable steps into a big mess at the end.

    • by iNaya ( 1049686 )
      So.... like humans then.
      • by gweihir ( 88907 )

        True for most humans, but not all. And, guess what, the ones that can do better are engineers, scientists, really good coders, etc. Yes, that is a minority. But without these people, everything collapses.
