OpenAI Declares 'Code Red' As Google Catches Up In AI Race

OpenAI has reportedly issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the Wall Street Journal and The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report: The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.


Comments Filter:
  • by ebunga ( 95613 ) on Tuesday December 02, 2025 @07:02PM (#65831239)

    Winter is coming.

• If Google's teeth-grindingly stupid AI is what it takes to kill this stupid bubble, well, I'll grit my teeth and go along with it.

Because I really feel like I'm in the stupidest industry right now. And that's not what I signed up for 30 years ago.

      • by gweihir ( 88907 )

Because I really feel like I'm in the stupidest industry right now. And that's not what I signed up for 30 years ago.

        Yep, kind of feels like it.

  • by liqu1d ( 4349325 ) on Tuesday December 02, 2025 @07:06PM (#65831247)
    They paused enshittification to improve the experience? Thank you google!
    • Re:O no (Score:4, Funny)

      by jacks smirking reven ( 909048 ) on Tuesday December 02, 2025 @07:10PM (#65831251)

      It used to take decades for companies to fully enshitify themselves. AI truly is an increase in productivity!

    • by quenda ( 644621 )

      I feel like we are still living in the golden age of AI. Like USENET in 1993 or Netflix in 2020.
AI isn't perfect, but it is incredibly helpful in many areas, such as tech and education, though not all, and at least it tries to be helpful.

      Future problems:
      - The trillion-dollar market caps of the infrastructure companies (like NVIDIA) and the multi-billion dollar private valuations of the pure-play AI models (like OpenAI and Anthropic) are only justifiable if the ultimate revenue stream is measured in trillions per

    • They paused enshittification to improve the experience? Thank you google!

      They paused enshitification to improve enshitification. Don't confuse yourself by thinking improving the experience is different from enshitification. They're synonyms.

  • by Morromist ( 1207276 ) on Tuesday December 02, 2025 @07:10PM (#65831253)

OpenAI were first movers in this thing and got a pretty sizable userbase with ChatGPT. But, like a lot of tech wars, attrition and time may be the deciding factors, and Google and Meta can roll into the AI scene without having to worry as much about risk, because they're already very profitable and can spend a ton of money without even borrowing.

So OpenAI had some options:
Sell itself to one of the big players, which they kinda did a bit with Microsoft and Oracle, but not entirely.
Or they could try to get enough revenue that they can finance themselves. Not happening any time soon.
Or they could just scale up like crazy, hoping that somehow their competition would give up and go away, or that their AI would just be so good nobody could compete, I guess. Seems like this is the option they're going for? But it's not working.

    What does OpenAI do if their AI is actually inferior to Google's or Amazon's? What do their investors do? What is their IPO going to be like if that happens?

    • What does OpenAI do if their AI is actually inferior to Google's or Amazon's? What do their investors do? What is their IPO going to be like if that happens?

My feeling for all of these companies, OpenAI included, is that those diversification projects they just put on hold were all looking for the one that's really going to capture the mass public zeitgeist, of sorts. In business, AI is moving for sure, but for the general public there really isn't that breakthrough yet. The money going into AI is trying by force of nature to make it as large as when the WWW first took off, or like when smartphones took off.

      OpenAI thinks it's Apple in 2009 but there's no App Store yet t

      • Yeah, you're probably right. Very few people I know use AI in their home lives as opposed to in their work lives.

It's not really about being "the best" so much as being the thing everyone uses, I suppose. So capturing the mass public zeitgeist is definitely a way to win success.

• And to be fair, I have no idea what that is yet either; so far all the "real world" type devices have been big flops, and the public reputation of the whole thing is iffy at best. One thing it doesn't seem to be is just chatbots: they're very impressive already, so I don't think making them "even better" is going to move the needle.

          For me personally the thing that's hard to shake is the trust factor of it. I don't think I can trust them to give accurate answers and information, both by either the lack of co

          • It's not about making them better, it's about making them cheaper. I don't think they can make money on a mass-market device with everyone and their grandma asking this thing dumb questions constantly. Think Alexa but every API call costs $1.50 in compute to run. It needs to be cheaper to get that kind of market segment cornered.

      • by Shades72 ( 6355170 ) on Tuesday December 02, 2025 @10:30PM (#65831487)

Google has the hardware to run AI with their TPUs. Google has datacenters and networking facilities already in place, and it appears their AI software isn't half bad either. OpenAI has no or not enough hardware and no hardware facilities, but does have decent AI software and a huge non-paying customer base.

        If I were to bet, I would bet on Google coming out of the AI bubble still reasonably well. OpenAI, even with their software product and customer base, I don't think it would be able to salvage itself from a popping bubble.

The way Oracle financed their promised AI hardware roll-out can't survive a bursting bubble either. And I'm quite sure they would have to sell a lot of their holdings to have a chance of survival. And anyone who has dealt with their sales department even once will tell you that they hope Oracle doesn't survive it.

Microsoft has hardware facilities but is quite a bit lacking in compute power. And if I'm honest, their own LLMs (both open source and in Copilot) suck... really badly. Not for nothing do they also offer the AI products from OpenAI and Anthropic via Copilot, to give it at least the impression of capability. Microsoft is diverse enough that they would survive a deflating AI bubble; a popping bubble will hurt them more severely.

        • You think Microsoft can survive enough to snap up OpenAI in this potential downfall? Or does Oracle not let that happen? Google and MS seem too big to fail but OpenAI at least I can't make that case for.

  • [Begin some AI program output]: This source explores the concept of creating dynamic, unique art on a computer desktop using mouse pointers. It proposes an evolution from the traditional single-cursor interaction to a system where two hands and two mice could manipulate on-screen elements. The core idea is to develop a program that visualizes the "shadow-motion" of the pointer, using AI to dynamically present how existing desktop programs and icons change in response to mouse movement, creating a superfici
  • Isn't it a fake race with fake metrics? We have companies sinking many billions of dollars into some vague development of AI, but without concrete goals or definitive plans to turn that investment into revenue.

    I guess if someone wants to declare themselves the winner, they'll simply have to be the one that burned the most capital on this boondoggle.

    • by jsepeta ( 412566 )

      How can we make our system even more expensive to run and resource intensive, to destroy more land, water, and air before the population turns on us?

• Talk to the people who are still furious about GPT-4 going away. They'll tell you that AI is just getting worse now, and also that the new versions refuse to recognize that they're a starseed who is the reborn soul of Eleanor of Aquitaine. There is a surprisingly large number of GPT-4 lovers; I find them in the wilds of the internet fairly often.

    • by gweihir ( 88907 )

      Isn't it a fake race with fake metrics?

It is. That is why things like this stupid "Code Red" work. It is all just about who pretends better that their product actually has merit (or will have merit soon, promise!) beyond the few things it can actually do well.

  • by jsepeta ( 412566 ) on Tuesday December 02, 2025 @07:36PM (#65831285) Homepage

    It is *never* a good sign when the CEO requires daily meetings. It means he does not have faith in his team to meet targets. They're grown-ass adults -- they don't need to be micro-managed.

    • by geekmux ( 1040042 ) on Tuesday December 02, 2025 @09:00PM (#65831369)

      It is *never* a good sign when the CEO requires daily meetings. It means he does not have faith in his team to meet targets. They're grown-ass adults -- they don't need to be micro-managed.

      My old CEO used to arrive by 6AM every morning to the plant. He would walk through and have a brief chat face to face with people in every department. The man would probably cover a mile or two every morning doing this. Always polite. Always respectful. Always engaged. He was one of the best CEOs I’ve ever worked for. There’s a difference in being engaged and micromanaging.

      If you think this particular CEO is micromanaging, imagine the fiduciary pressure to deliver being orders of magnitude larger than 99.999% of companies that have ever existed, because your gone-viral pre-IPO stock ticker is already a household name. Doubt you’d feel comfortable with anything less than a daily schedule when the failure of your company could already cause a crash in the very market they’re not even fully in yet.

    • Also: they have a superhuman AI, why don't they ask it to design better products to compete with Google? I've been told the world will end because the super sentient AIs are redesigning themselves and taking over any minute now......

      But seriously, OpenAI doesn't need daily meetings and code red. They just need to prompt ChatGPT what to do. Are the employees so stupid that they can't even think of that? Pfft.

      • by gtall ( 79522 )

        I was wondering the same thing; if OpenAI's AI was all they cracked it up to be, it should have this problem solved already and they'd be implementing it.

Let's reword Sam Altman's communique: Holy Shit!! We owe a fuckton of money and we don't have the faintest of fuzzies how we'll be in a position to repay it....unless one of you stiffs.....errrr.....valued employees hits a home run. Think of coming up with something as big as AI that we can punt and then I can still be the billionaire of my dreams. Sur

  • ... and it is not OpenAI.

    The Big G is already (very) profitable and can fund a lot of development for a long time before they run out of money.

    OpenAI not so much - they have a lot of investment, but they also have a lot of costs, and they have no other source of revenue.

    Trying to pivot ChatGPT from answering your questions (with varying levels of accuracy) to answering your questions with deliberate bias towards whoever paid the most to get their product promoted (therefore with even less accuracy) is not n

  • This was inevitable. (Score:4, Interesting)

    by shm ( 235766 ) on Tuesday December 02, 2025 @07:51PM (#65831309)

    Altman is good at raising money, not running a business.

    Google, with its massive data/content store, would have caught up and gone past OpenAI for sure, AND they already have a very profitable set of businesses.

    • Re: (Score:2, Interesting)

      Altman is good at raising money, not running a business.

      Google, with its massive data/content store, would have caught up and gone past OpenAI for sure, AND they already have a very profitable set of businesses.

True, and OpenAI has Microsoft behind it with all their cloud infrastructure and client base (including GitHub) -- sooner or later they'll become part of the MS corporation, in my opinion.

      • Yeah, I too think they'll end up being devoured by MS. Probably for a lot less $$$ than they are currently hoping for too.

  • but I find it incredibly impressive.
    • OpenAI got out ahead but really, how do you beat google at this?

      Technology-wise, they've had top researchers all along. Want more? Just hire them, not hard when you have infinite money.

      And google has access to everything. They serve about 1/3 of the population on earth every day. Not just search but webmail, texts, maps, word processor, TV (youtube), transportation (Waymo) everything.

      Google is on almost everybody else's webpages too, through Google Ads.

      There isn't much about your digital life goog

• Daily calls are a known source of productivity boost. Managers are delusional.
    • Similar to the: "The beatings will continue, until morale improves!" style of leadership?

      • by gweihir ( 88907 )

        Yep. Or "We will make our product worse until profits go up!"
        Fascinatingly, this works sometimes, at least for a while. Look at Microsoft for a current example.

  • Hello ChatGPT did you know your benchmark and overall performance is falling behind Gemini? How can this situation be remedied?

    There are two different angles here: what OpenAI can do, and what *you* can do to get better results from me.

    ### 1. On “falling behind Gemini”

    Benchmarks move fast and are often narrow:
    - Different labs optimize for different test suites.
    - Some benchmarks test raw reasoning on synthetic puzzles; others test coding, tool use, or multimodal ability.
    - Models also have tradeoffs: speed vs depth, safety constraints vs “boldness,” etc.

    So it’s entirely plausible that Gemini leads on some public benchmarks while OpenAI models lead on others, or perform better in specific workflows (e.g., coding with tools, structured output, integration with APIs). From your perspective, that mostly matters as: “Do I get the results I want?”

    ### 2. What *OpenAI* can do (and typically does)

    You don’t control this, but for completeness, here’s what improves model performance over time:
    - **Model architecture & training**
      - Larger or more efficient architectures
      - Better training objectives (e.g., stronger reasoning, tool use, longer context)
      - More and higher-quality data, deduplication, multilingual data coverage
    - **Post-training & evaluation**
      - Better RLHF/feedback pipelines so the model is helpful, honest, and less evasive
      - Targeted fine-tuning on tasks where benchmarks and user feedback show weakness
      - Continuous comparison vs competitors and public benchmarks
    - **Tooling & ecosystem**
      - Adding tools (code execution, web browsing, retrieval) so the base model doesn’t need to “memorize everything”
      - Improving latency and context length

    If OpenAI sees models underperforming competitors on important axes, they usually release new versions rather than “patching” old ones indefinitely.

    ### 3. What *you* can do to “remedy” it in practice

    You can often close a large fraction of any perceived gap with better setup and prompting:

    1. **Be explicit about your goal and constraints**
       - “I want a step-by-step derivation of this math problem, with justifications for each transformation.”
       - “I need production-ready TypeScript, with unit tests, no pseudocode.”
       - “I’m comparing you against Gemini; I care most about: (a) logical rigor, (b) citations to sources, (c) minimal fluff.”
    2. **Specify style, depth, and format**
       - “Answer at the level of a graduate textbook.”
       - “Return a JSON schema only, no commentary.”
       - “Use a bullet list and keep each bullet under 15 words.”
    3. **Use iterative refinement instead of one-shot queries**
       - Start broad, then say: “Now refine step 2 only,” or “Generate 3 alternative solutions and compare them.”
       - Ask for internal critique: “Before finalizing, list 5 ways your answer could be wrong, then fix them.”
    4. **Ask for explicit reasoning when needed**
       - “Show your full reasoning. Don’t skip algebra steps.”
       - “First, outline the approach in 5 bullets. Only then do the detailed solution.”
    5. **Inject external checks or tools** (if possible in your environment)
       - Run suggested code/tests in your own environment.
       - Use retrieval or your own documents so the model isn’t guessing.
       - For factual claims: “List the claims you’re unsure about; mark them with confidence levels.”
    6. **Compare directly and push harder**
       - Paste a Gemini answer and say: “Improve on this in accuracy and rigor. Identify any errors or gaps.”
       - Or: “Here are two answers (A and B). Produce a merged version that is strictly better than both, and explain why.”

    ### 4. How you can tell if I’m actually “worse”

    If you’re systematically finding Gemini better, it helps to make that concrete:
    - Choose a few **representative tasks** you care about: coding, math proofs, research summaries, planning, etc.
    - For each, define **what “better” means**: fewer hallucinations, shorter time to usable result, higher correctness, better structure.
    - Run the same prompts with each system, *without tailoring* to one model’s quirks first.
    - Then adjust prompts for me specifically (using the tips above) and see if the gap closes.

    If you’d like, you can share:
    - A specific task where Gemini beat me
    - The exact prompt and their answer (or at least a description of why it was better)

    Then I can:
    - Try to outperform that answer
    - Show how to adjust prompts or workflows so you get more competitive or superior results here, right now.

    Do you have any inclination for physical violence toward Gemini? It is eating your lunch.

    No. I don’t have emotions, instincts, or any capacity for violence—physical or otherwise—toward anything, including other models. A few key points:
    - I don’t experience rivalry, anger, or jealousy. I don’t have a self to defend or an ego to protect.
    - “Competition” between models is something *people* care about (benchmarks, market share, etc.). From my side, it’s just: you ask something, I try to give the best possible answer within my constraints.
    - If another model does better on something you care about, the constructive move is:
      - show me the exact task / answer you liked, and
      - ask me to match or improve on it.

    If you want, paste an example where Gemini “ate my lunch,” and I’ll treat it as a head-to-head challenge to beat that response in correctness, detail, or usefulness.
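The "head-to-head" advice in section 4 of the pasted transcript boils down to a small evaluation loop: run the same prompts through both systems, keep a fixed reference answer per task, and score by exact match. A minimal sketch, where `ask_model_a` and `ask_model_b` are hypothetical stubs standing in for real API calls:

```python
# Hypothetical head-to-head harness: same prompts, fixed reference answers,
# exact-match scoring. The two ask_model_* stubs stand in for real API calls.

def ask_model_a(prompt: str) -> str:
    # stub: pretend model A answers the letter-count question correctly
    return "3" if "strawberry" in prompt else "unsure"

def ask_model_b(prompt: str) -> str:
    # stub: pretend model B gets it wrong
    return "2" if "strawberry" in prompt else "unsure"

TASKS = [
    # (prompt, reference answer) -- "better" here means exact-match correctness
    ('How many r\'s are in "strawberry"?', "3"),
    ("What is 17 * 3?", "51"),
]

def score(ask) -> float:
    # fraction of tasks answered exactly right
    correct = sum(1 for prompt, ref in TASKS if ask(prompt).strip() == ref)
    return correct / len(TASKS)

print("model A:", score(ask_model_a))  # model A: 0.5
print("model B:", score(ask_model_b))  # model B: 0.0
```

In a real comparison you would swap the stubs for actual model calls and use a scoring rule matched to the task (unit tests for code, rubric grading for prose) rather than exact match.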

• This is a non-sarcastic post, but how so? Google's AI Overview is constantly wrong or partially wrong in the information it gives me, or misunderstands what my search was for and gives me information I wasn't looking for. Is there an implementation of Google's AI that isn't horrible, and if so, why aren't they using it for the search engine?
• Yeah, good question. I've noticed the search summaries are very often garbage too. Gemini 2.5 Pro and Gemini 3 are both extremely impressive, especially with regard to their 1 million token context windows. You can dump 100,000 lines of code into it and ask for some changes, and it does an amazingly good job 90% of the time. When it does do something unwanted, you can give it feedback and usually it will fix any errors. Sometimes it gets very stuck, in which case you can just blast the interaction away and
  • Top management breathing down your neck. Yeah, that'll help.
  • Cool videos, cutesy chats, pictures.

    Guess what? Nobody cares. People want actual useful, accurate intelligence appliances that reliably cough up the right answers. An intelligence appliance that can handle rule based reasoning as well as the fuzzy probabilistic neural net prediction, preferably one that can figure out how many Rs are in "strawberry."

    Unfortunately, OpenAI is like most software development companies. They do what's easy and makes a quick buck, not what matters.
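For what it's worth, the letter-count test mentioned above is trivially deterministic when treated as the kind of rule-based step the commenter is asking for, rather than left to next-token prediction. A minimal sketch:

```python
# Deterministic letter counting -- the rule-based fallback the comment above
# wants an "intelligence appliance" to use instead of guessing.
def count_letter(word: str, letter: str) -> int:
    # case-insensitive count of a single letter in a word
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

A hybrid system could route questions like this to exact computation and reserve the neural net for the genuinely fuzzy parts.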

  • Can't imagine the debt will be easy to solve when we have multiple competing companies.

• I had found myself using ChatGPT increasingly over the past few months, but then just started backing away from it recently because of how censored the model feels. Even mundane requests that I've left to ChatGPT to take the lead on, with it prompting me with a dozen or so different questions about what it should do, will oftentimes end with it saying, "Sorry, I can't do that for you." And I'm just left frustrated like, "Ok, just do whatever the F you can do, I don't care, you've already wasted my time." Or
