Google AI

Google CEO Warns of High Stakes in 2025 AI Race (cnbc.com) 48

Google CEO Sundar Pichai has warned employees the company faces critical challenges in 2025 as it races to catch up in AI amid rising competition and regulatory scrutiny. "The stakes are high," Pichai said at a strategy meeting, details of which were reported by CNBC. "I think it's really important we internalize the urgency of this moment, and need to move faster as a company. The stakes are high. These are disruptive moments. In 2025, we need to be relentlessly focused on unlocking the benefits of this technology and solve real user problems."

The meeting revealed employee concerns about ChatGPT "becoming synonymous to AI the same way Google is to search." In response, DeepMind co-founder Demis Hassabis outlined plans to "turbo charge" Google's Gemini app, which executives hope will become their next product to reach 500 million users. Pichai showed a chart positioning Gemini 1.5 ahead of OpenAI's GPT, though he expects "some back and forth" in 2025. The report adds: [Pichai] acknowledged that Google has had to play catchup. "In history, you don't always need to be first but you have to execute well and really be the best in class as a product," he said. "I think that's what 2025 is all about."


Comments Filter:
  • And then the inflated balloon goes pop.

    • Artificial urgency. (Score:5, Interesting)

      by Brain-Fu ( 1274756 ) on Monday December 30, 2024 @01:29PM (#65050829) Homepage Journal

      In history, you don't always need to be first but you have to execute well and really be the best in class as a product,

      This has roundly and repeatedly been proven false. You neither have to be first nor best in class as a product to succeed. You just need to have the best marketing team.

      Microsoft has proven this again and again. I am not just taking jabs at them. Does anyone remember Borland? They invented the IDE, invented the Object Windows Library (OWL), the first visual toolkit for easy development and redistribution of C++ apps on Windows, and possibly other firsts I am not thinking of. Not only did Microsoft trail behind them with copycat versions, but the Microsoft versions were objectively worse. I used both in these cases, and the difference was huge.

      But Microsoft had the brand name, and the best marketing team in the world. And Borland is no more.

      These are not isolated incidents.

      • by ranton ( 36917 ) on Monday December 30, 2024 @02:15PM (#65050945)

        Borland is not the best example for you to use here. They failed because of very poor management, not because of any marketing brilliance from Microsoft. You may have fond memories of the Borland IDE, but their failures with Quattro Pro, the Ashton-Tate acquisition, Borland Office, and the Inprise rebranding all showcased a company that didn't know what it was doing. Borland was its own worst enemy.

        • The person who designed the Delphi IDE, Anders Hejlsberg, moved to MS to design C#.
          • by Bumbul ( 7920730 )
            Delphi was an excellent product. We almost started developing with Visual Basic, which was published a couple of years earlier, but then Delphi came out and was SO FAR ahead in terms of speed and usability, with a real compiler instead of an interpreter. And Delphi as a programming platform still lives on in the hands of Embarcadero: https://www.embarcadero.com/pr... [embarcadero.com]
      • by gweihir ( 88907 )

        Microsoft is close to death. One or two more of their massive screwups and they will not have a future left.

  • Enough energy going into AI to raise global energy prices and start a race to new energy production under tech companies which control so much of the economy that the entire job market is affected. An oligarch populist taking the White House. A general inability to design a test for general intelligence that AIs can't blow past in a matter of a year or two. Rapid progress from text generation to images to video. Warfare becoming drone-centric and AI-enabled. A hundred years of protein folding work done in
    • by Zurk ( 37028 )

      nonsense. even the best Artificially Incompetent LLM can't even equal what a baby does at birth. much less anywhere near the bullshit that Google and OpenAI execs spew. it's all marketing until it goes pop, then no one will care about the post-pandemic AI hype train.

      • Interesting. And what do you think a baby does better than ChatGPT, for example?
      • by gweihir ( 88907 )

        Indeed. But it tickles people's imagination and the not-so-smart ones start to hallucinate intently. As the one you responded to. AI doomers and AI fans have one thing in common: they do not understand how little LLMs can do and that there is no chance of that changing anytime soon. This is a decades-old, failed technology that just got scaled up enough that it can fake some things with low reliability.

        • AI engines and the training of them will be commodified across training, implementation, and production use.

          Vertical integration leads to lower overall revenue for cloud vendors per AI implementation.

          The space to watch will be the development of an OLTP-benchmark-style test for
          - Training an AI model on a known data set with X parameters
          - Running a known set of input prompts against expected outputs

          Measurements for the test:
          - Energy used
          - GPU hours used
          - Total training time (wall-clock)
          - Percentage of correct outputs
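
          As a rough illustration of what one result record for such a harness might look like (the class, its fields, and all numbers below are hypothetical, not an existing benchmark):

          # Hypothetical sketch of one result record for the benchmark described above.
          # Field names, the example model, and all numbers are made up for illustration.
          from dataclasses import dataclass

          @dataclass
          class TrainingBenchmarkResult:
              model_name: str
              parameter_count: int      # the "X parameters" the model was trained with
              energy_kwh: float         # energy used for the run
              gpu_hours: float          # GPU hours consumed
              wall_clock_hours: float   # total training time, wall clock
              correct: int              # prompts whose output matched expectations
              total: int                # prompts in the fixed evaluation set

              @property
              def accuracy_pct(self) -> float:
                  # Percentage of correct outputs on the known prompt set.
                  return 100.0 * self.correct / self.total

              def summary(self) -> str:
                  return (f"{self.model_name}: {self.accuracy_pct:.1f}% correct, "
                          f"{self.energy_kwh:.0f} kWh, {self.gpu_hours:.0f} GPU-h, "
                          f"{self.wall_clock_hours:.1f} h wall clock")

          # Example usage with made-up numbers:
          run = TrainingBenchmarkResult("toy-model", 125_000_000, 850.0, 512.0, 16.0, 930, 1000)
          print(run.summary())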

  • by Arrogant-Bastard ( 141720 ) on Monday December 30, 2024 @12:50PM (#65050713)
    Let me translate: "I want everyone at Google to sacrifice their free time, their families, their health, everything in order to rush the next iteration of our crappy AI product out the door, with no regard for adequate testing or guardrails or whether or not users actually want this junk. I don't care how much damage it does or how many people get hurt as a result of its output. I'm willing to burn down the rest of the company, stripping resources from Gmail and cloud computing and security and everything else, no matter what happens as a consequence. Why? Because this isn't about Google. This is about my bloated ego and my personal ambition, and if the end result is the destruction of Google, I'm okay with that -- as long as I'm seen as some kind of visionary leader. Besides, I'm already a billionaire: responsibility, ethics, etc. are values for the Little People, not for me. And the thing is: even if I fail massively and completely, I'll never miss a meal or have to look for work: my ultra-rich buddies will make sure that I fail upwards."
    • For instance, their AI image generator, Imagen, will not generate people via the API - you get this error:

      Image generation failed with the following error: The prompt could not be submitted. Setting personGeneration value in the safety settings to allow generating images containing people is currently an allowlist-only feature. Please check your current safety settings for personGeneration or contact your Google representative to request allowlisting.

      And that is exactly what most users want to generate: people.
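
      For what it's worth, the setting the error refers to is exposed roughly like this through the google-genai Python SDK; the model id and the person_generation value below are assumptions, and the call still fails with the same error unless the project is allowlisted:

      # Sketch only, assuming the google-genai Python SDK. The model id and the
      # person_generation value are assumptions; generating people is still
      # refused unless the project has been allowlisted by Google.
      from google import genai
      from google.genai import types

      client = genai.Client(api_key="YOUR_API_KEY")

      response = client.models.generate_images(
          model="imagen-3.0-generate-002",        # assumed model id
          prompt="a person reading a newspaper in a park",
          config=types.GenerateImagesConfig(
              number_of_images=1,
              person_generation="ALLOW_ADULT",    # the safety setting the error mentions
          ),
      )

      # Save the first image if the request was not rejected.
      with open("out.png", "wb") as f:
          f.write(response.generated_images[0].image.image_bytes)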

    • While I agree with the sentiment . . . Google has notoriously coddled its employees and given them incredibly generous benefits over the years. I really don't feel too sorry for them. They have the cachet of Google on their resume, and they have plenty of options if they don't like having to work under pressure to succeed.
      • by Njovich ( 553857 )

        Does Google on a resume still mean something? I feel like with the overhiring they did in the past five years, it's no longer something I care about if someone comes from Google.

        • by gweihir ( 88907 )

          Less and less. I know a number of ex-Googlers that are involved in hiring decisions these days. The name itself buys you exactly nothing.

    • by gweihir ( 88907 )

      Indeed. This also has the stink of desperation on it. This cretin is probably very afraid to be outed as a fraud now. He has done a lot of damage to Google.

    • ChatGPT: The text you've provided expresses a deeply distorted and nihilistic perspective, suggesting significant psychological disturbance. The writer may be projecting their own narcissistic tendencies onto the person they are describing. If the writer has repressed these impulses or feels guilty about them, they might displace these feelings onto the individual they are describing. A defense mechanism to avoid confronting their own potential moral lapses, narcissism, or self-interest. The writer might be t
  • when does the Butlerian Jihad begin?

  • I do agree with one aspect of what he is saying: AI needs to solve "real user problems." Good luck.
    • by gweihir ( 88907 )

      I would like to use AI for spam-filtering and ad-blocking. These are most certainly "real user problems" and that may even work to a degree. I guess there is no business case for that though. And probably too expensive anyways.

      • It's already here. Use LLM+search like ChatGPT and Perplexity and you never have to wade through the shit again.
      • by chrish ( 4714 )

        The Bayesian spam analysis we've been using for decades would be called "AI" now. Then regular expression pattern matching used in ad blockers would probably also be called "AI".

        The current AI hype bubble is LLMs; the actually useful "AI" bits have been chugging along doing useful things for ages, without burning down the planet.
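
        For the curious, the kind of Bayesian scoring being referred to fits in a few lines; the tiny corpus below is made up purely for illustration:

        # Toy naive-Bayes spam scorer: the decades-old technique mentioned above,
        # trained on a tiny, made-up corpus purely for illustration.
        import math
        from collections import Counter

        spam_docs = ["win money now", "cheap pills win win", "money money offer"]
        ham_docs = ["meeting at noon", "project status update", "lunch at noon?"]

        def word_counts(docs):
            return Counter(w for d in docs for w in d.lower().split())

        spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
        vocab_size = len(set(spam_counts) | set(ham_counts))

        def log_likelihood(msg, counts):
            # Laplace smoothing so unseen words don't zero out the score.
            total = sum(counts.values())
            return sum(math.log((counts[w] + 1) / (total + vocab_size))
                       for w in msg.lower().split())

        def looks_like_spam(msg):
            return log_likelihood(msg, spam_counts) > log_likelihood(msg, ham_counts)

        print(looks_like_spam("win cheap money"))        # True
        print(looks_like_spam("status update at noon"))  # False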

  • The entire web and HTML are part of Google's ecosystem, designed to keep you browsing through Google ads for as long as possible. At the same time, ChatGPT indexes a wide range of data sources in multiple languages and provides brief answers based on numerous sources. If you're looking for information, you don't need to visit any webpages, making the web and HTML obsolete from this perspective. ChatGPT and other LLMs don't rely on the web and HTML to gather information; they can process data in any form an
  • ...Google is doing it right.
    AlphaFold is doing really useful work.
    Google should continue developing really useful AI and resist the urge to make crap generators.
    Most consumer-facing AI is useless, annoying crap generators.

  • They bet everything on AI and fired everyone that doesn't do AI. Google is screwed.

    • by ebunga ( 95613 )

      I have been informed that Google also fired people who work on AI that isn't (de)generative AI. Idiots.

    • by gweihir ( 88907 ) on Monday December 30, 2024 @05:14PM (#65051387)

      Not quite yet, but the Googlers I know all have contingency plans now. And I guess many would not be sad to implement them; working at Google apparently sucks pretty badly these days. What Google will not be able to do anymore is hire experienced, competent people in significant numbers. They have pissed off too many people by now and show themselves to care far too little about their employees. And everybody knows it.

      In addition, they have screwed up search so badly that most people who try an alternative will simply never return. Personally, I have not had a single instance in 2024 where Google search was better in the few cases where DuckDuckGo did not find what I was looking for. I am now dropping it from my bookmarks; it is just a waste of time.

      Hence they may very well be dead at this time, and will just take a long time dying.

    • Sundar has bet his future on AI Fraud!
  • He's right! (Score:5, Insightful)

    by CEC-P ( 10248912 ) on Monday December 30, 2024 @03:59PM (#65051225)
    It really is a race to see who can get to the finish line first. That finish line is, of course, realizing that AI doesn't work and nobody wants it shoved into their product. It's unreliable, a privacy nightmare, and won't be accurate enough to trust with anything for years. It also degenerates over time due to recursion from training on its own material.
  • AI of the LLM variant is very close to a crash to almost nothing. As too many assholes in "leadership positions" have staked their future on it, obviously they will lie and misdirect in order to delay the popping of the bubble as long as they can.

  • "Go faster!" meaning I know you're trying. Heck I'm probably not trying as hard as I could. That's part of why I believe it.

    "The stakes are high." meaning normally we don't work on anything important, but THIS is important. Or because nobody has a proven reason to spend as much money as we have... we need to dig deeper/harder/faster/stronger.

    Probably worth looking up the "fast is slow, but slow is smooth and smooth is fast" type of thinking from high stakes operators (SWAT, SEAL's, etc). There is an

  • This statement is really just positioning Sundar to inflate his workforce again, going on hiring sprees because "every investor should care about AI".

    Two years from now, this will also give him the headcount to lay off thousands again, making investors happy because "Sundar is eyeing the bottom line like a hawk!"

  • They own traffic on the internet to the tune of 90% search share.
    They own a platform with some estimates as high as 72% of the world's phones.
    YouTube owns 25% of the streaming share.
    YouTube owns nearly 80% of the online video market (Datanyze).
    Their browser is the default for the internet at nearly 70% market share.
    Need we mention Gmail is effectively the default email service on the web?

    Google Docs? Bueller Bueller?

    There is no AI race in which Google isn't neck-and-neck. They may have been caught

  • They can double down on AI, but they can't double down on ad revenue at the same time. The game has changed; we are in a world where we don't necessarily need to see ads or web pages, and people have seen how nice it is to get a direct answer instead of heaps of shit. But Google was using that shit to make their ads more attractive; they can't serve links that are too good, or nobody looks at the ads. What are they going to do now?
  • 1. Stop giving us AI if we don't ask for it.
    2. Go back 15 years, and give us searches that have a high signal-to-marketing, er, noise ratio.
