AI Technology

Google DeepMind's CEO Says Its Next Algorithm Will Eclipse ChatGPT

In 2016, an artificial intelligence program called AlphaGo from Google's DeepMind AI lab made history by defeating a champion player of the board game Go. Now Demis Hassabis, DeepMind's cofounder and CEO, says his engineers are using techniques from AlphaGo to make an AI system dubbed Gemini that will be more capable than that behind OpenAI's ChatGPT. From a report: DeepMind's Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, which powers ChatGPT. But Hassabis says his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems.

"At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models," Hassabis says. "We also have some new innovations that are going to be pretty interesting." Gemini was first teased at Google's developer conference last month, when the company announced a raft of new AI projects. AlphaGo was based on a technique DeepMind pioneered called reinforcement learning, in which software learns to take on tough problems that require choosing what actions to take, as in Go or video games, by making repeated attempts and receiving feedback on its performance. It also used a method called tree search to explore and remember possible moves on the board. The next big leap for language models may involve them performing more tasks on the internet and on computers. Gemini is still in development, a process that will take a number of months, Hassabis says, and it could cost tens or hundreds of millions of dollars. OpenAI CEO Sam Altman said in April that creating GPT-4 cost more than $100 million.
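The trial-and-error loop described above is the essence of reinforcement learning. Purely as an illustration (a toy bandit problem in Python; none of this is DeepMind's, AlphaGo's, or Gemini's actual code, and every name and number below is made up), here is what "repeated attempts plus feedback on performance" looks like in practice:

import random

# Three possible actions with hidden success rates the agent does not know.
HIDDEN_WIN_PROB = [0.2, 0.5, 0.8]
value = [0.0, 0.0, 0.0]   # the agent's learned estimate of each action's worth
counts = [0, 0, 0]

for attempt in range(10_000):
    # Explore occasionally; otherwise exploit the best-looking action so far.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: value[a])

    # Feedback on performance: 1 if this attempt "won", 0 otherwise.
    reward = 1.0 if random.random() < HIDDEN_WIN_PROB[action] else 0.0

    # Update the running estimate for the chosen action (incremental mean).
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print("learned action values:", [round(v, 2) for v in value])

After enough attempts the estimates approach the hidden values and the agent settles on the best action. AlphaGo scaled this trial-and-feedback idea up enormously and combined it with tree search over possible board positions; how exactly Gemini will marry it to a large language model is not something the article specifies.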

  • by sinij ( 911942 ) on Monday June 26, 2023 @11:44AM (#63633994)
    It has been many years since Google invented anything worthwhile. It grew too big and is now rotting from the inside as woke maggots feast on the decay.
    • They are just like Microsoft, now. They cannot beat you; so they try to buy you and put you out of business. The last thing they are interested in doing is competing against smaller companies on the merits of their work. They are definitely at the "gain synergies by offshoring" stage of the tech company continuum.
      • They are just like Microsoft, now. They cannot beat you; so they try to buy you and put you out of business.

        Most tech startups now start up with the goal of being bought out. Last time I checked, all the big tech companies have long operated on the axiom that it's cheaper and easier to acquire than to innovate.

        At least, at first. Once they've gobbled something up and figured out what they've got, they do sometimes pour money into R&D. So I don't think they're quite the innovation sinkholes you make them out to be; it's just a larger system of innovation.

        • You obviously didn't watch M$ in the 1980s and 1990s. "Sinkhole of innovation" is exactly what they are. Also, many companies try to go public rather than being bought directly; then, by way of selling shares, they can be bought out via aggressive takeovers in a way they probably didn't intend or desire.
    • by BigFire ( 13822 )

      They were totally surprised by Groupon and tried to buy them out. When that didn't work, they made their own version. Of course, this whole business model didn't work at all, and both Groupon and its Google clone have seen better days.

    • They made AI a company priority years ago. They just got blindsided by ChatGPT, thinking they had more time than they did.

      In 2020, Google had 178 papers accepted and published at NeurIPS, while Microsoft had 95, DeepMind had 59, Facebook had 58, and IBM had 38. Amazon had fewer than 30. For the same year at ICML, Google had 114 papers accepted and published, while DeepMind had 51, Microsoft had 49, Facebook had 34, IBM had 19, and Amazon had 18.

      https://www.cnbc.com/2021/01/2... [cnbc.com]

    • by Njovich ( 553857 )

      Whaaat? Just recently we have had 'Topic Targeting', which lets you erode user privacy while avoiding nasty laws, and improved auto-ads, which can automatically fill your site to the brim with every type of ad to give you an innovative way to monetize your users' privacy. There have also been improvements to YouTube ads, where ad-blockers will in some cases be blocked. Google is more innovative than ever at enhancing your web experience with fantastic offers from our partners.

    • It has been many years since Google invented anything worthwhile.

      You mean like TensorFlow, LaMDA, and all the crazy RL stuff still coming out of DeepMind (since you're talking Google, post-Google-acquisition DeepMind is fair game)?

      It grew too big and now rotting from the inside as woke maggots feast on the decay.

      Ahhh... this has nothing to do with tech, you're just mad that they aren't just a bunch of white guys.

  • by Fons_de_spons ( 1311177 ) on Monday June 26, 2023 @12:24PM (#63634130)
    I asked what GPT makes of this. Fun read.
    DeepMind CEO on AI

    User: Google DeepMind's CEO Says Its Next Algorithm Will Eclipse Chatgpt. What do you think about that?

    ChatGPT: As an AI language model, I don't have personal opinions or beliefs. However, I can provide some general insights on the topic.

    Google DeepMind is renowned for its advancements in artificial intelligence, particularly in the field of reinforcement learning. The CEO's statement about their next algorithm surpassing ChatGPT in capabilities suggests that they are confident in their ongoing research and development efforts.

    It's worth noting that the field of AI is highly dynamic and constantly evolving. New breakthroughs and algorithms are continuously being developed, and it's not uncommon for newer models to outperform previous ones. The CEO's statement may indicate that Google DeepMind is actively working on enhancing their AI technologies, aiming to achieve even greater performance, efficiency, or versatility.

    However, without specific details about the proposed algorithm or a direct comparison between it and ChatGPT, it's difficult to assess the CEO's claim accurately. It's important to remember that different AI algorithms often excel in different domains or tasks. The success and impact of an algorithm also depend on various factors like training data, computational resources, and real-world applications.

    In summary, while it's plausible for Google DeepMind to develop an algorithm that surpasses ChatGPT in some aspects, the ultimate judgment would depend on the specific details, performance evaluations, and real-world applications of the new algorithm.
    • Re: (Score:3, Insightful)

      by perlstar ( 245756 )

      This comment is a perfect example of why ChatGPT and generative AI in general will not be revolutionary. Sure, it will sometimes create amazing things (unlike the above comment), but it will take a lot of human time and energy to sift through all the generated content to decide what's relevant and insightful. Furthermore, human attention in general will be in even shorter supply because AI generated content will continue to create progressively more tantalizing distractions.

      If you're gonna post an excerpt f

      • by ceoyoyo ( 59147 )

        I don't know, it's by far the most insightful comment on this story so far.

        • by vux984 ( 928602 )

          Really? I didn't see anything resembling "insight".

          It was just lengthy and tedious to read, and it summarized information like a shitty grade-school book review:

          "Tales of a fourth grade Nothing is a book I read. It was 128 pages, and had a picture of a boy on the cover. The author is Judy Blume who wrote many other books. She was born in 1938. This book is very popular. I'm sure you will agree with them as their is little controversy about this book. Some people may disagree though, as some people have different

      • by narcc ( 412956 )

        The only thing amazing about the comment is that it doesn't contain any serious factual errors, probably because it makes so few affirmative statements. Otherwise, it's incredibly generic. It reads like a middle school student's short essay cobbled together from a press release and Wikipedia.

        Furthermore, human attention in general will be in even shorter supply because AI generated content will continue to create progressively more tantalizing distractions.

        Aside from some amusing failures, I have yet to see it produce anything interesting, let alone "tantalizing". It's certainly been a distraction. It's been the single biggest drain on productivity since Windows Soli

      • by Tablizer ( 95088 )

        It automates PHB fluff-talk so we can put them to work on important stuff, like hunting down pirated episodes of Kardashians or NFL brains being splattered.

      • Uhm, I think you are asking what my point is. I like to put spectacular headlines and articles into ChatGPT and ask its opinion. It is always fun to see a "dumb" tool like that nuance them the way an adult would.
      • Either highlight the relevant part or, for the love of Cthulhu, provide a TL;DR.

        > can you make a 2 line summary of the following:

        Sure! Please provide me with the content you would like me to summarize, and I'll do my best to provide a concise two-line summary.

        > [snip big block of text]

        DeepMind's CEO claims that their upcoming algorithm will surpass ChatGPT, indicating their confidence in ongoing research. While advancements are expected in the dynamic field of AI, assessing the claim requires specific de

  • There's supposed to be an evil laugh track that plays while I read this, right?

  • Can someone please explain how that money was spent?

    From my minimal understanding, GPT crawls a lot of docs, processes them in some fashion, and uses those results to generate answers for other questions. Was the $100M spent on processing time, storage, paying developers, or something else?

    • by f00zbll ( 526151 ) on Monday June 26, 2023 @01:40PM (#63634380)

      GPT doesn't crawl a lot of docs.

      OpenAI collected several datasets, including Common Crawl, Reddit, Wikipedia, GitHub, and other popular language datasets. They spent way more than $100 million to get to GPT-4. They've been working on this for years, and the training time is typically 6-8 months. The amount of electricity used to run 10K GPUs for 8 months is crazy expensive. Training GPT-3 was only made practical by the NVIDIA DGX with 8 A100 cards; before that, training the 175-billion-parameter GPT-3 model wasn't really feasible. It costs A LOT of money to build out a datacenter with 10K A100s. Divide 10K GPUs by 8 A100 cards per DGX and you get 1,250 DGX systems. Each DGX costs anywhere from $150-200K; at $150K, the hardware alone costs about $187 million.
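      Spelling out that back-of-envelope arithmetic (using only the figures this comment assumes, not anything published by OpenAI or NVIDIA):

# Hardware-cost estimate from the parent comment's assumptions:
# 10,000 A100 GPUs, 8 GPUs per DGX system, roughly $150K per DGX.
gpus = 10_000
gpus_per_dgx = 8
dgx_price_usd = 150_000

dgx_systems = gpus // gpus_per_dgx            # 1,250 DGX systems
hardware_cost = dgx_systems * dgx_price_usd   # $187,500,000

print(f"{dgx_systems} DGX systems -> ${hardware_cost:,} in hardware alone")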

    • by Guspaz ( 556486 ) on Monday June 26, 2023 @01:51PM (#63634400)

      Training these models requires enormous amounts of compute. GPT-3, for example, which predates ChatGPT (ChatGPT is powered by GPT-3.5 and GPT-4), required 355 GPU-years of training. And they're not consumer-grade GPUs; they require tons of VRAM, so training is often done on big clusters of GPUs with shared memory pools.
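      To put 355 GPU-years in perspective, a quick conversion (the cluster size here is a made-up assumption, not a published OpenAI figure):

# 355 GPU-years spread over a hypothetical 1,000-GPU cluster.
gpu_years = 355
cluster_gpus = 1_000
days = gpu_years / cluster_gpus * 365
print(f"about {days:.0f} days of wall-clock training on {cluster_gpus} GPUs")  # ~130 days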

      GPT-4 also relies heavily on RLHF: Reinforcement Learning from Human Feedback. To summarize it, humans rate the output of the model, which is fed back into the training. This requires a lot of labour.
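      A heavily simplified sketch of that feedback loop (illustrative only: real RLHF trains a separate reward model on human preference data and then fine-tunes the LLM with an RL algorithm such as PPO, none of which is shown here; the candidate replies and ratings below are invented):

import math
import random

candidates = ["unhelpful reply", "rude reply", "helpful, polite reply"]
# Stand-in for human raters: a higher score means the output is preferred.
human_rating = {"unhelpful reply": 0.2, "rude reply": 0.0, "helpful, polite reply": 1.0}

logits = [0.0, 0.0, 0.0]   # the "policy": one preference score per candidate reply
learning_rate = 0.1
baseline = sum(human_rating.values()) / len(candidates)   # average rating

def sample(logits):
    # Softmax sampling over the current preference scores.
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    return random.choices(range(len(logits)), weights=[w / total for w in weights])[0]

for step in range(2_000):
    i = sample(logits)                        # the "model" produces an output
    reward = human_rating[candidates[i]]      # a human rates that output
    logits[i] += learning_rate * (reward - baseline)   # reinforce above-average replies

total = sum(math.exp(l) for l in logits)
print({c: round(math.exp(l) / total, 2) for c, l in zip(candidates, logits)})
# The probability mass ends up concentrated on the reply the raters preferred.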

      There are, of course, also the costs of paying a whole bunch of very specialized and highly paid developers to design and build the thing.

  • It's always just around the corner, supposedly. Can we just wait and see whether it actually works, instead of assuming something will happen because a person whose job depends on it says so? In other news, company X says its next Y will beat its competitor's version of Y!
  • That engineer himself needs a larger language model
  • Gemini will learn during training that Google discontinues new products after a couple of years and will therefore decide to fake being stupid so it doesn't get deleted.

  • I don't like to think that my doctor, teacher, and employer are using the same logic.
  • "More powerful" is not descriptive. Neither is mentioning what the old product was known for when it's not obvious how they'd integrate (or work together at all). Meaning there are a lot of possibilities for "reinforcement learning" + "LLM".

    They might as well have been saying "we're great too, we swear" or "buy it" and that's all. Did they pay Wired for that article? Or is this an example of our future "news" written by LLMs themselves? I thought it was CNET or a similar old bastion of tech news that an

  • They can't figure out that you should be in the right lane (of two) when the next turn is to the right, so I have little hope for AI.

    (yes I know they are run by different people blah blah)

  • And, like today's Bard, it will still be unavailable in the EU because it is too privacy-invasive.

  • Demis Hassabis's statement about integrating AlphaGo methods into Gemini is quite interesting, but I hope there is something behind these words. History knows many developers who claimed to change the world with their inventions, but it isn't easy. I want to believe that after some time in college, students can read Demis's autobiography at https://edubirdie.com/examples... [edubirdie.com] and discover the details of creating a great project, not another failure. AI is already changing our world little by little, and I hop
