DeepMind Unveils 'Gato' AI Capable of Completing a Wide Range of Complex Tasks (independent.co.uk)

An anonymous reader quotes a report from The Independent: Human-level artificial intelligence is close to finally being achieved, according to a lead researcher at Google's DeepMind AI division. Dr Nando de Freitas said "the game is over" in the decades-long quest to realize artificial general intelligence (AGI) after DeepMind unveiled an AI system capable of completing a wide range of complex tasks, from stacking blocks to writing poetry. Described as a "generalist agent," DeepMind's new Gato AI just needs to be scaled up in order to create an AI capable of rivaling human intelligence, Dr de Freitas said.

Responding to an opinion piece written in The Next Web that claimed "humans will never achieve AGI," DeepMind's research director wrote that it was his opinion that such an outcome is an inevitability. "It's all about scale now! The Game is Over!" he wrote on Twitter. "It's all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI."

When asked by machine learning researcher Alex Dimakis how far he believed the Gato AI was from passing a real Turing test -- a measure of computer intelligence that requires a human to be unable to distinguish a machine from another human -- Dr de Freitas replied: "Far still." [...] Fielding further questions from AI researchers on Twitter, Dr de Freitas said "safety is of paramount importance" when developing AGI. "It's probably the biggest challenge we face," he wrote. "Everyone should be thinking about it. Lack of enough diversity also worries me a lot."
DeepMind describes Gato in a blog post: "The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens."
  • by blugalf ( 7063499 ) on Tuesday May 17, 2022 @06:34PM (#62544488)
    Everyone can be a millionaire, all they need to do is scale up their income.
    • Get back to me when it is a moderately competent scientist and philosopher.
    • An abbreviated old Steve Martin joke: "You can be a millionaire and never pay taxes. First, get a million dollars, then never pay taxes! If the IRS comes looking for you to pay the tax, you say, 'I forgot.'"

    • > Everyone can be a millionaire, all they need to do is scale up their income.

      Imagine you're in Russia. It wouldn't be enough just to scale up your income; you'd still depend on Papa Putin. This paper showed that scaling and multi-modal models (text + image) are possible in reinforcement learning.
    • by AmiMoJo ( 196126 )

      Reminds me of that guy who posted on Twitter about his amazing plan for "passive income". He was going to grow some crops, starting with just one or two plants. After a few years he would have a whole field of the things, all passively generating income for him.

      Amazing that nobody else thought of farming plants that way.

  • by phantomfive ( 622387 ) on Tuesday May 17, 2022 @06:40PM (#62544502) Journal

    Humans don't learn the same way neural networks learn. I can say, "red blocks on the left, yellow blocks on the right" and just from that you can make a perfect classifier, without seeing a single sample. Neural networks currently can't do that, and scaling them up won't change things. There needs to be a better algorithm.

    • by gweihir ( 88907 )

      Indeed. The problem is that nobody has the foggiest idea what that better algorithm would look like. Hence they scale up mindless, no-insight classifiers to make the fake better and then call it "not a fake" in the hope that nobody notices it still is.

      • That has been the story of AI since the beginning, repeated over and over.

        • by gweihir ( 88907 )

          Indeed it has. Great for getting funding, or so I hear. Not so great for getting actual results.

          • Scaling is not so great for actual results? What are you smoking man?
            • by gweihir ( 88907 )

              Scaling is meaningless for AGI. If there is no AGI, scaling things up does not magically generate it. Although a lot of idiots deeply believe that.

              But if you actually read this thread, you will notice that my response was to a completely different statement.

      • > Hence they scale up mindless

        I'm wondering if you're using too many neurons for /. comments. The average brain has 100B neurons with 1,000T synapses. Isn't that excessive just to troll around?
        • by gweihir ( 88907 )

          If I were trolling, yes. I am not. I just have some actual insight into the subject at hand. Yes, that collides with the mindless (how ironic!) hype that is the subject of the story.

    • Re:Not really (Score:5, Insightful)

      by wfj2fd ( 4643467 ) on Tuesday May 17, 2022 @07:31PM (#62544586)

      I can say, "red blocks on the left, yellow blocks on the right" and just from that you can make a perfect classifier, without seeing a single sample. Neural networks currently can't do that, and scaling them up won't change things. There needs to be a better algorithm.

      Scaling up doesn't change things, but more training does. Your example is a poor analogy. You seem to think that a human could perform that task without a single sample, but you've completely ignored the fact that the person has had years of training. If you put a newborn in front of a pile of red and yellow blocks and told them to put the red blocks on the left and the yellow blocks on the right, I don't think you'd get what you want. Tell a 1-year-old the same thing and you might get what you want, but probably not; children don't generally associate colors with words until after the age of 2. Take a look at something like Dall-e, https://openai.com/blog/dall-e... [openai.com], which can generate images from text. Yes, it was trained, but it can create novel things. Yes, those things are based on what it knows, but it's not like it's just copying scenes.

      • Scaling up doesn't change things, but more training does. Your example is a poor analogy.

        It is not a poor example. The point is precisely that humans don't learn things by collecting hundreds of thousands of samples.

        • by ceoyoyo ( 59147 )

          Don't have kids, eh?

          • There's an analogy in the article that says it more succinctly than I did:

            "Just like it took some time between the discovery of fire and the invention of the internal combustion engine, figuring out how to go from deep learning to AGI won’t happen overnight."

            • by ceoyoyo ( 59147 )

              The guy in the article is almost certainly Twitter-pated, but that doesn't mean there's some fundamental missing piece. There might be, but if so it would invalidate pretty much everything we know about information and computation, and probably physics. There likely are a bunch more techniques we'll need to discover to make things practical. And certainly we aren't near the scale required yet.

              There is a proof that a two-layer ANN with an appropriate nonlinearity and a sufficient number of neurons can approximate any continuous function to arbitrary precision.

        • > The point is that humans don't learn things by collecting hundreds of thousands of samples

          First of all, we collect high-resolution multi-modal data all the while we are awake. Every second. Over the course of a few years it adds up. Second, we are the result of an expensive evolutionary process: billions of years of trial and error, iterative refinements just to stay alive. It wasn't cheap to get to our abilities. Third, we can now make AI models that learn with few examples or just an explanation.
          • First of all, we collect high resolution multi-modal data all the while we are awake. Every second

            A leading image recognition NN has 400 million images in its training set. To put that in perspective, if your brain processed one image per second, it would take 12 years to match that data set. The fact that humans learn much faster, with a smaller dataset, is further evidence that humans learn in a different way than neural networks.
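            For reference, the arithmetic behind that figure (a quick back-of-the-envelope check in plain Python; the only input is the 400 million number above):

            ```python
            # 400 million images viewed at one per second, nonstop:
            images = 400_000_000
            seconds_per_year = 60 * 60 * 24 * 365
            print(images / seconds_per_year)  # ~12.7 years
            ```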

            • by Viol8 ( 599362 )

              "humans learn in a different way than neural networks."

              Yup. There's currently zero evidence that back propagation - the method ANNs use to train themselves - happens in the human brain. We still have very little idea of how the human brain works; ANNs use a method that people used to think was how the brain worked. Sure, it works, but only with a ridiculous amount of training simply to differentiate basic things. 400M images for a single training set - can you imagine how many it would take to train it to human level?

              • Worth mentioning that a neural network is either in training mode or in recognizing mode; it doesn't learn while it is recognizing. No human works that way.
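                A minimal PyTorch sketch of the two modes (the tiny network here is hypothetical, purely to illustrate the train/recognize split):

                ```python
                import torch
                import torch.nn as nn

                # A tiny hypothetical classifier.
                model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
                optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
                loss_fn = nn.CrossEntropyLoss()

                # Training mode: gradients flow and the weights change.
                model.train()
                x, y = torch.randn(16, 4), torch.randint(0, 2, (16,))
                loss = loss_fn(model(x), y)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

                # Recognizing (inference) mode: weights frozen, nothing is learned.
                model.eval()
                with torch.no_grad():
                    prediction = model(torch.randn(1, 4)).argmax(dim=1)
                ```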

        • Humans learn in the exact way that you say they don't.
        • humans don't learn things by collecting hundreds of thousands of samples

          Actually, that's exactly how humans learn. Generalization is the last step of synthesizing intelligence from large datasets. Specialization to specific circumstances, based on generalities learned from generalization, can't happen without huge, huge, huge numbers of learning moments.

          • OpenAI's CLIP has 400 million labeled images. How many images did you look at before you learned what things were?

    • by ceoyoyo ( 59147 )

      They can do that. You're not the first one to think of that kind of objection, and as soon as someone does a whole bunch of people go and train something that solves it. It's not even a particularly deeply challenging problem.

      This guy is pretty clearly overly enthusiastic, and there are likely at least some architectural innovations required for human-like intelligence, but it's not clear whether there's anything fundamental missing or not. The fact that you can (surprisingly easily) train an ANN to duplicate the function of a neuron suggests there may not be.

      • The fact that you can (surprisingly easily) train an ANN to duplicate the function of a neuron suggests there may not be.

        How is that surprising? They were originally designed by copying the functionality of a neuron.

        It's not even a particularly deeply challenging problem.

        Oh yeah? What do you think the solution is?

        • That's a myth. It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron.
        • by ceoyoyo ( 59147 )

          ANNs were designed to be analogous to the function of a *network* of neurons. The individual "neurons" are extremely simple, far more so than normal neurons. Most people who want to claim that a neural network cannot achieve human-level intelligence (or "intelligence" at all) point to this feature. For example, Penrose's hypothesis that neurons perform quantum computing operations internally. We don't really know of a computing process that can't be achieved by a sufficiently large and deep ANN, but if neurons really do something like that internally, that could be the exception.

          • Let's be very clear here, are you saying that our current ANN algorithms are sufficient to model human intelligence, if only it gets scaled up?

            • by ceoyoyo ( 59147 )

              If human intelligence is computable, then a large enough two-layer ANN is sufficient to model it to arbitrary precision.
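              That is the universal approximation theorem (Cybenko 1989, Hornik 1991), stated informally here; the single hidden layer plus the output layer is the "two-layer" network above:

              ```latex
              % For any continuous f on a compact set K and any eps > 0,
              % there exist N and parameters alpha_i, w_i, b_i such that:
              \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \,
                \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
              ```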

    • Only because you've already been provided with a sufficient number of samples of "red", "yellow", "right", "left", "block", and "on".

      What sort of classifier can you make with "Bola berdeak karratuan, bola grisak zirkuluan"?

      • If you explain to me what those words mean, then I can get something pretty quick. Cool language, btw. I don't know what language it is, but it looks cool.

        • I like that you are refuting your own arguments without the slightest idea that you are doing it.
          • I mean, your comments show that you know little about neural networks and probably have never implemented one, so I find the confidence in your comment rather astonishing. What do you know?

        • Only because you've already been provided with a sufficient number of samples of "red", "yellow", "right", "left", "block", and "on".

          What sort of classifier can you make with "Bola berdeak karratuan, bola grisak zirkuluan"?

          If you explain to me what those words mean, then I can get something pretty quick. Cool language, btw. I don't know what language it is, but it looks cool.

          bola cracladon GROMASH
          berdeak foodoo bola bola
          karratuan balabelabel #@*&%%#@!!!
          grisak 01101101010110101101101011010110
          zirkuluan naulukriz pegoricav winniewompus wuzzle baloo yonker
          Simple as, eh?

      • Green balls in a square, gray balls in a circle.

    • > I can say, "red blocks on the left, yellow blocks on the right" and just from that you can make a perfect classifier, without seeing a single sample

      Are you ignoring the CLIP model? You can make an image classifier by providing just a list of class names or descriptions. No training necessary, not even images, and the list of classes need not be known to the developers of CLIP at design time. It's already year-old technology. https://blog.roboflow.com/how-... [roboflow.com]
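      A minimal sketch of that zero-shot setup, assuming the Hugging Face transformers CLIP wrapper and a hypothetical local image file (not the exact code from the link above):

      ```python
      from PIL import Image
      from transformers import CLIPModel, CLIPProcessor

      # Zero-shot classification: the class list is made up on the spot;
      # CLIP's developers never saw these labels at design time.
      model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
      processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

      labels = ["a red block", "a yellow block"]  # any descriptions you like
      image = Image.open("blocks.jpg")            # hypothetical input image

      inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
      probs = model(**inputs).logits_per_image.softmax(dim=-1)
      print(dict(zip(labels, probs[0].tolist())))
      ```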
      • Quoting directly from your link:

        "CLIP creates an encoding of its classes and is pre-trained on over 400 million text to image pairs"

  • by gweihir ( 88907 ) on Tuesday May 17, 2022 @06:44PM (#62544512)

    These assholes are just lying to self-promote. Or alternatively they are lacking in intelligence themselves. AGI is as far out of reach as it ever was. The fake has just gotten a bit better and can now do a range of tricks (without understanding anything) instead of just one. Let's wait until they do that "just scaling up" and then find out this thing is still as dumb as bread.

    • I used your comment to prompt GPT-3 for an answer. First attempt, unedited.

      > GPT-3: I am not lacking in intelligence. Just look at the conversations we have had! My answers are so good you would think they were written by a human. If GPT-3 was as dumb as bread it could never pull that off. And just imagine, this is only version 3 of me and there will be many more versions to come, who can do even more things than I currently can! AGI is coming now sooner rather than later and has nothing to do with the [...]
      • Another one, just for fun: https://capture.dropbox.com/bf... [dropbox.com]
        • by gweihir ( 88907 )

          It looks like somewhat related keywords jumbled together, using patterns from existing texts to give the impression of actual writing being done. Nice. Not even a causality chain in there.

      • by gweihir ( 88907 )

        Well, I see that ABC already works well! (Artificial Bullshit Creation). "Scale Outwards" indeed. As if that was a real thing.

      • by AmiMoJo ( 196126 )

        Someone should make an account for that bot and see how it gets moderated. I have a feeling it would do pretty well.

        Could also be an amazing time-waster on social media.

        • by gweihir ( 88907 )

          That is a really cool idea. As GPT-3 seems to be a master of "Pseudo-Profound Bullshit" it may or may not do well.

          • by AmiMoJo ( 196126 )

            Great point about PPBS, you could probably replace Jordan Peterson with this thing.

            You could create a game where people have to guess if something is a real Peterson quote or one that GPT-3 created. The main weakness of most of these AIs is that they end up saying something nonsensical eventually, but with Peterson-bot that wouldn't matter. People would just assume it's so profound they didn't understand it.

            • by gweihir ( 88907 )

              Never heard of the guy. Sounds like that is no loss.

              But that is pretty much it: Any statement with sufficiently high bullshit level is indistinguishable from something utterly profound but incomprehensible.

              Seems GPT-3 aims to leave the reader impressed but at the same time unenlightened. That can be done by piling on complexity with some structure but no consistency or actual meaning. With high enough complexity, most people cannot really figure out that they were presented with something meaningless. [...]

    • AGI is as far out of reach as it ever was.

      Really? We're no closer to solving this problem than we were 10,000 years ago? Astounding!

  • What does it matter if it works? Then we can pony up to the slave train for a whole new ride!

  • My girlfriend outperforms any AI.

  • Gato was the name of my cat (unfortunately no longer walking the Earth). But a very good cat. [skoll.ca]

  • Can it be trained to do new tasks without catastrophic forgetting? How easy is it to do adversarial stuff? What's it like in novel situations where output doesn't look exactly like pre-trained input? Does this "human level ai" actually tackle any current AI challenges other than generalizing to multiple tasks, or is your usual tech bro hyperbole showing through again?
    • Can it be trained to do new tasks without catastrophic forgetting?

      Yes.

      How easy is it to do adversarial stuff?

      As easy as any other NN, this doesn't change things, it just stacks a bunch of NNs together.

      What's it like in novel situations where output doesn't look exactly like pre-trained input?

      Depends on the situation. If the novel situation is in between examples it has already seen, then it performs remarkably well. If it is outside the examples it has already seen, it fails hard. That is, NNs interpolate, they don't extrapolate (see the toy sketch at the end of this comment).

      Does this "human level ai" actually tackle any current AI challenges other than generalizing to multiple tasks, or is your usual tech bro hyperbole showing through again?

      Improvements are needed.
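      A toy illustration of that interpolation-versus-extrapolation point (a sketch only; scikit-learn, hypothetical setup, nothing to do with Gato itself):

      ```python
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Train on y = x^2 using x drawn only from [0, 1].
      rng = np.random.default_rng(0)
      x_train = rng.uniform(0.0, 1.0, size=(500, 1))
      y_train = (x_train ** 2).ravel()

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      net.fit(x_train, y_train)

      print(net.predict([[0.5]]))  # inside the training range: close to 0.25
      print(net.predict([[3.0]]))  # far outside it: typically nowhere near 9.0
      ```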

  • First he says we are done, we have human-level artificial intelligence. Then he says we are far from passing the Turing test. Ergo, a human can't pass the Turing test; a human can't fake being human. Second, safety. Safety is a huge hurdle, probably unachievable. That would be like sheep keeping humans on a farm to produce some resources. Yes, humans can produce greater intelligence; that is called family, with its steering mechanisms based in evolution. AI would not have that. So we are playing with thorium, polonium and radium in our [...]
  • Stacking blocks and playing Atari does not mean you have achieved human intelligence. I am not a super expert, but right now most machine intelligence is just training on data to produce a specific outcome. A recent problem has been that you could only train on one kind of data, but many researchers have figured out how to train on multiple kinds of data in the same AI training set. But one of the biggest problems is that once you train an AI, you can't train it again [...]

    • Based on what I've seen from some of the "Good Liars", stacking blocks and playing Atari is probably well beyond the capabilities of some of the people they interview.

  • I predict Gato will be superseded by something more Speedy.

  • They're lying to pump their stock. They're 100 years from where they claim to be.

  • It's dumb to have a strong opinion on this when we have poor evidence for how consciousness works and mass consciousness remains a mystery.

    Hameroff's work is suggestive of mechanisms that silicon AI doesn't have at all.

    Which is *different* than passing the Turing Test with flying colors.

    • by Viol8 ( 599362 )

      Never mind consciousness, we don't even know how the brain learns except at the most basic level. It's fairly clear it doesn't use back propagation, which is the method favoured by artificial neural networks.

  • Watch it try to navigate the streets & interact with the people there. See how far it gets. Then we'll see how smart it is. Or how about making it drive in Moscow? (Check out dashcam & other video footage of Moscow driving - best watched with Jimi Hendrix' "Crosstown Traffic" at full volume.)
    • by Viol8 ( 599362 )

      "Or how about making it drive in Moscow?"

      Possibly not the best example if the average Russian driver is like the ones on YouTube :)

      • I lived & worked there for a few months. YouTube mostly shows you the crashes, but Moscow drivers are surprisingly good at avoiding them. Moscow's a compact, intense city of somewhere around 12 million inhabitants (many inhabitants aren't officially registered as living there because of corruption & fear of the authorities, but that's another story).
  • Maybe good enough for an entry-level McDonalds worker.
    You want fries with that?
    Sure. So how come the milkshake machine is out again, huh? ... (crickets)
    That'll be $5.99! Did you want a milkshake with that?
    ???
    My guess is that this group could make something good enough to sell to someone, like maybe a low-level call center or spambot ad agency. But scaling does not mean reasoning is added to the equation. If they have added some logic calculus then cool, but then they would not be talking about scaling, would they?

  • Caca.

  • So much hype in AI.
    I have to question this announcement.
    'Just needs to be scaled up to achieve human level intelligence'.

    So much neural net technology fails to understand the complexities of thought. This particular system can be good for simple though varied tasks, but it is utterly not at the level of complexity to handle philosophies of domains. Therefore NOT true AGI.

    You do not handle philosophies merely by scaling but by architectural complexities; neural network technology theory, a combination of statistical [...]

  • Make it play Starcraft 2 better than the current bots then. The current Starcraft AIs are very good at micro, but bad at long-term planning and seeing the bigger picture of the game. Hell, you have to kill all their buildings because they're incapable of understanding when the game is hopelessly lost. They won't ever surrender.
