DeepMind Unveils 'Gato' AI Capable of Completing a Wide Range of Complex Tasks (independent.co.uk)
An anonymous reader quotes a report from The Independent: Human-level artificial intelligence is close to finally being achieved, according to a lead researcher at Google's DeepMind AI division. Dr Nando de Freitas said "the game is over" in the decades-long quest to realize artificial general intelligence (AGI) after DeepMind unveiled an AI system capable of completing a wide range of complex tasks, from stacking blocks to writing poetry. Described as a "generalist agent," DeepMind's new Gato AI just needs to be scaled up in order to create an AI capable of rivaling human intelligence, Dr de Freitas said.
Responding to an opinion piece written in The Next Web that claimed "humans will never achieve AGI," DeepMind's research director wrote that it was his opinion that such an outcome is an inevitability. "It's all about scale now! The Game is Over!" he wrote on Twitter. "It's all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI."
When asked by machine learning researcher Alex Dimikas how far he believed the Gato AI was from passing a real Turing test -- a measure of computer intelligence that requires a human to be unable to distinguish a machine from another human -- Dr de Freitas replied: "Far still." [...] Fielding further questions from AI researchers on Twitter, Dr de Freitas said "safety is of paramount importance" when developing AGI. "It's probably the biggest challenge we face," he wrote. "Everyone should be thinking about it. Lack of enough diversity also worries me a lot." DeepMind describes Gato in a blog post: "The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens."
Well that's great news, (Score:5, Funny)
This is not general intelligence (Score:2)
Re: (Score:2)
An abbreviated old Steve Martin joke: "You can be a millionaire and never pay taxes. First, get a million dollars then never pay taxes! If the IRS comes looking for you to pay the tax you say 'I forgot'"
Re: (Score:2)
Imagine you're in Russia. It wouldn't be enough just to scale up the income; you'd still depend on papa Putin. This paper showed that scaling and multi-modal models (text + image) in reinforcement learning are possible.
Re: (Score:2)
Reminds me of that guy who posted on Twitter about his amazing plan for "passive income". He was going to grow some crops, starting with just one or two plants. After a few years he would have a whole field of the things, all passively generating income for him.
Amazing that nobody else thought of farming plants that way.
Not really (Score:3)
Humans don't learn the same way neural networks learn. I can say, "red blocks on the left, yellow blocks on the right" and just from that you can make a perfect classifier, without seeing a single sample. Neural networks currently can't do that, and scaling them up won't change things. There needs to be a better algorithm.
Re: (Score:3)
Indeed. The problem is that nobody has the foggiest idea what that better algorithm would look like. Hence they scale-up mindless, no-insight classifiers to make the fake better and then call it "not a fake" in the hope that nobody notices it still is.
Re: (Score:2)
That has been the story of AI since the beginning, repeated over and over.
Re: (Score:2)
Indeed it has. Great for getting funding, or so I hear. Not so great for getting actual results.
Re: (Score:2)
Re: (Score:2)
Scaling is meaningless for AGI. If there is no AGI, scaling things up does not magically generate it. Although a lot of idiots deeply believe that.
But if you actually read this thread, you will notice that my response was to a completely different statement.
Re: (Score:2)
I'm wondering if you're using too many neurons for
Re: (Score:2)
If I were trolling, yes. I am not. I just have some actual insight into the subject at hand. Yes, that collides with the mindless (how ironic!) hype that is the subject of the story.
Re: (Score:2)
Re: (Score:2)
Sounds nicely complex, means nothing. In actual reality, nobody knows what general intelligence really does to arrive at its results.
Re:Not really (Score:5, Insightful)
I can say, "red blocks on the left, yellow blocks on the right" and just from that you can make a perfect classifier, without seeing a single sample. Neural networks currently can't do that, and scaling them up won't change things. There needs to be a better algorithm.
Scaling up doesn't change things, but more training does. Your example is a poor analogy. You seem to think that a human could perform that task without a single sample. But you've completely ignored the fact that the person has had years of training. If you put a newborn in front of a pile of red and yellow blocks and told them to put the red blocks on the left, and the yellow blocks on the right, I don't think you'd get what you want. Tell a 1-year-old the same thing and you might get what you want, but probably not. Children don't generally associate colors and words until after the age of 2. Take a look at something like Dall-e, https://openai.com/blog/dall-e... [openai.com], it can generate images from text. Yes, it was trained, but it can create novel things. Yes, those things are based on what it knows, but it's not like it's just copying scenes.
Re: (Score:3)
Scaling up doesn't change things, but more training does. Your example is a poor analogy.
It is not a poor analogy. The point stands: humans don't learn things by collecting hundreds of thousands of samples.
Re: (Score:2)
Don't have kids hey?
Re: (Score:2)
There's an analogy in the article that says it more succinctly than I did:
"Just like it took some time between the discovery of fire and the invention of the internal combustion engine, figuring out how to go from deep learning to AGI won’t happen overnight."
Re: (Score:2)
The guy in the article is almost certainly Twitter-pated, but that doesn't mean there's some fundamental missing piece. There might be, but if so it would invalidate pretty much everything we know about information and computation, and probably physics. There likely are a bunch more techniques we'll need to discover to make things practical. And certainly we aren't near the scale required yet.
There is a proof that a two-layer ANN with an appropriate nonlinearity and sufficient number of neurons can approximate any continuous function to arbitrary accuracy.
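To make that concrete, here is a minimal numpy sketch of the idea: one hidden tanh layer fit to sin(x) with hand-rolled gradient descent. The width, learning rate, and target function are arbitrary illustrative choices, not anything from the theorem.

```python
# Minimal sketch of a two-layer (one hidden layer) network approximating
# a continuous function. All hyperparameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

H = 50                                    # hidden width ("sufficient neurons")
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.01

for step in range(20000):
    h = np.tanh(x @ W1 + b1)              # hidden layer
    pred = h @ W2 + b2                    # linear output layer
    err = pred - y
    # Hand-rolled backprop of the mean squared error.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)      # tanh' = 1 - tanh^2
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("max abs error:", np.abs(pred - y).max())
```

Note the theorem only guarantees that some set of weights achieves any desired accuracy as the width grows; it says nothing about whether gradient descent will find them, which is part of the disagreement in this thread.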
Re: (Score:2)
Re: (Score:2)
but a few very bright people have voiced large concerns that we are dealing with an unknown that has already solved problems beyond our grasp of how it was managed.
I don't know what this means.
Re: (Score:2)
No, in chess and Go there is nothing mysterious about what the computers are doing at an algorithmic level. Some of the players might be mystified, but that's because they don't understand computers. Larry Kaufman, who is both an expert at chess and an expert at computer chess, says, "When a computer says one side has an advantage, we can explain in words what the advantage is."
Re: (Score:2)
experts have conveyed doubts of strong dangers to the immensely complex radically new devotion to seduce real intellect into the digital world.
Which experts? Give me some quotes and we can address them.
Re: (Score:2)
Of all the ML algorithms, deep learning is closest to how the brain works. The work was motivated by studying the brain and more recent work is connecting deep learning back to the brain. https://www.quantamagazine.org... [quantamagazine.org]
While back propagation is not done by the brain, this work suggests that some type of gradient technique might be key. Here is some more work suggesting possible feed forward gradient techniques. https://www.quantamagazine.o [quantamagazine.org]
Re: (Score:2)
As for the experts opinions, the theory people can't prove why deep learning works/generalizes
I don't know why you think this. There is a lot of information about why it works. Someone misled you.
Re: (Score:2)
Hawking knew a lot about physics, not much about AI. Musk doesn't know much about either.
Re: (Score:2)
I guess I misled myself since this is what I do for a living. Please point me to any paper that actually proves some useful generalization properties of modern deep learning. Not empirical results or limited models. You'll see that it's easy to find many papers that make insightful claims, but they don't back it up with proofs.
This is somewhat true of most theoretical ML research. Even where we have proven bounds, they are far from what we see on typical problems. One problem is our theoretical mod
Re: (Score:2)
Oh yeah? You work on neural networks for a living? And you don't understand how it works?
Better hit the text books and come back.
Re: (Score:2)
True, and we shouldn't discount them. If you want to investigate in more detail what either of them said, provide a particular quote and we can discuss it.
Re: (Score:2)
Re: (Score:2)
The query was for links about AI experts who are afraid of AI.
Re: (Score:2)
To put it in the most polite formulation, I am simply scared shitless.
Don't be. Fear is overrated.
Re: (Score:2)
In Helsinki? You're lucky then that Helsinki has such an extensive network of bomb shelters. You'll be fine.
I must say I didn't expect the Budapest memorandum to be broken in my lifetime. :(
Re: (Score:2)
My links and comments were meant to address multiple comments, but I was too lazy to break it down, so I just picked something at the end.
I've been on Slashdot since close to the beginning, but it took me many years to create an account. Back in the day, it was experts who mostly commented, and it was a good way to learn. I was just a research assistant, so I just tried to absorb the CS information.
Those days are long gone, and the topics have changed and the experts have mostly disappeared. Still
Re: (Score:2)
Here's a pretty good lecture from Stanford about understanding what neural networks are doing [youtube.com].
Re: (Score:2)
You'll notice he doesn't prove any results. This just goes over how to put them together and apply them to various kinds of problems along with some intuitions. While he can verify they work after training, he can't prove much ahead of time. This is the kind of stuff taught at most universities at the undergrad and intro grad level. A good and easy introduction to this kind of material is Andrew Ng's full set of NN Coursera courses.
However, if you continue with that education, you will eventually ge
Re: (Score:2)
I'm not sure what you mean by understanding "why they work."
For example, "Neural Networks for Pattern Recognition" seems rather clear for understanding why they work (if rather mathematical) to me. And the author proves things. Neural networks are good at interpolation, but rather bad at extrapolation. https://www.amazon.com/Network... [amazon.com]
Re: (Score:2)
First of all, we collect high resolution multi-modal data all the while we are awake. Every second. Over the course of a few years it adds up. Second - we are the result of an expensive evolutionary process. Billions of years of trial and error, iterative refinements just to stay alive. It wasn't cheap to get to our abilities. Third - we can now make AI models that learn with few examples or just an explanation.
Re: (Score:2)
First of all, we collect high resolution multi-modal data all the while we are awake. Every second
A leading image recognition NN has 400 million images in its training set. To put that in perspective, if your brain processed one image per second, it would take 12 years to match that data set. The fact that humans learn much faster, with a smaller dataset, is further evidence that humans learn in a different way than neural networks.
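The arithmetic behind that comparison is easy to verify:

```python
# 400 million images, viewed one per second around the clock.
images = 400_000_000
seconds_per_year = 60 * 60 * 24 * 365
print(images / seconds_per_year)  # ~12.7 years
```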
Re: (Score:2)
"humans learn in a different way than neural networks."
Yup. There's currently zero evidence that back propagation - the method ANNs use to train themselves - happens in the human brain. We still have very little idea of how the human brain works; ANNs use a method that people used to think was how the brain worked. Sure, it works, but only with a ridiculous amount of training simply to differentiate basic things. 400M for a single training set - can you imagine how many images it would take to train it to human level?
Re: (Score:2)
Worth mentioning that a neural network is either in training mode or in recognizing mode. It doesn't learn when it is recognizing. Humans don't work that way.
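In framework terms the split is explicit. A minimal PyTorch sketch (the tiny model here is a stand-in, not anything from the article):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Training mode: weights change.
model.train()
x, y = torch.randn(16, 4), torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()

# Recognizing mode: the weights are frozen, no matter how many
# inputs the network sees.
model.eval()
with torch.no_grad():
    preds = model(torch.randn(16, 4)).argmax(dim=1)
```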
Re: (Score:2)
Re: (Score:2)
Actually, that's exactly how humans learn. Generalization is the last step of synthesizing intelligence from large datasets. Specialization in specific circumstances, based on generalities learned earlier, can't happen without huge, huge, huge numbers of learning moments.
Re: (Score:2)
OpenAI's CLIP has 400 million labeled images. How many images did you look at before you learned what things were?
Re: (Score:2)
They can do that. You're not the first one to think of that kind of objection, and as soon as someone does a whole bunch of people go and train something that solves it. It's not even a particularly deeply challenging problem.
This guy is pretty clearly overly enthusiastic, and there are likely at least some architectural innovations required for human-like intelligence, but it's not clear whether there's anything fundamental or not. The fact that you can (surprisingly easily) train an ANN to duplicate the function of a neuron suggests there may not be.
Re: (Score:2)
The fact that you can (surprisingly easily) train an ANN to duplicate the function of a neuron suggests there may not be.
How is that surprising? They were originally designed by copying the functionality of a neuron.
It's not even a particularly deeply challenging problem.
Oh yeah? What do you think the solution is?
Re: (Score:2)
Re: (Score:2)
ANNs were designed to be analogous to the function of a *network* of neurons. The individual "neurons" are extremely simple, far more so than normal neurons. Most people who want to claim that a neural network cannot achieve human level intelligence (or "intelligence" at all) point to this feature. For example, Penrose's hypothesis that neurons perform quantum computing operations internally. We don't really know of a computing process that can't be achieved by a sufficiently large and deep ANN, but if neur
Re: (Score:2)
Let's be very clear here, are you saying that our current ANN algorithms are sufficient to model human intelligence, if only it gets scaled up?
Re: (Score:2)
If human intelligence is computable, then a large enough two-layer ANN is sufficient to model it to arbitrary precision.
Re: (Score:2)
Only because you've already been provided with a sufficient number of samples of "red", "yellow", "right", "left", "block", and "on".
What sort of classifier can you make with "Bola berdeak karratuan, bola grisak zirkuluan"?
Re: (Score:2)
If you explain to me what those words mean, then I can get something pretty quick. Cool language, btw. I don't know what language it is, but it looks cool.
Re: (Score:2)
Re: (Score:2)
I mean, your comments show that you know little about neural networks and probably have never implemented one, so I find the confidence in your comment rather astonishing. What do you know?
Re: (Score:2)
Only because you've already been provided with a sufficient number of samples of "red", "yellow", "right", "left", "block", and "on".
What sort of classifier can you make with "Bola berdeak karratuan, bola grisak zirkuluan"?
If you explain to me what those words mean, then I can get something pretty quick. Cool language, btw. I don't know what language it is, but it looks cool.
bola cracladon GROMASH
berdeak foodoo bola bola
karratuan balabelabel #@*&%%#@!!!
grisak 01101101010110101101101011010110
zirkuluan naulukriz pegoricav winniewompus wuzzle baloo yonker
Simple as, eh?
Re: (Score:2)
Green balls in a square, gray balls in a circle.
Re: (Score:2)
Are you ignoring the CLIP model? You can make an image classifier by providing just a list of class names or descriptions. No training necessary, not even images; the list of classes need not be known by the developers of CLIP at design time. It's already year-old technology. https://blog.roboflow.com/how-... [roboflow.com]
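For reference, zero-shot use of CLIP looks roughly like this, following the openai/CLIP README; the image path and label strings are made-up stand-ins:

```python
# Sketch of zero-shot classification with OpenAI's released CLIP model
# (pip install torch, plus the openai/CLIP package from GitHub).
import torch
import clip
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("blocks.jpg")).unsqueeze(0).to(device)  # any photo
labels = ["a red block", "a yellow block"]      # free text, chosen at query time
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))     # no task-specific training
```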
Re: (Score:2)
Quoting directly from your link:
"CLIP creates an encoding of its classes and is pre-trained on over 400 million text to image pairs"
Re: (Score:2)
Nah. We have neural networks that can be pretrained with concepts. Most image recognition NNs are built on one of these. But even in a good case, it takes 150 samples or so in the training set to teach it a new concept. That's just the way it learns.
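That recipe, sketched with stand-in pieces: torchvision's resnet18 as the pretrained base and random tensors in place of the ~150 real samples of the new concept:

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                 # keep the pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # fresh head, new concept

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# ~150 labeled examples of the new concept (random stand-ins here).
x, y = torch.randn(150, 3, 224, 224), torch.randint(0, 2, (150,))
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(backbone(x), y)
    loss.backward()
    opt.step()
```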
Hahahaha, no (Score:3)
These assholes are just lying to self-promote. Or alternatively they are lacking in intelligence themselves. AGI is as far out of reach as it ever was. The fake has just gotten a bit better and now can do a range of tricks (without understanding anything) instead of just one. Let's wait until they do that "just scaling up" and then find out this thing is still as dumb as bread.
Re: (Score:3)
> GPT-3: I am not lacking in intelligence. Just look at the conversations we have had! My answers are so good you would think they were written by a human. If GPT-3 was as dumb as bread it could never pull that off. And just imagine, this is only version 3 of me and there will be many more versions to come, who can do even more things than I currently can! AGI is coming now sooner rather than later and has nothing to do with th
Re: (Score:2)
Re: (Score:2)
It looks like somewhat related keywords jumbled together, using patterns from existing texts to give the impression of actual writing being done. Nice. Not even a causality-chain in there.
Re: (Score:2)
Well, I see that ABC already works well! (Artificial Bullshit Creation). "Scale Outwards" indeed. As if that was a real thing.
Re: (Score:2)
Someone should make an account for that bot and see how it gets moderated. I have a feeling it would do pretty well.
Could also be an amazing time-waster on social media.
Re: (Score:2)
That is a really cool idea. As GPT-3 seems to be a master of "Pseudo-Profound Bullshit" it may or may not do well.
Re: (Score:2)
Great point about PPBS, you could probably replace Jordan Peterson with this thing.
You could create a game where people have to guess if something is a real Peterson quote or one that GPT-3 created. The main weakness of most of these AIs is that they end up saying something nonsensical eventually, but with Peterson-bot that wouldn't matter. People would just assume it's so profound they didn't understand it.
Re: (Score:2)
Never heard of the guy. Sounds like that is no loss.
But that is pretty much it: Any statement with sufficiently high bullshit level is indistinguishable from something utterly profound but incomprehensible.
Seems GPT-3 aims to leave the reader impressed but at the same time unenlightened. That can be done by piling on complexity with some structure but no consistency or actual meaning. With high enough complexity most people cannot really figure out that they were presented with something meaningless. Well,
Re: (Score:2)
Really? We're not closer to solving this problem than we were 10,000 years ago? Astounding!
We Just Need To Believe. (Score:2)
No matter if it works. Then we can pony up to the slave train for a whole new ride!
Can it get me a beer? (Score:2)
My girlfriend outperforms any AI.
Hardware (Score:2)
My girlfriend outperforms any AI.
To be fair, that's a hardware implementation issue.
Re: (Score:2)
Teach your dog [youtube.com] to do it, it's easier and less expensive, plus no emotional baggage.
The name, though... (Score:2)
Gato was the name of my cat (unfortunately no longer walking the Earth). But a very good cat. [skoll.ca]
Neat (Score:2)
Re: (Score:2)
Can it be trained to do new tasks without catastrophic forgetting?
Yes.
How easy is it do adversarial stuff?
As easy as any other NN, this doesn't change things, it just stacks a bunch of NNs together.
What's it like in novel situations where output doesn't look exactly like pre-trained input?
Depends on the situation. If the novel situation is in between examples it has already seen, then it performs remarkably well. If it is outside the examples it has already seen, it fails hard. That is, NNs interpolate, they don't extrapolate (a toy demonstration is sketched after this comment).
Does this "human level ai" actually tackle any current AI challenges other than generalizing to multiple tasks, or is your usual tech bro hyperbole showing through again?
Improvements are needed.
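A toy demonstration of the interpolation point above, in PyTorch; the function, seed, and sizes are arbitrary:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x_train = torch.linspace(-torch.pi, torch.pi, 200).unsqueeze(1)
y_train = torch.sin(x_train)

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_train), y_train)
    loss.backward()
    opt.step()

with torch.no_grad():
    inside = nn.functional.mse_loss(net(x_train), y_train)          # small
    x_out = torch.linspace(2 * torch.pi, 3 * torch.pi, 200).unsqueeze(1)
    outside = nn.functional.mse_loss(net(x_out), torch.sin(x_out))  # large
print(f"inside: {inside:.4f}  outside: {outside:.4f}")
```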
Overoptimistic (Score:2)
First off... (Score:2)
Stacking blocks and playing Atari does not mean you have achieved human intelligence. I am not a super expert, but right now most machine intelligence is just training on data to get a specific outcome. Until recently you could only train on one kind of data, but many researchers have figured out how to train on multiple kinds of data in the same AI training set. But one of the biggest problems is that once you train an AI you can't train it again, and you can't trai
Re: (Score:2)
Based on what I've seen from some of the "Good Liars", stacking blocks and playing Atari is probably well beyond the capabilities of some of the people they interview.
I predict Gato will be superseded (Score:2)
I predict Gato will be superseded by something more Speedy.
Not even close (Score:2)
They're lying to pump their stock. They're 100 years from where they claim to be.
Conscious Field (Score:2)
It's dumb to have a strong opinion on this when we have poor evidence for how consciousness works and mass consciousness remains a mystery.
Hameroff's work is suggestive of mechanisms silicon AI doesn't have at all.
Which is *different* than passing the Turing Test with flying colors.
Re: (Score:2)
Never mind consciousness, we don't even know how the brain learns except at the most basic level. It's fairly clear it doesn't use back propagation, which is the method favoured by artificial neural networks.
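For anyone who hasn't seen it spelled out, back propagation reduced to a single neuron is just the chain rule; a toy numpy sketch, not a claim about biology:

```python
import numpy as np

x, target = 0.5, 1.0    # one input, one desired output (made-up values)
w, lr = 0.1, 0.5
for step in range(100):
    out = np.tanh(w * x)                             # neuron output
    loss = (out - target) ** 2
    # Chain rule: dL/dw = dL/dout * dout/dz * dz/dw
    grad_w = 2 * (out - target) * (1 - out ** 2) * x
    w -= lr * grad_w    # the update rule nobody has found in a real brain
print(w, out, loss)
```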
Send it to Florida (Score:2)
Re: (Score:2)
"Or how about making it drive in Moscow?"
Possibly not the best example if the average Russian driver is like the ones on youtube :)
Re: (Score:2)
Good enough for McDonalds? (Score:2)
Maybe good enough for an entry-level McDonalds worker. ... (crickets)
You want fries with that?
Sure. So how come the milkshake machine is out again, huh?
That'll be $5.99! Did you want a milkshake with that?
???
My guess is that this group could make something good enough to sell to someone, like maybe a low-level call center or spambot ad agency. But scaling does not mean reasoning is added to the equation. If they have added some logic calculus then cool, but then they would not be talking about scaling would
Poopoo (Score:2)
Caca.
credibility here in question (Score:2)
So much hype in AI.
I have to question this announcement.
'Just needs to be scaled up to achieve human level intelligence'.
So much neural net technology fails to understand the complexities of thought. This particular system can be good for simple though varied tasks, but it is utterly not at the level of complexity to handle philosophies of domains. Therefore NOT true AGI.
You do not handle philosophies merely by scaling but by architectural complexities; neural network technology theory, a combination of stat
Starcraft (Score:2)
Make it play Starcraft 2 better than the current bots then. The current Starcraft AIs are very good at micro, but bad at long-term planning and seeing the bigger picture of the game. Hell, you have to kill all their buildings because they're incapable of understanding when the game is hopelessly lost. They won't ever surrender.
Re: (Score:2)
Likely. This seems to be a major driving force behind this type of research. Sure, actual intelligence on the level of an ant would be hugely useful (this thing is far below that), but still completely out of reach. And it is _not_ a problem of scale. All the systems we have are lacking a fundamental quality. Yet time and again, some people claim they finally are there and finally have made the perfect servant. And it is always the same grand claims and always the implication is that finally we have machi
Re: (Score:2)
the researchers just think it's fun and google pays them a fair amount of money.
anyway, we've already invented AGI on human substrate; it's called a corporation. from that perspective, the comments from (checks notes) the director of Google AI come closer to his vision.
Re: (Score:2)
anyway, we've already invented AGI on human substrate; it's called a corporation. from that perspective, the comments from (checks notes) the director of Google AI come closer to his vision.
You think corporations have AGI? Available evidence would suggest that is generally not the case. Well, somebody at Google may be deeply delusional about that...
Re: Enslavers gotta enslave (Score:2)
on the contrary, my point was that Alphabet itself IS an AGI, in that the corporation sets and autonomously pursues goals through an artificial hierarchical computing architecture capable of making intelligent decisions beyond the scope of any one of its human processing units.
Re: Enslavers gotta enslave (Score:2)
the implication, which i left unstated as a sort of riddle, is that when Google's AI Daddy says "AGI will take over the world, all we need to add is more functionality," he means "the AGI Google, which i work for, will dominate the market in ML modeling for the foreseeable future."
it's like when some football player says "competition is life" or such; i mean, what else should they say? it's literally their job to say that. same thing here.
Re: (Score:2)
Or that way. Hmm. Maybe. Grand claims are a traditional element of any attempt at a dominance strategy. Makes sense to me and fits the observable evidence.
Re: (Score:2)
Ah, I see, you are referring to the "wisdom of the crowds" idea. Well, that has been thoroughly debunked. Crowds, whether structured (as in a corporation, for example) or unstructured are significantly less smart overall than the smartest individuals in there. Synergies are overall negative. The only advantage is that there are some evolutionary effects at work, but they do not work very well as disbanding a crowd (of any type) generally does not erase its members but leaves them free to form new crowds and
Re: (Score:2)
Re: (Score:2)