AI Godfather Geoff Hinton: "Deep Learning is Going To Be Able To Do Everything" (technologyreview.com) 221
An excerpt from MIT Technology Review's interview with Geoffrey Hinton: You think deep learning will be enough to replicate all of human intelligence. What makes you so sure?
I do believe deep learning is going to be able to do everything, but I do think there's going to have to be quite a few conceptual breakthroughs. For example, in 2017 Ashish Vaswani et al. introduced transformers, which derive really good vectors representing word meanings. It was a conceptual breakthrough. It's now used in almost all the very best natural-language processing. We're going to need a bunch more breakthroughs like that.
And if we have those breakthroughs, will we be able to approximate all human intelligence through deep learning?
Yes. Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reason. But we also need a massive increase in scale. The human brain has about 100 trillion parameters, or synapses. What we now call a really big model, like GPT-3, has 175 billion. It's a thousand times smaller than the brain. GPT-3 can now generate pretty plausible-looking text, and it's still tiny compared to the brain.
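For reference, the arithmetic behind that comparison is easy to check; a quick Python sanity check, using the rough figures quoted above:

    # Figures quoted in the interview, both rough.
    brain_synapses = 100e12  # ~100 trillion synapses
    gpt3_params = 175e9      # GPT-3's 175 billion parameters

    ratio = brain_synapses / gpt3_params
    print(round(ratio))  # ~571, i.e. "a thousand times smaller"
                         # to within an order of magnitude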
When you say scale, do you mean bigger neural networks, more data, or both?
Both. There's a sort of discrepancy between what happens in computer science and what happens with people. People have a huge number of parameters compared with the amount of data they're getting. Neural nets are surprisingly good at dealing with a rather small amount of data, with a huge number of parameters, but people are even better.
A lot of the people in the field believe that common sense is the next big capability to tackle. Do you agree?
I agree that that's one of the very important things. I also think motor control is very important, and deep neural nets are now getting good at that. In particular, some recent work at Google has shown that you can do fine motor control and combine that with language, so that you can open a drawer and take out a block, and the system can tell you in natural language what it's doing. For things like GPT-3, which generates this wonderful text, it's clear it must understand a lot to generate that text, but it's not quite clear how much it understands. But if something opens the drawer and takes out a block and says, "I just opened a drawer and took out a block," it's hard to say it doesn't understand what it's doing.
Clay will fit any mold. (Score:2)
Comment removed (Score:5, Insightful)
Re: (Score:3, Funny)
I don't know if they will be able to do everything, but based on how the relatively simple brains of domestic cats have already effectively enslaved millions of humans, I'm pretty worried about what AI will do.
Re: (Score:2)
I am pretty sure the question you have to ask is:
Can human brains do something that a Turing Machine cannot?
Nobody knows. There are some indicators that this may be the case for at least some people, but there are no solid facts. There are no indicators for the converse though, just some circular reasoning (a favorite of all religious and quasi-religious screw-ups) used by physicalists.
Re:Are brains more than Turing Machines? (Score:4, Interesting)
The problem right now with deep learning, and admittedly it's something I've spent less than three months playing with, is that while it could theoretically do anything, it's not a good replacement for a human's intuition or creativity.
For example:
OpenCV - Great, it can recognize a face; however, training models were largely done on white people, so they have a white bias for detecting faces. Whoops, better start that model over again with more variety in the training materials, which also means starting over from scratch in a lot of projects designed for facial recognition. (See the sketch after this list.)
TacoTron2/WaveGlow/etc - ASR, ST and TTS. The ASR can achieve a WER of about 4%, which is about as good as humans, but if you go listen to the training data, the models are ALL trained against the same six or so English-language corpora, which are almost entirely white middle-aged adults. No children, no edge cases (people with voices like Kristen Schaal or Stephanie Beard), and the voice corpora are all narrative-quality readings of public domain material, which means there is a lack of words introduced in the last 70 years. The other problem? They are all universally designed for commercial applications (e.g. phone IVRs), and thus there is no standardization, and you end up retraining your data, wasting months of processing time, when a better NN vocoder or synth comes out. Also they use very low-quality inputs, which results in some really low-quality voice synths that "sound a little better than telephone conversations."
Genetic Algorithms (used for learning things like how to play Chess, Go, or Super Mario) - The AI can eventually figure out how to solve these games better than a human because it's FASTER at making decisions, not because it's better. Two AIs I've seen play Mario exploited glitches in their emulators but ultimately failed to complete the game, because they don't play it like a human; they just play without any consideration for the path they take.
Chatbots - Cannot solve customers' issues; they are primarily designed to play queue-bounce. Chatbots can be designed to help customers pick the right solution, but they (and the websites of the same companies) are largely designed to bury human contact by trying to get the customer to help themselves, and really the result is more frustration. Even though any CSR in a call center will tell you that 90% of their calls are boring, mundane things that customers could solve themselves if directed to it, customers mostly want handholding for something. A chatbot can handle a bill payment; it should not handle an address change.
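To make the OpenCV example above concrete, here is a minimal face-detection sketch using the stock Haar cascade that ships with the opencv-python package (the image path is a placeholder):

    import cv2

    # Stock frontal-face cascade bundled with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")  # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # The detector only knows what its training set taught it; faces
    # unlike anything in that set may simply produce no rectangles.
    print("found", len(faces), "face(s)")

The cascade itself is fixed at training time, which is exactly why a skewed training set means starting over rather than patching the deployed model.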
In all of the above situations, a human's own creativity or intuition, even if it leads to the wrong result, may eventually lead to the right result. Deep learning, however, has no plasticity once it's put into production. Quite literally, when it's not in training mode, it can't learn. The end-user hardware that exists is not capable of training; when the model runs on a mobile device, training on it would take hundreds of years' worth of that device's computational power.
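As an illustration of that lack of plasticity, a minimal PyTorch sketch (the Linear layer is a stand-in for any trained model):

    import torch

    model = torch.nn.Linear(16, 4)   # stand-in for a trained network

    model.eval()                      # inference mode: dropout etc. off
    for p in model.parameters():
        p.requires_grad = False       # weights frozen

    with torch.no_grad():             # no gradients, hence no learning
        out = model(torch.randn(1, 16))
    # However many inputs pass through, the parameters never change.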
It is theoretically possible to create a "human-sized" AI, but it might not be the best use of deep learning. It may be better to break a "human simulator" into several pieces, just like how our brains are several pieces. There are seven major parts of the brain: language and touch; sight; balance and coordination; hearing and emotions; thinking and memories; behavior and movement; breathing and heart. So it would make sense to fine-tune deep learning algorithms in a similar way: find a way to make a common database that several different algorithms can access.
As far as computational power goes, however, we're unlikely to get enough NN power into something human-sized. It's just not going to happen. The best we can hope for on current chip processes is making 3D chips the size of a human skull, but the thermal properties would have to change quite significantly, and I doubt it makes sense. We might not even be able to get close without a warehouse full of GPUs this decade.
Re: (Score:2)
For example:
OpenCV - Great, it can recognize a face; however, training models were largely done on white people, so they have a white bias for detecting faces.
Humans are notoriously bad at recognizing people from other races. "They all look the same" has been a punchline for a long time. Failing the same way humans do, and for the same reason, seems like a vote in favor of the deep learning solutions.
They are all universally designed for commercial applications (e.g. phone IVRs), and thus there is no standardization, and you end up retraining your data, wasting months of processing time, when a better NN vocoder or synth comes out.
Should we be looking for standardization at this point? I could see arguments on either side. We want to try lots of things vs. we need to be able to compare the different things we're doing.
Also they use very low quality inputs, which results in some really low quality voice synths that "sound a little better than telephone conversations."
So we need better inputs, then.
Re:Are brains more than Turing Machines? (Score:4, Insightful)
I see this line of thinking all the time, and all it really demonstrates is lack of understanding of *how* the "learning" in "machine learning" works. All your examples illustrate perceived deficiencies of various implementations, with nothing said of the actual underlying math or (neural) network architecture. I find it a bit weird how you talk about what you think is "theoretically possible", while demonstrating that you do not understand the theory.
It's as if you were back in the late 1890s watching everyone build horseless carriages from scratch, all of which sucked in different ways, and you therefore claimed the idea of a horseless carriage would never be as amazing as everyone imagined it could be.
Re: (Score:2)
I think it's clear that person has no idea what they're talking about.
Re:Are brains more than Turing Machines? (Score:4)
Well, humans are more than brains. We have a body too, born with a set of instincts and drives. As much as we want to think of ourselves as a brain controlling a bio-suit with life support, the connection between the brain and our bodies is much more complicated. If you feel nervous, you may want to take some antacids, which will make you feel less nervous. Whatever is bothering you, your brain sends a signal down to your stomach, which then creates a feeling, which you then pick up as nervousness, dwell on, and make worse. If you take something that calms your stomach, you don't get the feedback, so your nervousness subsides.
You become more aggressive when you are angry or uncomfortable...
So these assets also come into play and create actions that a computer cannot produce. The AI that learned to play Tetris found out that the best play is to pause the game. Humans wouldn't do this, because they would get restless watching a static screen; they will pursue other options instead.
Re: (Score:2)
That's just an argument of complexity. There is no logical reason why we can't simulate that same level of complexity in computers to get the same behavior (apart from hardware capacity and existing software, of course).
Re: (Score:3)
There's also no evidence they are. We have insufficient data to make the determination.
Re: (Score:3)
There's also no evidence they are.
Yes there is.
A human mind can execute any algorithm that a TM can execute, albeit slowly. So there is clear evidence that our brains are at least as powerful as TMs.
There are problems that TMs cannot solve, such as the halting problem. Human minds are also unable to solve any of those problems. So there is also evidence that our minds are no more powerful than TMs.
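For anyone who has not seen it, the standard diagonalization argument behind that claim is short; a sketch in Python, where halts() is the hypothetical oracle (the names here are illustrative, and any fixed implementation you plug in is defeated):

    def halts(prog, arg) -> bool:
        return True  # placeholder guess; ANY fixed strategy fails

    def paradox(prog):
        if halts(prog, prog):
            while True:   # oracle said "halts" -> loop forever
                pass
        return            # oracle said "loops" -> halt at once

    # halts(paradox, paradox) returns True here, yet paradox(paradox)
    # would loop forever: the oracle is wrong whichever way it answers.
    print(halts(paradox, paradox))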
Re: (Score:3)
Care to present your evidence that human minds are incapable of solving the halting problem? Because nobody has proven that yet. In fact, nobody even has a _definition_ of the halting problem for human minds. Or in other words: You are quite full of shit.
Re: Are brains more than Turing Machines? (Score:3)
3n+1 problem.
Re:Are brains more than Turing Machines? (Score:5, Interesting)
The human brain runs at around 30-40 Hz; however, it is massively parallel, with a lot of tricks to shortcut processing.
Let's say we look at a list of 10,000 numbers, all of them the same except for one. A computer will check each number and find which one is different. We would look at all the numbers as one object, find a spot that isn't uniform with the pattern, drill down, and then figure out what number it is. We are good at patterns, but details take a lot of thought.
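The "computer" approach described above really is just a sequential scan; a minimal sketch:

    from collections import Counter

    def find_different(numbers):
        # Check every number; the outlier is the value seen only once.
        counts = Counter(numbers)
        return next(v for v, c in counts.items() if c == 1)

    data = [7] * 9999 + [3]
    print(find_different(data))  # 3, found by brute enumeration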
Re: (Score:3)
For any practical real-world Turing machine, the human mind can solve the halting problem.
Nope. A program that searches for a counterexample to the Collatz conjecture will fit on a 3x5 index card. Does it halt? We don't know. We may never know.
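That index-card program really is tiny; a minimal Python version, assuming the usual statement of the conjecture:

    def collatz_steps(n: int) -> int:
        # Apply the 3n+1 map until n reaches 1, counting steps.
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 -- but nobody can prove the
                              # while-loop halts for *every* n >= 1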
Thus, a human observer can solve the halting problem by building that real-world Turing machine and observing it until it exceeds that finite number of states.
There are halting 6-state TMs that have runtimes that far exceed the number of Planck-times between now and the heat death of the Universe. The lower bound is 3.5*10^18267 steps.
You invoke the "real-world" and then propose a procedure that is utterly impractical.
Re: (Score:2)
That's not really true. Turing machines aren't just some charming thought experiment some dude thought up. The concept is specifically designed to support a fairly extensive set of provable mathematical results. Barring some big problems with our understanding of physics, Turing machines are (nearly) universal. It turns out there is actually one known beyond-Turing form of computation, quantum computation, but it's a fairly minor extension, which is theoretically understood quite well.
Basically, the human b
Re: (Score:3)
Quantum computing is only an extension of Turing Machines with regards to complexity, not with regards to computability.
Re: (Score:3)
There is no evidence that brains are not Turing machines.
That is not correct. One example is the microtubules in the brain, which appear to exhibit quantum effects. There are far more physical structures and processes at work within the brain than just the network of neurons we were all taught about in school.
https://www.nature.com/article... [nature.com]
Re: (Score:3)
Quantum extensions to Turing machines are pretty well understood. They're not magic either. And the possible quantum effects in microtubules are *highly* speculative, and haven't really played out since they were first suggested. The paper you linked, which is from 2018, is about microtubules, but not about quantum effects.
Re:Are brains more than Turing Machines? (Score:4, Insightful)
Quantum effects are not an extension of Turing Machines with regards to computability. However, nobody has ever proven that the human intellect is generated by the human brain.
No, elimination of the form "what else could it be?" is not scientifically valid proof. It is circular reasoning.
Re: (Score:2)
Nobody's proven that humans aren't puppets controlled by magical unicorns either. Just because something hasn't been disproven doesn't mean it's worth considering.
Re: (Score:2)
Sure. But the scientifically sound stance is that we do not know and that it can go either way.
Hence it is a direction worth investigating, and the added physics that would come out of finding what consciousness is, for example, would alone make it worthwhile. But it _is_ clear that we do not know enough at this time and that some pretty fundamental models we use are flawed. (We already know that: physics is fundamentally broken because there is no quantum gravity in the theory. Nobody has any idea how ex
Re: (Score:2)
No, I disagree: it's impossible to prove a negative, but that doesn't mean considering all possible hypotheses is scientifically valid. Otherwise, my unicorn theory is a valid scientific hypothesis.
As for physics, it's not fundamentally broken. It is largely correct, but known to be incomplete. Einstein didn't stop Newton's laws from working in the vast majority of cases we're interested in. Physics works in that it makes accurate predictions, so accurate we haven't been able to find fault even when we know
Re: (Score:2)
Well, a theory that is inconsistent loses its predictive capabilities. I call that "fundamentally broken". And while we know Physics is incomplete as well, we have no idea what is missing. Unicorns, flying spaghetti monsters, magic. All possible and not actually less likely or anything. I do understand that many, many people have trouble with the unknown. But the nature of the unknown is that it _is_ unknown.
MOD PARENT UP (Score:2)
SERIOUSLY mod parent up and stop modding the grandparent and replies to the parent up.
Re: (Score:2)
There is also zero evidence that "free will" is anything more than an illusion.
Re: (Score:2)
With apologies to Richard O'Brien.
Yo Grark
Re: (Score:2)
No it does not "stand to reason."
Computers can process data asynchronously. The feature is called "multithreading," and it has been achieved (as in, it is not merely emulated by task-swapping). So this difference isn't even true.
Furthermore, you don't have a quantifiable definition of "free will." You have used the term in its vague, equivocation-inviting form that is used in common parlance, but unsuitable for philosophical dialogue.
1) When "free will" is used to distinguish the action-potential of a pe
Re: (Score:2)
Ah, so you aren't going to actually address the sound and well-formed arguments that I presented. You are instead just going to dismiss them with a hand-wave and return to the very position I just debunked.
Such an argument technique befits people who already know they are wrong.
In other words... (Score:5, Interesting)
In other words, deep learning will be able to replicate human intelligence and do anything when we figure out how to make the breakthroughs in deep learning that are required for it to be able to replicate human intelligence and do anything.
Did I really just waste 1 minute of my time reading that summary?
Re: (Score:3)
Did I really just waste 1 minute of my time reading that summary?
If that is all you got out of it, then, yes, you wasted your time.
But if you had a better understanding of the field, you would have gotten much more.
Geoff Hinton is a smart guy. He and his grad students have made numerous breakthroughs in AI, including the paper that launched the DL revolution in 2006. He knows WTF he is talking about.
In the past, he has expressed skepticism that DL could "do it all" and that AGI would come, at least partially, from other subfields of AI.
What he is now saying is that he
Re: (Score:2)
In other words, deep learning will be able to replicate human intelligence and do anything when we figure out how to make the breakthroughs in deep learning that are required for it to be able to replicate human intelligence and do anything.
Did I really just waste 1 minute of my time reading that summary?
As I said, this person is an idiot. There is zero evidence any of his claims have substance.
Re: (Score:2)
Right now in 2020, we have self-driving cars that drive themselves into walls, AI cameras that focus on a referee's bald head instead of the ball, and competing smart speakers that do not understand half of what we ask them.
I think that we're a solid 10 years away from having to worry about deep learning taking over any human jobs more thought intensive than a security guard.
Just like fusion power (Score:3)
Strong (i.e. human level intelligence or higher) AI is "yesterday, today and tomorrow's technology of tomorrow."
I mean, people in the field have a terrible track record. They've been saying real AI is just around the corner for fifty years. This is just more of the same.
Re: (Score:2)
50 years is not a long time. Humans were trying to make airplanes for a thousand years. Just because something is delayed 50 or 100 years doesn't mean it won't happen.
Re:Just like fusion power (Score:4, Interesting)
I really do not think humans were trying to make airplanes for thousands of years.
I mean, there are 100 billion neurons in a human brain. There are 16 million neurons in the largest current AI. And that thing takes megawatts of power, where a human brain uses about 12 watts.
We're a LONG way off, even assuming there are no fundamental road blocks to our understanding of how the human brain even works.
Re: (Score:2)
Oh really?
Re:Just like fusion power (Score:5, Informative)
Yes. Really. Hans Moravec stated that John McCarthy founded the Stanford AI project in 1963 “with the then-plausible goal of building a fully intelligent machine in a decade.”
Let that sink in. In 1963, top researchers thought human scale AI was a decade away.
Source: https://www.openphilanthropy.o... [openphilanthropy.org]
Re: (Score:2)
Re:Just like fusion power (Score:4, Interesting)
Yeah, there was a period where all the wrong guesses kinda caught up with AI researchers and made the field look bad. The smarter ones stopped making predictions.
Then came neural networks, and specifically some breakthroughs made in the last ten years, which finally proved useful at solving certain types of problems. And the hype train started back up again.
I mean, if we can get a supercomputer to simulate a network with as many neurons as a mouse brain then a human brain can't be far behind, right?
Let's not think about the fact that this "mouse brain" does not have nearly as many connections as a real mouse brain, does not simulate neurotransmitters nor the actual brain chemistry, nor does it have as much plasticity, or any ability to self repair. And let's just pretend we are absolutely certain real brains don't make use of any quantum effects (to be fair, they probably don't) or any other effects we don't understand (they probably do.)
If we do that, then yeah! We are SO close to real AI! Again.
Re: (Score:3)
Pretty much. There seems to be some mental dysfunction that many AI researchers share: they cannot deal with the fact that their creations may stay limited, hence they make grandiose claims. And this one, in particular, has learned nothing from the history of his field. It is utterly pathetic.
Re: (Score:3)
Heh, when I was just a budding little AI enthusiast, the hype was all about expert systems. All the cool kids were learning Prolog. Fuzzy logic was expected to be a big thing. I think that was like the second or third wave of AI hype.
But to be fair, neural networks, and especially the more recent additions to the field like GANs have actually lived up to some of the hype and produced useful products. I honestly don't think there are any fundamental roadblocks to full human-level AI, I don't subscribe to qua
Re: (Score:3)
Quantum effects serving as any kind of interface to anything would mean there are local hidden variables, which have been ruled out by experiment. Quantum collapse is truly random, which leaves no room for it being a generator of "free will." It is absolutely not a conduit for information flowing into local space-time from "someplace else."
However, even if the universe is completely deterministic, the concept of free will is crucial to the human experience. I mean, you could break the universe down to const
Re: (Score:3)
Strong (i.e. human level intelligence or higher) AI is "yesterday, today and tomorrow's technology of tomorrow."
I mean, people in the field have a terrible track record. They've been saying real AI is just around the corner for fifty years. This is just more of the same.
Indeed. And they made completely ludicrous predictions like "when computers have more transistors than the human brain, they will be more intelligent than humans" (Marvin "the idiot" Minsky). The track record of these people is so bad that anything they say has a high probability of not happening. I still do not know whether they are all idiot savants or exceptionally dishonest.
Re: (Score:2)
The earlier AI researchers had little to no understanding of actual neurology. That's the problem. They were operating under the delusion that things that are easy for a human to do would be easy to program. They really thought of the human brain as a sort of computer, because they had no other model to work with. I mean, they weren't even using neural networks. Why would they? The hardware can't matter, right? It's all logic and language. That's what a human brain does, right? Processes instructions, just
Re: (Score:2)
Well, given what the neurosciences currently can do (or rather cannot do), this may take a while. Also, it is unclear whether emulating a brain will do the job. There is still no known physical basis for consciousness, (real) intelligence, or free will. For the second and third, we do not even reliably know whether they exist. With that, it is not scientifically sound to claim that all that is needed for an intelligent machine (AGI, i.e. what smart humans do) is to emulate the brain.
Re: (Score:2)
Exactly this. We do not understand what consciousness actually is yet. To try to emulate it without understanding it is pure folly. Anyone who says we will at some point have a machine that can think like a human, without explaining how a human thinks, is living in a fantasy world. Not to say it isn't possible, just that we don't yet know if it is possible, let alone how hard it might be.
Another idiot... (Score:4, Insightful)
Same level of insight as Marvin "the idiot" Minsky. No, it is not about the number of computing elements. It may not be about physical hardware at all, but if it is, a human brain is far, far more complex than a simplistic counting metric ("number of synapses", for example) captures. What you need is the connection space and the parameter space for each individual synapse. Then you arrive at something that is a bit outside the reach of technology and will remain so for a long, long time.
However, if actual insight is required to solve a problem (which likely requires self-awareness), then you can scale these "deep" ANNs as far as you like; they will never be able to do it. All these things can do is interpolate the data they have been trained on, regardless of how large that training data is and how large the network is. They are completely and fundamentally incapable of doing anything original.
Hence this person is either stupid (see above) or a liar.
Re:Another idiot... (Score:4, Interesting)
Totally agree.
Where "deep learning" AI techniques are interesting and useful for many problems in this world, it is not able to do anything on it's own. You have to spend a LONG time tuning, fussing over and training deep networks and that's after you have engineered the system to solve a specific problem.
For a college class on Machine Learning, I was in a group working on our final project. We were training a deep network to play "Breakout" (the old Atari game). It took days and days of fussing, training, and tuning, and we had only improved our average game score by about 10 points. It was cool to watch the network sorta play the game, but it was obvious that this "deep learning" thing was a LOT of human effort for very small gains in game-playing ability. And this was just a simple game; as problems get more complex, the effort required increases geometrically.
There is no way "deep learning" is going to "do everything". Machine learning is just NOT the solution folks think it is, nor is it capable of doing any odd task you can dream up. It requires a lot of human effort to set up, is only suitable for a particular set of problems, really isn't that good at learning, and provides only an estimated solution, which is considered "good" when it does better than a random guess.
When I see headlines like this, I instantly know the author doesn't have a clue, has never really understood machine learning and has fallen for the hype, which is about as misleading as it can be about ML, worse than campaign ads.
Re: (Score:2)
You have to spend a LONG time tuning, fussing over and training deep networks
To be fair, training a human brain embedded in a child requires a lot of fussing and training as well.
Re: (Score:2, Informative)
You have to spend a LONG time tuning, fussing over and training deep networks
To be fair, training a human brain embedded in a child requires a lot of fussing and training as well.
The outcome is also rather random and usually not very impressive. For example, most people use a completely made-up model of how the world works (religion). That would be really not good in any "smart" automation. And there is a lot that is needed in addition, for example a high-bandwidth environmental interface, people to focus on, etc.
Re: Another idiot... (Score:2)
Children have to be taught facts, but they learn to recognise faces, for example, with very limited training data - usually mummy's and daddy's. Ditto cat, dog, table, etc. When an ANN can fully recognise objects from a handful of training examples rather than thousands or millions, then maybe the BS from the AI industry might have some merit.
Re: (Score:2)
The issue I see here is that training a human, compared to training a machine learning algorithm is expensive and slow-paced, and it takes a fuckton of practice to put insight to any use
Therefore, we'll quite possibly end up with massive cheapening of services that are now thought
Re: (Score:2)
Indeed. And there are a lot of intermediate-level jobs "AI" could do better once trained. For example, I just asked an insurance customer support person about a simple thing regarding my retirement insurance (this is in Europe). She apparently is incapable of even understanding what I want (we are on email 4 now...). There are about 10 things you can do with this insurance, and this is one of them. This would not be hard to automate. Another agent at another insurance (mine is split and I wanted to know w
Re: (Score:2)
Engineering is going away too. Well, OK, parts of it are going away, which makes for more competition for fewer jobs, and many of the people in engineering-type jobs are nowhere close to engineering or even lead-technician-level expertise.
And yet we are pushing more people toward those technical-type roles regardless of aptitude, because that's where the jobs are, while paying managers and sales more.
Re: (Score:2)
That said, let's be realistic: *if* 90% of people lose their jobs through being substituted with no replacements, there are going to be social revolts that will make the industrial revolution seem downright peaceful in comparison.
Re:Another idiot... (Score:2)
They are completely and fundamentally incapable of doing anything original.
False. Computers are already creating original works of art (including landscapes, human faces, others) that have the same qualities as human-created works of art in the same category (they look realistic, like something that could exist, even though they are not copies of something that does exist).
One example [generated.photos] among many.
It is true that the brain is complex. And it is also true that current AI solutions are nowhere near "doing wha
Re: (Score:2)
They are completely and fundamentally incapable of doing anything original.
False. Computers are already creating original works of art (including landscapes, human faces, others) that have the same qualities as human-created works of art in the same category (they look realistic, like something that could exist, even though they are not copies of something that does exist).
Nope. That is not "original work" in the sense I used. It is merely a sifting of the output of a randomized process according to some interpolation of what the ANN was trained on. That many humans are incapable of understanding this is not a surprise. There is, in fact, a whole type of art that mocks the stupidity of the observer (Dadaism).
Incidentally, my comment was about ANNs. And they will never be able to do anything original, because the very Mathematics they are based on does not allow them to do tha
Re: (Score:2)
It is merely a sifting of the output of a randomized process according to some interpolation of what the ANN was trained on.
For the most part, that's all human-created art is. If a human paints a picture of some landscape, that human is merely assembling elements of other landscapes that they have seen. It's the same process, but happening in a brain instead of a circuit board.
You listed one specific category of art that is not like this. By doing so, you moved the goal-posts. Your original statement
Re: (Score:3)
He neither worked on nor got that Turing Award for the area he makes claims in here. He got it for improving training performance. An idiot is somebody that vastly overestimates his insights in a specific area. This guy is an idiot.
Re: (Score:2)
An idiot is somebody that vastly overestimates his insights in a specific area.
Nope, that is not what an idiot is, from the dictionary [merriam-webster.com]:
1: a foolish or stupid person
2 dated, now offensive : a person affected with extreme intellectual disability
Your definition is wrong.
Further, your definition would not apply to Geoff. He is objectively more educated and more accomplished in this field than most of the human race. That is the opposite of what idiot means, both by your definition AND the correct definition.
Y
Re: Another idiot... (Score:2)
You might want to introduce this genius to the works of Kurt Gödel before he makes any more idiot predictions.
Glad we won't need academics or schools (Score:2)
Re: (Score:2)
Just the filthy rich in their ivory towers and the slums for the rest of humanity..
Right. Just like only the ultra-rich have laptops and cellphones. Whatever.
Look at the richest people in the world today. They are not the people that invented technology. They are the people that successfully brought technology to the mass market.
There is no reason to believe that the same won't happen with AGI.
The obvious first application for AGI is to stick it into an Alexa-dot and sell it for $39.
Re: (Score:2)
Every dollar that gets into their pile is a dollar someone else goes without. And we lose any gain from someone else taking a chance and using it better.
The reason Big Government, Marxism, Monopolies and Socialism fail for the masses is because centralized planning can never compete with those practicing mass distributed risk taking themselves on a smaller sc
Re: (Score:2)
Every dollar that gets into their pile is a dollar someone else goes without.
Zero-sum fallacy [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
Pop music? (Score:2)
Yes, of course, even a small shell script can do that one.
My problem with futurists is they don't give us a reasonable time frame. Will deep learning and AI have these capabilities in the next 20 years? If it's in 50 years, maybe I don't care too much, because I'll be way too old by then to care about more than slowly chewing applesauce. If it's 100 years? Keeping the climate and society from unraveling before then is a bigger challenge than AI.
Re: (Score:2)
They do not give any time-frame for these predictions, because there is zero evidence they will ever become true. Hence they implicitly select "after I am dead" so nobody can call them out on the crap they talked.
Wrong questions (Score:2)
Re: (Score:2)
Once we get a fully developed AI, how do we convince it to stop playing Call of Duty and do the job we built it for
Similar to how we manage humans: charge them for the electricity needed to run, pay them a salary for doing their job, and threaten to take it away if they slack off... Oh crap, we're going to have to pay these things a wage now too...
Re: (Score:2)
Yes. And convince them to not make fake news reports about humans killing all the AIs.
Re: (Score:3)
A brain is more than just synapses (Score:3)
There are many things that separate the human brain from machines. The first is that there are centers that do different tasks, like the prefrontal cortex, which helps make decisions, contains personality, etc. All these centers have to talk to each other; when they do so incorrectly, you get 'mental problems'.
Another thing is brain waves, which sync up different parts of the brain. If these don't work, I suppose you get seizures.
Since we don't have a really great understanding of how or why the human brain works in many ways, how would we understand an AI? I suppose an AI may exhibit 'mental problems' too. I think it's hubris to say that we could create anything like the human mind in a short timeframe.
Re: (Score:2)
We have no clue how the human brain produces phenomena like 'reasoning' or 'self awareness'. Therefore there is no way you can build machines or write software that has those qualities. 'Machine learning' focuses on what is likely the overall most trivial aspect of how a brain works, the most mechanical aspect of it -- which just illustrates how little anyone understands about the overall issue.
Humans appear to be quick to an
Not Qualia (Score:2)
But if something opens the drawer and takes out a block and says, "I just opened a drawer and took out a block," it's hard to say it doesn't understand what it's doing.
Not so...
If a computer detects a 700 nm light wave and says "red", it has not experienced the color red (as you have in your mind). This is the hard problem of consciousness. It knows no more of "red" than a camera does...
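The mechanical mapping described above is literally a table lookup; a minimal sketch, with rough, illustrative band edges:

    def name_color(wavelength_nm: float) -> str:
        # Approximate visible-spectrum bands (illustrative values).
        bands = [(380, 450, "violet"), (450, 495, "blue"),
                 (495, 570, "green"), (570, 590, "yellow"),
                 (590, 620, "orange"), (620, 750, "red")]
        for lo, hi, name in bands:
            if lo <= wavelength_nm < hi:
                return name
        return "not visible"

    print(name_color(700))  # "red" -- a lookup, not an experience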
Re: (Score:2)
Re: (Score:2)
Open AI.log.txt:
Activated Actuator1; called MoveTo("Actuator1",{coordinate_x},{coordinate_y},{coordinate_z})
Called PatternMatch(VisualSensor1)
PatternMatch(VisualSensor1) returned "Drawer"
Activated Actuator1.1 through Actuator1.5; called Actuator1Cmd("contract,1,5")
Activated Actuator2; called Actuator2Cmd("pull")
Called PatternMatch(VisualSensor1)
PatternMatch(VisualSensor1) returned "Block"
(and so on)
It's not 'saying' anything. It's not capable of 'saying' anything at all. All it does is execute the code that a human wrote.
it's hard to say it doesn't understand what it's doing.
It doesn't 'understand what it's doing' because 'understand' implies 'reasoning' and this software is completely incapable of 'reasoning', 'thinking', 'cognition', and so on, because we don't even have the vaguest notion of how any of that works in our own brains, let alone any clue how to write software or build machines that have any of those qualities.
So-called 'AI', in it's curre
Re: (Score:2)
Humans need to stop anthropomorphizing inanimate objects. We've been doing that for thousands of years and we're going to wreck ourselves if we don't knock it off.
I agree! Computers hate it when we do that.
Like in the 60s? (Score:2)
You keep using that word..... (Score:2)
"Deep learning" doesn't mean what you think it means..
To everybody who reads "Deep Learning" or "AI" and are tempted to buy into the hype or that it somehow can work miracles on a computer, making programming obsolete, PLEASE hear me. There is nothing magic about Artificial Intelligence or Machine Learning. It requires knowledgeable people to engineer a Machine Learning solution. It requires a lot of work to find and validate usable training data. It takes time to tune the solution and train it. It t
No, it is NOT. (Score:3)
Please, humans, for fuck's sake, knock off the 'magical thinking' nonsense before you wreck yourselves!
More is Less (Score:2)
There isn't a single species on this planet that excels at any single task by collecting more data. Simply put, the novice becomes the expert by learning more, but the student becomes the master by reducing that knowledge to only what is needed.
Birds fly in flocks, at high speeds, in formations, with far better guidance systems than any Tesla, fighter jet, or Air Force One. Do you intend to convince me that the bird brain in my local goldfinch is monitoring air pressure, wind speed, altitude, rain, and a
Call me ... (Score:2)
Re: (Score:2)
Re: (Score:2)
The Turing test was passed several years ago, and is routinely done today.
Re: (Score:2)
The Turing test was passed several years ago, and is routinely done today.
Not really. A dumbed-down version of the Turing test was passed using trickery. It just means the people running the test were not very smart or wanted to generate a specific outcome.
Re: (Score:2)
Re: (Score:2)
Why do my reflexes have their own ridiculous 'autocorrect'?
Re: (Score:3)
They'll never pass a Turing test. They'll never be able to commit to any kind of values that allow them to answer basic questions like, "Is life worth living? Should I obey the law?" Etc.
Most of the "news" stories on machine learning are exactly what you say, hype.
People just don't understand AI so they are subject to the hyperbolic rhetoric and breathless reporting on the wonders of AI. Yes, AI can do some interesting things, but most people don't see the work that goes into those applications. Machine Learning is only applicable to specific kinds of problems, requires humans to engineer the solution, set up the system, fuss over the tuning parameters, find and validate the training da
Re: (Score:2)
Are humans good at answering those questions? To pass a Turing test you don't have to outrun the gods (those who have derived and committed to The Best values, as measured by .. hey, waitaminute); you just have to do about as well as humans do.
Those particular questions seem pretty easy to game, too, so that the AI might perform better (i.e. more consistently) than humans. "Life is worth living" can be taught as a value itself (not even a mere answer to the question), by putting AIs into an evolutionary com
Re: (Score:2)
No one fucking knows. And there is absolutely no way to know ahead of time. Period. Any statement to the contrary is either ignorance or self-promoting lies. Or both.
Futurism is a very important philosophical thought process. You would be naive to think otherwise.
Separately, context is important. This is a guy whose theories, writings, and teachings are heavily incorporated in all modern popular AI frameworks. He's being interviewed in an MIT publication whose target audience is mostly students and alumni. MIT has a large and well-funded AI department. Why would he say anything other than the things he's saying?
Finally, the quotes in TFS never give a timeline. He neve
Re: (Score:2)
Re: (Score:3)
The problems are:
* Geoff Hinton doesn't know what the fuck he is talking about. Instead of trying to simulate human intelligence -- which he will fail miserably at -- he should start with something simpler. Gee, like maybe an amoeba (which doesn't even have brain cells) or a worm like C. elegans, which has a mere 302 neurons. You have to learn to walk before you can run.
* Hand-waving all the problems away with "deep learning" is pure fucking hubris which is the OP's point and correct.
* Scientists don't have a fuck
Re: (Score:2)
Any statement to the contrary is either ignorance or self-promoting lies. Or both.
Indeed. And that a Turing Award winner indulges in this is just repulsive. This person should really know better.
Re: (Score:2)
I do not disagree. But when you get such an award, you gain visibility. And that comes with responsibilities. This person apparently has not understood that at all.
Re: (Score:2)
You have a point. There are those that do not fall for this and keep a level head and a realistic view even in such a situation. And then there are people like this one. Does not change the fact that I find this behavior repulsive, though.
Re: (Score:2)
But if something opens the drawer and takes out a block and says, "I just opened a drawer and took out a block," it's hard to say it doesn't understand what it's doing.
This sounds just like somebody in the seventies saying: "But if you present a photo of a dog to a computer and it responds with the word "dog", it's hard to say it doesn't understand what it's seeing."