Artificial Intelligence Pioneer Says We Need To Start Over (axios.com)
Steve LeVine, writing for Axios: In 1986, Geoffrey Hinton co-authored a paper that, four decades later, is central to the explosion of artificial intelligence. But Hinton says his breakthrough method should be dispensed with, and a new path to AI found. Speaking with Axios on the sidelines of an AI conference in Toronto on Wednesday, Hinton, a professor emeritus at the University of Toronto and a Google researcher, said he is now "deeply suspicious" of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri. "My view is throw it all away and start again," he said. Other scientists at the conference said back-propagation still has a core role in AI's future. But Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. "Max Planck said, 'Science progresses one funeral at a time.' The future depends on some graduate student who is deeply suspicious of everything I have said."
I wish they'd change terminology (Score:5, Insightful)
Expert systems aren't AI, and pattern-matching algorithms aren't AI. AI is something that can creatively solve problems based on unreliable inputs and abstracting specific experience to general cases.
The problem there is we don't even understand how that works in theory, so modeling and developing an actually AI based on that model is impressively difficult.
Personally, I think we'll get there (understanding intelligence) faster by trying to replicate a mammalian brain in silicon that we will trying to bash out new algorithms.
Re:I wish they'd change terminology (Score:4, Insightful)
There's an error in the current definition of "AI."
The "I" part is for intelligence, and it's obvious what "intelligence" we mean.
It's certainly not the intelligence of a sunflower.
It's human intelligence.
To duplicate that, a machine will have to work like that.
Any facsimile is a miss.
Re: (Score:2, Interesting)
I disagree. It's easier to define intelligence than you think and it doesn't need a human.
Intelligence is the ability to take inputs from the environment, make a mental model and override your instinctual programming with the updated knowledge from the model.
Entirely separate from that is free-will, consciousness and self-awareness.
Re: (Score:3)
That is not a generative model of intelligence, at best a critical description of some of its aspects.
Re: (Score:1)
I disagree. No one wants to fly around in an airplane that operates the same way a bird does. Useful artificial flight is different and there's no reason to expect that useful artificial intelligence should resemble human intelligence.
Never mind the fact that the whole definition of "intelligence" is still up for broad debate.
Re: I wish they'd change terminology (Score:2)
Re: (Score:1)
There is already a practical example of this: existing AI can flag videos, ads, social network posts etc. as "suspicious" so that a human examiner can review it for a keep/cut decision. This reduces the need for an army of human reviewers. Such filter bots will get incrementally better over time so that increasingly less human intervention is needed (or more content can be reviewed without hiring more reviewers). The suspicious-content detection bots are still "useful" even though they are not human.
Re: (Score:1)
The "I" part is for intelligence and it's obvious what "intelligence" we mean.... It's human intelligence.
No it isn't. It is the intelligence of a fruit fly. In a decade, we may be ready for the intelligence of a mouse.
Re: (Score:1)
"Strong AI" is always 5 years off.
Every 10 years or so, we revisit the definition of what we expect from "Strong AI" - thus ensuring that the goalposts will remain firmly 5 years in the future.
When "I Robot" robots are fetching your drycleaning for you, growing vegetables in your home garden and cooking your meals, they still won't be "Strong AI" because their imaginative abilities are limited to preprogrammed fusions of existing narratives.
Re: (Score:2)
I agree.
"Intelligent computers will have the ability to commit suicide if Facebook is down." ~ © 2017 CaptainDork
Re: (Score:1)
I get so tired of this meme parading as fresh insight.
$silver_bullet is always five years off.
See, if you say ten years, no-one pays attention because ten years is a long ways away and you've already lost the attention war.
If you say two years, some well-informed wise ass will probably start to make irritating (and accurate) observations based on proximate data.
But five years is the Goldilocks condition: just right.
Most of the time you can cut to the chase and simply s/silv
Re: (Score:2)
Every 10 years or so, we revisit the definition of what we expect from "Strong AI" - thus ensuring that the goalposts will remain firmly 5 years in the future. When "I Robot" robots are fetching your drycleaning for you, growing vegetables in your home garden and cooking your meals, they still won't be "Strong AI" because their imaginative abilities are limited to preprogrammed fusions of existing narratives.
Pretty much. The thing is, if you take it to the limit, humans rarely do anything truly novel. For most of history people learned a craft or trade that was passed down, parent to child, master to apprentice. In modern times, schools and universities pass tons of knowledge to pupils and students; counting university, I've spent 17 years of my life in school. And yes, it's a little more than rote passing of knowledge, but if you've never, ever seen or heard descriptions of anyone starting a fire it's not so
Re: (Score:2)
I think you mean always 50 years off.
Re: (Score:2)
No, that is fusion power. AI is nebulously only 5 or 10 years off, and always will be, because once some piece is reduced to an algorithm, it isn't AI anymore and everything else that wasn't being worked on gets called the "real" AI, as opposed to the crap that we wasted our time with before "now" (for varying nows) and is so simple.
Re: (Score:2)
For a start, we can't define "human intelligence".
intelligence
noun
1 the ability to acquire and apply knowledge and skills.
2 a person with this ability.
3 the gathering of information of military or political value; information gathered in this way.
4 archaic news.
DERIVATIVES
intelligential adjective (archaic).
ORIGIN
Middle English: via Old French from Latin intelligentia, from intelligere 'understand', variant of intellegere 'under
Re: (Score:2)
intelligence
noun
1 the ability to acquire and apply knowledge and skills.
knowledge: The psychological result of perception and learning and reasoning
learning: The cognitive process of acquiring skill or knowledge
These things can be defined, but you get some circular definitions pretty quickly.
Re: (Score:2)
And a fundamental flaw in the whole thing is that the magic so many are seeking that they call "True AI" is really not intelligence. I will argue it is Free Will. Intelligence is just a detail along the way... a detail that is pretty well solved but cannot by itself be human-like. Free will is to perceive possibilities, weight them against each other, and execute the preferred. Our world is filled with patterns and we are able to affect them through actions. Our minds therefore receive interaction patt
Re: (Score:2)
Free will is to perceive possibilities, weight them against each other, and execute the preferred.
I agree except to add that intelligence includes things like the option to just say, "No."
The computer might not be in the mood right now.
Re: (Score:2)
Most of the software I see these days is "cargo cult".
Re: (Score:1)
This is just marketing BS and people deciding about scientific funding are not immune to it. So "automation" and "statistical classification" became "weak AI" and sometimes just AI (even though there is not even a hint of "I" in this type of "AI"). "Classifier parametrization" became "machine learning" (learning requires insight, none of that is to be had here though). There are numerous other atrocities against language and reason, all perpetrated to make things sound grand and to get more money.
As to stro
Re: (Score:2, Insightful)
> I think we now have collected ample evidence that either our grasp of Physics is fundamentally incomplete, or that purely physical constructs cannot be intelligent.
Ahh. You believe in magic.
> And "replicating a mammalian brain"? That will not be within the grasp of humanity for thousands of years and likely never.
https://en.wikipedia.org/wiki/... [wikipedia.org]
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
You seem to be unaware of the definition of "magic". Makes you look pretty dumb....
Also, you basically seem to imply that consciousness is an "emergent property" of complexity (because Physics sure does not have a mechanism for it), and that means you do not understand Physics at all.
You also seem to be lacking the basic knowledge required to actually understand the references you gave. They do not say what you think they say...
Re: (Score:2)
Re: (Score:2)
BTW, I only believe in science and in what I can reasonably explain, but have no idea how to explain consciousness
And why do you tell us that? You believe you have no consciousness?
Well, I believe that is true for plenty of /. posters ...
Re: (Score:2)
So you don't believe in consciousness? Then how can you be responding to comments on Slashdot? Are you a bot?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
>Nope. I outright state it is a result of a physical mechanism we do not understand as yet.
What's the evidence for this? There's lots of evidence all sorts of other brain and mind observed properties are a consequence of known physical properties and material behaviors.
What counts as 'physical mechanism'? Not well appreciated dynamics in the various types of synapses & neurons? Unlike artificial neural net
Re: (Score:2)
The hippocampal prosthesis is a proof of concept in rats. Have you actually read the wikipedia page? It is full of "must", "should", "will", "may" and so on. Not exactly as if it were working.
So far we've been able to semi-conclusively simulate the brain of C. elegans, a brain with 302 neurons. (This is debated, by the way.) The human brain has about 10^11 neurons. That is approximately 8-9 orders of magnitude more, roughly 2^28, i.e. about 28 doublings, or another 50-60 years of "Moore's law", which is alread
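The scaling arithmetic above is easy to check; a rough sketch, using the commonly quoted 10^11 figure for the human neuron count:

```python
import math

# Back-of-the-envelope check of the neuron-count scaling argument above.
c_elegans_neurons = 302
human_neurons = 1e11          # commonly quoted order of magnitude

ratio = human_neurons / c_elegans_neurons
orders = math.log10(ratio)    # ~8.5 orders of magnitude
doublings = math.log2(ratio)  # ~28 doublings
years = doublings * 2         # at roughly 2 years per doubling: ~56 years
```

So the "50-60 years of Moore's law" estimate follows directly, assuming each doubling of transistor counts buys a doubling of simulable neurons, which is itself a generous assumption.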
Re: (Score:2)
Expert systems aren't AI, and pattern-matching algorithms aren't AI.
As of today (and the foreseeable future) nothing is AI. There is no such thing, and there are not even any foundations or architectural direction to follow. Why? Because we still can't define what intelligence is. We have it - we arrogantly proclaim - but we don't know exactly what it is, and we have not the slightest idea how it works.
When someone can explain - in detail - how Kekule went to sleep and dreamt the benzene ring's structure, we will be starting to get a handle on what intelligence is.
To my kno
Re: (Score:2)
>we still can't define what intelligence is. We have it - we arrogantly proclaim - but we don't know exactly what it is, and we have not the slightest idea how it works.
But we know something we vaguely define as 'intelligence' seems to exist, and we believe we live in a universe with consistent laws of physics (at least on local scales). We know we can, in theory, replicate what already exists. We have good reason to believe that an intelligent review of the processes - if we can figure them out - can
bicycle vs. the moon (Score:3, Interesting)
Just imagine what the human mind's distributed representation of the "intelligence" concept would look like. Clever animate entities (and most associations therewith) are way off in their own private corner of vector space compared to just about everything else.
When the gap is this large, the enormous void in between somehow becomes a non-object (to superficial cognition) and so people just begin to presume that we need to jump the gap, rather than slowly
Re: (Score:2)
I would split things as follows:
AI refers to development of computational solutions that traditionally required human intelligence.
Deep Learning attempts to emulate how the brain learns.
That said, I'm of the view that 99.9% of the academic literature, at a minimum, needs to be placed atop the next solstice bonfire.
Re: (Score:1)
Actually, expert systems are AI.
I would suggest reading up on the definitions used by the people working in the field of AI instead of inventing your own.
Finding a common vocabulary helps communication enormously!
Re: (Score:2)
Re: I wish they'd change terminology (Score:1)
Re: (Score:2)
Re: (Score:2)
And be self aware enough to modify itself.
Re: (Score:2)
I think AI should simply refer to any kind of man-made system to solve problems (get from condition A to condition B)--but I will not claim that is human-like. Every so often, I notice the definition of AI on wikipedia changes. I don't think there is any consensus. Perhaps that is why people use the terms "True AI" or "Artificial General Intelligence" (AGI). The field legitimately doesn't know what it's seeking. It has a vague notion and differing beliefs on what should qualify. Expert Systems were or
Re: (Score:2)
The terminology has been in a constant state of change.
1950s - Electronic brains
1960s - Perceptrons
1970s - Neural networks
1980s - Expert systems
1990s - Intelligent agents
2000s - Machine learning
2010s - Deep learning
Give it a couple of years, new terminology will turn up.
Re: (Score:2)
Expert systems aren't AI, and pattern-matching algorithms aren't AI.
Spoken like someone whose sole understanding of what "AI" is comes from movies and the popular press. Go to an AI conference, ask the people there what AI is, then tell them their life's work isn't AI.
See how much traction your views get there...
Re:I wish they'd change terminology (Score:5, Informative)
You may be interested in OpenWorm. See: http://www.openworm.org/ [openworm.org]
They are working on simulating a worm. We can't replace individual neurons, but C. elegans is simple enough that we might be able to simulate it to the degree that we really understand it. An insect is way, way beyond what we can do now, and of course even simple vertebrates are a pipe dream. But, we're making progress. It's an open question of exactly which processes we need to simulate at what level.
As for replacing individual neurons, you'd have to know what they do in situ. Obviously, they receive signals, and they fire off other signals, but the strength of the connections change over time, the intercellular environment changes, the overall level of activity changes, they age, etc., so it's not just replacing a single neuron with a static piece of electronics; it would have to have both short and long term dynamics, and we would have to know what they are. And we don't yet.
31 years is now 4 decades? (Score:2)
Also however important back-propagation is, it is hardly the entire foundation of AI. From my perspective AI is proceeding apace. There are many AI methods. Yes some core algorithms should be reexamined, as should anything in science or industry. We see some stuff that seems to lag in how much improvement we expected (general intelligence), and yet others that are leaping ahead of where we thought they would be like machine learning and pattern recognition. Eventually all the threads will start to come
Re: (Score:2)
Re: (Score:1)
The issue is that simply training a backprop system will only get you so far.
Another issue is people thinking that people are training backprop systems.
Backprop is the training system.
Why the fuck are you pretending to know anything? Dishonest fuck much?
He is not wrong (Score:1)
Likely he is not right either, because AI beyond statistical classification ("weak AI") may well be impossible, but trying new things is at the core of actual research. Other approaches have been tried in other fields and have failed to produce any hint of intelligence as well. For example, automated theorem proving found that it cannot really be used to _find_ theorems, because the universe is a bit too small and short-lived to build the machinery for that. It is a very good tool in verifying
Re:He is not wrong (Score:5, Insightful)
>Likely he is not right either, because AI beyond statistical classification ("weak AI") may well be impossible
Nature did it with meat. Meat is not special. We have to learn how to replicate the mechanisms - which involves first understanding the mechanisms. Both of those are daunting tasks, but not fundamentally impossible.
If you think they are, then you must believe intelligence is a product of a supernatural process, and your theories are not appropriate for a science-based discussion site.
Re: (Score:2)
Nature did it with meat. Meat is not special. We have to learn how to replicate the mechanisms - which involves first understanding the mechanisms. Both of those are daunting tasks, but not fundamentally impossible.
What is the basis of your statement "meat is not special"? I mean regards to intelligence? Maybe meat is fundamentally special when it comes to producing high-level intelligence?
I'm not implying any supernatural mechanisms here. Just that what "meat" does may not be reproducible in silicon. Has anyone built a computer that grows a destroyed circuit back? Meat is pretty special. It regenerates. It reproduces. It learns. It evolves. What else on Earth does that?
Perhaps the only way to build artificial (human
Re: (Score:2)
I suspect that we're just not smart enough to design a machine as smart as we are, and we never will be.
Re: (Score:2, Insightful)
>stop pretending your anti-science quasi-religious fundamentalist beliefs are science
Project much? What the hell is wrong with you? You're the one making supernatural claims, not I.
>"Nature did it with meat" has no scientific basis.
Wow. So the fact that we can observe evolution and our fellow humans, make predictions, test them... 'no scientific basis'.
> All Science has is interface observations. And even a child these days knows that what you can observe on the outside of a box is not necessaril
Re: (Score:1)
"Nature did it with meat" has no scientific basis. All Science has is interface observations.
What are you on about? Yes, science has only interface observations... we have only interface observations, and in fact we observe nothing whatsoever directly. Everything is perceived through layers of translation and understood through layers of theory. You're trying to argue that nothing is understandable, which is clearly false.
Getting back to the point, what we have observed is that humans are intelligent. We don't know in detail what that means but we can describe large categories of cognitive abilit
Re: (Score:2)
Fascinating. The dumbing-down is in full swing when one gets moderated down to -1, Troll for pointing out the scientific state of the art.
Re: (Score:2)
Claiming something is "obvious" and hence must be true is not Science. It is wishful thinking. Care to prove your assertion? Oh, right, you cannot.
Re: (Score:3)
>Using silicone semi brute strength to emulate "meat" may be infeasible as we are rapidly reaching silicone's physical limit.
Have a look into memristors, a new toy that could be very useful for making artificial brains.
Then consider that nature 'figured out' how to be more efficient by using more switches with lower thresholds and taking the average, while we tend to juice transistors to ensure a strong '1' or '0'.
And finally... silicon. Silicone is not particularly useful in computers except as a seala
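The "many weak switches, averaged" idea above can be sketched in a few lines; this is a toy illustration of population averaging, not a claim about real neurons:

```python
import numpy as np

# Population-coding toy: each cheap, noisy unit "fires" with probability
# equal to the analog value being represented; averaging many of them
# recovers that value, instead of juicing one switch to a hard 0 or 1.

rng = np.random.default_rng(0)
x = 0.7                          # analog value to represent
n = 10_000                       # number of weak, low-threshold units
fires = rng.random(n) < x        # each unit's noisy binary response
estimate = float(fires.mean())   # population average, close to 0.7
```

The estimate's error shrinks as 1/sqrt(n), so accuracy is bought with unit count rather than per-switch precision.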
Re: (Score:2)
Then consider that nature 'figured out' how to be more efficient by using more switches with lower thresholds and taking the average, while we tend to juice transistors to ensure a strong '1' or '0'.
More importantly, nature does not impose any synchronization or avoid feedback. Meanwhile, most A.I. work is done in an extremely (pedantically so) synchronized, feedback-less framework.
And good thing... these systems won't collectively decide to kill all humans until there is at least some internal feedback.
Re: (Score:1)
No, its not impossible, but it requires vastly better hardware that actually mimic or model processes we see in neurons and not the simple stuff we can do with silicon today. We are at the level where we can maybe in a decade and with specialized hardware build a roach brain in silicon. We are nowhere close to a human or even a cat brain. The reality is we are more hardware limited than software limited.
Re: (Score:3)
If you think that, then you have no clue what the limits on software complexity that can still be handled are. Sure, we are hardware-limited and we will be that for the foreseeable future. But the little overlooked fact here is that we have no clue what the software actually should do in order to simulate a brain, so even if we had the hardware, we would not be any closer to the result.
Also, why assume that just scaling the thing up makes it suddenly be intelligent? That is a baseless assumption as that has
Re: (Score:3)
> Assuming a purely physical apparatus could attain all these is neither supported by our current understanding of Physics nor does it have any scientific base. It is a belief. And, as it turns out, the follower of this belief ("physicalists") use pretty much the same faulty argumentation techniques so common with religious fanatics.
There you go again - the third time in this discussion by my rough count. You deride the idea that physical processes could create intelligence as a product of the faith of
Re: (Score:2)
You are a moron. What you do is circular reasoning. And you do not even recognize that. Incidentally, this level of reasoning is about as sophisticated as what the religious fuckups do.
Also, your last sentence gives you away nicely: The laws of Physics are not something to "believe" in. They are something to verify. And they are incomplete at this time, as anybody that cared to find out knows. You obviously did not.
Re: (Score:2)
>The reality is we are more hardware limited than software limited.
Well, I'm not sure it's fair to call it 'software' anyway. It's more like 'firmware', in that the organization of the hardware is the basic 'OS'. And there may be some process going on in a brain that is so much more efficient than attempting to model it in a computer that it's effectively beyond us until we do manage to mimic a biological brain in hardware.
A set of known unknowns?
Re: He is not wrong (Score:1)
And bats don't fly by flapping their wings.
dirty word (Score:2)
The future of AI is a dirty word "stereotyping".
The brain works by making associations, and then drawing stereotypes from them. Every time I've seen a dog or hooded man in a dark alley, it has attacked me. I stereotype dogs and hooded men in dark alleys as being scary and run from them. But then one day, I meet a green hooded man with a bow in the alley, and he saves me from the dog. I have to 'learn' by reshaping my stereotype to include men in green hoods.
Stereotypes get a bad name due to people the r
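The "reshape your stereotype after a counterexample" loop described above is just online estimation. A toy sketch, with all names and numbers purely illustrative:

```python
from collections import defaultdict

# Toy "stereotype" learner: keep running counts of (threats, encounters)
# per feature and update the estimate with every new observation.
counts = defaultdict(lambda: [0, 0])

def observe(feature, was_threat):
    t, n = counts[feature]
    counts[feature] = [t + int(was_threat), n + 1]

def threat_estimate(feature):
    t, n = counts[feature]
    return t / n if n else 0.5   # prior of ignorance

for _ in range(5):
    observe("hooded man", True)  # every encounter so far was bad
observe("hooded man", False)     # the green-hooded rescuer
# estimate drops from 1.0 to 5/6 after the single counterexample
```

The point of the analogy: the "stereotype" is never discarded, only re-weighted as evidence accumulates.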
Re: (Score:1)
Supervised v. Unsupervised learning (Score:1)
Backpropagation is a form of supervised heuristic learning where you have to know the desired output, and so it works backwards. In that context it's about perfect. We don't have any algorithmic techniques in an unsupervised learning context that are as good. Expectation maximization and blind signal separation algorithms all generally suck balls. The goal is unsupervised learning that works as efficiently as backpropagation. I suspect this is what he is saying but since this article and his language aren't
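The supervised setting described above can be made concrete: the targets y are known, so the output error can be pushed backwards through the chain rule. A minimal numpy sketch (hyperparameters and network size are arbitrary choices, not anything from Hinton's work):

```python
import numpy as np

# Minimal backpropagation sketch: a tiny 2-layer sigmoid network learns XOR.
# Training is supervised: the desired outputs y are known, so the error is
# propagated backwards layer by layer via the chain rule.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # known targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass starts at the error
    d_h = (d_out @ W2.T) * h * (1 - h)       # chain rule through the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

loss = float(np.mean((out - y) ** 2))        # small after training
```

Note how nothing here works without y: remove the targets and there is no error signal to propagate, which is exactly the unsupervised gap the comment points at.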
I've said it for years (Score:3)
Re: (Score:3)
We might never have AI. Or we might eventually get AI, and it turns out to be no better than what humans can do. Despite that, weak AI ("automation") is not a joke, but very useful. As it turns out, many things we though required intelligence, actually do not. And hence many tasks are open to automation.
Re: (Score:2)
Re: (Score:2)
These days, automation (which includes statistical classifiers) is often called "weak AI", for marketing reasons. I do agree that this is an abuse of terminology.
Not only that (Score:2)
Re: (Score:2)
Ah, yes, Minsky the moron. That guy never understood what computers can and cannot do. Probably became too important too fast and never got a grasp on reality. I am really glad he is dead, his massive disservice to the field is impressive.
That said, most of the "AI community" is actually doing good work. Most of it is also not called "AI" though. For example, robotics was smart and made sure they did not get lumped in with the "visionaries".
Nope (Score:2)
The AI community needs to be much more cautious and circumspect.
It's not the fault of the AI community. They are always very cautious about the claims they make.
The misconceptions about AI on the part of the laity is all down to the PR and marketing peeps making claims about things they know nothing about.
Disingenuous news article (Score:2)
it's not 2026 (Score:1)
we may be into the fourth decade since 1986, but it's been 31 years, not 40+.
Sometimes People Forget, or Guess (Score:2)
Re: (Score:2)
Consider: a neuron will not fire 1 time in 10, to simulate forgetting.
Artificial neurons do that too. It is called dropout [wikipedia.org].
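The standard "inverted dropout" trick can be sketched as follows; the p=0.1 mirrors the "fails to fire 1 time in 10" idea above, and the rescaling keeps the expected activation unchanged:

```python
import numpy as np

# Inverted dropout: during training each unit is silenced with probability p,
# and the survivors are rescaled by 1/(1-p) so that the expected activation
# matches the no-dropout test-time behavior.

rng = np.random.default_rng(42)

def dropout(a, p=0.1, training=True):
    if not training:
        return a                       # at test time the layer is a no-op
    mask = rng.random(a.shape) >= p    # keep each unit with probability 1-p
    return a * mask / (1.0 - p)

h = np.ones(1000)
out = dropout(h, p=0.1)
dropped = float(np.mean(out == 0))     # close to 0.1
```

Unlike the biological analogy, though, dropout is used as a regularizer (preventing co-adaptation of units), not as a model of forgetting.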
Bringing AI into the mainstream office structure (Score:1)
The future may be table-oriented AI (TOAI) [github.com].
It uses tools and/or conventions more typical of a regular office and thus allows AI problems to be split up and analyzed in a modular team-oriented fashion. Tables are easier to relate to than traditional neural nets (without a lot of training, at least). TOAI allows compartmentalizing AI tasks to distribute to staff (tasks, sub-tasks, etc.), and encourages a kit-oriented approach (modularization).
For example, you may have 3 sub-teams: 1) pattern/test makers, 2) r
Start Over Doing What? (Score:5, Insightful)
Deep learning and other related machine learning techniques are proving very useful for a wide range of tasks. We don't need to "start over" to advance useful machine learning techniques.
Hinton seems to mean how to get to "strong AI". Yes, I read TFA; the strength of Axios articles is that they are very short, but that is also their weakness. Very little is actually said in TFA.
We are a long, long way from anything that emulates a natural neural system at any level.
Consider Caenorhabditis elegans. Every cell in this simple worm has been mapped, also the development of every cell from a single cell has been mapped (male worms have 1031 cells). We know every cell in its nervous system (there are 302), and every cell that each cell is connected to, and we know the type of connections for all. What's more we have completely sequenced its genome. We know more about this little multi-cell organism than any other multi-cell animal on the planet.
Since we know every cell in its nervous system, and every connection between every cell, we must be able to emulate this worm's "brain"! Heck we must be able to "upload" the worm's brain to a computer! Right? Right?
No.
We are still working on understanding the functioning and capabilities of a single neuron in its brain. That has proven so complex as to defy characterization thus far. We are essentially nowhere in understanding how this 302 cell brain works despite decades of effort.
Meanwhile Kurzweil has changed his prediction of "when computers will have human-level intelligence" from 2020 to 2029. I guess believing it was going to happen in the next 26 and a half months was cutting it a little too close. I have been reading about his predictions about AI for a couple of decades now and have yet to see any explanation of how he imagines this is going to happen - other than his expectations about hardware capabilities, and that there is still an unspecified "software issue" that needs to be solved. Indeed.
Re: (Score:2)
We are still working on understanding the functioning and capabilities of a single neuron in its brain. That has proven so complex as to defy characterization thus far. We are essentially nowhere in understanding how this 302 cell brain works despite decades of effort.
Nice to see someone else who appreciates the complexity of the problem.
As I see it, a big part of the problem is that, unlike some simple mechanism that you can take apart, examine and measure the parts of, then reassemble into a working machine, you can't do that with living cells, let alone a living brain. We currently lack sufficient instrumentation to properly observe how brain cells work, let alone the entire 'system' in action, so deducing what's really going on involves a lot o
Re: (Score:3)
Thanks!
And then there is the issue of whether we really need to emulate how natural brains work to get strong AI.
There is a Russell and Norvig quote that I rather like because it does help reveal the important issues: “The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics.”
Most people I have discussed AI with, who know of this (well-known) quote, draw the conclusion from t
Re: (Score:2)
Meanwhile Kurzweil has changed his prediction of "when computers will have human-level intelligence" from 2020 to 2029. I guess believing it was going to happen in the next 26 and a half months was cutting it a little too close. I have been reading about his predictions about AI for a couple of decades now and have yet to see any explanation of how he imagines this is going to happen - other than his expectations about hardware capabilities, and that there is still an unspecified "software issue" that needs to be solved. Indeed.
Please please please never associate Kurzweil (who is basically a media personality) with real AI researchers. Nothing Kurzweil has ever said about AI is more informed than speculation.
Re: (Score:2)
A
Re: (Score:2)
Of course he means "strong" AI. Many of us oldsters refuse to let the term AI get reassigned to mean something lame every time a new set of kids comes along wanting to claim they have made progress. They need to go give their lame stuff a name of its own instead of weaseling the real thing off as "strong AI" or AGI.
And he's right, the current "AI" methods may give us much safer cars, able to step in and save us from most accidents at the cost of some false positives, but they will not give us the "take a
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
The individual subparts of the brain aren't intelligent on their own.
Many have obvious, easily implementable functions.
When you read about people with broken brains, you can easily see how mammal intelligence is composed of multiple subsystems. Even chimps and dogs have self awareness, the concept of object permanence, surprise, joy, affection, and some even humor.
We need to be very careful of A.I. research.
A successful A.I. could be 500 years away. Or it might happen next year.
We need things like
* Power limits with analog unh
Re: (Score:2)
I hope he's wrong, because replacing back-prop would be a real son of a bitch.
Forget AI, it's time for time travel! (Score:2)
I'm not ready for 2026 yet!
Difference engine using cluster analysis (Score:1)
Math Is Over Too (Score:1)
Because I Solved Diffie-Hellman Exchange For Catalytic Conversion: https://pastebin.com/ZVvLYYiV [pastebin.com]
Hebbian learning was always the more fundamental (Score:2)
Hebb showed us the way forward right from the start, yet we still managed to get stuck with backpropagation and perceptrons time and fucking time again.
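Hebb's rule ("cells that fire together wire together") is local: the weight change depends only on pre- and post-synaptic activity, with no backwards-propagated error signal. A minimal sketch:

```python
import numpy as np

# Hebbian learning in one line: the weight change is proportional to the
# product of local pre- and post-synaptic activity. Unlike backprop, no
# global error signal is propagated backwards through the network.

def hebbian_update(W, pre, post, lr=0.1):
    return W + lr * np.outer(post, pre)

W = np.zeros((2, 2))
pre = np.array([1.0, 0.0])    # presynaptic activity
post = np.array([1.0, 0.0])   # postsynaptic activity
for _ in range(10):
    W = hebbian_update(W, pre, post)
# Only the synapse between the two co-active units, W[0, 0], has grown.
```

Plain Hebbian growth is unbounded, which is why practical variants (Oja's rule, BCM) add normalization or thresholds.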
And let me add... (Score:2)
"The future depends on some graduate student who is deeply suspicious of everything I have said."
...he will be an aggressively creative male. Oh, wait a minute, he couldn't get a seat. Well, never mind.
Everyone says computers are too slow... (Score:2)
But, as Vernor Vinge pointed out in one of his stories (True Names - 1981), who says it needs to run in real time?
Maybe we're aiming too high right now. We want to simulate what we're capable of doing, at the same speed that we can do it. Why?
We talk about mapping the neurons in a worm, and replacing the worm's brain with silicon to see if it can still act like a worm. Simulate the rest of the darn worm, and its environment, and see what happens instead.
If it takes weeks or months of processing to give a
Re: (Score:3)
Well, "God" is a transparent pseudo-explanation for those weak of mind, but the physicalists (fundamentalists that believe everything is just matter and energy) are not much better. Both use belief-based strategies of dealing with the unknown and both are anti-science.
When it comes to consciousness, intelligence and free will, the scientific state of the art is "nobody has a clue". Anybody actually thinking scientifically is able to live with that, but that approach is beyond a great many people. Hence they
Re: (Score:2)
Re: (Score:2)
And there you do exactly what is _not_ done in Science. In Science, a question remains _open_ until there is evidence to close it. You are doing the opposite thing and that is pure belief and has nothing at all to do with Science. Fail.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
And there you have demonstrated again that you do not understand Science at all. Because you just predicted that Dragons do not exist, and you have done so without a shred of proof. Pathetic.
Re: (Score:2)
You are kidding yourself. You have closed this question with a pseudo-answer. Very likely you are one of the many people that cannot stand an open question. Incidentally, listing options is completely scientific, even if you do not like them.
Re: (Score:2)
When it comes to consciousness, intelligence and free will, the scientific state of the art is "nobody has a clue". Anybody actually thinking scientifically is able to live with that, but that approach is beyond a great many people. Hence they invent stupid pseudo-explanations.
Accepting that "nobody has a clue" is not scientific at all, because the basis of science is questioning in a systematic manner. That some or many people believe in religion, philosophy, etc. is a direct result of the utter failure of the scientific method to produce answers to how and why the world and life exist. Many of these questions fundamentally deal with past events, and many scientific textbook statements cannot be tested by strong scientific methods, leading to dogmatic belief under the nomencla