Can We Build a Human Brain Into a Microchip? 598
destinyland writes "Can we imprint the circuitry of the human brain onto a silicon chip? It requires a computational capacity of 36.8 petaflops — 36.8 thousand trillion floating point operations per second — but a team of European scientists has already simulated 200,000 neurons linked up by 50 million synaptic connections. And their brain-chip is scalable, with plans to create a superchip mimicking 1 billion neurons and 10 trillion synapses. Unfortunately, the human brain has 22 billion neurons and 220 trillion synapses. Just remember Ray Kurzweil's argument: once a machine can achieve a human level of intelligence — it can also exceed it."
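A quick back-of-envelope check of the gap described in the summary (the figures below are taken straight from it):

```python
# How far is the simulated network from a human brain?
# All figures come from the story summary above.
simulated_neurons = 200_000
simulated_synapses = 50_000_000

planned_neurons = 1_000_000_000         # 1 billion (planned superchip)
planned_synapses = 10_000_000_000_000   # 10 trillion

human_neurons = 22_000_000_000          # 22 billion
human_synapses = 220_000_000_000_000    # 220 trillion

print(human_neurons / simulated_neurons)    # neurons: 110,000x short of a brain
print(human_synapses / simulated_synapses)  # synapses: 4,400,000x short
print(human_neurons / planned_neurons)      # even the planned chip is 22x short
print(human_synapses / planned_synapses)    # by synapse count too: 22x short
```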
Undue Credit to Kurzweil (Score:5, Interesting)
Just remember Ray Kurzweil's argument: once a machine can achieve a human level of intelligence — it can also exceed it.
Ray Kurzweil is a brilliant computer scientist who brought us many improvements to — and maybe even the invention of — the electronic musical keyboard.
But that is not his argument. I laughed when I read that, as the concept was presented to me in sci-fi novels before Kurzweil's time. The earliest I (or Wikipedia) can trace the intelligence explosion [wikipedia.org] theory back to is Irving John Good who, in 1965, said [archive.org]:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
This was popularized by Vernor Vinge, which is where I recall reading about it. There are many reasons to celebrate Raymond Kurzweil [wikipedia.org]. In my opinion, his "work" in nutrition and his near-religion called futurology are not among those reasons. He has become a vocal proponent of a dream to become god-like. I do not share that dream, though I wish him the best of luck in his endeavors. I just cringe every time I read of the "singularity being near" or the ability to live forever coming about. If it's going to happen, just sit back and let it happen. I feel he has done a great disservice to the field of artificial intelligence by promising unrealistic things in interviews to the lay person. Disappointment is a surefire way to get yourself branded as a snake oil salesman or a religious nut.
Predictions for the future are for sci-fi books and movies; as a scientist, don't get into the habit of telling a reputable magazine or web site in an interview what is about to happen. Example:
Kurzweil projects that between now and 2050 technology will become so advanced that medical advances will allow people to radically extend their lifespans while preserving and even improving quality of life as they age. The aging process could at first be slowed, then halted, and then reversed as newer and better medical technologies became available. Kurzweil argues that much of this will be a fruit of advances in medical nanotechnology, which will allow microscopic machines to travel through one's body and repair all types of damage at the cellular level.
And that's easily criticized:
Biologist P.Z. Myers has criticized Kurzweil's predictions as being based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology. Myers also claims that Kurzweil picks and chooses events that appear to demonstrate his claim of exponential technological increase leading up to a singularity, and ignores events that do not.
Re: (Score:2, Insightful)
++
I agree 100%. I still don't understand why this charlatan gets so much press on Slashdot. Probably because it causes people like you and me to post.
Re:Undue Credit to Kurzweil (Score:5, Insightful)
I agree 100%. I still don't understand why this charlatan ...
Well, despite my overly critical initial post, I will waste karma with further speculation on Kurzweil. He's actually not a charlatan. He's just stepping outside of his field, extrapolating some of the things that have been achieved ... and using an unrealistic exponential curve to guide his predictions.
The man has experienced great success — both in business and academia — throughout his lifetime. Since 1990 he's made a few inventions to help learning and disabled students, which is great. Unfortunately he's found that writing books, holding symposiums and giving speeches about fantastic science fiction is what draws attention and resources. So he keeps doing it. It results in a lot of press, and I'm sure his aging body might drive him to hope for and fund a singularity before he dies.
While this singularity is a romantic idea, it's just not based on science. He's lost sight of what he once did: musical hardware that advanced synthetic music far beyond the rate at which it normally would have progressed. And now his efforts are not directed at realistic goals but at loftier goals that no one can achieve. What's worse is that it depends on crosses between fields he's simply not an expert in.
You might be able to argue that he's a charlatan now, but in my mind he's Thomas Edison turned Nostradamus. He's pulled out all the stops that bind normal scientists to the scientific process and has passed from optimism into fantastical dreams. He can write all the books he wants, but until he gets back to what made him great — actually implementing something and leaving a legacy of working examples — he runs the risk of tarnishing his reputation.
Re: (Score:3, Insightful)
yeah, they'll never make a computer that can solve problems the way a human can until they get computers to become absolutely focused — if I tell it to run i++ a quadrillion times, I want to see an answer! I don't want to come back five minutes later and see that it's decided to play solitaire instead!
Re:Undue Credit to Kurzweil (Score:4, Insightful)
I feel he has done a great disservice to the field of artificial intelligence by promising unrealistic things in interviews to the lay person. Disappointment is a sure fire way to get yourself branded as a snake oil salesman religious nut.
A disappointed public threatens research funding, but an unprepared public threatens chaos.
I'm more concerned with making sure we're thinking ahead to the radical change that is likely to come, be it in 10 years or 40, than with the risk that lay people will distrust AI researchers.
Re:Undue Credit to Kurzweil (Score:5, Insightful)
From TFA: imprint the circuitry of the human brain using transistors on a silicon chip?
No, not on binary circuits we can't. We might simulate the brain, or even model the brain, but we won't imprint it.
The brain is a parallel processor.
Tremendously parallel, and it's a multimode analog design, not a single-mode digital design. There are many different kinds of brain cells, with both chemical and electrical components.
We can model an atomic explosion, but we understand the physics behind an atomic explosion. We have hardly begun to understand how the brain works. We'll have cures for all mental illnesses before we can accurately model the brain, because if you can't fix a broken machine you don't understand how it works — and even if you can fix a broken machine, you still may not understand that machine completely.
When you model an atomic explosion, there is no radiation released. A model is not the real thing.
There is no test for sentience. Without such a test it would be impossible to know if you have succeeded in accurately modeling it.
Re:Undue Credit to Kurzweil (PS) (Score:3, Interesting)
A disappointed public threatens research funding, but an unprepared public threatens chaos
And a simulated intelligence that doesn't truly think or feel may get "machine rights". I wish these guys would read Dune; the jihad was not against the thinking machines, but against the men who used the thinking machines to enslave their fellow men.
And, when they can model a fly's brain and build an artificial fly, I'll be a hell of a lot more impressed than by their simply "modeling" 200k out of BILLIONS of brain cells.
Re: (Score:3, Interesting)
I find the argument puzzling that we would only have to design a machine that's "smarter" (however that's defined...) than a human. Then the machine could design still smarter machines, etc, etc, until you get an intelligence explosion.
While that sounds plausible, we have to remember that not a SINGLE person designed the machine. It was the work of hundreds or perhaps thousands of people, over time, designing and improving the individual components and software. No one person could have done such a feat.
Complexity orders of magnitude bigger (Score:5, Insightful)
Having some personal understanding of both, I heartily agree. Let's separate out wishful thinking and esoteric "knowing" — both are merely ungrounded speculation.
Myers also claims that Kurzweil picks and chooses events that appear to demonstrate his claim of exponential technological increase leading up to a singularity, and ignores events that do not.
I once seriously considered a strategy for building an artificial brain with a veteran professor of computer science. Examining the problem, I gave up when I realised that the individual cells are "intelligent". I think this is vitally important: how does the "mind" of a protozoan work? They can navigate obstacles, identify and assimilate food, run away from danger, and have a 20-minute memory. We can assume that a single neurone may well have all of these capabilities and more. I believe that we may be myopically focused on nodes and connections, without considering just how complex and capable a single node is.
So the complexity of the problem is probably an order of magnitude beyond 22 billion neurones and 220 trillion connections. Then consider the effect of 1000s of unknown neurotransmitters - and we know little about the "known" ones, such as serotonin and dopamine, except that they have a profound effect. And _then_, consider that the brain has structure, and we know comparatively little about that structure, and only a few hints about the algorithms and data structures that it uses.
I don't recall the paper but (Score:4, Interesting)
Around 2012 the tech will exist to map the whole human brain — not a living one, just at the resolution needed to get all the cells and connections — maybe 2015... and it'll probably have to be a dead brain that doesn't move. Brain scans of living human brains already get quite fine-grained; I heard this estimate about 6 years ago and it sounds reasonable.
Not understanding how the brain works will always be a problem; it's a nonlinear approximation (of the number 42?) as far as our general understanding of it goes. Even if the brain is just an analog version of such a math problem, those problems almost instantly scale beyond our grasp with only a few variables involved. Just think in terms of linear algebra problems and how basic they have to be to "solve" — which doesn't necessarily mean we really fully understand the answers we get. For example, infinity: we work with it and get the concept, but we will never fully understand it.
Computing power grows at certain rates; one can combine that with an estimate of how many transistors it takes per simulated neuron (or something like that) and estimate at what point we will have the power to load the brain-scan data in and start trying to simulate a model of a real brain. Using custom-designed chips and circuitry may only shorten the estimate, as do clever new ways to simulate processes.
I'm guessing around 2030, but it's hard to say. That doesn't mean that when somebody tries it something will happen... we may have to give the thing simulated I/O as well to get anything from it. My guess is politics will be the worst problem as this kind of research gets closer to science fiction.
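The kind of projection the parent describes can be sketched as simple compounding. Every input below (today's available compute, the doubling period, the start year) is an illustrative guess, not a figure from the article:

```python
import math

# Hypothetical projection in the spirit of the parent comment: assume
# available compute doubles every N years and solve for the year it
# crosses the requirement. All inputs are made-up illustration values.
required_flops = 36.8e15      # 36.8 petaflops, from the summary
available_flops = 1.0e13      # assumed compute available to a lab "today"
doubling_period_years = 2.0   # assumed Moore's-law-style doubling
start_year = 2008

doublings_needed = math.log2(required_flops / available_flops)
crossover_year = start_year + doublings_needed * doubling_period_years
print(round(crossover_year))  # early 2030s with these made-up inputs
```

Changing any input shifts the answer by years, which is exactly why such estimates are soft.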
Re:Complexity orders of magnitude bigger (Score:4, Interesting)
Protozoa are simple; they just have a number of triggers with some memory. It can be hard to determine all of them, but once you're done, it should be simple to simulate them.
And neurons are studied quite well by now. So far we don't see any 'superintelligent' behavior from simple neurons. There are subtle things that we might have missed (like the recently discovered neurotransmitter spillover), but are they essential?
Personally, I think that we might be able to simulate a brain. It will probably require several more breakthroughs, but I'd bet it will be possible.
Re:Undue Credit to Kurzweil (Score:5, Funny)
In fact, implementation would be trivial.
10 PRINT "What?"
20 PRINT "I don't understand"
30 PRINT "Where's the tea?"
40 GOTO 10
Re: (Score:3, Funny)
In fact, implementation would be trivial.
10 PRINT "What?"
20 PRINT "I don't understand"
30 PRINT "Where's the tea?"
40 GOTO 10
What?
Re: (Score:3, Informative)
err... no. Electronic keyboards go back at least this far...Ondes Martenot [wikipedia.org]
Can We Build a Human Brain Into a Microchip? (Score:4, Insightful)
"Can We Build a Human Brain Into a Microchip?"
No.
There. Fixed that for you. (Score:5, Insightful)
"Can We Build a Human Brain Into a Microchip?"
Not YET.
Re: (Score:3, Funny)
Re: (Score:3, Funny)
This got me wondering if silicon based females have carbon implants.
Re: (Score:3, Insightful)
If you gradually increase the lightness of black, at what point does it become white?
The fact that there is no clear boundary does not mean that there is not a useful distinction -- the ancients spotted that logical fallacy: the continuum fallacy [wikipedia.org]
Re: (Score:2)
Go ahead.
Maybe then it can assign probabilities to the various unintended consequences.
Then again, why? You people can't even successfully manage a currency or your banks. How will you deal with super-intelligent machines without ethical guidelines? Or do I repeat myself? :-)
Re: (Score:3, Funny)
Stop crushing our scifi nerd pipe dreams, you bastard!
Interesting, but... (Score:5, Interesting)
Re:Interesting, but... (Score:4, Insightful)
While the CPU/RAM model is not the way the brain works (I suppose), it can be used to run a "virtual machine" that itself does work like the human brain does.
I don't think they are trying to simulate a human brain just by throwing a bunch of hardware together...
Re:Interesting, but... (Score:4, Funny)
Running the human brain in a virtual machine creates lots of overhead.
Re: (Score:2)
Emulating x86 on x86 - low overhead.
Emulating POWER on x86 - high overhead.
Emulating quantum computer on x86 - extremely high overhead.
Emulating brain on x86 - ?
Re:Interesting, but... (Score:5, Funny)
Emulating brain on x86 - ?
Priceless?
Re:Interesting, but... (Score:5, Funny)
Profit!
Re:Interesting, but... (Score:5, Insightful)
Why should we try to create an artificial brain in the computing lab when it would be much easier to do it in the genetic engineering lab?
Re:Interesting, but... (Score:5, Funny)
The former doesn't start smelling funny when you leave it on the lab counter overnight.
Re: (Score:3, Funny)
The former doesn't start smelling funny when you leave it on the lab counter overnight.
"My dog doesn't smell!"
"You gave him a bath?"
"No, I cut off his nose!"
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
But can a brain be emulated in computer hardware? I don't think it can. Certainly not with existing technology, and I don't think we are any closer to it now than we were in the 1970s.
The two main problems I see are that computers only understand boolean logic, and they only do what they are told to do. No matter how fast you make them, or how much memory you throw at them, you can't get round that without taking the technology in a completely different direction, and that just isn't happening at the moment.
Re:Interesting, but... (Score:5, Interesting)
Re:Interesting, but... (Score:5, Informative)
But what if the brain works by exploiting all of the effects of molecules, proteins, ions, electrical charges, even quantum effects at a molecular level? We have seen that evolution is excellent at finding very clever ways of exploiting whatever resources are available. It is possible that the only way to simulate a brain is to simulate every single atom involved within a brain. For obvious reasons a computer made of 'n' atoms cannot simulate a brain made of 'n' atoms as fast as that brain can work.
I don't know that this is true, but it certainly brings up the possibility that it may be impossible to simulate a brain faster than a brain works, or better than a brain.
Or on a slightly less pessimistic level, perhaps a "synapse" could be encapsulated in a software object, but the number of variables that make each synapse's position, arrangement, and connections unique is staggering and would require a machine thousands of times more powerful than a real brain in order to simulate it. That would move our "singularity" out until we have computers that can process as much as 22,000 billion neurons and 220,000 trillion synapses. I wonder if someone better at math and physics could calculate the bare minimum energy required to store 220,000 trillion somewhat complex pieces of information. I recall reading a calculation that a ZFS filesystem at its theoretical (but not practical) limit would hold enough information that the minimum energy required to actually encode it would be enough to boil the Earth.
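For what it's worth, the minimum-energy question can be bounded with Landauer's principle, which says erasing one bit dissipates at least kT ln 2 joules. The bits-per-synapse figure below is an arbitrary assumption, not anything from the article:

```python
import math

# Landauer bound applied to the parent's 220,000 trillion synapses,
# assuming (purely for illustration) 32 bits of state per synapse.
k_boltzmann = 1.380649e-23   # Boltzmann constant, J/K
temperature = 300.0          # roughly room temperature, K
bits_per_synapse = 32        # illustrative assumption
synapses = 220_000e12        # 220,000 trillion, from the parent

energy_joules = synapses * bits_per_synapse * k_boltzmann * temperature * math.log(2)
print(f"{energy_joules:.3e} J")  # on the order of hundredths of a joule
```

With these assumptions the thermodynamic floor is tiny; the boil-the-Earth figure for ZFS comes from a vastly larger bit count, not from anything brain-sized.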
Re:Interesting, but... (Score:5, Interesting)
First... there is no requirement that the computer cannot be some x*n atoms.
Second... I'm not sure that this would be the case:
It's quite possible that, say, only 1% of the atoms in the brain are required for the brain activity we'd like to simulate. Off the top of my head (ha!) some examples would be those atoms involved in nutrient uptake, metabolism, and waste removal. I'm sure there're also atoms like those that give length to axons... those don't need 1:1 representation, a timed loop could represent them. Or all the neurotransmitters, those atoms could be instead represented by a few bits used as a counter.
Basically, my argument boils down to this: I don't think the goal would be to build a simulacrum of the brain. Just a simulation of the brain. This gives lots of room for making things more efficient (though maintaining accuracy would, of course, be necessary).
Re:Interesting, but... (Score:5, Interesting)
Something like this will be possible one day, but my layperson's understanding is that how the brain works is fundamentally different from how computers work.
According to Turing, all sufficiently complicated computing devices are equivalent. The architecture may be entirely different, but there's no reason in principle one cannot be simulated on the other.
At the very least, we know the brain obeys the laws of physics. A computer can simulate the laws of physics. Therefore, a computer can simulate the brain.
Re:Interesting, but... (Score:5, Interesting)
According to Turing, all sufficiently complicated computing devices are equivalent ...
Correct me if I'm wrong but I believe that was said of binary systems? Can you prove to me that the lowest form of information in the brain is the bit? Are neurons only 'on or off'? Is it just discharge or not discharge? I am no neurologist but I believe that small non-binary charges can be held by neurons that may influence thought. Neurons are fairly complex cells that have many complex dendrites -- some being multipolar instead of bipolar.
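The graded, sub-threshold charge the parent mentions is what a toy leaky integrate-and-fire model captures: the membrane potential is a continuous value even though the output spike is all-or-none. All constants here are arbitrary illustration values:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential is a
# continuous quantity, not a bit, even though each spike is all-or-none.
def simulate(inputs, leak=0.9, threshold=1.0):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # graded, sub-threshold charge
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Repeated weak inputs accumulate until the neuron fires:
print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.8, 0.8]))  # → [0, 0, 0, 1, 0, 0, 1]
```

Even this caricature shows why "is it just discharge or not discharge?" undersells a neuron: the timing and history of inputs matter, not only the binary output.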
At the very least, we know the brain obeys the laws of physics.
Unfortunately we have a very incomplete set of laws for physics.
This may shock you but I assure you that there are things going on in the human brain that no physicist, biologist or biophysicist can explain. Hell, we can't even draw a definite line between what is chemical/physical and what is purely neurological function. There may not even be a line to draw. Although we are making advances, we are still in the dark about a lot of basic things in the human mind let alone discovering the detailed inner workings of the thing we call 'consciousness.' Can you tell me why it is that enlarged regions of our brain make us so much more 'intelligent' than mice or whales?
I hope for a huge breakthrough but it is nothing more than childish hope. My gut feeling is that we are much much farther from the 'intelligence explosion' than the futurologists think.
Re:Interesting, but... (Score:5, Interesting)
I am a neuroscientist and I can tell you for sure that the basic form of the information in a brain is not a linear bit. But it does obey the laws of physics, and everything we know points to it following pretty mundane physics. The whole 'quantum state' theory of consciousness is pretty weak and unable to explain a lot of really basic phenomena of the brain.
However, the real trick of human intelligence is not simply the number of neurons http://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons [wikipedia.org] but rather the particular pattern of the network, which allows us to detect and manipulate extremely complex patterns despite a significant amount of noise. I think we will get to the point one day where we can replicate a human-level intelligence, but getting 20 billion things into an organized pattern is just the start of that process.
And, even then, we don't need to worry about an 'intelligence explosion' because a) there are probably some pretty hard laws on the relationship between size and complexity, which is almost certainly non-linear, and b) the knowledge needed to create this human-level intelligence won't be understandable to any single human. It has already taken teams of people working together for a combined millions of man-hours to get to where we are today. Even if the computer we make were capable of thinking at the level of 2x human, it would take many machines a long time to progress to the next level of understanding of a complex non-linear phenomenon such as intelligence.
Re:Interesting, but... (Score:4, Interesting)
Neurons are fairly complex cells that have many complex dendrites -- some being multipolar instead of bipolar.
So our binary computing brain simulator would have Manic Depression? [wikipedia.org]
On a more serious note, you're right; we don't even know what sentience is. Maybe water is sentient; we are, after all, something like 70% water.
To misquote Chief Dan George's character in Little Big Man (because it's from memory and I haven't seen that movie in a while), "The Human Being [people of his tribe] think everything is alive. The people, the buffalo, the trees, even the rocks. But the white man thinks nothing is alive, and if he suspects something is alive he'll kill it."
Re:Interesting, but... (Score:4, Insightful)
I see no reason to believe we have "free will". As far as I can tell, whether we have free will or not is irrelevant to anything important. We have "will", and that is sufficient.
Re: (Score:3, Informative)
Well, it's not being done that way. The idea behind what the Europeans are doing is to simulate actual neuronal behavior. The results were quite interesting in that it seems to behave much like a real piece of neural tissue (http://www.technologyreview.com/biomedicine/19767/).
Easy! (Score:4, Funny)
All you have to do is pick the right person [www.cbc.ca] and you can greatly reduce the number of neurons you'll need to model.
How about the converse (Score:4, Interesting)
with DRM (Score:3, Insightful)
Re:How about the converse (Score:5, Funny)
I'm more interested in whether or not we can build a microchip into a human brain. At least then I might be able to remember my wife's anniversary...
You could try remembering your anniversary instead. :-)
Re:How about the converse (Score:4, Funny)
If there was only some other way that you could store information in a mechanical system for (perhaps automatic) retrieval and display at a later date.
Why? (Score:2)
Why would we want to? There is already an excess of human brains available on the planet. What purpose would it serve to build more?
Re:Why? (Score:5, Insightful)
How many of those can work 24/7/365 on a single subject with 100% concentration?
Or how about how many of those can you scale down to fit into a shoebox or smaller (while they are still operative), or scale up by linking them in a cluster (preferably of the Beowulf kind)?
Re: (Score:2)
"How many of those can work 24/7/365 on a single subject with 100% concentration?"
you mean besides WoW players~
The key will be not to implement anything they think up without fully understanding it ourselves. Also, we'd need to design in love and respect for the human race.
Also, if we emulate a specific person's brain, does that mean the emulation will behave like that person? Can we create a chip that's in a specific 'state' and therefore have all the memories created as well?
If we make 100 of these things, and th
Re: (Score:2)
Re: (Score:2)
Yes, and most of them aren't even being used!
Of course, we know what happens to a muscle that isn't exercised...
Re: (Score:2)
Why would we want to? There is already an excess of human brains available on the planet. What purpose would it serve to build more?
mmmmmm, braaaaaaaaaains!
Interesting question. (Score:5, Funny)
Do you work in management?
I hope this technology comes to fruition (Score:2)
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
but will you want to scream and be unable to do so?
Also, would you really want to be Borg? Not only will your brain deteriorate, but your body will as well. I've given some real thought to how the Borg likely started, and this sums it up really. They added the ability to communicate with a central computer and each other electronically (it's faster after all), and next thing you know the hive mind was born. /. even shows that hive mentality is possible in humans; this would simply enshrine it.
Mind you, so l
Re:I hope this technology comes to fruition (Score:5, Interesting)
The Ship of Theseus, also known as Theseus's paradox, is a paradox that raises the question of whether an object which has had all its component parts replaced remains fundamentally the same object.
I for one... (Score:2)
The Mueller-Fokker Effect? (Score:2)
Re: (Score:2)
That's an oldie but a goody (mid to late 70s IIRC, read it as a teenager). Nice to see someone else remembers it.
I always thought there should be an actual "Old Cold Dacron Heart" you could listen to while looking for your car in a big lot on a rainy day.
Re: (Score:2)
You might enjoy "Kiln People" by David Brin. They figure out how to copy people into golems then upload the day's memories (should you want them) into your real life brain.
The copies only last for a day, and you can't make copies of the copies.
It's a pretty good book.
Interesting tidbit (Score:2)
is that mimicking a brain in hardware starts to show actual intellect.
It will be interesting to see how that plays out in larger-scale tests.
Re: (Score:3, Funny)
Do you mean that, while in the process of simulating human intellect, the simulator itself becomes self-aware? Then what if the simulacrum becomes aware of the simulator? Would it create a metaphysical singularity, or just blow the stack?
Inquiring minds want to know.
-dZ.
Quality of simulation (Score:5, Insightful)
Even if we have a chip capable of simulating the same number of neurons and synapses as the human brain, that will not magically form an artificial life-form. I know little about simulated neural networks, but I do know that they are only a very rough approximation of the workings of the human brain. We still don't understand all the intricacies of the neural and chemical interactions that occur to a sufficient level to properly simulate all of them.
Re: (Score:2, Interesting)
Not true.
Simulation of a brain* has shown behaviours one would expect in an actual brain.
So yes, it does look like imitating the brain will cause intelligence.
This is very cool, and I hope it pans out in larger simulations.
It could mean that intellect comes from the organization of the brain, a by-product of the evolutionary need for memory.
*a limited set of emulated neurons, really.
Re:Quality of simulation (Score:5, Interesting)
Sure we can... (Score:5, Informative)
I see a strange arrogance and egocentricity in trying to design robots to be exactly like us, why not think outside the box? Why are upright, bipedal robots always portrayed as the ultimate? There are most certainly more efficient and better designs than the one we are saddled with, this is just how we happened to evolve, we are simply the current end of one branch of the evolutionary tree.
Re:Sure we can... (Score:5, Insightful)
The way we evolved can be a hint about efficiency. For example, bipedal movement turned out to be pretty efficient on a human scale, while eight legs like a spider are not. Therefore, it is important to know *why* things evolved the way they did. Was it because of energy efficiency? Adaptation to local predators? etc.
Re: (Score:3, Interesting)
It's your birthday... (Score:2)
...someone gives you a calfskin wallet. You've got a little boy, he shows you his butterfly collection, plus the killing jar. You're watching television...suddenly you realize there's a wasp crawling on your arm.
We are getting closer to Eldon Tyrell's replicants... and I for one welcome our microchip-brained overlords.
Re: (Score:2)
How about:
"I for one welcome our limited lifespan mircochip brained overlords."
Re: (Score:3, Funny)
If they look like Sean Young, i welcome them too.
Exceeding (Score:2, Interesting)
I could use a built in graphing calculator or spell check.
Re: (Score:2)
One word (Score:2, Interesting)
The real question is (Score:2)
Can we build a microchip into a human brain?
Re: (Score:2)
Why did you post this?
Oh wait, I've seen this on Whose Line Is It Anyway — you talk in questions!... Um, is that your ferret or are you just happy to see me?
It's not just the parallelism (Score:5, Interesting)
It's the reconfigurable nature of the human brain that's unique and powerful. If you took one person and listed all of the skills of that person — all of the things he knew; all of the skills in smell, touch, sight and taste; all of the cognitive reasoning ability — then you could create a chip to simulate those skills. Algorithms for image recognition, feature extraction, speech recognition, etc. are all available that are very very close to what humans can do.
But the thing that separates humans is that it didn't take hundreds of years of mathematical development to come up with these algorithms. The human brain develops these algorithms through changes in its structure from birth. By about age 10, speech recognition specialized and tailored to the dialect, language and tones that the person hears has developed on its own.
That type of structural formation and learning is what would need to happen in silicon to make a truly intelligent machine. Neuron clusters emulated using transistors would need to be able to dynamically form connections to other neuron clusters. There'd have to be some type of distributed learning algorithm encoded in the operation of each individual neuron.
Speech recognition is easy. Image recognition is easy. Developing a distributed, scalable, self-modifying architecture that can learn all of those and more on its own with nothing more than training samples is the difficult part.
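The "connections formed through use" idea can be caricatured in a few lines of Hebbian updating ("fire together, wire together"), with weight decay standing in for pruning. This is a toy sketch, not anything from the article, and all constants are arbitrary:

```python
# Minimal Hebbian sketch: a weight grows when its pre- and post-synaptic
# units are active together, and decays toward zero otherwise.
def hebbian_step(weights, pre, post, rate=0.1, decay=0.01):
    return [
        [w + rate * pre[i] * post[j] - decay * w
         for j, w in enumerate(row)]
        for i, row in enumerate(weights)
    ]

weights = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(50):
    # Units 0 are repeatedly co-active; units 1 never fire.
    weights = hebbian_step(weights, pre=[1, 0], post=[1, 0])

# The co-active pair (0 -> 0) ends up far stronger than every other link.
print(weights)
```

Real structural learning in the brain is of course far richer (connections appear and vanish, not just change strength), which is the parent's point about why this is the hard part.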
Comment removed (Score:3, Interesting)
Re: (Score:2)
Which leads me to wonder...what does a flawless brain look like exactly?
Tasty! Mmmmm, brains!
Re: (Score:2)
what does a flawless brain look like exactly?
Here, I'll show you mine.
Re: (Score:3, Interesting)
The parts of the brain "geared toward bodily functions" are crucial to the functioning of the brain as a whole. The brain's interaction with genitalia is just one example.
Your post brings up another good point though: before an artificial brain can be thoroughly constructed, the input streams into the brain need to be thoroughly understood as well.
And, where does the brain stop? The spinal column? The nervous system? Hormones?
This is so cool!
some humans, you could - others need a little more (Score:2)
These we can do already - but why bother?
Can it kill people? (Score:2)
The question is, whether we can put a brain on a chip smart enough to procreate and kill human beings.
It doesn't need to be smarter than that to destroy humankind. And once humanity is eliminated, no one will care if computer chips can mimic our brains.
From the article (Score:4, Insightful)
Hawkins believes computer scientists have focused too much on the end product of artificial intelligence. Like B.F. Skinner, who held that psychologists should study stimuli and responses and essentially ignore the cognitive processes that go on in the brain, he holds that scientists working in AI and neural networks have focused too much on inputs and outputs rather than the neurological system that connects them.
I agree with this quote. A lot of computer scientists try to build artificial intelligence without really understanding how their own brain works. It is really too bad because they have an unusually observable specimen right in their own head. Genetic learning? Is that how you feel you learn personally? Of course this question can't answer everything about artificial intelligence, but it can definitely help and is too often ignored.
Also, one thing that isn't clear from the article is whether the synapses will be static, or whether they can move and grow, just as human brain synapses can.
"thousand trillion"? (Score:2)
Re: (Score:3, Insightful)
Now that your brain's on a chip (Score:2)
...we're installing Windows. haha
already done (Score:4, Funny)
Not only that (Score:2)
But the actual brain can change the synapses over time, making new ones and obsoleting old ones. I'd like to see some silicon do THAT. I wouldn't worry, we'll still be boss for a while.
Just think (Score:3, Funny)
Just think what might happen if Apple got the patent on these suckers and brought them to market as the personal implant - the IThink?
Imagine waking up one morning and IThinking "I'd like to fall in love today", so you make a mental link to the App Store and download "Love" for £1.95. On your way to work, you spot someone that takes your fancy, so you make a quick connection and download "Flirt" for a further £2. Things go well: Entertain £2, ShowYouCare £3.30, Intimate £10. A while passes and you're happily married (or have both downloaded LiveInSin-NoShame), so Broody is added to the bill.
What a wonderful life... well, if you download "Harmony".
Randomness (Score:5, Informative)
It requires far more than that. According to some, the microtubules on the cytoskeletons of the cells themselves can be processing units. Raise the bar a few orders of magnitude in that case.
Nanotech (Score:3, Interesting)
Not even close (Score:5, Interesting)
BTW, current estimates are more like 100 billion neurons and upwards of 300-500 trillion synaptic connections.
However, numbers aside, the human brain is not merely a complex collection of neurons and interconnected synapses. Complexity is only one very basic factor, another, more critical, factor is organization. We don't even know where to start in the organization of these artificial neural networks to emulate a human brain.
WARNING! COMPUTER ANALOGY: It's not the number and density of interconnected transistors that make a Xeon, it's the organization.
human? (Score:3, Interesting)
Re: (Score:2)
Memristors are nothing like neurons....
Neurons are incredibly complex nodes with a built-in structural formation algorithm; an algorithm that's not understood at all.
Memristors just store a value that depends on the current that has flowed through them.
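For comparison, the standard idealized memristor description (a linear dopant-drift model; the constants here are illustrative, not from any real device datasheet) really is just a single state variable driven by charge history:

```python
def memristor_resistance(charge, r_on=100.0, r_off=16000.0, q_max=1e-4):
    """Idealized linear-drift memristor: resistance interpolates between
    r_off (no charge has flowed yet) and r_on (device fully switched),
    based on the total charge that has passed through the device."""
    x = max(0.0, min(1.0, charge / q_max))  # normalized internal state in [0, 1]
    return x * r_on + (1.0 - x) * r_off

# Resistance falls monotonically as charge accumulates:
print(memristor_resistance(0.0))   # 16000.0 (fully "off")
print(memristor_resistance(5e-5))  # 8050.0  (halfway)
print(memristor_resistance(1e-4))  # 100.0   (fully "on")
```

One history-dependent scalar per device, versus a neuron with thousands of synapses plus the machinery to grow and prune them -- which is the parent poster's point.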
Re: (Score:2, Informative)
Lots of stupid things have been said. People generally only use 10%-20% of their brains at any given moment. They use nearly all of it through the course of the day.
Re: (Score:3, Informative)
Lots of "things" are said and lots of things are wrong. "people only use 10% of their actual brain power" is belongs to both groups.
Though an alluring idea, the "10 percent myth" is so wrong it is almost laughable, says neurologist Barry Gordon at Johns Hopkins School of Medicine in Baltimore. Although there's no definitive culprit to pin the blame on for starting this legend, the notion has been linked to the American psychologist and author William James, who argued in The Energies of Men that "We are making use of only a small part of our possible mental and physical resources." It's also been associated with Albert Einstein, who supposedly used it to explain his cosmic towering intellect.
source: http://www.scientificamerican.com/article.cfm?id=people-only-use-10-percent-of-brain [scientificamerican.com]
Re:Do they need to map the entire brain (Score:4, Funny)
There are people who use 100% of their neurons simultaneously on a daily basis.
We call them epileptics.
Re: (Score:2)
Humanity has not yet designed a smarter human.
Many children are smarter than their parents. Sometimes it's by design. Parents will give their children better learning tools and opportunities than they had.