Supercomputing

Can We Build a Human Brain Into a Microchip? 598

destinyland writes "Can we imprint the circuitry of the human brain onto a silicon chip? It requires a computational capacity of 36.8 petaflops (a petaflop being a thousand trillion floating point operations per second), but a team of European scientists has already simulated 200,000 neurons linked up by 50 million synaptic connections. And their brain-chip is scalable, with plans to create a superchip mimicking 1 billion neurons and 10 trillion synapses. Unfortunately, the human brain has 22 billion neurons and 220 trillion synapses. Just remember Ray Kurzweil's argument: once a machine can achieve a human level of intelligence — it can also exceed it."
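As a rough illustration of the gap those figures imply, here is a small Python sketch that does nothing but the arithmetic on the summary's own numbers (the current simulation, the planned superchip, and the human brain); the code itself is purely illustrative.

# Arithmetic on the figures quoted in the summary: the current simulation,
# the planned "superchip", and the human brain. No new data, just ratios.
current = {"neurons": 200_000, "synapses": 50e6}
planned = {"neurons": 1e9, "synapses": 10e12}
human = {"neurons": 22e9, "synapses": 220e12}

for key in ("neurons", "synapses"):
    print(f"{key}: the planned chip is {planned[key] / current[key]:,.0f}x today's simulation, "
          f"and the human brain is another {human[key] / planned[key]:.0f}x beyond that")
# neurons:  planned chip is 5,000x today's simulation, human brain another 22x beyond
# synapses: planned chip is 200,000x today's simulation, human brain another 22x beyond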
This discussion has been archived. No new comments can be posted.

Can We Build a Human Brain Into a Microchip?

  • by eldavojohn ( 898314 ) * <eldavojohn@gSTRAWmail.com minus berry> on Thursday August 06, 2009 @11:53AM (#28974567) Journal

Just remember Ray Kurzweil's argument: once a machine can achieve a human level of intelligence — it can also exceed it.

Ray Kurzweil is a brilliant computer scientist who brought us many improvements to -- and maybe even the invention of -- the electronic musical keyboard.

    But that is not his argument. I laughed when I read that as the concept was presented to me in sci-fi novels before Kurzweil's time. The earliest I (or Wikipedia) can trace the intelligence explosion [wikipedia.org] theory back to is Irving John Good who, in 1965, said [archive.org]:

    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

This was popularized by Vernor Vinge, which is where I recall reading about it. There are many reasons to celebrate Raymond Kurzweil [wikipedia.org]. In my opinion, his "work" in nutrition and his near-religion of futurology are not among them. He has become a vocal proponent of a dream to become god-like. I do not share that dream, and I wish him the best of luck in his endeavors. I just cringe every time I read of the "singularity being near" or the ability to live forever coming about. If it's going to happen, just sit back and let it happen. I feel he has done a great disservice to the field of artificial intelligence by promising unrealistic things in interviews to lay people. Disappointment is a surefire way to get yourself branded as a snake oil salesman or religious nut.

Predictions for the future are for sci-fi books and movies; as a scientist, don't get into the habit of telling a reputable magazine or web site in an interview what is about to happen. Example:

    Kurzweil projects that between now and 2050 technology will become so advanced that medical advances will allow people to radically extend their lifespans while preserving and even improving quality of life as they age. The aging process could at first be slowed, then halted, and then reversed as newer and better medical technologies became available. Kurzweil argues that much of this will be a fruit of advances in medical nanotechnology, which will allow microscopic machines to travel through one's body and repair all types of damage at the cellular level.

    And that's easily criticized:

    Biologist P.Z. Myers has criticized Kurzweil's predictions as being based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology. Myers also claims that Kurzweil picks and chooses events that appear to demonstrate his claim of exponential technological increase leading up to a singularity, and ignores events that do not.

    • Re: (Score:2, Insightful)

      ++

I agree 100%. I still don't understand why this charlatan gets so much press on Slashdot. Probably because it causes people like you and me to post.

      • by eldavojohn ( 898314 ) * <eldavojohn@gSTRAWmail.com minus berry> on Thursday August 06, 2009 @01:04PM (#28975835) Journal

        I agree 100%. I still don't understand why this charlatan ...

        Well, despite my overly critical initial post I will waste karma with further speculation on Kurzweil. He's actually not a charlatan. He's just stepping outside of his field and extrapolating out some of the things that have been achieved ... and using some unrealistic exponential curve to guide his predictions.

The man has experienced great success -- both in business and academia -- throughout his lifetime. Since 1990 he's made a few inventions to aid learning and help disabled students, which is great. Unfortunately he's found that writing books, holding symposiums and giving speeches about fantastic science fiction is what draws attention and resources, so he keeps doing it. It results in a lot of press, and I'm sure his aging body might drive him to hope for and fund a singularity before he dies.

While this singularity is a romantic idea, it's just not based on science. He's lost sight of what he once did: musical hardware that advanced synthesized music far beyond the rate at which it otherwise would have progressed. Now his efforts are not directed at realistic goals but at loftier goals that no one can achieve. What's worse is that it all depends on crossovers between fields he's simply not an expert in.

You might be able to argue that he's a charlatan now, but in my mind he's Thomas Edison turned Nostradamus. He's pulled out all the stops that bind normal scientists to the scientific process and has let optimism slide into fantastical dreams. He can write all the books he wants, but until he gets back to what made him great -- actually implementing something and leaving a legacy of working examples -- he runs the risk of tarnishing his reputation.

    • by cpu_fusion ( 705735 ) on Thursday August 06, 2009 @12:38PM (#28975389)

I feel he has done a great disservice to the field of artificial intelligence by promising unrealistic things in interviews to lay people. Disappointment is a surefire way to get yourself branded as a snake oil salesman or religious nut.

      A disappointed public threatens research funding, but an unprepared public threatens chaos.

I'm more concerned with making sure we're thinking ahead to the radical change that is likely to come, be it in 10 years or 40, than I am with the worry that lay people will distrust AI researchers.

      • by mcgrew ( 92797 ) on Thursday August 06, 2009 @01:39PM (#28976369) Homepage Journal

        From TFA: imprint the circuitry of the human brain using transistors on a silicon chip?

        No, not on binary circuits we can't. We might simulate the brain, or even model the brain, but we won't imprint it.

        The brain is a parallel processor.

Tremendously parallel; and it's a multimode analog design, not a single-mode digital design. There are many different kinds of brain cells, with both chemical and electrical components.

We can model an atomic explosion, but we understand the physics behind an atomic explosion. We have hardly begun to understand how the brain works. We'll have cures for all mental illnesses before we can accurately model the brain, because if you can't fix a broken machine you don't understand how it works; and even if you can fix a broken machine, you still may not understand that machine completely.

        When you model an atomic explosion, there is no radiation released. A model is not the real thing.

There is no test for sentience. Without such a test it would be impossible to know if you have succeeded in accurately modeling it.

        • A disappointed public threatens research funding, but an unprepared public threatens chaos

And a simulated intelligence that doesn't truly think or feel may get "machine rights". I wish these guys would read Dune; the jihad was not against the thinking machines, but against the men who used the thinking machines to enslave their fellow men.

And, when they can model a fly's brain and build an artificial fly, I'll be a hell of a lot more impressed than I am by their simply "modeling" 200k out of BILLIONS of brain cells.

        • Re: (Score:3, Interesting)

          by Jorgandar ( 450573 )

I find the argument puzzling that we would only have to design a machine that's "smarter" (however that's defined...) than a human. Then the machine could design still smarter machines, etc, etc, until you get an intelligence explosion.

While that sounds plausible, we have to remember that not a SINGLE person designed the machine. It was the work of hundreds or perhaps thousands of people over time, designing and improving the individual components and software. No one person could have accomplished such a feat.

    • by microbox ( 704317 ) on Thursday August 06, 2009 @12:53PM (#28975663)
      Biologist P.Z. Myers has criticized Kurzweil's predictions as being based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology.

Having some personal understanding of both, I heartily agree. Let's separate out wishful thinking and esoteric "knowing" -- both are merely ungrounded speculation.

      Myers also claims that Kurzweil picks and chooses events that appear to demonstrate his claim of exponential technological increase leading up to a singularity, and ignores events that do not.

I once seriously considered a strategy for building an artificial brain with a veteran professor of computer science. Examining the problem, I gave up when I realised that individual cells are "intelligent". I think this is vitally important: how does the "mind" of a protozoan work? It can navigate obstacles, identify and assimilate food, run away from danger, and has a 20-minute memory. We can assume that a single neurone may well have all of these capabilities and more. I believe that we may be myopically focused on nodes and connections, without considering just how complex and capable a single node is.

      So the complexity of the problem is probably an order of magnitude beyond 22 billion neurones and 220 trillion connections. Then consider the effect of 1000s of unknown neurotransmitters - and we know little about the "known" ones, such as serotonin and dopamine, except that they have a profound effect. And _then_, consider that the brain has structure, and we know comparatively little about that structure, and only a few hints about the algorithms and data structures that it uses.
      • by bussdriver ( 620565 ) on Thursday August 06, 2009 @02:17PM (#28977083)

Around 2012 the tech will exist to map the whole human brain -- not a living one, just the resolution needed to capture all the cells and connections -- maybe 2015... and it'll probably have to be a dead brain that doesn't move. Brain scans already get quite fine-grained on living human brains; I heard this estimate about 6 years ago and it still sounds reasonable.

Not understanding how the brain works will always be a problem; as far as our general understanding of it goes, it's a nonlinear approximation (of the number 42?). Even if the brain is just an analog version of such a math problem, those problems scale beyond our grasp almost instantly with only a few variables involved. Just think in terms of linear algebra problems and how basic they have to be to "solve" -- and solving them doesn't necessarily mean we really, fully understand the answers we get. Infinity, for example: we work with it and get the concept, but we will never fully understand it.

Computing power grows at fairly predictable rates; one can combine that with an estimate of how many transistors it takes per simulated neuron (or something like that) and estimate at what point we will have the power to load in the brain scan data and start trying to simulate a model of a real brain (a rough version of that extrapolation is sketched below). Using custom-designed chips and circuitry would only shorten the estimate, as would clever new ways to simulate the processes.

I'm guessing around 2030, but it's hard to say. That doesn't mean that when somebody tries it something will happen... we may have to give the thing simulated I/O as well to get anything from it. My guess is politics will be the worst problem as this kind of research gets closer to science fiction.
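Here is a back-of-the-envelope Python version of the extrapolation described in the comment above. The 36.8 petaflop target comes from the article summary; the 2009 baseline of roughly one petaflop for a top supercomputer and the two-year doubling period are assumptions chosen only for illustration.

# Rough extrapolation of when brute-force compute might hit a target.
# Target figure is from the article summary; baseline and doubling period
# are illustrative assumptions, not measured facts.
import math

baseline_year = 2009
baseline_pflops = 1.0        # assumed: roughly a top-end supercomputer in 2009
target_pflops = 36.8         # figure quoted in the article summary
doubling_years = 2.0         # assumed Moore's-law-style doubling period

doublings = math.log2(target_pflops / baseline_pflops)
year = baseline_year + doublings * doubling_years
print(f"~{doublings:.1f} doublings -> roughly {year:.0f}")
# If cell-level simulation turns out to need several orders of magnitude more
# than 36.8 petaflops, each extra factor of 10 adds about 3.3 more doublings
# (roughly another 6-7 years at this assumed rate), which lands near the
# commenter's 2030 guess.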

      • by Cyberax ( 705495 ) on Thursday August 06, 2009 @03:04PM (#28977913)

Protozoa are simple; they just have a number of triggers with some memory. It can be hard to determine all of them, but once you're done it should be simple to simulate them.

And neurons are studied quite well. So far we don't see any 'superintelligent' behavior from individual neurons. There are subtle things we might have missed (like the recently discovered neurotransmitter spillover), but are they essential?

Personally, I think that we might be able to simulate a brain. It will probably require several more breakthroughs, but I'd bet it will be possible.

    • by SmurfButcher Bob ( 313810 ) on Thursday August 06, 2009 @01:08PM (#28975891) Journal

      In fact, implementation would be trivial.

      10 PRINT "What?"
      20 PRINT "I don't understand"
      30 PRINT "Where's the tea?"
      40 GOTO 10

      • Re: (Score:3, Funny)

        In fact, implementation would be trivial.

10 PRINT "What?"
20 PRINT "I don't understand"
30 PRINT "Where's the tea?"
40 GOTO 10

        What?

    • Re: (Score:3, Informative)

      Ray Kurzweil is a brilliant computer scientist and brought us many improvements -- maybe even the invention of -- the electronic musical keyboard.

      err... no. Electronic keyboards go back at least this far...Ondes Martenot [wikipedia.org]
  • by Anonymous Coward on Thursday August 06, 2009 @11:53AM (#28974571)

    "Can We Build a Human Brain Into a Microchip?"
      No.

    • by denzacar ( 181829 ) on Thursday August 06, 2009 @11:59AM (#28974681) Journal

      "Can We Build a Human Brain Into a Microchip?"

      Not YET.

    • Go ahead.

      Maybe then it can assign probabilities to the various unintended consequences.

      Then again, why? You people can't even successfully manage a currency or your banks. How will you deal with super-intelligent machines without ethical guidelines? Or do I repeat myself? :-)

    • Re: (Score:3, Funny)

      by 4D6963 ( 933028 )

      Stop crushing our scifi nerd pipe dreams, you bastard!

  • Interesting, but... (Score:5, Interesting)

    by bennomatic ( 691188 ) on Thursday August 06, 2009 @11:54AM (#28974593) Homepage
Something like this will be possible one day, but my layperson's understanding is that how the brain works is fundamentally different from how computers work. The hard-wired CPU/RAM model is just not a perfect parallel, so while we can and will improve on machines that learn, it's going to be different from the wetware that is constantly growing, changing, forming new connections and interacting with internal, external and imagined stimuli.
    • by quadrox ( 1174915 ) on Thursday August 06, 2009 @11:57AM (#28974655)

While the CPU/RAM model is not the way the brain works (I suppose), it can be used to run a "virtual machine" that itself works the way the human brain does.

      I don't think they are trying to simulate a human brain just by throwing a bunch of hardware together...

    • by jonbryce ( 703250 ) on Thursday August 06, 2009 @12:11PM (#28974901) Homepage

      Why should we try to create an artificial brain in the computing lab when it would be much easier to do it in the genetic engineering lab?

      • by sabernet ( 751826 ) on Thursday August 06, 2009 @12:29PM (#28975225) Homepage

        The former doesn't start smelling funny when you leave it on the lab counter overnight.

        • Re: (Score:3, Funny)

          by mcgrew ( 92797 )

          The former doesn't start smelling funny when you leave it on the lab counter overnight.

          "My dog doesn't smell!"

          "You gave him a bath?"

          "No, I cut off his nose!"

      • Re: (Score:3, Funny)

        by uberjoe ( 726765 )
        Creating an actual human brain with all the support equipment necessary is pretty easy in the bedroom too.
    • by Whorhay ( 1319089 ) on Thursday August 06, 2009 @12:13PM (#28974927)
We may not be able to build a chip that itself perfectly mimics the human brain. But we can very likely build a chip that can run the software necessary to simulate the brain. Think of it as a programming problem where you have object classes for each major type of cell in the brain (a rough sketch of such an object model follows). You then have to keep track of which ones are connected to which others at any one time. The real difficulty will be in allowing the individual cells to change their behavior over time depending on the stimulus they have individually received. Otherwise the brain simulation would not be capable of learning and growing, but would instead be stuck at whatever development stage it was created at.
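A minimal Python sketch of the kind of object model the comment above describes: classes per cell type, explicit connection tracking, and per-cell state that changes with received stimulus. Every class name, threshold, and update rule here is an illustrative assumption, not a description of any real brain-simulation project.

# Toy object model: cell-type classes, connection bookkeeping, and
# stimulus-dependent state. All values and rules are illustrative only.
import random

class Cell:
    sign = +1.0                      # default effect on targets when firing

    def __init__(self, name):
        self.name = name
        self.connections = []        # outgoing (target_cell, weight) pairs
        self.activity = 0.0

    def connect(self, other, weight=0.1):
        self.connections.append((other, weight))

    def receive(self, stimulus):
        # behavior depends on the stimulus this particular cell has received
        self.activity += stimulus

class ExcitatoryNeuron(Cell):
    sign = +1.0

class InhibitoryNeuron(Cell):
    sign = -1.0

def step(cells, threshold=1.0):
    """One update: cells over threshold fire onto their targets, then reset."""
    firing = [c for c in cells if c.activity >= threshold]
    for c in firing:
        for target, weight in c.connections:
            target.receive(c.sign * weight)
        c.activity = 0.0
    return firing

# Usage example: a tiny random network driven by one external stimulus.
cells = [random.choice([ExcitatoryNeuron, InhibitoryNeuron])(f"c{i}") for i in range(50)]
for c in cells:
    for target in random.sample(cells, 5):
        c.connect(target, weight=random.uniform(0.05, 0.5))
cells[0].receive(2.0)
for _ in range(10):
    step(cells)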
      • by hoggoth ( 414195 ) on Thursday August 06, 2009 @12:35PM (#28975323) Journal

        But what if the brain works by exploiting all of the effects of molecules, proteins, ions, electrical charges, even quantum effects at a molecular level? We have seen that evolution is excellent at finding very clever ways of exploiting whatever resources are available. It is possible that the only way to simulate a brain is to simulate every single atom involved within a brain. For obvious reasons a computer made of 'n' atoms cannot simulate a brain made of 'n' atoms as fast as that brain can work.

        I don't know that this is true, but it certainly brings up the possibility that it may be impossible to simulate a brain faster than a brain works, or better than a brain.

Or, on a slightly less pessimistic level, perhaps a "synapse" could be encapsulated in a software object, but the number of variables that make each synapse's position, arrangement, and connections unique is staggering and would require a machine thousands of times more powerful than a real brain in order to simulate it. That would move our "singularity" out until we have computers that can process as much as 22,000 billion neurons and 220,000 trillion synapses. I wonder if someone better at math and physics could calculate the bare minimum energy required for the negative entropy to store 220,000 trillion somewhat complex pieces of information. I recall reading a calculation that the theoretical (but not practical) limits of the ZFS filesystem represent enough information that the minimum energy required to actually encode it would be enough to boil the Earth.
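A quick, hedged stab at the energy question raised above, using the Landauer bound of kT·ln 2 per bit irreversibly written or erased. The synapse count is the summary's human-brain figure; the room-temperature value and the bits-per-synapse guess are assumptions made purely for illustration.

# Back-of-envelope Landauer-limit estimate. Synapse count from the summary;
# temperature and "bits per piece of information" are illustrative assumptions.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K
pieces = 220e12 * 1000      # "220,000 trillion" pieces of information = 2.2e17
bits_per_piece = 100        # assumed size of each "somewhat complex" piece

energy_per_bit = k_B * T * math.log(2)          # ~2.9e-21 J
total = pieces * bits_per_piece * energy_per_bit
print(f"{total:.3g} J")                         # ~0.06 J with these assumptions

With these assumptions the thermodynamic floor is well under a joule; real hardware would of course dissipate many orders of magnitude more than this theoretical minimum, so the practical energy cost is an engineering question rather than a physics-limit one.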

        • by Red Flayer ( 890720 ) on Thursday August 06, 2009 @01:40PM (#28976377) Journal

          For obvious reasons a computer made of 'n' atoms cannot simulate a brain made of 'n' atoms as fast as that brain can work

          First... there is no requirement that the computer cannot be some x*n atoms.

          Second... I'm not sure that this would be the case:

          It is possible that the only way to simulate a brain is to simulate every single atom involved within a brain.

It's quite possible that, say, only 1% of the atoms in the brain are required for the brain activity we'd like to simulate. Off the top of my head (ha!), some examples of atoms we could skip would be those involved in nutrient uptake, metabolism, and waste removal. I'm sure there are also atoms, like those that give length to axons, that don't need 1:1 representation; a timed loop could represent them. Or all the neurotransmitters: those atoms could instead be represented by a few bits used as a counter.

          Basically, my argument boils down to this: I don't think the goal would be to build a simulacrum of the brain. Just a simulation of the brain. This gives lots of room for making things more efficient (though maintaining accuracy would, of course, be necessary).

    • by Hatta ( 162192 ) * on Thursday August 06, 2009 @12:17PM (#28974997) Journal

Something like this will be possible one day, but my layperson's understanding is that how the brain works is fundamentally different from how computers work.

      According to Turing, all sufficiently complicated computing devices are equivalent. The architecture may be entirely different, but there's no reason in principle one cannot be simulated on the other.

      At the very least, we know the brain obeys the laws of physics. A computer can simulate the laws of physics. Therefore, a computer can simulate the brain.

      • by eldavojohn ( 898314 ) * <eldavojohn@gSTRAWmail.com minus berry> on Thursday August 06, 2009 @12:34PM (#28975313) Journal

        According to Turing, all sufficiently complicated computing devices are equivalent ...

        Correct me if I'm wrong but I believe that was said of binary systems? Can you prove to me that the lowest form of information in the brain is the bit? Are neurons only 'on or off'? Is it just discharge or not discharge? I am no neurologist but I believe that small non-binary charges can be held by neurons that may influence thought. Neurons are fairly complex cells that have many complex dendrites -- some being multipolar instead of bipolar.

        At the very least, we know the brain obeys the laws of physics.

        Unfortunately we have a very incomplete set of laws for physics.

        This may shock you but I assure you that there are things going on in the human brain that no physicist, biologist or biophysicist can explain. Hell, we can't even draw a definite line between what is chemical/physical and what is purely neurological function. There may not even be a line to draw. Although we are making advances, we are still in the dark about a lot of basic things in the human mind let alone discovering the detailed inner workings of the thing we call 'consciousness.' Can you tell me why it is that enlarged regions of our brain make us so much more 'intelligent' than mice or whales?

        I hope for a huge breakthrough but it is nothing more than childish hope. My gut feeling is that we are much much farther from the 'intelligence explosion' than the futurologists think.

        • by dalhamir ( 1423303 ) on Thursday August 06, 2009 @12:57PM (#28975735)

I am a neuroscientist and I can tell you for sure that the basic form of the information in a brain is not a linear bit. But it does obey the laws of physics, and everything we know points to it following pretty mundane physics. The whole 'quantum state' theory of consciousness is pretty weak and unable to explain a lot of really basic phenomena of the brain.

However, the real trick of human intelligence is not simply the number of neurons http://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons [wikipedia.org] but rather the particular pattern of the network, which allows us to detect and manipulate extremely complex patterns within a significant amount of noise. I think we will get to the point one day where we can replicate a human-level intelligence, but getting 20 billion things into an organized pattern is just the start of that process.

And, even then, we don't need to worry about an 'intelligence explosion' because a) there are probably some pretty hard laws governing the relationship between size and complexity, which is almost certainly non-linear, and b) the knowledge needed to create this human-level intelligence won't be understandable to any single human. It has already taken teams of people working together for combined millions of man-hours to get to where we are today. Even if the computer we make were capable of thinking at the level of 2x human, it would take many machines a long time to progress to the next level of understanding of a complex non-linear phenomenon such as intelligence.

        • by mcgrew ( 92797 ) on Thursday August 06, 2009 @02:35PM (#28977435) Homepage Journal

          Neurons are fairly complex cells that have many complex dendrites -- some being multipolar instead of bipolar.

          So our binary computing brain simulator would have Manic Depression? [wikipedia.org]

          On a more serious note, you're right; we don't even know what sentience is. Maybe water is sentient; we are, after all, something like 70% water.

          To misquote Chief Dan George's character in Little Big Man (because it's from memory and I haven't seen that movie in a while), "The Human Being [people of his tribe] think everything is alive. The people, the buffalo, the trees, even the rocks. But the white man thinks nothing is alive, and if he suspects something is alive he'll kill it."

    • Re: (Score:3, Informative)

      Well, it's not being done that way. The idea behind what the Europeans are doing is to simulate actual neuronal behavior. The results were quite interesting in that it seems to behave much like a real piece of neural tissue (http://www.technologyreview.com/biomedicine/19767/).
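For readers curious what "simulating actual neuronal behavior" can look like in its simplest digital form, here is a leaky integrate-and-fire neuron, a standard textbook abstraction. It is only a sketch with illustrative parameter values; it is not a description of the European group's hardware, which works quite differently.

# Minimal leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates input current, and emits a spike when it crosses threshold.
# Parameter values are illustrative defaults, not taken from the article.

def lif_run(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
            v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Simulate one neuron; input_current is a list of currents (A), one per step."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) / tau   # leak + integrate
        v += dv * dt
        if v >= v_thresh:                          # fire and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Usage: a constant 2 nA input for 100 ms produces a regular spike train.
print(lif_run([2e-9] * 1000))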

  • Easy! (Score:4, Funny)

    by Anonymous Coward on Thursday August 06, 2009 @11:55AM (#28974611)

    All you have to do is pick the right person [www.cbc.ca] and you can greatly reduce the number of neurons you'll need to model.

  • by Locke2005 ( 849178 ) on Thursday August 06, 2009 @11:55AM (#28974617)
    I'm more interested in whether or not we can build a microchip into a human brain. At least then I might be able to remember my wife's anniversary...
  • Why would we want to? There is already an excess of human brains available on the planet. What purpose would it serve to build more?

    • Re:Why? (Score:5, Insightful)

      by denzacar ( 181829 ) on Thursday August 06, 2009 @12:05PM (#28974799) Journal

      How many of those can work 24/7/365 on a single subject with 100% concentration?

Or how about how many of those can you scale down to fit into a shoebox or smaller (while they are still operative), or scale up by linking them in a cluster (preferably of the Beowulf kind)?

      • by geekoid ( 135745 )

        "How many of those can work 24/7/365 on a single subject with 100% concentration?"
        you mean besides WoW players~

        The key will be not to implement anything they think up without fully understanding it ourselves. Also, designing in the love and respect of the human race.

Also, if we emulate a specific person's brain, does that mean the emulation will behave like that person? Can we create a chip that's in a specific 'state' and therefore has all the memories created as well?

        If we make 100 of these things, and th

Thank you!!! I was afraid nobody had posted some kind of Beowulf cluster analogy. /. would be doomed!
    • Yes, and most of them aren't even being used!

      Of course, we know what happens to a muscle that isn't exercised...

    • Why would we want to? There is already an excess of human brains available on the planet. What purpose would it serve to build more?

      mmmmmm, braaaaaaaaaains!

    • by Anonymous Coward on Thursday August 06, 2009 @12:30PM (#28975247)

      Do you work in management?

  • Within the next couple decades. My biggest dream is to live long enough to be able to explore other planets and solar systems. Replacing our brains with chips is likely the only way we'll be capable of doing this within the next few hundred years, if not ever.
I for one welcome our human-brain-on-a-microchip overlords. My wife is a grad student in anatomy and neuroscience. Her work is like figuring out what a computer system does by analyzing the components inside one of many chips. We still have no idea how the brain works or where consciousness comes from. I hope projects like this (simulations, modeling, wild crazy speculative experiments) increase our understanding of how it works.
  • Anyone read this book [amazon.com]? The idea is that someone figures out how to capture the state of a human brain on some special tapes. Comedy, of course, ensues.
    • That's an oldie but a goody (mid to late 70s IIRC, read it as a teenager). Nice to see someone else remembers it.

      I always thought there should be an actual "Old Cold Dacron Heart" you could listen to while looking for your car in a big lot on a rainy day.

    • You might enjoy "Kiln People" by David Brin. They figure out how to copy people into golems then upload the day's memories (should you want them) into your real life brain.

      The copies only last for a day, and you can't make copies of the copies.

      It's a pretty good book.

is that mimicking a brain in hardware starts to show actual intellect.
It will be interesting to see how that plays out in larger-scale tests.

    • Re: (Score:3, Funny)

      by dzfoo ( 772245 )

      Do you mean that, while in the process of simulating human intellect, the simulator itself becomes self-aware? Then what if the simulacrum becomes aware of the simulator? Would it create a metaphysical singularity, or just blow the stack?

      Inquiring minds want to know.

              -dZ.

  • by Tacvek ( 948259 ) on Thursday August 06, 2009 @12:01PM (#28974719) Journal

    Even if we have a chip capable of simulating the same number of neurons and synapses as the human brain, that will not magically form an artificial life-form. I know little about simulated neural networks, but I do know that they are only a very rough approximation of the workings of the human brain. We still don't understand all the intricacies of the neural and chemical interactions that occur to a sufficient level to properly simulate all of them.

    • Re: (Score:2, Interesting)

      by geekoid ( 135745 )

      Not true.

Simulation of a brain* has shown behaviors one would expect in an actual brain.

So yes, it does look like imitating the brain will cause intelligence.
This is very cool, and I hope it pans out to larger simulations.
It could mean that intellect comes from the organization of the brain, a by-product of the evolutionary need for memory.

*a limited set of emulated neurons, really.

    • by ausekilis ( 1513635 ) on Thursday August 06, 2009 @12:49PM (#28975567)
      As one of my professors once said: "How do we go from billions of neural synapses to midget wrestling?" While amusing, it points out one of our great unknowns. Biologists and neuroscientists (some psychologists) understand things at the synapse level, and how the chained firing happens in neurons. Then psychologists understand normal behavior by examining abnormal behavior, but that's at a much higher level. We simply don't know how to map out what's in between.
  • Sure we can... (Score:5, Informative)

    by thisnamestoolong ( 1584383 ) on Thursday August 06, 2009 @12:01PM (#28974725)
    ...but why would we? The brain was assembled by natural selection -- a process that can only improve and work with what it already has, which is hardly ideal. The human brain is certainly amazing, but it is not perfect. There are certainly better, faster, and more efficient ways of designing the superhuman AIs of the future. Looking at the brain will give us a good road map, but is not the end-all be-all.

I see a strange arrogance and egocentricity in trying to design robots to be exactly like us -- why not think outside the box? Why are upright, bipedal robots always portrayed as the ultimate? There are most certainly more efficient and better designs than the one we are saddled with; this is just how we happened to evolve, and we are simply the current end of one branch of the evolutionary tree.
    • Re:Sure we can... (Score:5, Insightful)

      by ardor ( 673957 ) on Thursday August 06, 2009 @12:07PM (#28974827)

      The way we evolved can be a hint about efficiency. For example, bipedal movement turned out to be pretty efficient on a human scale, while eight legs like a spider are not. Therefore, it is important to know *why* things evolved the way they did. Was it because of energy efficiency? Adaptation to local predators? etc.

    • Re: (Score:3, Interesting)

      by Extremus ( 1043274 )
Mod parent up! It does not seem reasonable to consider "consciousness" a phenomenon inherent only to the biological brain. In fact -- and this is somewhat ironic -- the most successful intelligent systems are based on cognitivist approaches, which are fairly far removed from the connectionist approach. While I do not believe this situation will last much longer (given the difficulties of programming symbolic reasoning systems), I also do not believe that the brain simulation approach is the only way to go,
  • ...someone gives you a calfskin wallet. You've got a little boy, he shows you his butterfly collection, plus the killing jar. You're watching television...suddenly you realize there's a wasp crawling on your arm.

We are getting closer to Eldon Tyrell's replicants... and I for one welcome our microchip-brained overlords.

Instead of recreating a human brain, why don't they figure out how to wire a processor into the human brain to improve it?

I could use a built-in graphing calculator or spell checker.
  • One word (Score:2, Interesting)

    by therpham ( 953844 )
    Cylons!
  • Can we build a microchip into a human brain?

  • by imgod2u ( 812837 ) on Thursday August 06, 2009 @12:04PM (#28974785) Homepage

It's the reconfigurable nature of the human brain that's unique and powerful. If all you did was take one person and list all of that person's skills -- all of the things he knew; all of his abilities in smell, touch, sight and taste; all of his cognitive reasoning ability -- then you could create a chip to simulate those skills. Algorithms for image recognition, feature extraction, speech recognition, etc. are all available that come very, very close to what humans can do.

But the thing that separates humans is that it didn't take hundreds of years of mathematical development to come up with these algorithms. The human brain develops these algorithms through changes in its structure from birth. By about age 10, speech recognition specialized and tailored to the dialect, language and tones the person hears has developed on its own.

    That type of structural formation and learning is what would need to happen in silicon to make a truly intelligent machine. Neuron clusters emulated using transistors would need to be able to dynamically form connections to other neuron clusters. There'd have to be some type of distributed learning algorithm encoded in the operation of each individual neuron.

    Speech recognition is easy. Image recognition is easy. Developing a distributed, scalable, self-modifying architecture that can learn all of those and more on its own with nothing more than training samples is the difficult part.
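One classic example of a "distributed learning algorithm encoded in the operation of each individual neuron", as the comment above puts it, is a Hebbian update: each connection strengthens based only on the activity of the two units it joins. The sketch below is a simplified illustration under arbitrary parameter choices, not the architecture the commenter envisions.

# Hebbian-style local learning: each weight updates using only the activity of
# the two units it connects, so learning is distributed across the network
# rather than directed centrally. Values and the decay term are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100
weights = rng.normal(0.0, 0.01, size=(n, n))   # synapse strengths
rate, decay = 0.01, 0.001

def step(activity, weights):
    """Propagate activity, then apply the local Hebbian update."""
    new_activity = np.tanh(weights @ activity)
    # dw_ij depends only on pre-synaptic j and post-synaptic i activity
    weights += rate * np.outer(new_activity, activity) - decay * weights
    return new_activity, weights

activity = rng.random(n)                        # a "training sample" as input
for _ in range(100):
    activity, weights = step(activity, weights)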

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Thursday August 06, 2009 @12:05PM (#28974805)
    Comment removed based on user account deletion
    • Which leads me to wonder...what does a flawless brain look like exactly?

      Tasty! Mmmmm, brains!

    • by Zashi ( 992673 )

      what does a flawless brain look like exactly?

      Here, I'll show you mine.

    • Re: (Score:3, Interesting)

      by MarkvW ( 1037596 )

The parts of the brain "geared toward bodily functions" are crucial to the functioning of the brain as a whole. The brain's interaction with the genitalia is just one example.

Your post brings up another good point, though: before the brain can be thoroughly reconstructed, the input streams into the brain need to be thoroughly understood as well.

      And, where does the brain stop? The spinal column? The nervous system? Hormones?

      This is so cool!

Face it, some people can't make up their minds. That puts the level of complexity of their brains somewhere beneath a simple OR logic gate. Other people would need a random number generator to emulate their brains.

These we can do already -- but why bother?

The question is whether we can put a brain on a chip smart enough to procreate and kill human beings.

It doesn't need to be smarter than that to destroy humankind. And once humanity is eliminated, no one will care whether computer chips can mimic our brains.

  • From the article (Score:4, Insightful)

    by phantomfive ( 622387 ) on Thursday August 06, 2009 @12:19PM (#28975041) Journal

    Hawkins believes computer scientists have focused too much on the end product of artificial intelligence. Like B.F. Skinner, who held that psychologists should study stimuli and responses and essentially ignore the cognitive processes that go on in the brain, he holds that scientists working in AI and neural networks have focused too much on inputs and outputs rather than the neurological system that connects them.

    I agree with this quote. A lot of computer scientists try to build artificial intelligence without really understanding how their own brain works. It is really too bad because they have an unusually observable specimen right in their own head. Genetic learning? Is that how you feel you learn personally? Of course this question can't answer everything about artificial intelligence, but it can definitely help and is too often ignored.

    Also, one thing that isn't clear from the article is whether the synapses will be static, or whether they can move and grow, just as human brain synapses can.

That's not what a nerd would say... it's called a quadrillion. These larger number names aren't that hard to remember... the prefixes are from Latin: Bi-, Tri-, Quad-, Quint-, Sext-... We already use them in the names of some of our months.
    • Re: (Score:3, Insightful)

It's worse than that. The names for large numbers above 999,999,999 differ depending on which scale [wikipedia.org] you've learned. They presumably meant the short-scale trillion times 1000 (a.k.a. a quadrillion, 10^15), because a long-scale "thousand trillion" would be 10^21 -- a sextillion in the short scale -- and we're not that complex.
  • ...we're installing Windows. haha

  • by blackfrancis75 ( 911664 ) on Thursday August 06, 2009 @12:34PM (#28975315)
    Hi, BrainChip here - just logging on to let you know I do exist. Cheers, - BrainChip.
  • But the actual brain can change the synapses over time, making new ones and obsoleting old ones. I'd like to see some silicon do THAT. I wouldn't worry, we'll still be boss for a while.

  • Just think (Score:3, Funny)

    by Linker3000 ( 626634 ) on Thursday August 06, 2009 @01:00PM (#28975781) Journal

Just think what might happen if Apple got the patent on these suckers and brought them to market as the personal implant -- the iThink?

Imagine waking up in the morning and iThinking "I'd like to fall in love today", so you make a mental link to the App Store and download "Love" for £1.95. On your way to work, you spot someone who takes your fancy, so you make a quick connection and download Flirt for a further £2. Things go well: Entertain £2, ShowYouCare £3.30, Intimate £10. A while passes and you're happily married (or have both downloaded LiveInSin-NoShame), so Broody is added to the bill.

What a wonderful life... well, if you download 'Harmony'.

  • Randomness (Score:5, Informative)

    by Burnhard ( 1031106 ) on Thursday August 06, 2009 @01:08PM (#28975893)

It requires a computational capacity of 36.8 petaflops (a petaflop being a thousand trillion floating point operations per second)

    It requires far more than that. According to some, the microtubules on the cytoskeletons of the cells themselves can be processing units. Raise the bar a few orders of magnitude in that case.

  • Nanotech (Score:3, Interesting)

    by seven of five ( 578993 ) on Thursday August 06, 2009 @01:14PM (#28975967)
    Back in the mid-80s, Drexler's Engines of Creation had some things to say about reverse-engineered brains. From what I remember, a specialized nanomechanical processor could emulate a neuron in a fraction of the volume. The functioning of a human brain could be done in a package about a cm^3. The main concern was thermodynamics--how fast you could run the thing before heat became too much of a problem.
  • Not even close (Score:5, Interesting)

    by joeyblades ( 785896 ) on Thursday August 06, 2009 @01:21PM (#28976063)

    BTW, current estimates are more like 100 billion neurons and upwards of 300-500 trillion synaptic connections.

However, numbers aside, the human brain is not merely a complex collection of neurons and interconnected synapses. Complexity is only one very basic factor; another, more critical, factor is organization. We don't even know where to start in organizing these artificial neural networks to emulate a human brain.

WARNING! COMPUTER ANALOGY: It's not the number and density of interconnected transistors that makes a Xeon; it's the organization.

  • human? (Score:3, Interesting)

    by ZenDragon ( 1205104 ) on Thursday August 06, 2009 @01:36PM (#28976295)
Neurons and synapses and all that do not make a "person." There is much more to human intelligence, which I do not believe a machine could ever achieve. That is certainly not to say that we wouldn't be able to "grow" a machine with the personality of a human -- in other words, a human brain interfaced to a machine. The very fact that humans think as they do implies that it would be possible, but I do not believe man understands enough about his own nature, nor will we ever understand enough, to actually re-create our minds in a machine from scratch.
