Supercomputing

Can We Build a Human Brain Into a Microchip? 598

destinyland writes "Can we imprint the circuitry of the human brain onto a silicon chip? It requires a computational capacity of 36.8 petaflops — a petaflop being a thousand trillion floating point operations per second — but a team of European scientists has already simulated 200,000 neurons linked up by 50 million synaptic connections. And their brain-chip is scalable, with plans to create a superchip mimicking 1 billion neurons and 10 trillion synapses. Unfortunately, the human brain has 22 billion neurons and 220 trillion synapses. Just remember Ray Kurzweil's argument: once a machine can achieve a human level of intelligence, it can also exceed it."
This discussion has been archived. No new comments can be posted.

Can We Build a Human Brain Into a Microchip?

  • by Anonymous Coward on Thursday August 06, 2009 @12:53PM (#28974571)

    "Can We Build a Human Brain Into a Microchip?"
      No.

  • by quadrox ( 1174915 ) on Thursday August 06, 2009 @12:57PM (#28974655)

    While the CPU/RAM model is not the way the brain works (I suppose), it can be used to run a "virtual machine" that itself works the way the human brain does.

    I don't think they are trying to simulate a human brain just by throwing a bunch of hardware together...

  • by denzacar ( 181829 ) on Thursday August 06, 2009 @12:59PM (#28974681) Journal

    "Can We Build a Human Brain Into a Microchip?"

    Not YET.

  • by Tacvek ( 948259 ) on Thursday August 06, 2009 @01:01PM (#28974719) Journal

    Even if we have a chip capable of simulating the same number of neurons and synapses as the human brain, that will not magically form an artificial life-form. I know little about simulated neural networks, but I do know that they are only a very rough approximation of the workings of the human brain. We still don't understand all the intricacies of the neural and chemical interactions that occur to a sufficient level to properly simulate all of them.
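    To see just how rough those approximations are, consider a standard leaky integrate-and-fire model — a common choice in large-scale simulations, though there's no indication which model the European team actually used. A minimal sketch in Python; all of the chemistry this comment mentions is collapsed into one voltage variable:

    ```python
    # Leaky integrate-and-fire (LIF) neuron: a real neuron's chemical and
    # structural complexity is reduced to a single membrane-voltage variable.
    def simulate_lif(input_current, dt=0.001, tau=0.02,
                     v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, r=10.0):
        """Return spike times (in seconds) for a list of input current samples."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            # Membrane equation: tau * dv/dt = -(v - v_rest) + r * i_in
            v += dt * (-(v - v_rest) + r * i_in) / tau
            if v >= v_thresh:          # threshold crossing -> emit a spike
                spikes.append(step * dt)
                v = v_reset            # and reset the membrane voltage
        return spikes

    # One second of steady suprathreshold drive produces a regular spike train.
    spikes = simulate_lif([2.0] * 1000)
    print(f"{len(spikes)} spikes in 1 s of simulated time")
    ```

    Everything about neurotransmitters, dendritic geometry, and gene expression is gone; the model keeps only the gross spiking behavior.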

  • Re:Why? (Score:5, Insightful)

    by denzacar ( 181829 ) on Thursday August 06, 2009 @01:05PM (#28974799) Journal

    How many of those can work 24/7/365 on a single subject with 100% concentration?

    Or how many of those can you scale down to fit into a shoebox or smaller (while they are still operative), or scale up by linking them into a cluster (preferably of the Beowulf kind)?

  • Re:Sure we can... (Score:5, Insightful)

    by ardor ( 673957 ) on Thursday August 06, 2009 @01:07PM (#28974827)

    The way we evolved can be a hint about efficiency. For example, bipedal movement turned out to be pretty efficient on a human scale, while eight legs like a spider are not. Therefore, it is important to know *why* things evolved the way they did. Was it because of energy efficiency? Adaptation to local predators? etc.

  • with DRM (Score:3, Insightful)

    by DaveSlash ( 1597297 ) on Thursday August 06, 2009 @01:09PM (#28974863)
    to erase everything you read when the license expires
  • by jonbryce ( 703250 ) on Thursday August 06, 2009 @01:11PM (#28974901) Homepage

    Why should we try to create an artificial brain in the computing lab when it would be much easier to do it in the genetic engineering lab?

  • From the article (Score:4, Insightful)

    by phantomfive ( 622387 ) on Thursday August 06, 2009 @01:19PM (#28975041) Journal

    Hawkins believes computer scientists have focused too much on the end product of artificial intelligence. Like B.F. Skinner, who held that psychologists should study stimuli and responses and essentially ignore the cognitive processes that go on in the brain, he holds that scientists working in AI and neural networks have focused too much on inputs and outputs rather than the neurological system that connects them.

    I agree with this quote. A lot of computer scientists try to build artificial intelligence without really understanding how their own brain works. It is really too bad because they have an unusually observable specimen right in their own head. Genetic learning? Is that how you feel you learn personally? Of course this question can't answer everything about artificial intelligence, but it can definitely help and is too often ignored.

    Also, one thing that isn't clear from the article is whether the synapses will be static, or whether they can move and grow, just as human brain synapses can.

  • by divisionbyzero ( 300681 ) on Thursday August 06, 2009 @01:25PM (#28975157)

    ++

    I agree 100%. I still don't understand why this charlatan gets so much press on Slashdot. Probably because it causes people like you and me to post.

  • by cpu_fusion ( 705735 ) on Thursday August 06, 2009 @01:38PM (#28975389)

    I feel he has done a great disservice to the field of artificial intelligence by promising unrealistic things in interviews to lay people. Disappointment is a surefire way to get yourself branded as a snake oil salesman or a religious nut.

    A disappointed public threatens research funding, but an unprepared public threatens chaos.

    I'm more concerned with making sure we're thinking ahead to the radical change that is likely to come, be it in 10 years or 40, than with the worry that lay people will distrust AI researchers.

  • by microbox ( 704317 ) on Thursday August 06, 2009 @01:53PM (#28975663)
    Biologist P.Z. Myers has criticized Kurzweil's predictions as being based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology.

    Having some personal understanding of both, I heartily agree. Let's separate out wishful thinking and esoteric "knowing" - both are merely ungrounded speculation.

    Myers also claims that Kurzweil picks and chooses events that appear to demonstrate his claim of exponential technological increase leading up to a singularity, and ignores events that do not.

    I once seriously considered a strategy for building an artificial brain with a veteran professor of computer science. Examining the problem, I gave up when I realised that the individual cells are "intelligent". I think this is vitally important. How does the "mind" of a protozoan work? It can navigate obstacles, identify and assimilate food, run away from danger, and it has a 20-minute memory. We can assume that a single neurone may well have all of these capabilities and more. I believe that we may be myopically focused on nodes and connections, without considering just how complex and capable a single node is.

    So the complexity of the problem is probably an order of magnitude beyond 22 billion neurones and 220 trillion connections. Then consider the effect of 1000s of unknown neurotransmitters - and we know little about the "known" ones, such as serotonin and dopamine, except that they have a profound effect. And _then_, consider that the brain has structure, and we know comparatively little about that structure, and only a few hints about the algorithms and data structures that it uses.
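    A back-of-envelope calculation makes the scale concrete. Assuming (very optimistically) one byte of state per synapse and 100 bytes per neurone, figures invented purely for illustration, just holding one static snapshot of the quoted brain takes:

    ```python
    # Back-of-envelope storage estimate for the figures quoted in the summary.
    NEURONES = 22e9        # 22 billion neurones
    SYNAPSES = 220e12      # 220 trillion synaptic connections

    BYTES_PER_SYNAPSE = 1      # hypothetical: one byte of state per synapse
    BYTES_PER_NEURONE = 100    # hypothetical: 100 bytes of state per neurone

    bytes_needed = SYNAPSES * BYTES_PER_SYNAPSE + NEURONES * BYTES_PER_NEURONE
    petabytes = bytes_needed / 1e15
    print(f"~{petabytes:.2f} PB for a single static snapshot")
    ```

    And that is storage alone, before any dynamics, neurotransmitter state, or structural change is simulated.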
  • Re:Why? (Score:1, Insightful)

    by Anonymous Coward on Thursday August 06, 2009 @01:57PM (#28975741)

    The key will be not to implement anything they think up without fully understanding it ourselves. Also, designing in the love and respect of the human race.

    Then we should drop the whole AI field. The whole point is to implement stuff we don't understand ourselves.

    You see, there are many, many trivial things we have little idea how to actually do. Walking. Reading. Talking. Listening... the list is quite long.

    We do not have, for most problems, any real hope of solving those problems any time soon. But AI's shown itself to be capable of solving those problems for us, and dictating the answers to us.

    That's how AI works, that's why it's used. And it's getting used more and more.

    Also, if we emulate a specific person's brain, does that mean the emulation will behave like that person? Can we create a chip that's in a specific 'state' and therefore has all the memories created as well?

    We are not capable of reading out even a single synapse value, so 100 trillion is a bit out of reach. Therefore emulating a specific person is out of the question. Before such a thing becomes possible huge advances are needed in physics, and biology (if you want to keep your test subject alive). Of course, if we succeed in creating a digital person, these limits will not apply to him (/her).

    That means that we can only emulate the architecture of the brain, in hopes that such an emulated system would create a new person.

    What no one is talking about is that said person will obviously have all the intellectual capacity of the average newborn, and will need to be raised in order to perform any useful function.

    Obviously such a person would have the same limitations as a normal human being has, emotionally I mean. He(/she) may be able to see more difficult relations faster, but that's it. They'd need sleep (even though we might be able to accelerate it). They would not be able to concentrate 100% of the time ...

    The idea is that you'd only build up parts of brains. Only the eyes, and have them somehow transmit the 3d structure of the scene before them to us. Or simulate the hearing system, then have it dictate what it heard to us.

    Any "full" AI would simply be a (simulated) person.

    If we make 100 of these things, and then treat them all differently, will they start to behave differently?

    Definitely yes. We have no idea what makes them tick (otherwise we wouldn't need them and would use other ways to accomplish these things), so the potential for everything humans do is there. Even if you give 100 human kids educations as identical as possible, they will not turn out identical. Likely the same will be true of simulated humans. Simulated humans will have feelings similar to normal humans'. Including the potential for loving children. Including the potential for killing thousands by steering planes into buildings for some false, cruel desert "god".

  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Thursday August 06, 2009 @02:04PM (#28975835) Journal

    I agree 100%. I still don't understand why this charlatan ...

    Well, despite my overly critical initial post, I will waste karma with further speculation on Kurzweil. He's actually not a charlatan. He's just stepping outside of his field, extrapolating from some of the things that have been achieved ... and using an unrealistic exponential curve to guide his predictions.

    The man has experienced great success -- both in business and academia -- throughout his lifetime. Since 1990 he's made a few inventions to help learning-disabled students, which is great. Unfortunately, he's found that writing books, holding symposiums and giving speeches about fantastic science fiction is what draws attention and resources. So he keeps doing it. It results in a lot of press, and I'm sure his aging body might drive him to hope for and fund a singularity before he dies.

    While this singularity is a romantic idea, it's just not based on science. He's lost sight of what he once did: musical hardware that advanced synthetic music far beyond the rate at which it normally would have progressed. And now his efforts are directed not at realistic goals but instead at loftier goals that no one can achieve. What's worse is that it depends on crossovers between fields he's simply not an expert in.

    You might be able to argue that he's a charlatan now, but in my mind he's Thomas Edison turned Nostradamus. He's pulled out all the stops that bind normal scientists to the scientific process and has let optimism slide into fantastical dreams. He can write all the books he wants, but until he gets back to what made him great -- actually implementing something and leaving a legacy of working examples -- he runs the risk of tarnishing his reputation.

  • by Hatta ( 162192 ) * on Thursday August 06, 2009 @02:07PM (#28975883) Journal

    I see no reason to believe we have "free will". As far as I can tell, whether we have free will or not is irrelevant to anything important. We have "will", and that is sufficient.

  • by Anonymous Coward on Thursday August 06, 2009 @02:34PM (#28976267)

    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

    No. What this does not take into consideration is the fact that any such machine would occupy the same physical reality we humans do, therefore its resources for developing and building new machines would be limited, and in turn limit the speed at which this can occur and prevent the "explosion" or "singularity" or whatever you choose to call it. At some point hardware needs to be built, and this requires manufacturing facilities to be built, and materials to be manufactured and transported. These things probably can't be optimized by orders of magnitude.

    It's not even inconceivable that further generations of ultraintelligent machines would take a longer time to create than the previous ones, especially since the machines probably wouldn't be in a hurry; a truly intelligent machine would probably aim for indefinite sustainability and limit its own use of natural resources to ensure the future versions of itself will be secure.

  • by ShadowRangerRIT ( 1301549 ) on Thursday August 06, 2009 @02:36PM (#28976289)
    It's worse than that. The term for large numbers above 999,999,999 differs depending on which scale [wikipedia.org] you've learned. "A thousand trillion" is a construction used in the long scale, but I'm fairly sure they meant the short-scale trillion times 1000 (aka a quadrillion), since a long-scale thousand trillion is equivalent to a sextillion in the short scale, and we're not that complex.
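    For anyone keeping score, the arithmetic behind that nitpick, in short-scale terms, is straightforward:

    ```python
    # Short-scale arithmetic for the figure quoted in the summary.
    trillion = 10**12          # short-scale trillion
    quadrillion = 10**15       # short scale; the summary's "thousand trillion"
    assert 1000 * trillion == quadrillion

    petaflop = 10**15          # 1 petaflop/s = 10**15 FLOPS, one quadrillion
    flops = 36.8 * petaflop    # the 36.8 petaflops quoted for a whole brain
    print(f"{flops:.2e} floating point operations per second")
    ```

    In the long scale a trillion is 10**18, which is where the sextillion confusion above comes from.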
  • by mcgrew ( 92797 ) on Thursday August 06, 2009 @02:39PM (#28976369) Homepage Journal

    From TFA: imprint the circuitry of the human brain using transistors on a silicon chip?

    No, not on binary circuits we can't. We might simulate the brain, or even model the brain, but we won't imprint it.

    The brain is a parallel processor.

    Tremendously parallel; and it's a multimode analog design, not a single-mode digital design. There are many different kinds of brain cells, with both chemical and electrical components.

    We can model an atomic explosion, but we understand the physics behind an atomic explosion. We have hardly begun to understand how the brain works. We'll have cures for all mental illnesses before we can accurately model the brain, because if you can't fix a broken machine you don't understand how it works, and sometimes even if you can fix a broken machine you still may not understand that machine completely.

    When you model an atomic explosion, there is no radiation released. A model is not the real thing.

    There is no test for sentience. Without such a test it would be impossible to know if you have succeeded in accurately modeling it.

  • by digitig ( 1056110 ) on Thursday August 06, 2009 @03:31PM (#28977347)

    If you gradually increase the lightness of black, at what point does it become white?

    The fact that there is no clear boundary does not mean that there is not a useful distinction -- the ancients spotted that logical fallacy: the continuum fallacy [wikipedia.org]


  • by ferespo ( 899921 ) on Thursday August 06, 2009 @03:35PM (#28977429)

    Correct me if I'm wrong but I believe that was said of binary systems?

    It applies to any kind of computation, using binary or decimal or any representation system.

    In fact we are talking about the Church-Turing thesis here.
    http://en.wikipedia.org/wiki/Church_thesis/ [wikipedia.org]

    The Church-Turing thesis has been alleged to have some profound implications for the philosophy of mind.[37] There are also some important open questions which cover the relationship between the Church-Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings:
    1. The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has also been termed the strong Church-Turing thesis (not to be confused with the previously mentioned SCTT) and is a foundation of digital physics.
    2. The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves real numbers, as opposed to computable reals, might fall into this category.
    3. The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and, more famously, Roger Penrose[38] have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation, although there is no scientific evidence for this proposal.

    There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept.

  • by jonbryce ( 703250 ) on Thursday August 06, 2009 @04:42PM (#28978485) Homepage

    But can a brain be emulated in computer hardware? I don't think it can. Certainly not with existing technology, and I don't think we are any closer to it now than we were in the 1970s.

    The two main problems I see are that computers only understand boolean logic, and they only do what they are told to do. No matter how fast you make them, or how much memory you throw at them, you can't get round that without taking the technology in a completely different direction, and that just isn't happening at the moment.

    Obviously I'm not going to say that this will never happen. Such a statement can only ever be proved wrong, but I do think the biology lab is the most likely place for a synthetic brain.

  • by Lord Bitman ( 95493 ) on Thursday August 06, 2009 @05:17PM (#28979005)

    yeah, they'll never make a computer that can solve problems the way a human can until they get computers to become absolutely focused: if I tell it to run i++ a quadrillion times, I want to see an answer! I don't want to come back five minutes later and see that it's decided to play solitaire instead!

  • by Anonymous Coward on Thursday August 06, 2009 @07:09PM (#28980281)

    "Biologist P.Z. Myers has criticized Kurzweil's predictions as being based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology. Myers also claims that Kurzweil picks and chooses events that appear to demonstrate his claim of exponential technological increase leading up to a singularity, and ignores events that do not."

    And where is Myers' data? A biologist commenting on a computer scientist's technology data...

    "I agree 100%. I still don't understand why this charlatan gets so much press on Slashdot. Probably because it causes people like you and me to post."

    Looking at trends, gathering data, and then taking a best guess about the future makes him a charlatan? That's a little harsh. He's a little nutty with vitamins, but that is only because he believes his own trend research enough to want to live long enough to see the things he predicts.

    He might be putting too much faith in the validity of his data, but it certainly doesn't qualify him as a charlatan, nor illogical.

  • by Wandering Idiot ( 563842 ) on Thursday August 06, 2009 @11:42PM (#28982263)
    No matter in how much detail you examine the hardware of a computer, you can tell nothing about it until you turn it on, that is, until it becomes alive, so to speak. The basic performance characteristics of a computer are not determined by hardware, but by software. Software is not subject to the usual laws of physics, such as, for example, gravity. Because software is not a material object, it can be transmitted at the speed of light and can be endlessly copied. Even if computer hardware could be made as complex as the human brain, it would still have to be programmed.

    You're being silly. Stop being silly. If you examine the hard drive platters of a computer in the correct way, you can indeed see the encoding of the software. If you similarly studied the rest of the hardware in enough detail, you would be able to understand how the software interacts with the hardware (assuming you're able to grasp that the PC is supposed to be hooked to a power source). This would be difficult, but is indeed possible, and similar things have been done in real-world reverse engineering.

    In the same way, I see no reason in principle that the brain can't be understood completely through a fine-grained enough understanding of all its physical components and how they interact. Which is easier said than done, given the complexity of the brain, but we're talking general principles here.

    "Software", and "information" are useful abstract grouping concepts, but that doesn't make them magic. Information has to have some representation in the physical world (whether it be in the neural patterns of a human, a stone tablet, etc) otherwise it can't be said to exist.


    The Bible characterizes a person as being essentially a living spirit or soul, living in a physical body. It tells us that someday, after we die physically, the software of the soul will be loaded into a new more capable body which lives forever.

    Interestingly, you have something in common with proponents of strong AI and mind uploading in this, in that you consider the mind and consciousness to be information, and hence transferrable to substrates other than the original body. The difference being that they don't posit a substrate "outside of the physical world", which is almost literally a meaningless phrase unless you can describe it more fully.


    We have to take all this on faith at the present time

    Who's this "we" you speak of? Not to put too fine a point on it, but some of us like having actual reasons for the things we think, which is pretty much the exact opposite of faith. (Although in practice, people who take things on faith generally do have reasons for their beliefs, just not very good ones from the point of view of rationality [i.e. "believing X makes me feel better about the universe, whether or not it's true"] )
