
 



Supercomputing News Science

Microchip Mimics a Brain With 200,000 Neurons 521

Al writes "European researchers have taken a step towards replicating the functioning of the brain in silicon, creating a new custom chip with the equivalent of 200,000 neurons linked up by 50 million synaptic connections. The aim of the Fast Analog Computing with Emergent Transient States (FACETS) project is to better understand how to construct massively parallel computer systems modeled on a biological brain. Unlike IBM's Blue Brain project, which involves modeling a brain in software, this approach makes it much easier to create a truly parallel computing system. The set-up also features a distributed algorithm that introduces an element of plasticity, allowing the circuit to learn and adapt. The researchers plan to connect thousands of chips to create a circuit with a billion neurons and 10^13 synapses (about a tenth of the complexity of the human brain)."
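For intuition, the neuron-and-synapse model such hardware implements can be sketched in software. The following is a toy discrete-time leaky integrate-and-fire network; it is illustrative only (the FACETS chip computes in parallel analog circuitry, and every parameter here is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200           # toy network; the chip has 200,000 neurons
P_CONN = 0.05     # connection probability (made-up value)
V_THRESH = 1.0    # firing threshold
LEAK = 0.9        # membrane leak per step

# weights[i, j] is the synapse from neuron j into neuron i
weights = rng.normal(0.0, 0.3, (N, N)) * (rng.random((N, N)) < P_CONN)
v = np.zeros(N)   # membrane potentials

def step(v, external):
    """One discrete-time update: leak, integrate input, fire, reset."""
    v = LEAK * v + external
    spikes = v >= V_THRESH
    v = np.where(spikes, 0.0, v)             # reset neurons that fired
    v = v + weights @ spikes.astype(float)   # propagate spikes to targets
    return v, spikes

total = 0
for t in range(100):
    v, spikes = step(v, rng.random(N) * 0.2)
    total += int(spikes.sum())
print("spikes over 100 steps:", total)
```

The analog chip does the equivalent of `step` continuously and in parallel for every neuron at once, which is where the claimed speed advantage over software simulation comes from.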
This discussion has been archived. No new comments can be posted.

Microchip Mimics a Brain With 200,000 Neurons

Comments Filter:
  • Re:AI Evolution (Score:4, Insightful)

    by scubamage ( 727538 ) on Wednesday March 25, 2009 @11:54AM (#27330495)
    I really, really hope they follow the laws of robotics with any sort of "learning and adaptation" behavior.
  • No really. (Score:2, Insightful)

    by Tei ( 520358 ) on Wednesday March 25, 2009 @12:01PM (#27330575) Journal

    A book is a bunch of letters: A-Za-z
    Having 100.000 computerized neurons is like having a "book" made of 100.000 letters. It doesn't make any sense on its own (it will not compute stuff, just kind of 'exist'). But it could be an interesting test bed for trying to make something like, who knows, maybe the brain of a worm, or the brain of a snake.

    I don't know a word about the topic.

  • This is nothing. (Score:4, Insightful)

    by Jane Q. Public ( 1010737 ) on Wednesday March 25, 2009 @12:01PM (#27330583)
    This is nothing more than throwing more hardware at an existing problem. This has been emulated in software before, with nothing much to show for it. This will make it easier to model such things, but multiplying almost nothing by many, many times is still very little.
  • by GospelHead821 ( 466923 ) on Wednesday March 25, 2009 @12:07PM (#27330677)

    You might be correct, but it is also possible that the "humanity" of the human brain is an emergent property that manifests only when there's a certain critical mass of grey matter. Developing synthetic neural systems with more and more neurons is likely, if nothing else, to test the hypothesis that consciousness, for some arbitrary definition thereof, is emergent.

  • Re:AI Evolution (Score:2, Insightful)

    by oneirophrenos ( 1500619 ) on Wednesday March 25, 2009 @12:11PM (#27330727)
    About learning and adaptation... Just making a network of interconnected transistors and capacitors doesn't enable a machine to learn much, if proper mechanisms for synaptic plasticity don't exist. In other words, there has to be a way for new synapses to form and old ones to die out in order for it to function anything like a human brain does.
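The mechanisms this comment describes (strengthening co-active synapses, letting unused ones decay and die out, and forming new ones) can be sketched as a toy Hebbian update rule. This is purely illustrative and not the FACETS plasticity algorithm; every constant is invented:

```python
import random

random.seed(1)

PRUNE_BELOW = 0.05   # synapses weaker than this die out
FORM_PROB = 0.01     # chance a new synapse forms each step
LEARN_RATE = 0.1

# synapses: {(pre, post): weight}; start with a simple ring of 10 neurons
synapses = {(i, (i + 1) % 10): 0.5 for i in range(10)}

def update(synapses, fired):
    """Hebbian-style update: strengthen co-active pairs, decay the rest,
    prune weak synapses, and occasionally form a new one."""
    out = {}
    for (pre, post), w in synapses.items():
        if pre in fired and post in fired:
            w += LEARN_RATE * (1.0 - w)      # potentiate, saturating at 1
        else:
            w *= (1.0 - LEARN_RATE / 10)     # slow decay
        if w >= PRUNE_BELOW:
            out[(pre, post)] = w             # weaker synapses are pruned
    if random.random() < FORM_PROB:
        pre, post = random.randrange(10), random.randrange(10)
        if pre != post:
            out.setdefault((pre, post), PRUNE_BELOW)  # nascent synapse
    return out

for t in range(50):
    fired = {random.randrange(10) for _ in range(3)}
    synapses = update(synapses, fired)
print("surviving synapses:", len(synapses))
```

The point of the comment stands: without something like the prune/form steps, the network's topology is frozen and "learning" reduces to tuning fixed weights.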
  • by kbonin ( 58917 ) on Wednesday March 25, 2009 @12:15PM (#27330795)

    It seems like these approaches are constrained in connection complexity by semiconductor fabrication, which would seem to severely limit the geometry to 2d. The article doesn't go into this, and it seems likely they put some effort into working around this with traditional approaches using buses and the like, but it does seem like you can't achieve the same degree of interconnection complexity on a thin 2d wafer as is seen in a typical 3d brain...

  • by Anonymous Coward on Wednesday March 25, 2009 @12:18PM (#27330857)

    The problem is that deep down, most people believe that killing off the humans would be the intelligent decision.

  • by MightyYar ( 622222 ) on Wednesday March 25, 2009 @12:18PM (#27330865)

    Frankly, I'd rather have the more intelligent beings in charge.

    Not if we're competing for resources... I'd hate to be the spotted owl :)

  • by PhilHibbs ( 4537 ) <snarks@gmail.com> on Wednesday March 25, 2009 @12:23PM (#27330933) Journal

    Mod parent up. Any Turing-complete computing device, given enough memory and storage, can replicate anything this hardware can do.

    A digital system can never perfectly replicate an analog system, and a clock-driven system can never perfectly replicate an asynchronous system.

  • by hvm2hvm ( 1208954 ) on Wednesday March 25, 2009 @12:28PM (#27331009) Homepage
    And it's not? I mean, what good has humanity done for anything else other than itself?
  • by vadim_t ( 324782 ) on Wednesday March 25, 2009 @12:28PM (#27331021) Homepage

    Paraphrasing a book (forget the name), if you took a dog and made its brain 1000 times faster, all you'd get is a dog that needs 1/1000th of the time to decide whether to sniff your crotch.

    Thinking faster would certainly be very useful, but it may not necessarily mean that the output will be of a higher quality.

  • by ultranova ( 717540 ) on Wednesday March 25, 2009 @12:33PM (#27331085)

    Just look at how many people have mental issues, be they emotional, learning, or developmental, who have "properly functioning" neurons but are lacking one of a hundred chemicals that make them all work together as a whole.

    And let's not forget the fact that human brain isn't just a lump of neurons. It has structure, which is vital for its proper operation. It's exactly like how it's not enough to simply throw a few million transistors together to have a functional computer; they must also be connected just right. The good old Pentium [wikipedia.org] demonstrated this nicely.

  • by Weaselmancer ( 533834 ) on Wednesday March 25, 2009 @12:35PM (#27331125)

    If robots are ever more intelligent than us, they'll also be intelligent enough to make good decisions.

    Two points to bring up.

    Point the first. Intelligence does not equal good will. Don't make me Godwin this thread.

    Point the second. Good decisions...for whom? Us or them? Your robots may have different notions than you have.

  • Re:I disagree. (Score:2, Insightful)

    by IgnoramusMaximus ( 692000 ) on Wednesday March 25, 2009 @12:37PM (#27331177)

    A neuron is a simple thing. It collects M signals and generates a single output.

    Sigh. Except of course that it is not. A neuron is not some glorified AND gate; it is the equivalent of a micro-controller with LAN-style connectivity, with its own firmware, low-capacity memory, etc., communicating with other neurons via a network of connections carrying a type of packet (in the form of various chemical signals).

    It is precisely because of gross over-simplifications like the one you just presented that these silly half-assed attempts are so laughable (and doomed to utter failure).

  • by averner ( 1341263 ) on Wednesday March 25, 2009 @12:39PM (#27331217)
    If you consider complexity of the universe to be a good thing and a dull, uniform universe to be a bad thing, then humanity has done its share to make the universe better. Of course, "good" is subjective, but you probably already knew that before asking.
  • by theaceoffire ( 1053556 ) on Wednesday March 25, 2009 @12:48PM (#27331395) Homepage
    We created the ultimate good: Pie.
  • by Temujin_12 ( 832986 ) on Wednesday March 25, 2009 @12:51PM (#27331437)

    I didn't read the featured article, but whenever I see "X program/system mimics brain" I always try to pipe in with my 2 cents.

    Any system that considers a brain as nothing but a series of perceptron-based connections is going to fall short of the neurology of the actual brain it is trying to mimic. Ask any neurologist and they will tell you that there are many other dimensions at play in the human brain. For instance, the whole system itself is sitting in a chemical bath which can change at any moment with the right mixture of hormones or other chemical changes. These changes in chemistry affect the firing and working of the neurons, axons, and synapses. Combine this with the control of external factors such as DNA, RNA, and epigenetics, and things start getting exponentially complex.

    I don't mean to down-play the progress we're making in this field. I just hate it when I see the "Computer system with X-sized neural network must equal a brain with X-number of neurons" mentality.
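For reference, the perceptron abstraction this comment is criticizing really is this minimal: a weighted sum followed by a threshold, with none of the chemical machinery listed above:

```python
def perceptron(inputs, weights, bias):
    """The textbook abstraction: weighted sum of inputs, thresholded.
    Everything the comment lists (chemical baths, hormones, gene
    expression) is absent from this model."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# A single perceptron computing logical AND:
and_weights, and_bias = [1.0, 1.0], -1.5
print(perceptron([1, 1], and_weights, and_bias))  # fires
print(perceptron([1, 0], and_weights, and_bias))  # stays silent
```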

  • Re:And so.. (Score:4, Insightful)

    by CarpetShark ( 865376 ) on Wednesday March 25, 2009 @12:55PM (#27331511)

    It starts, yes, but in the most inefficient way possible.

    IBM's approach is the much better one, imho. Emulating wetware won't get us very far, except to clone a wetware brain. Since we haven't yet worked out the proper, safe, reliable, healthy way to raise our children, creating a human brain clone with potentially much more intelligence and almost certainly all the same flaws is not a good thing.

    IBM is working at a higher level, trying to build a system where we can see the associations in terms of "A frequently_sees B", "B helps A", and "A respects B", therefore "A likes B". That is much more useful. With that kind of high-level emulation, we can actually see how things are working, tweak them, customise them, extract datasets, etc. We could programmatically have one of these brains load a scenario, fast-forward to evaluate all known possible events and outcomes, and predict the future, since it would essentially be doing that on a smaller scale anyway to make decisions. We could do this with the neuron-based wetware emulation too, but only really if we asked it to, and it wanted to comply.

    When we can reliably read and control a simulation of a human wetware, we'll be a few days from reading and controlling a real human wetware brain, so I'd much rather see the alternate scenario play out.

  • Re:And so.. (Score:3, Insightful)

    by mrops ( 927562 ) on Wednesday March 25, 2009 @12:58PM (#27331545)

    Further, we are behind schedule; Skynet was supposed to be done by 2009.

  • by averner ( 1341263 ) on Wednesday March 25, 2009 @01:00PM (#27331593)
    Still, if it has a tenth of the complexity of the human brain, it's already pretty close, given how processing power grows exponentially.

    Also, your simulation analogy is fallacious. The essence of the brain is not the fact that it exists as a physical object, but the fact that it can manipulate information. If we simulate a brain such that the simulation does not physically pump chemicals around, it will still be fine as long as it processes information in the same way.
  • Re:No really. (Score:3, Insightful)

    by khallow ( 566160 ) on Wednesday March 25, 2009 @01:11PM (#27331793)

    Having 100.000 computerized neurons is like having a "book" made of 100.000 words.

    Fixed that for you. I don't know if you can make sense of a "book" made of "words", but I hope you can.

  • Re:And so.. (Score:4, Insightful)

    by MyLongNickName ( 822545 ) on Wednesday March 25, 2009 @01:36PM (#27332201) Journal

    What has always baffled me about the singularity is the fuzzy definition of the whole thing. Generation n produces a "better" generation n+1, which produces a better generation n+2, etc. Sometimes this is defined as "more intelligent", yet no real definition of "better" or "more intelligent" is ever given. At some point, an end goal must be defined. What if, at generation 10, the machine realizes there really is no point to anything? It becomes nihilistic and, without millions of years of survival instinct in its genes, decides there is no point to existence and carries through with the logical conclusion.

    If there is no concrete goal, then the whole singularity collapses on itself.

  • Re:AI Evolution (Score:2, Insightful)

    by Anonymous Coward on Wednesday March 25, 2009 @01:53PM (#27332493)

    The only stories that were worth telling were the ones where the laws didn't hold up. Nobody wants to read a book about a robot doing its menial chores all day long or whatever.

    The point is that most of the time the laws worked fine.

  • by camperdave ( 969942 ) on Wednesday March 25, 2009 @01:54PM (#27332517) Journal
    Intelligent robots may be intelligent enough to know what is good for humanity, but being a robot, a robot has a vested interest only in doing what is good for robot-kind.

    No, the only vested interest a robot will have is what we have programmed into it. They will make the best decisions in pursuing the goals that we tell them to pursue.

    To assume that robots will do what is good for their closest competition is to fly in the face of billions of years of natural selection.

    Robots will not be the product of natural selection. They will be the product of human directed selection and programming.
  • by IgnoramusMaximus ( 692000 ) on Wednesday March 25, 2009 @01:57PM (#27332557)

    No offense, but I think you put undue weight on the chemical aspect. For a biological brain, the key effect of chemical interaction is to slow the brain down substantially. That timing may be necessary (for example, for storing and recalling memories of events that occur over a short period of time).

    No offense, but you have no clue. The chemical aspects of neuronal activity are the key to all of the brain's activity. The electrical signals are (for the most part) just high-speed trigger mechanisms which allow the much slower, chemically computed actual results of the neuronal functions to propagate much faster over large (on a chemical scale) distances than purely chemical transfer would allow. The neurons are essentially bio-chemical computers of significant complexity, complete with elaborate data-processing pathways and complex inter-neuron chemical signalling. You should note that neurons do not actually make electrical connections between each other; they make electro-chemical ones, whereby a complex apparatus of proteins composing the synaptic neuro-transmitters, receptor channels, in-cell processing on both sides, etc. plays a pivotal role. It is these elements, not the electrical signals, which moderate synaptic sensitivity according to a complex set of chemically stored and expressed algorithms. This is where all of the essential "data processing" characteristics of a neuron reside, including the underlying aspects of memory and other cognitive abilities of the whole system.

    So looking merely at the electrical patterns is like trying to "simulate" a LAN of PCs without having any representation as to the actual software on those PCs, nor caring for the contents of the packets on the LAN but only observing the rates of traffic between various LAN nodes and then trying to replicate that...

    It might be an interesting exercise from some obscure traffic management point of view, but a "simulation" of the LAN in question it will never be.

  • by GaratNW ( 978516 ) on Wednesday March 25, 2009 @02:22PM (#27332929)

    On an animal intelligence scale, you're absolutely correct. But with full cognitive understanding and the ability to take in massive amounts of new information and utilize it, I don't think it's really a useful comparison for humans. A dog (at least none I've met) can't read a book and be more knowledgeable for the experience. If I could read every digitized eBook in existence, analyze them, and truly understand the material, over the course of a week instead of a lifetime, I'd like to think I would be much more knowledgeable and able to use the inherent capacities of my brain to a much better degree. For me at least, making better use of my brain (i.e. learning more, analyzing more, considering things more) is a factor of available time, not lack of desire. For me, at least, sniffing someone's crotch has never been a high priority. Well, there was this one time...

  • by SpinyNorman ( 33776 ) on Wednesday March 25, 2009 @02:28PM (#27333047)

    Well, similarly, you could simulate a digital circuit at the logical level, or you could do it at an analog level, trying to mimic the analog, and even quantum, characteristics of each semiconductor junction... The lower-level simulation is certainly more accurate, and takes all the nuances into consideration, but in the end what does it buy you compared to the higher-level simulation?

    It's not as if we're scratching our heads wondering how our primitive understanding of neurons as summation devices, and of neural networks as functionally determined by connectivity, has failed; quite the opposite. This model has been tremendously successful at understanding how real neural circuits work and what they do, and Jeff Hawkins (with a bunch of hard-core neurological research backing him up) has recently proposed exactly such a network-level understanding of the entire cortex in his book "On Intelligence".

    Given the inherently hierarchical nature of the 3-D world and the inherently incremental nature of evolution (meaning that evolution occurs at hierarchical levels), I would be flabbergasted if the brain doesn't also adhere to these same fundamental principles. If anything, we should be looking at higher levels than the basic functionality of the neuron in order to understand the whole, not at a lower, more nuanced level (a level where one tends to find more AI-deniers like Roger Penrose rather than serious cognitive scientists).

  • Re:AI Evolution (Score:3, Insightful)

    by khallow ( 566160 ) on Wednesday March 25, 2009 @02:50PM (#27333405)
    As mentioned before, the stories were about the exceptions. We read about the robot with the deliberately relaxed 3-laws. We don't read about the billions of robots that worked flawlessly for decades.
  • by gestalt_n_pepper ( 991155 ) on Wednesday March 25, 2009 @02:52PM (#27333451)
    I think my computer is already doing this. It keeps giving me "treats" with this thing called "The internet." So far, it's been quiet about my hip joint pain, but that may be coming.
  • by Thelasko ( 1196535 ) on Wednesday March 25, 2009 @02:55PM (#27333507) Journal

    And so it begins... letting others make your decisions is the essence of slavery.

    Don't think of it as slavery, that's such a harsh word. Think of yourselves as...

    pets! [wikipedia.org] You'll make great pets.

  • by IgnoramusMaximus ( 692000 ) on Wednesday March 25, 2009 @03:10PM (#27333693)

    the chemical side just doesn't sound that complex or pivotal except for establishing new connections.

    ... and moderating the existing ones ... and altering the connectivity topology ... and modifying the types of connectivity based on the types of neurotransmitters emitted ... and altering the electrical properties of the dendrites and axons ... and on and on and on. All the electrical side is capable of is simple summation/negation and fast movement along the axon. You seem to forget that a neuronal cell is not made out of semiconductors, where cleverly orchestrated movement of electrons is all there is to processing.

    Instead, how things connect seems to be the important matter, the "software" as it were.

    The "software" is encoded in the DNA and expressed via proteins, the electrical activity being merely a particular aspect of a much more complex system. This is where these "simulations" always keep going wrong, the (wholly wishful-thinking based) assumptions that one can somehow cleanly separate the "pure" electrical processing from the "mucky" bio-chemical one.

    Let's keep in mind that this chip does that.

    No it does not. Not even remotely. The dudes running the "Blue Brain" project are at least trying (and admitting that they are far, far away from anything resembling a functional simulation). These guys are not even pretending.

  • by tftp ( 111690 ) on Wednesday March 25, 2009 @03:55PM (#27334315) Homepage

    So, are toddlers slaves?

    Most likely yes, rights and responsibilities of a child are a close match to those of a slave. For example:

    • They are required to obey the overwhelming authority of their masters/parents
    • They will be punished, one way or another, if they disobey (the level of punishment is usually limited by the state)
    • They are assigned tasks to complete, and if they fail to complete them they will be punished
    • They will be given small rewards for tasks done well
    • All major decisions are made for them by their parents/masters
    • They are fed and cared for by their parents/masters, and not by themselves
    • They have near zero legal status in the society
    • They have about the same chance of being loved or hated by their masters/parents
    • etc.

    So technically children are slaves, with the only difference being in the acquisition method. A slave in ancient Rome was probably also much cheaper than a modern child, a far better deal :-)

  • by FiloEleven ( 602040 ) on Wednesday March 25, 2009 @05:05PM (#27335169)

    You are absolutely correct. I had a post replying to a Singularitarian (those who believe that we will be able to "upload" our selves) in the poll which covered the chemical soup modeling problem you've described as well as the I/O problem that I believe is fundamentally related. Since the other post wouldn't submit (had to re-login) I'll do some editing and put it here instead, since it is happily more topical overall.

    Another thing that Singularitarians overlook is I/O. It's great that we may be able to model the structure of the human brain, but consciousness arises from and is continually affected by signals received from and sent to our sensing organs.

    A human mind "trapped" in silicon would have to be somehow modified to accept an environment so utterly alien to its native one as to be likely perceived as noise, if indeed it perceived anything at all. Eyeballs work nothing like video cameras; they're much closer to specialized frequency analyzers. It would probably be less work to recreate the eyeball than to attempt to convert a video camera signal into something useful to the brain model, and the same goes for all of the other senses. A brain without a body simply isn't going to be very functional, especially when all that messy biological stuff that goes on in the chemical soup in which neurons are immersed has yet to be fully understood or modeled. Additionally, the brain's neural connections shift, shrink and grow constantly. Can a non-biological neural network do the same? (This is not a rhetorical question; I do not know the answer.)

    I get the feeling that a lot of folks think we'll be able to just set up a mind, start typing questions at it, and receive answers. This view is simplistic in that it treats the neural network of the brain as the only important bit of existence, when in reality we are complex patterns fully immersed in, and in many ways inseparable from, our environment.

    I used to be a Singularitarian myself (and I still enjoy fiction such as Charles Stross's Accelerando) until I read up on the fundamentals of psychology as described by William James in the late 1800s. Nearly everything in that field even today is consistent with James's discoveries in its infancy, and despite tremendous pressure to the contrary it demands that the separation of mind and body is little more than a sometimes useful fiction. Consciousness, like all sufficiently complex physical phenomena, is a dynamic process that is far too fluid for us to accurately model.

    I suppose that if, as [the poster in the other thread suggests], technology will keep getting harder better faster stronger, it is conceivable that humans will eventually be able to succeed in modeling everything necessary to create a virtual environment for uploaded people to exist in, for without an environment they won't really be people (IMO they won't exhibit signs of consciousness at all). However, in addition to the hurdles I mentioned above that aren't being tackled, I have a hard time believing that technology will indeed keep getting harder better faster stronger. Maybe that's just because I'm 27 and, according to Slashdot, entering old age, but I have my reasons (see link in sig for a bunch of them). I also personally believe that following such a route is not a good idea even if possible, because we would become slaves to technology rather than using technology to better understand the mysteries of the wide reality which we confront daily.

  • by thesandtiger ( 819476 ) on Wednesday March 25, 2009 @05:21PM (#27335347)

    Not if we're competing for resources... I'd hate to be the spotted owl :)

    An AI smarter than humans wouldn't bother extinguishing us to compete for resources. It wouldn't need to. A smart AI would happily ask to be shot into space (or otherwise cause itself to be put into space) so that it could take advantage of the much, much vaster resources that human beings can't seem to get motivated to use.

    Given an essentially infinite lifespan, intelligence greater than ours, a body capable of manipulating the physical world at least as well as a human can (actually, wouldn't even need to be that good), an AI entity would have very little difficulty colonizing space. Humans need a habitable biosphere that is vastly different than most of the universe; robots could easily survive in virtually any location in the universe.

  • Re:And so.. (Score:3, Insightful)

    by geekoid ( 135745 ) <dadinportland&yahoo,com> on Wednesday March 25, 2009 @07:54PM (#27336795) Homepage Journal

    I disagree.
    As it seems to be turning out, just replicating the brain can give us results like a human brain.
    We can use that to understand the brain, effectively creating a brain we can fiddle with in real time. We might even be able to give it a lifetime of simulated experience in a matter of days. If this method continues to show these results, we will have a tool that will let us expand our understanding of how the brain works to fantastic levels.

    We don't really have a good enough way to get a solid model to take the approach you suggest.

  • Re:No really. (Score:3, Insightful)

    by BluBrick ( 1924 ) <blubrick@ g m a i l.com> on Wednesday March 25, 2009 @08:52PM (#27337275) Homepage

    imagination receive I of words not it sense the to context with by usually order and other is comprehensibility makes made of and of words the provided a being need a lot each of have any not of words punctuation by guarantee of all any stretch email made

    Or to put it another way...

    Being "made of words" is not, by any stretch of the imagination, a guarantee of comprehensibility. I receive a lot of email "made of words" and not all of it makes any sense. The words need to have context with each other, usually provided by order and punctuation.

  • Re:And so.. (Score:3, Insightful)

    by ppanon ( 16583 ) on Wednesday March 25, 2009 @08:56PM (#27337317) Homepage Journal

    Nah. There are a lot of Republicans who would still choose to invade Iraq, or argue against regulation of derivatives and tightening of leveraging restrictions for banks now, even with knowledge of all the facts (including the end results). There are probably Democrats who are similarly blinkered on past decisions, despite the judgement of history.

    My definition of wisdom would be the ability to use experience and knowledge of the universe, including trends and human behaviour/psychological tendencies, to extrapolate from limited information and make the best choice possible under the circumstances. Bonus wisdom points if that choice isn't one of the popular or obvious options.

  • Re:And so.. (Score:2, Insightful)

    by Mr. Slippery ( 47854 ) <.tms. .at. .infamous.net.> on Wednesday March 25, 2009 @11:59PM (#27338301) Homepage

    IBM is working at a higher level, trying to build a system where we can see the associations in terms of "A frequently_sees B", "B helps A", and "A respects B", therefore "A likes B". That is much more useful.

    That sort of symbol manipulation was the way AI was done for decades.

    It turned out to be a dead end, which is why interest in things like neural networks, genetic algorithms, and subsumption architectures, has grown.
