Supercomputing

Stanford Bioengineers Develop 'Neurocore' Chips 9,000 Times Faster Than a PC 209

kelk1 sends this article from the Stanford News Service: "Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC (abstract). Kwabena Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed 'Neurocore' chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. ... But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards cost about $40,000. (...) Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies. By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore's cost 100-fold – suggesting a million-neuron board for $400 a copy."
This discussion has been archived. No new comments can be posted.

  • by geekoid ( 135745 ) <dadinportland@y[ ]o.com ['aho' in gap]> on Tuesday April 29, 2014 @06:27PM (#46873625) Homepage Journal

    Are you ready?

    If they can use modern fabs, then we will have a simulated brain within a decade.

    • I highly doubt it. The brain isn't just a random mass of interconnected neurons; it has a complex structure that we have yet to fully map out or even understand. Also, the inter-neuronal connections involve the release and re-uptake of neurotransmitters, which is itself a complex system that we have yet to fully understand in some cases.

      Don't get me wrong -- for biological systems that we do understand, like the center-surround cells in the retina or the hypercolumns of the visual cortex, a chip like this could be a great fit.

      • by geekoid ( 135745 )

        "The brain isn't just a random mass of interconnected neurons"
        no shit? herp derp.

        We have simulated 'large' numbers of neurons, and you know what happens? It begins to act like a brain. Granted, we are talking about some pretty basic signalling.

        Expanding beyond that is pricey, power-intensive, and takes a lot of power. Did I mention the power?

        We will not first understand the brain and then build a simulator. We will build it up a bit at a time, using the brain as a model.

        http://theness.com/neurologica... [theness.com]

        • Perhaps we don't? A model of something you don't understand won't give you insight into the unknown. Perhaps one might discover something like human intelligence but you'll never know if it is the same thing.

          Also, I think that Gödel (logically) and quantum effects (materially) stand in the way of understanding how three pounds of flesh can become intelligence and sentience.

          If the human brain were so simple that we could understand it, we would be so simple that we couldn't.

          • Emerson M. Pugh, As quoted in The Biological Origin of Human Values
          • A model of something you don't understand won't give you insight into the unknown. Perhaps one might discover something like human intelligence but you'll never know if it is the same thing.

            I suspect we'll end up recreating it without actually understanding it.

            • I suspect we'll end up recreating it without actually understanding it.

              Perhaps we can create a machine that does.

      • I think a lot of the benefit from these chips is that we can try to simulate small brain structures with the expectation of failure. Then learn from that failure what new questions we should be asking.

      • by mikael ( 484 )

        Research on just a single slice of neurons leads to about a dozen research papers, and there are tens of thousands of such slices to be made through the human brain. Such research has led to improvements in automatic face recognition, motion stabilization for cameras, and cochlear implants. Neurons are known to form similar groups known as cortical columns. These actually seem to overlap with each other and are replicated tens of thousands of times. Diffusion tensor imaging has provided a layout of the data

    • The human brain's hardware is not the difficult part. As usual, software is where the magic is.

    • by gl4ss ( 559668 )

      something about this stinks to high heavens.

      "operates 9,000 times faster than a personal computer simulation of its functions."

      anyhow, they haven't apparently done anything with it or used it for controlling anything.

      • You missed the first part of that quote:

        "The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions."

        So, actually, the "9,000 times" refers to a comparison between a real mouse cortex and a PC simulation.

        Whether the headline is therefore erroneous, or whether the new chip runs at comparable speeds to real brains, is unclear.

  • Is my CPU going to struggle with depression and anxiety now?
  • by aXis100 ( 690904 ) on Tuesday April 29, 2014 @07:03PM (#46873909)

    Good old clueless tech journalists, followed by Slashdot editors just copy-pasting.

    The chips aren't 9,000 times faster than a typical PC for general tasks. Specifically, they can simulate neurons 9,000 times faster than a PC can simulate neurons. Pretty typical of any ASIC with a limited set of highly specialised functions.

    • The interesting bits (Score:5, Informative)

      by Anonymous Coward on Tuesday April 29, 2014 @07:47PM (#46874213)

      It isn't a typical ASIC; the chip is a custom, fully asynchronous mixed digital+analog design; the board uses 16 chips in a tree router for guaranteed deadlock prevention between the chips, and can simulate 1 million neurons powered by a single USB port.

      The neurons are implemented with analog circuits to match the dynamics of real neurons, moving beyond a simple Hodgkin-Huxley model to include components like ion channels, a first of its kind in an analog chip. It has a neat hexahedral resistor network that distributes the spike impulse across a neighborhood of neurons, a phenomenon seen in many cortical brain areas; essentially an analog phenomenon implemented efficiently in analog design.

      Analog gives it fun biological-like properties, like temperature sensitivity that must be regulated with additional circuitry. Asynchronous design means that, outside of leakage (which is low with such a large fabrication process), very little energy is used at the neuron level if no stimulus is present. This is in contrast to a traditional CPU, where a clock marches across large parts of the chip, consuming energy every clock cycle.

      Outside of wireless/signaling stuff, this is probably the biggest mixed analog digital asynchronous chip in existence.

      But otherwise yes, the editors sucked on this one.
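To see why a software simulation is the slow baseline here, consider what a PC has to do per neuron, per timestep. Below is a minimal sketch (plain Python, standard textbook Hodgkin-Huxley parameters for the squid giant axon; this is *not* Neurogrid's actual model) of that per-neuron arithmetic -- a million neurons means repeating this a million times every fraction of a millisecond:

```python
# Hedged sketch: one forward-Euler step of the classic Hodgkin-Huxley
# membrane equations -- the kind of per-neuron, per-timestep arithmetic
# a PC simulation grinds through serially, and that analog circuits
# compute "for free" via device physics.
import math

# Standard HH parameters (Hodgkin & Huxley, 1952; squid giant axon)
C_m = 1.0                                # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # peak conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials, mV

# Voltage-dependent gating rates (V in mV)
def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1 / (1 + math.exp(-(V + 35) / 10))

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """Advance one neuron by dt milliseconds (forward Euler)."""
    I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current
    I_K  = g_K * n**4 * (V - E_K)        # potassium current
    I_L  = g_L * (V - E_L)               # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    return V, m, h, n

# Start at rest, inject current, and the membrane depolarizes toward a spike.
V, m, h, n = -65.0, 0.05, 0.6, 0.32
peak = V
for _ in range(1000):                    # 10 ms of simulated time
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
    peak = max(peak, V)
```

That is thousands of floating-point operations per simulated millisecond for one neuron, which is roughly the gap the analog approach is exploiting.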

    • It will be a while before we can understand just how important such circuits can be. It may be that they simply supplement CPUs already in use. And much will depend upon just how deeply we can program such a device as well. It may well be that the worst path to take would be to try to get a machine to think like a human. We humans are a bit on the defective side. How well can we think when we have a history of electing people like George W. Bush as President? The evidence at hand is that humanity
    • Actually not even that is clear:

      The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.

      So a real brain - or cortex anyway - is 9,000 times faster than the PC simulation. No actual word on how fast these new chips are.

  • by funwithBSD ( 245349 ) on Tuesday April 29, 2014 @07:36PM (#46874137)

    Cray Cray.

  • by timeOday ( 582209 ) on Tuesday April 29, 2014 @07:45PM (#46874199)
    9000 times faster than a PC, if that PC happens to be running the specific artificial neural network simulation implemented in hardware by this chip.

    Not that I'm knocking it. A GPU implements specific algorithms to great effect. But a GPU's algorithms are ones that are interesting for a specific application (drawing texture-mapped polygons), whereas an artificial neural network still needs another layer of programming to do something useful. In other words, a Word Processor implemented on this chip would not be 9000x faster than a Word Processor implemented on a CPU. A face recognition algorithm, on the other hand, might see a decent fraction of that 9000x, although it remains to be seen whether this chip would be a better fit for any particular application than a GPU (for example).

    • Even for only neural simulation, this should be a no-brainer.

      At $40,000 to perform the task of 9,000 PCs, you'd need the PCs to be $4.44 each in order to match the price to performance ratio of this board.

      I imagine the kinds of computing clusters being used for neural simulation research cost well in excess of that $40,000 price tag, so why wait for mass production to bring the price down further? There's value today -- unless, of course, something is horribly off in the reporting.
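The parent's break-even arithmetic checks out; as a quick sanity check (assuming, as the parent does, that the 9,000x figure applies to this workload at all):

```python
# Back-of-envelope check of the price/performance break-even point.
board_cost = 40_000.0   # reported Neurogrid board cost, USD
speedup = 9_000         # claimed speedup over one PC on this workload

# A single PC would have to cost less than this to match the board
# on price/performance for neural simulation.
breakeven_pc_price = board_cost / speedup   # about $4.44
```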

      • It would depend what you want to do. A person doing neuroscience would usually want to make their own neuron model, which is bound to differ from what is hard-coded in this device. A neuron model can be anything from a simple sigmoid function (but you can handle tens of millions) to a detailed electrochemical simulation.
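To make the parent's range concrete, here is a hedged sketch of the two extremes it describes (plain Python; all parameter values are illustrative, not from any particular study): a bare sigmoid rate unit, cheap enough to run tens of millions at a time, versus a leaky integrate-and-fire unit that at least tracks a membrane voltage over time:

```python
# Two ends of the neuron-model spectrum the parent comment describes.
import math

def sigmoid_neuron(inputs, weights, bias=0.0):
    """Rate-model 'neuron': one multiply-accumulate plus a squashing function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def lif_step(v, i_in, dt=0.1, tau=10.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, r=1.0):
    """Leaky integrate-and-fire: leak toward rest, spike and reset at threshold."""
    v += dt / tau * (v_rest - v + r * i_in)
    if v >= v_thresh:
        return v_reset, True     # emitted a spike
    return v, False

# Sigmoid unit: a single instantaneous evaluation.
rate = sigmoid_neuron([0.5, -0.2, 0.9], [1.0, 2.0, 0.5])

# LIF unit: 100 ms of simulated time under a constant input current,
# counting the spikes it emits along the way.
v, spikes = -65.0, 0
for _ in range(1000):
    v, fired = lif_step(v, i_in=20.0)
    spikes += fired
```

A detailed electrochemical model (Hodgkin-Huxley and beyond) sits far past either of these in per-neuron cost, which is exactly why hard-coding one model into silicon both helps and constrains.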
  • It's very, very different; neuromorphic chips have been around for ages. They use the same phenomenon the brain does (ion flow across a neuron's membrane) via a different method (electron flow across a silicon membrane).

    The big difference is that they make use of analogue computation using the physical properties of electricity to model whatever you’re trying to model, whereas digital computers model things by representing quantities as symbolic values.

    So digital computers let you model something

  • That looks nice, but what problem does it solve?
  • In 1989 I was doing billions of connections per second on DataCube finite impulse response filter hardware to do the weighted sums, and hardware look up table for the sigmoid mapping for trainable multisource image segmentation for around $40,000 in off-the-shelf VME bus hardware, but that was in 1989 dollars, so I guess there has been some advancement.

  • Comment removed based on user account deletion
  • Comment removed based on user account deletion
  • ASIC's are faster than software!
    Stop the press everybody!

  • At $400 a pop, I'd be willing to shell out the cash to have access to this kind of chip/board. There's at least one direct application I'd like to try: source code analysis. The current tools are quite powerful, mind you, but I suspect the pattern-recognition capabilities of such chips would be a lot better at pinpointing ill side effects, inefficiencies, memory leaks and such.

    Now, just imagine a biowolf cluster of those...

  • Stanford Bioengineers Develop 'Neurocore' Chips 9,000 Times Faster Than a PC

    First, as everyone has already pointed out, they won't be just plain "faster than a PC." They are custom chips designed to do a specific job, so it's not that surprising - if it's true.

    Because the article's lead-in reads:

    Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC.

    which is a little unclear as to whether the latter half refers to the new chips or to the human brain.

    And then the article says:

    The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.

    which again does not refer to the new chips.

  • I wonder if this kind of massively parallel computing architecture can lead to more efficient ray-tracing algorithms.
