
Stanford Bioengineers Develop 'Neurocore' Chips 9,000 Times Faster Than a PC 209

Posted by Soulskill
from the i'll-order-a-dozen dept.
kelk1 sends this article from the Stanford News Service: "Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC (abstract). Kwabena Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed 'Neurocore' chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. ... But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. (...) Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies. By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore's cost 100-fold – suggesting a million-neuron board for $400 a copy."
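The figures in the summary are easy to sanity-check. A back-of-the-envelope sketch, using only the numbers quoted above:

```python
# Numbers from the summary: 16 Neurocore chips, 65,536 neurons each,
# a $40,000 board today, and a projected 100-fold cost reduction.
chips = 16
neurons_per_chip = 65_536
board_cost_now = 40_000

total_neurons = chips * neurons_per_chip
print(total_neurons)                    # 1048576 -- the "1 million neurons"

cost_per_neuron_now = board_cost_now / total_neurons
print(round(cost_per_neuron_now, 4))    # 0.0381 -- about 4 cents per neuron

board_cost_projected = board_cost_now / 100
print(board_cost_projected)             # 400.0 -- the "$400 a copy" figure
```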
  • by geekoid (135745) <dadinportland&yahoo,com> on Tuesday April 29, 2014 @07:53PM (#46873843) Homepage Journal

    we're not nearly as frustrating as people, meat sack

  • The interesting bits (Score:5, Informative)

    by Anonymous Coward on Tuesday April 29, 2014 @08:47PM (#46874213)

    It isn't a typical ASIC: the chip is a custom, fully asynchronous mixed digital/analog design; the board connects 16 chips through a tree router that guarantees deadlock-free communication between them; and it can simulate 1 million neurons powered by a single USB port.

    The neurons are implemented with analog circuits to match the dynamics of real neurons, moving beyond a simple Hodgkin-Huxley model to include components like ion channels, a first of its kind in an analog chip. It has a neat hexahedral resistor network that distributes the spike impulse across a neighborhood of neurons, a phenomenon seen in many cortical brain areas; essentially an analog phenomenon implemented efficiently in analog hardware.
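For readers without a neuroscience background, here is a minimal software sketch of the kind of membrane dynamics such analog circuits emulate. This is a plain leaky integrate-and-fire model, far simpler than the ion-channel dynamics described above, with made-up parameters; it is not the Neurocore's actual model:

```python
# Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
# is pushed up by input current, and emits a spike (then resets) at threshold.
# All parameters are illustrative, not taken from the Neurocore design.
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Return spike times (ms) over 100 ms for a constant input current."""
    v = v_rest
    spikes = []
    t = 0.0
    for _ in range(int(100 / dt)):
        # Euler step: leak toward rest plus drive from the input current
        v += (-(v - v_rest) + r_m * input_current) * (dt / tau)
        if v >= v_thresh:          # threshold crossed: record spike, reset
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

print(len(simulate_lif(2.0)))      # number of spikes under a 2 nA drive
```

With no input current the voltage never leaves rest, so nothing spikes; that silence-costs-nothing behavior is what the analog hardware gets nearly for free.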

    Analog gives it fun biological-like properties, including a temperature sensitivity that must be regulated with additional circuitry. The asynchronous design means that, aside from leakage (which is low with such a large fabrication process), very little energy is used at the neuron level when no stimulus is present. This is in contrast to a traditional CPU, where a clock forces large parts of the chip to consume energy every cycle.
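The energy argument can be caricatured in a few lines: a clocked design pays a fixed cost every cycle whether or not anything happens, while an event-driven design pays mostly per spike plus a small leakage term. All numbers below are invented for illustration:

```python
# Toy energy model comparing clocked vs. event-driven (asynchronous) designs.
# Energy units and rates are made up; only the scaling behavior matters.
def clocked_energy(cycles, e_per_cycle=1.0):
    # A clocked chip burns energy every cycle, active or idle.
    return cycles * e_per_cycle

def event_driven_energy(events, cycles, e_per_event=5.0,
                        leakage_per_cycle=0.01):
    # An asynchronous chip pays per event, plus a small constant leakage.
    return events * e_per_event + leakage_per_cycle * cycles

cycles = 1_000_000
spikes = 1_000   # sparse activity: most neurons are silent most of the time
print(clocked_energy(cycles))                # 1000000.0 units
print(event_driven_energy(spikes, cycles))   # 15000.0 units
```

The gap grows as activity gets sparser, which is exactly the regime real neural tissue operates in.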

    Outside of wireless/signaling stuff, this is probably the biggest mixed analog/digital asynchronous chip in existence.

    But otherwise yes, the editors sucked on this one.

  • Neurons have incredibly complex behaviors; they are not simply threshold triggers, as the simple CS model implies.

    You're plainly ignorant. I don't have any threshold triggers in any of my neural networks. Cells have complex protein behaviors; so what? The cybernetic models can be Turing complete. That means that if I really wanted to waste CPU power instead of understanding the fundamental principles of cognition, I could build a neural network that emulated the molecular action of cellular proteins; and if our rate of computing advancement holds, such a machine intelligence would eventually be able to emulate the molecules that make up human neuron proteins, and in time an entire human head right down to the molecular level. Artificial neural networks can yield every bit as much complexity as anything else in nature. Did you forget that electrons are made of quantum particles or something? For now we're shooting for determinism, and thus applying quantifications in most cases, but in the future we'll harness things like eddy currents, once our neural-net modeling methodologies have nailed down and abstracted more of the key components from which complex behaviors efficiently emerge.
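For what it's worth, the standard artificial neuron being described here computes a smooth, graded output rather than a binary threshold trigger. A minimal sketch, with arbitrary example weights and inputs:

```python
import math

# A standard artificial neuron: weighted sum of inputs passed through a
# smooth sigmoid activation, not a hard on/off threshold.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

out = neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.5], bias=0.1)
print(out)   # a graded value in (0, 1), not a binary 0/1 decision
```

Because the activation is differentiable, small input changes produce small output changes, which is what makes gradient-based training of these networks possible at all.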

    Neural networks in CS have little to do with the actual wiring and primarily chemical systems that make up real neurons.

    Nor do artificial neurons need to have anything to do with organic ones, except for the very basic fundamental properties that produce the complexity of response, and thus intelligence. I suppose next you'll be telling me that without putting a human brain in the boxen, we won't be able to make personal computers do mathematics.

    You are what I call an organic chauvinist. What's so damn special about the precise chemical functionality of organic brain operations? If the organic chemputers were such a grand and complex design, in need of exact duplication to achieve any degree of similar intelligence, then why are dumb computing machines able to revolutionize computation at all? How are digital cameras doing facial recognition with far less computing power than human brains require? It's true that organic neurons have more internal state, and that some details of the process by which neurons operate are still undiscovered; however, we don't need to reproduce the exact nuanced behavior of human neurons, or even the same neuron count or connectivity types, in order to produce intelligent behaviors. There are general principles at work that any complex system must exhibit in order to achieve a given behavior, and those are worth emulating in an optimized fashion. Nature has converged upon solutions randomly, by trial and error, going with the first working attempt entropy gives her, whether it is optimal or not. Replicating every detail of that accidental functionality is no more essential than four legs are essential for walking.

    It's already been proven that complexity yields intelligence: the more neurons, the smarter the entity. In fact, we have been determining the minimal degree of complexity required to solve various problems, and nearly universally we can solve the same problems with far less complexity than the equivalent solution in nature, since organisms weren't intelligently designed. There is no binary dichotomy: an interaction does not cross some threshold and magically become intelligent. Instead, there is an intelligence gradient: all systems exhibit some degree of "intelligence", i.e. processing power, and the amount scales with complexity. Even a run of dominoes has some small degree of intelligence. Human brains have a lot of neurons doing work that isn't even required to produce sentience (thermal regulation, breath control, motor skills, etc.). In fact, you can take whatever estimate your cognitive neuroscience prof claims the human brain has as a yardstick for the complexity requirement of sentience and [youtube.com]
