
A Skeptical Reaction To IBM's Cat Brain Simulation Claims

kreyszig writes "The recent story of a cat brain simulation from IBM had me wondering if this was really possible as described. Now a senior researcher in the same field has publicly denounced IBM's claims." More optimistically, dontmakemethink points out an "astounding article about new 'Neurogrid' computer chips which offer brain-like computing with extremely low power consumption. In a simulation of 55 million neurons on a traditional supercomputer, 320,000 watts of power was required, while a 1-million neuron Neurogrid chip array is expected to consume less than one watt."

  • by cyberspittle ( 519754 ) on Tuesday November 24, 2009 @10:52AM (#30213660) Homepage
    Think about it. Think about it like a cat.
  • nonlinear (Score:5, Insightful)

    by Garble Snarky ( 715674 ) on Tuesday November 24, 2009 @10:56AM (#30213696)
    Wouldn't power consumption grow more than linearly with neuron count? I would think the number of connections is the dominant factor - so the comparison of two data points of power consumption vs neuron count is meaningless.
    • Re: (Score:3, Insightful)

      by jabuzz ( 182671 )

      You assume all neurons are connected to all other neurons. My brain does not work like that, so it does not make sense to expect a simulated brain to work like that.

      • Re: (Score:3, Interesting)

        by gnick ( 1211984 )

        You assume all neurons are connected to all other neurons. My brain does not work like that...

        Are you sure? I know that all of the neurons in your brain are not directly connected, but that doesn't imply that there's no path between them. So, while the power consumption involved with neuron interaction may not increase quite as much per added neuron as if you had direct connections between each of them, it still seems that it would be more complicated than a direct linear correlation.

        • It's nonlinear for small numbers of neurons since you need to count connections to second and third nearest neighbors as well as first nearest neighbors. But once you get past the length scale of the longest connection, it scales linearly from there.

          It's like the road system. A city with a bunch of intersections will have more road segments between the intersections than there are intersections themselves. But a second city won't build roads from each of its intersections back to each intersection in the first city.

      • Re: (Score:3, Funny)

        by MrNaz ( 730548 ) *

        It makes sense if you assume that *his* brain works like that.

    • Considering that I can't even find the quote from the second article linked, I'll remain skeptical of the whole thing. The article on that "low power" version doesn't say anything about low power; in fact, it talks about wattage woes and concerns due to the requirements of making a "neural" processor equivalent.

      Also of note is that they're doing the same idea as Intel, just at a horrendously lower capability. Basically a lack of information and a whole lot of hype.

      • From the article on the "low power" neurogrid chip (page 3):

        Just a few miles down the road, at the IBM Almaden Research Center in San Jose, a computer scientist named Dharmendra Modha recently used 16 digital Blue Gene supercomputer racks to mathematically simulate 55 million neurons connected by 442 billion synapses. The insights gained from that impressive feat will help in the design of future neural chips. But Modha’s computers consumed 320,000 watts of electricity, enough to power 260 American homes.

    • by pz ( 113803 )

      Wouldn't power consumption grow more than linearly with neuron count? I would think the number of connections is the dominant factor - so the comparison of two data points of power consumption vs neuron count is meaningless.

      Neurons are not typically fully connected in K-star-like networks; they are more usually connected to a number of other neurons that varies by type, from a small handful to 10,000. The latter number (10,000) is the one typically used when researchers want to estimate the total number of connections in the cortex, especially when talking about simulations or writing grant proposals, where bigger numbers are more impressive.

      So, power consumption should grow linearly with neuron count, if the simulation i

  • by drainbramage ( 588291 ) on Tuesday November 24, 2009 @10:56AM (#30213700) Homepage

    All those neurons using less than 1 watt?
    I know some people like that.

    • VINDICATION! [penny-arcade.com]
    • by Nerdfest ( 867930 ) on Tuesday November 24, 2009 @11:27AM (#30214100)
      I'm being environmentally friendly, you insensitive clod!
    • by dontmakemethink ( 1186169 ) on Tuesday November 24, 2009 @04:36PM (#30218230)

      Actually if you read TFA, the long-pondered question of why humans only use 1-15% of their brain is largely a matter of power consumption, and the reason for the abundance of dormant neurons is for greater potential diversity of thought.

      "While accounting for just 2 percent of our body weight, the human brain devours 20 percent of the calories that we eat."

      "The brain achieves optimal energy efficiency by firing no more than 1 to 15 percent—and often just 1 percent—of its neurons at a time."

      That seems to indicate that a human brain would burn more calories than the rest of the body if it were "always on".

      Being a hypoglycemia sufferer, I can attest to the severe limitations of brain activity when deprived of sugar. Before being diagnosed I underwent tunnel vision and black-outs, not to mention the typical mood swings, shakiness, cold sensations, etc.

      Never has my nickname been more appropriate...

  • Brain Power (Score:3, Informative)

    by Trevin ( 570491 ) on Tuesday November 24, 2009 @10:57AM (#30213710) Homepage

    The cat's brain is made up of 1 BILLION neurons and 10 trillion synapses. So with the Neurogrid chips, it will require at least a kilowatt to simulate.

    • Re: (Score:3, Insightful)

      by L4t3r4lu5 ( 1216702 )

      So with the Neurogrid chips, it will require at least a kilowatt to simulate.

      So, a reduction of 319kW, then? That's pretty good.

      • Re:Brain Power (Score:5, Insightful)

        by Yvan256 ( 722131 ) on Tuesday November 24, 2009 @11:32AM (#30214172) Homepage Journal

        In a simulation of 55 million neurons on a traditional supercomputer, 320,000 watts of power was required, while a 1-million neuron Neurogrid chip array is expected to consume less than one watt.

        320 kW / 55 = 5.818 kW per million neurons with a traditional supercomputer.
        One watt per million neurons with a Neurogrid chip array.

        So if a cat's brain is 1 BILLION neurons, that would require 5818.182kW with a supercomputer and 1kW with the Neurogrid chip array.

        A reduction of 5817.182kW.
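
        For anyone who wants to check the arithmetic, here's the same back-of-the-envelope calculation in Python (assuming, as above, that power scales linearly with neuron count, which other posters rightly question):

        # Linear-scaling estimate from the two data points in the summary.
        supercomputer_watts = 320_000        # 55 million simulated neurons
        supercomputer_neurons = 55e6
        neurogrid_watts_per_million = 1.0    # claimed: < 1 W per million neurons
        cat_neurons = 1e9

        super_w_per_million = supercomputer_watts / (supercomputer_neurons / 1e6)
        cat_super_kw = super_w_per_million * (cat_neurons / 1e6) / 1000
        cat_neurogrid_kw = neurogrid_watts_per_million * (cat_neurons / 1e6) / 1000

        print(f"{super_w_per_million:.3f} W per million neurons (supercomputer)")
        print(f"Cat-scale: {cat_super_kw:.3f} kW vs {cat_neurogrid_kw:.0f} kW")
        # -> 5818.182 W per million neurons; 5818.182 kW vs 1 kW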

        • by watanabe ( 27967 )

          The fine letter linked above points out the real problems with this calculation: actually simulating NEURONS, rather than so-called "neural networks", is really hard, requires a lot of computing power, and depends on techniques that are still cutting-edge research. There is no chip array that can do all the (currently not completely specified) simulating of a cat brain at 1 kW.

    • Re: (Score:3, Funny)

      by Xest ( 935314 )

      Damn, if only we could find such a great source of power!

    • Re: (Score:2, Interesting)

      by hattig ( 47930 )

      Their chip uses 340 transistors to model a neuron, and has 65536 neurons.

      That means it has ~22M transistors for neurons, although there are certainly more transistors managing non-neuron aspects.

      Judging by the die size, it looks like it was made on a 130-250 nm process.

      Shrink that to 45nm once the technology is proven, and you'll have 8 to 32 times as many neurons in a single chip. That's 512Ki to 2Mi neurons per chip.

      A chip makes up a neural cluster, and you use multiple chips to simulate multiple neural clusters,
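
      Rough arithmetic behind those estimates (a sketch only; the 130-250 nm starting node is a guess from the die size, and this assumes neuron count scales with transistor density, i.e. with the square of the feature-size ratio):

      # Neurons per chip after a hypothetical die shrink to 45 nm, assuming
      # neuron count scales with transistor density (feature-size ratio squared).
      transistors_per_neuron = 340
      neurons_per_chip = 65_536
      print(f"~{transistors_per_neuron * neurons_per_chip / 1e6:.1f}M transistors for neurons")

      for old_node_nm in (130, 250):
          density_gain = (old_node_nm / 45) ** 2
          print(f"{old_node_nm} nm -> 45 nm: ~{density_gain:.0f}x, "
                f"~{neurons_per_chip * density_gain / 1024:.0f} Ki neurons/chip")
      # -> ~22.3M transistors; ~8x / ~534 Ki and ~31x / ~1975 Ki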

      • by afidel ( 530433 )
        Yeah but it's going to require one hell of a crossbar configuration to connect those chips all to each other at decent speeds. Guess someone better pull the SGI and Cray patents out of mothballs.
    • Might want to start with simulating a dog brain to save power. That's what, maybe 5 neurons, 1000 synapses, and half a dog biscuit?

    • by tmosley ( 996283 )
      Or just one can of cat food. Better get the good stuff, though, she's a bit finicky.
  • by jabuzz ( 182671 ) on Tuesday November 24, 2009 @10:58AM (#30213734) Homepage

    If you have custom silicon to do each neuron then you are going to be hugely more power efficient than a general-purpose processor simulating a neuron in software. There is nothing new there, and anyone who thinks otherwise is just clueless. Given that IBM has the facilities and resources to fabricate some custom silicon, I fail to see the issue.

    • Except that individual neurons have tens of thousands of possible connections to other neurons, and continually morph and change those connections. That's impossible to do on a rigid piece of hardware.
      • Re:Except (Score:5, Informative)

        by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Tuesday November 24, 2009 @11:16AM (#30213954) Journal

        Except that individual neurons have tens of thousands of possible connections to other neurons, and continually morph and change those connections. That's impossible to do on a rigid piece of hardware.

        I'm no expert and I've just been reading the second link's project site on Stanford's page [stanford.edu], but your point that the connections continually morph and change seems to be addressed by this strategy:

        Neurogrid simulates six billion synaptic connections by using local analog communication, another choice motivated by cortical studies. Cortical axons synapse profusely in a local area, course along for a while, then do it again. Thus, nearby neurons receive inputs from largely the same axons, as expected from the map-like organization of cortical areas. Local wires running between neighboring silicon neurons emulate these patches, invoking postsynaptic potentials within a programmable radius. Using a patch radius of 6 lets us increase the number of synaptic connections a hundredfold—from 600 million to six billion—without increasing digital communication.

        If they physically provide most (if not all) of the possible connections that the morphing/changing synaptic channels can take, then they can use a sort of addressing technique and RAM strategy to continually morph and change:

        Instead of hardwiring the silicon neurons together, as Mead did in his silicon retina, we softwired them by assigning unique addresses. Every time a spike occurs, the chip outputs that neuron’s address. This address points to a memory location (RAM) that holds the synaptic target’s address, or to multiple memory locations if the neuron has multiple synaptic targets. When this address is fed back into the chip, a post-synaptic potential is triggered at the target. An extremely efficient technique, as the same post-synaptic circuit serves all the synapses that neuron receives—virtual synapses! Encoding, translating, and decoding an address happens fast enough to route several million spikes per second, allowing a million connections to be made among a thousand silicon neurons. These softwires may be rerouted simply by overwriting the RAM’s look-up table, making it possible to specify any desired synaptic connectivity.

        Although their page is really hard for a lay person like myself to get through, it's very informative [stanford.edu]. Having read it, I'm considerably more optimistic about the future of biological tissues and nervous systems being translated to hardware. At least people are starting back at the simple components and adding new twists.
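
        To make the "softwired" address-event idea concrete, here's a toy sketch (my own simplification in Python, not Neurogrid's actual implementation): each spike emits the source neuron's address, a rewritable look-up table maps it to target addresses, and rewiring is just overwriting the table.

        from collections import defaultdict

        # source neuron address -> list of synaptic target addresses (the "RAM look-up table")
        routing_table = defaultdict(list)

        def connect(src, dst):
            routing_table[src].append(dst)

        def rewire(src, new_targets):
            routing_table[src] = list(new_targets)   # reroute by overwriting the table

        def on_spike(src):
            # the chip would feed each target address back in, triggering a
            # postsynaptic potential at that target
            for dst in routing_table[src]:
                print(f"PSP at neuron {dst} (spike from {src})")

        connect(7, 3); connect(7, 9)
        on_spike(7)            # -> PSPs at 3 and 9
        rewire(7, [12])
        on_spike(7)            # -> PSP at 12 only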

        • invoking postsynaptic potentials within a programmable radius.

          So basically some of the simulation is still software-side, then.

    • On that theme, it's easy to calculate some reasonable bounds based on actual cat metabolism (quick arithmetic below). Small cats, around 7 lbs., will require ~125 kcal/day to maintain body weight. We can use that kcal/day value as a rough bound, which results in a mighty 6 W. For the whole cat. Granted, that includes a lot of nap time, but it also includes all other metabolic functions.

      Obviously, I have no trouble whatsoever believing that it's possible to do better than 320,000 W in simulating a cat brain. Even padding fo
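
      The arithmetic on that bound, for reference (rough figures; 125 kcal/day is the estimate above for a ~7 lb cat):

      # Whole-cat power budget implied by daily food intake.
      kcal_per_day = 125
      joules_per_kcal = 4184
      seconds_per_day = 86_400

      watts = kcal_per_day * joules_per_kcal / seconds_per_day
      print(f"~{watts:.1f} W for the entire cat")   # ~6.1 W, brain included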

  • by Anonymous Coward on Tuesday November 24, 2009 @10:59AM (#30213748)

    From the original FA: "The simulation, which runs 100 times slower than an actual cat's brain, is more about watching how thoughts are formed in the brain and how the roughly 1 billion neurons and 10 trillion synapses in a cat's brain work together."

    So the most bad-ass computer simulation, assuming it worked, which this guy is saying it probably didn't, was still 100 times slower than a real cat's brain. A real cat's brain also fits inside a tiny furry space the size of a baseball... and it runs on a once-daily small bowl of cat food. We have a long ways to go.

    • Re: (Score:2, Informative)

      by slashchuck ( 617840 )

      ... A real cat's brain also fits inside a tiny furry space the size of a baseball...

      The brain size of the average cat is 5 centimeters in length and 30 grams. [wikipedia.org]

      • The brain size of the average cat is 5 centimeters in length and 30 grams. ... which is small enough to fit inside a baseball.

        So, uh... thanks for correcting the already-correct post? I guess?

    • by Anonymous Coward on Tuesday November 24, 2009 @11:10AM (#30213878)

      More than this, their simulated neurons aren't anywhere close to the real thing. A real neuron, an individual cell, has tremendous computing power due to the distribution of a bunch of different ion channel types (active conductances) in a highly complex dendritic tree. Simulating a few seconds of just ONE neuron accurately can take several minutes to several hours of supercomputer time. I know this because I do it for a living.

      • Re: (Score:3, Insightful)

        by fbjon ( 692006 )
        No surprise there. Raytracing a photorealistic scene takes far longer than just bouncing some photons around. Running Windows in a VM makes it really slow compared to running on hardware. This "brain" isn't all that different.
      • by Neil Hodges ( 960909 ) on Tuesday November 24, 2009 @11:34AM (#30214208)

        ...I know this because I do it for a living.

        Don't each of our brains do this for a living, too?

      • by ErikZ ( 55491 ) *

        It sounds very interesting. Do you know of a good reference for those of us who don't have a Master's in Biology or Comp-Sci?

      • by Rod Frey ( 1685360 ) on Tuesday November 24, 2009 @12:39PM (#30215232)

        Isn't there value in moving to a higher level of abstraction than a single neuron though? Or simplifying the basic elements for the sake of a tractable broader model?

        Simulating a single atom, for example, is reasonably complex: it would be impossible with current computational resources to simulate the electromagnetic properties of a metal if we required accurate simulations of individual atoms. Yet despite ignoring what we know about the atomic models, the higher-level models are very predictive.

        Not that we have such predictive, higher-level models for the brain. That's what some researchers are searching for: I'm just suggesting that those models hopefully won't require accurate simulation of individual neurons. That seems to be the pattern in other domains.

    • by toppavak ( 943659 ) on Tuesday November 24, 2009 @11:18AM (#30213974)
      He's not arguing that it didn't work; he's arguing that they essentially ran a simulation of a large Artificial Neural Network, a relatively trivial task as long as you have a big enough computer behind it. ANNs are essentially points that connect to each other and learn by assigning weights to those connections; this is about the simplest possible way to simulate the behavior of a neuron. The argument being made is that claiming an ANN, regardless of its size, approaches the capabilities of any mammalian brain is simply wrong, and that a true attempt to create such a simulation would need to factor in the stochasticity of ion channels, the branching of neurons, and various other biological phenomena that have a tremendous impact on how our brains work.

      Without reading more details on the original work, I'm inclined to say that he has a very valid point if they were indeed only running a large ANN model.
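
      For anyone wondering how simple that kind of unit really is, here's a generic point-neuron in a few lines of Python (a textbook sketch, nothing to do with IBM's actual code): a weighted sum and a squashing function, with no ion channels, dendrites, or stochasticity anywhere.

      import math

      def point_neuron(inputs, weights, bias=0.0):
          # weighted sum of inputs followed by a sigmoid "firing rate"
          activation = sum(x * w for x, w in zip(inputs, weights)) + bias
          return 1.0 / (1.0 + math.exp(-activation))

      print(point_neuron([0.5, 1.0, -0.3], [0.8, -0.2, 0.4]))   # one "neuron", fully described
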
      • by Zackbass ( 457384 ) on Tuesday November 24, 2009 @11:36AM (#30214232)

        Considering how little we know about the emergence of intelligence from networks, how is it possible to claim outright that an ANN can't approach the capabilities of a human brain? Real neurons are vastly more complex and aren't accurately modeled with such simple systems, but we don't have any idea what those complexities have to do with intelligence, so it seems to be quite the leap of faith to make claims on the topic.

        • But getting back to the letter from "Henry Markram". My reading of it is that it says a few things: i) that this *isn't a simulation of a cat's brain*, which, regardless of what one believes about intelligence, appears to be correct; ii) that this isn't anything new.
      • by Xest ( 935314 ) on Tuesday November 24, 2009 @11:51AM (#30214492)

        It basically just seems to be a case of the same old AI arguments we've heard ever since Turing's day.

        The problem is, we don't actually know what the limits of ANNs are; there is no proof that they can't, given ever greater amounts of computing power, allow for the emergence of (at least seemingly) truly intelligent responses to events.

        So on one hand we have the IBM guys overstating what they've achieved, and on the other we have a guy spouting out a view of the limits of ANNs without actually putting any effort into providing evidence for their limitations.

        I don't know why, but the AI field has always been horrifically polarised; the kind of arguments you get in that field are just so immature it's beyond belief. You have people in the AI field following their viewpoint religiously, completely unwilling to consider the other side. To see what I mean, just look up some of the discussions of Searle's Chinese room argument.

        If AI scientists spent as much time on research as they did bitching at each others experiments and theories we'd have a walking talking robo-jesus by now that could build worlds.

        • There's no proof that suggests they can, either.
    • by L4t3r4lu5 ( 1216702 ) on Tuesday November 24, 2009 @11:28AM (#30214106)

      The simulation, which runs 100 times slower than an actual cat's brain, is more about watching how thoughts are formed in the brain...

      What? I can already tell them that!

      IF $stomach_contents = 0 THEN ConsumeFood;
      IF $claw_count > 0 THEN ScratchShitOutOfFurniture;
      IF $Sphincter_Tension > 0 THEN PoopAnywhereYouWant;
      IF $TimeSinceSleep < 1800 THEN $TimeSinceSleep = $TimeSinceSleep + 1 ELSE YawnFishBreathInOwnersFaceAndFallAsleepOnComputerChair;

    • This could still be an accurate representation. Cats work in batch mode. They sleep 23 hours a day, during which they think about how they'll spend the hour they are awake. So if they're solely comparing the simulation's processing speed to how cats function in awake mode, it may actually be around four times slower in aggregate. Not too shabby.
      • So you're saying that a cat queues up its activities for the hour of wakefulness by planning during its hours of sleep? Kind of like the Mars rovers get commands issued from Earth, move a little, and then wait around for another batch? And therefore cats can't respond to any new stimuli during their wakeful hour until they have another sleep cycle to process the new information? Fascinating.

  • I bet they just based their simulation on Simon's Cat [youtube.com] which, to be honest, is a pretty accurate representation.

  • I don't really see how they would have verified that they were able to simulate a cat's brain. AFAIK, we don't have single-neuron-level imaging, and the resolution of fMRI and EEG puts those right out. Looking at macro-level behavior would be pretty absurd; I, too, can write a program that will decide to play with yarn. Unless there's something I'm missing, IBM seems to have made a claim it can't support.
    • by Xest ( 935314 )

      Why do we need single neuron level imaging? The activity of a single neuron really tells us very little. The emergent patterns in the form of brain activity of multiple neurons are what matters. The question is whether we are getting the right responses in this respect from the right set of neurons in reaction to the corresponding trigger.

      • The question is whether we are getting the right responses in this respect from the right set of neurons in reaction to the corresponding trigger.

        As I see it, there are several problems here.

        The first is that we don't really understand neurology all that well- higher level thought is, for the most part, a mystery to us, so identifying the "right set" isn't really possible for us at this point.

        The second is that even if we were able to select the "right set", I don't think we have the imaging technology necessary to distinguish between correct and incorrect states without inducing a margin of error that would qualify our hypothesis out of existence.

    • AFAIK, we don't have single-neuron-level imaging, and the resolution of fMRI and EEG puts those right out.

      Just so that you know... We can get higher resolution on a brain's neurons by invasive means, such as cutting the brain apart and looking at live cells slice by slice under a powerful microscope.

      It is rather tedious and gruesome but it is a viable way to look at the neurons directly.

      It's even been done to humans after they have passed away, but with animals you can sort of get away with doing it while the subject is still alive.

      • You seem to have some knowledge here, so if you don't mind (and will forgive the pun) I'd like to pick your brain about this.

        Let's say we have a tabby, an ocelot, and a simulation that we are told models one of the two. Given that we're able to perform any kind of scan or procedure on the two animals, could we determine which species the simulation was modeling using only that data?
  • by xtracto ( 837672 ) on Tuesday November 24, 2009 @11:13AM (#30213916) Journal

    So according to this guy's rant letter, the "cat-brain simulation" was nothing more than a simulation of an ANN with X neurons, where X equals the average number of neurons in a cat's brain.

    However, it seems the /complexity/ of the simulated neurons is not remotely similar to that of the neurons of a real cat.

    With that view, yes, it seems like less of a breakthrough. The experiment reminds me of the AI researchers who thought we could get intelligent machines using a brute-force kind of approach: by adding /enough/ knowledge rules, /enough/ processing power, etc...

  • Skeptical? (Score:5, Interesting)

    by golden age villain ( 1607173 ) on Tuesday November 24, 2009 @11:15AM (#30213936)
    This IBM announcement was just ridiculous. To cite only one argument, the brain does not consist only of neurons. It contains at least as many other cells which are also involved in signal processing. Modha would be laughed at in any neuroscience conference, and he certainly doesn't help the cause of theoreticians in the neuroscience field by making such stupid announcements. Eugene Izhikevich, who designed the neuron model being used for these simulations, had a PNAS paper not too long ago with a simulation at the scale of the entire human brain, and even he did not claim that he had successfully modeled the human brain. Plus, no one has any clue how the brain really computes, so making a claim about the formation of thoughts is just nonsense.
    • Re: (Score:3, Insightful)

      by radtea ( 464814 )

      Plus no one has any clue how the brain computes really so making a claim about the formation of thoughts is just nonsense.

      Unfortunately, what a certain class of pseudo-scientist has learned is that monkeys in suits are too stupid to know the difference between real, conservative, careful science and over-hyped handwaving. Since we live in a world where monkeys in suits have managed to get almost total control of the corporate system and used that to leverage their way into political power, people who suck

      • by hiryuu ( 125210 )

        Our world increasingly looks like Frederik Pohl's story "The Marching Morons"...

        Not saying this because I know better, but because your mention of the story intrigued me and I hoped to find it or at least find out more about it. It appears it was written by Cyril M. Kornbluth, a contemporary and good friend of Pohl's.

        link [wikipedia.org]

        I think I must find this story, as the premise of "Idiocracy" was interesting but the execution seemed, to me, quite flawed.

    • There's also the matter of certain cell assemblies (name forgotten) that were disregarded early on and have since been brought back to the table as something that's a) important and b) much, much more powerful than we thought (Kurzweil likened them to RAM, which they are ostensibly not). Oh, and c) we have no real clue how they work, because neuroscience isn't even there yet.

  • IBM has a known history of making overblown claims. This is what happens when you let your PR mesh with your technical research. Deep Blue was a giant PR stunt, and they had humans retooling the code in between matches. What a crock. When they get a robot that catches mice, purrs, and jumps on the table to eat my burger when I leave the room for 2 seconds, maybe then I'll believe it.
  • by __aailob1448 ( 541069 ) on Tuesday November 24, 2009 @11:24AM (#30214052) Journal

    I saw that story earlier and dismissed it for the crap that it was. I'd like to thank Henry Markram for vindicating my snap judgment with his flame email.

  • Markram's for real (Score:5, Informative)

    by bellwould ( 11363 ) on Tuesday November 24, 2009 @11:26AM (#30214088) Journal

    My research recently took me to some of Markram's work - the guy is brilliant and REALISTIC. His research goals are simple and attainable and any claims of success he has are *well* within the real world. He's incrementally worked his way up from a few neurons - the way a *real* scientist works; and to him, the simplest "brain simulation" of any sort is definitely possible, but far off in the future.

    • Seriously, you haven't posted in 4-5 years, and you jump out to post now? Let me guess, you work in his lab...
      • The term "sockpuppet" is usually reserved for someone who posts under multiple accounts. I believe you were looking for "mouthpiece" instead.
    • You make a compelling argument. In fact, I won't even bother asking for citations, I'll just ask how I can send money to this scientific demigod. Is cash OK?
  • http://intranet.cs.man.ac.uk/apt/projects/SpiNNaker/ [man.ac.uk]

    It seems that, for quite a lot of folks, toying with topology and interconnects is a promising approach.

  • Emo Philips (Score:4, Interesting)

    by Temujin_12 ( 832986 ) on Tuesday November 24, 2009 @12:01PM (#30214636)

    "I used to think that the brain was the most wonderful organ in my body. Then I realized who was telling me this."

  • It's hard to verify anything cause the machine just sits there and ignores everyone.
  • Until it can piss on my briefcase because it thinks I've been ignoring it we have no way of confirming that it is actually simulating a real cat's brain.
  • I think people are missing the obvious potential here. I mean, if you could engineer a computer to accurately simulate a cat's brain, then you could implant that computer in a sexy gynoid body, and have a robot-girl with the mind of a cat!

  • If pressed, Dr. Boahen himself would contradict the Discover article and say the chips were not "brain like" at all. He's working from the same place Karl Pribram worked from 50 years ago, and Karl still can't say he knows how the brain works. Simulating a process that's assumed to be a part of brain function because it can produce results more effectively and/or efficiently than brute-force digital computing does not make it "brain like". The comparison/contrast done on power consumption doesn't make a cas

  • They did manage to simulate a cat brain... but they failed to mention it was a dead cat.
  • Digital computers are deterministic: Throw the same equation at them a thousand times and they will always spit out the same answer. Throw a question at the brain and it can produce a thousand different answers, canvassed from a chorus of quirky neurons. "The evidence is overwhelming that the brain computes with probability," Sejnowski says. Wishy-washy responses may make life easier in an uncertain world where we do not know which way an errant football will bounce, or whether a growling dog will lunge. Un

  • Matching the neuron count and connection count of a cat brain is clearly not sufficient to simulate the functionality. Neurons in a mammal brain are not randomly connected. A great deal of organization happens during the growth of the brain cells and connections, starting from the embryonic stage. Much of the functionality is "hardwired" as a result of this organized growth process, which has evolved over hundreds of millions of years, and for a higher-level mammal like a cat a lot of the functionality is wired
