Supercomputing IBM

A Skeptical Reaction To IBM's Cat Brain Simulation Claims

kreyszig writes "The recent story of a cat brain simulation from IBM had me wondering if this was really possible as described. Now a senior researcher in the same field has publicly denounced IBM's claims." More optimistically, dontmakemethink points out an "astounding article about new 'Neurogrid' computer chips which offer brain-like computing with extremely low power consumption. In a simulation of 55 million neurons on a traditional supercomputer, 320,000 watts of power was required, while a 1-million neuron Neurogrid chip array is expected to consume less than one watt."
  • nonlinear (Score:5, Insightful)

    by Garble Snarky ( 715674 ) on Tuesday November 24, 2009 @10:56AM (#30213696)
    Wouldn't power consumption grow more than linearly with neuron count? I would think the number of connections is the dominant factor, so the comparison of two data points of power consumption vs. neuron count is meaningless (see the sketch below).
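    A back-of-the-envelope sketch (illustrative only; the fixed fan-out figure is just an assumed average of ~10,000 synapses per neuron, i.e. the ~10 trillion synapses over ~1 billion neurons quoted for a cat) of how connection count, and hence plausibly simulation cost, grows under two different connectivity assumptions:

        # Illustrative sketch: connection count under two connectivity assumptions.
        # The fan-out value is an assumed average, not a figure from the article.

        def synapses_all_to_all(n):
            """Every neuron connected to every other neuron: grows ~ n^2."""
            return n * (n - 1)

        def synapses_fixed_fanout(n, fanout=10_000):
            """Each neuron keeps a fixed average fan-out: grows ~ n."""
            return n * fanout

        for n in (1_000_000, 55_000_000, 1_000_000_000):
            print(f"{n:>13,} neurons: all-to-all {synapses_all_to_all(n):.2e} synapses, "
                  f"fixed fan-out {synapses_fixed_fanout(n):.2e} synapses")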
  • by marqs ( 774373 ) on Tuesday November 24, 2009 @10:59AM (#30213746)
    "If a lion could talk, we could not understand him."
    Ludwig Wittgenstein - Tractatus Logico-Philosophicus
  • by Anonymous Coward on Tuesday November 24, 2009 @10:59AM (#30213748)

    From the original FA: "The simulation, which runs 100 times slower than an actual cat's brain, is more about watching how thoughts are formed in the brain and how the roughly 1 billion neurons and 10 trillion synapses in a cat's brain work together."

    So the most bad-ass computer simulation, assuming it worked, which this guy is saying it probably didn't, was still 100 times slower than a real cat's brain. A real cat's brain also fits inside a tiny furry space the size of a baseball... and it runs on a once-daily small bowl of cat food. We have a long ways to go.

  • Re:nonlinear (Score:3, Insightful)

    by jabuzz ( 182671 ) on Tuesday November 24, 2009 @11:03AM (#30213792) Homepage

    You assume all neurons are connected to all other neurons. My brain does not work like that, so it makes no sense to expect a simulated brain to work like that either.

  • by xtracto ( 837672 ) on Tuesday November 24, 2009 @11:13AM (#30213916) Journal

    So according to this guy's rant letter, the "cat-brain simulation" was nothing more than the simulation of an ANN with X neurons, where X equals the average number of neurons in a cat's brain.

    However, it seems the /complexity/ of the simulated neurons is not remotely similar to that of the neurons of a real cat.

    With that view, yes, it seems like less of a breakthrough (see the sketch below for how simple such model neurons typically are). The experiment reminds me of the AI researchers who thought we could get intelligent machines with a brute-force kind of approach: by adding /enough/ knowledge rules, /enough/ processing power, etc...
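    For a sense of how stripped-down such model neurons usually are, here is a minimal leaky integrate-and-fire point neuron. This is a generic textbook model, not IBM's actual implementation, and all parameter values are illustrative choices only:

        # Minimal leaky integrate-and-fire neuron (generic textbook model,
        # not IBM's implementation; parameter values are illustrative only).

        def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
                         v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
            """Return spike times (seconds) for a single point neuron."""
            v = v_rest
            spikes = []
            for step, i_in in enumerate(input_current):
                # Membrane potential leaks toward rest and is driven by input current.
                dv = (-(v - v_rest) + resistance * i_in) / tau
                v += dv * dt
                if v >= v_threshold:      # fire and reset
                    spikes.append(step * dt)
                    v = v_reset
            return spikes

        # One second of constant drive produces a regular spike train.
        print(simulate_lif([2.0] * 1000))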

  • Re:Brain Power (Score:3, Insightful)

    by L4t3r4lu5 ( 1216702 ) on Tuesday November 24, 2009 @11:19AM (#30213998)

    So with the Neurogrid chips, it will require at least a kilowatt to simulate.

    So, a reduction of 319kW, then? That's pretty good.

  • Re:Brain Power (Score:5, Insightful)

    by Yvan256 ( 722131 ) on Tuesday November 24, 2009 @11:32AM (#30214172) Homepage Journal

    In a simulation of 55 million neurons on a traditional supercomputer, 320,000 watts of power was required, while a 1-million neuron Neurogrid chip array is expected to consume less than one watt.

    320 kW / 55 = 5.818 kW per million neurons with a traditional supercomputer.
    One watt per million neurons with a Neurogrid chip array.

    So if a cat's brain has 1 BILLION neurons, that would require 5818.182 kW with a supercomputer and 1 kW with the Neurogrid chip array.

    A reduction of 5817.182 kW (the arithmetic is sketched below).
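    Working the summary's numbers, taking the 320 kW / 55 million and 1 W / 1 million figures at face value; the linear extrapolation to a whole cat brain is the assumption questioned above:

        # Re-derive the back-of-the-envelope figures from the summary.
        # The linear extrapolation to a full cat brain is the commenter's assumption.

        supercomputer_watts = 320_000            # for 55 million simulated neurons
        supercomputer_neurons_millions = 55
        neurogrid_watts_per_million = 1.0        # claimed: < 1 W per million neurons
        cat_brain_neurons_millions = 1_000       # ~1 billion neurons

        w_per_million_super = supercomputer_watts / supercomputer_neurons_millions
        print(f"Supercomputer: {w_per_million_super / 1000:.3f} kW per million neurons")

        super_kw = w_per_million_super * cat_brain_neurons_millions / 1000
        neurogrid_kw = neurogrid_watts_per_million * cat_brain_neurons_millions / 1000
        print(f"Whole cat brain: {super_kw:.3f} kW vs {neurogrid_kw:.3f} kW "
              f"(a reduction of {super_kw - neurogrid_kw:.3f} kW)")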

  • by fbjon ( 692006 ) on Tuesday November 24, 2009 @11:33AM (#30214186) Homepage Journal
    No surprise there. Raytracing a photorealistic scene takes far longer than just bouncing some photons around. Running Windows in a VM makes it really slow compared to running on hardware. This "brain" isn't all that different.
  • by Zackbass ( 457384 ) on Tuesday November 24, 2009 @11:36AM (#30214232)

    Considering how little we know about the emergence of intelligence from networks how is it possible to claim outright that an ANN can't approach the capabilities of a human brain? Real neurons are vastly more complex and aren't accurately modeled with such simple systems, but we don't have any idea what those complexities have to do with intelligence, so it seems to be quite the leap of faith to make claims on the topic.

  • by Xest ( 935314 ) on Tuesday November 24, 2009 @11:51AM (#30214492)

    It basically just seems to be a case of the same old AI arguments we've heard ever since Turing's day.

    The problem is, we don't actually know what the limits of ANNs are; there is no proof that, given ever greater amounts of computing power, they can't allow for the emergence of (at least seemingly) truly intelligent responses to events.

    So on one hand we have the IBM guys overstating what they've achieved, and on the other we have a guy spouting out a view of the limits of ANNs without actually putting any effort into providing evidence for their limitations.

    I don't know why, but the AI field has always been horrifically polarised; the kind of arguments you get in that field are just so immature it's beyond belief. You have people in the AI field following their viewpoint religiously, completely unwilling to consider the other viewpoint. To see what I mean, just look up some of the discussions on Searle's Chinese room argument.

    If AI scientists spent as much time on research as they did bitching at each other's experiments and theories, we'd have a walking, talking robo-jesus by now that could build worlds.

  • Re:Skeptical? (Score:3, Insightful)

    by radtea ( 464814 ) on Tuesday November 24, 2009 @12:15PM (#30214814)

    Plus no one has any clue how the brain computes really so making a claim about the formation of thoughts is just nonsense.

    Unfortunately, what a certain class of pseudo-scientist has learned is that monkeys in suits are too stupid to know the difference between real, conservative, careful science and over-hyped handwaving. Since we live in a world where monkeys in suits have managed to get almost total control of the corporate system and used that to leverage their way into political power, people who suck up to the monkeys and make them feel good about themselves and their world by making outrageously false claims get rewarded with cash, while real scientists get left behind.

    Our world increasingly looks like Frederik Pohl's story "The Marching Morons", in which idiots have taken over the world (it's much more clever than the film "Idiocracy" was) and the idiots refer to the few remaining smart people, who keep things running, as "dummies". In retrospect, Pohl's story seems less about genetics (intelligence being at best very weakly heritable, as everyone with a brain knows) and more about the social factors that put money and power into the hands of exactly the kind of human who seeks money and power (rather than knowledge and serenity).

  • by Critical Facilities ( 850111 ) * on Tuesday November 24, 2009 @12:17PM (#30214860)
    Insightful??

    Hmmmph! My cat Phydeaux must have mod points again.
  • by Rod Frey ( 1685360 ) on Tuesday November 24, 2009 @12:39PM (#30215232)

    Isn't there value in moving to a higher level of abstraction than a single neuron though? Or simplifying the basic elements for the sake of a tractable broader model?

    Simulating a single atom, for example, is reasonably complex: it would be impossible with current computational resources to simulate the electromagnetic properties of a metal if we required accurate simulations of individual atoms. Yet despite ignoring much of what we know about individual atoms, the higher-level models are very predictive.

    Not that we have such predictive, higher-level models for the brain. That's what some researchers are searching for: I'm just suggesting that those models hopefully won't require accurate simulation of individual neurons. That seems to be the pattern in other domains.

  • by R2.0 ( 532027 ) on Tuesday November 24, 2009 @10:55PM (#30222226)

    "I suspect consciousness will be a byproduct in such a system (as it is in us)..."

    You are presupposing that human consciousness is an emergent trait, one that manifests itself once a certain level of processing capacity is reached. But that isn't really a position, more of a default - since we really don't know what it is, we don't know how to create it or model it, so we assume it sort of "shows up on its own." But the problem with that model is that our conception of "intelligence" is inextricably linked with human consciousness.

    You said

    "I think we want a system that we can ask to do a complex task in natural language, and which will perform the task, only asking for further instruction when what we've told it is sufficiently ambiguous.

    Now an obvious question is "how do humans do it?" That answer invariably involves consciousness - a human's awareness of the situation, data, and decision process. It's an internal awareness, not an external input.

    My opinion is that one cannot achieve "intelligence" without consciousness, at least as we understand intelligence. Human intelligence is the only one we have as a model. True, we observe and theorize about animal intelligence, but we know so much more about our own. Modeling AI on a cat brain, while an interesting exercise, could only lead to an artificial cat intelligence. But since we don't understand what cats "think" as it is, how do we know we hit the mark?

    If we define intelligence without considering consciousness, we may well achieve AI. But it won't be an intelligence WE understand, and it won't give us insight into our own.
