Supercomputer Simulates Human Visual System

An anonymous reader writes "What cool things can be done with the 100,000+ cores of the first petaflop supercomputer, the Roadrunner, that were impossible to do before? Because our brain is massively parallel, with a relatively small amount of communication over long distances, and is made of unreliable, imprecise components, it's quite easy to simulate large chunks of it on supercomputers. The Roadrunner has been up only for about a week, and researchers from Los Alamos National Lab are already reporting inaugural simulations of the human visual system, aiming to produce a machine that can see and interpret as well as a human. After examining the results, the researchers 'believe they can study in real time the entire human visual cortex.' How long until we can simulate the entire brain?"
This discussion has been archived. No new comments can be posted.


  • Impressive.
  • If it can help me find my socks...
  • New goal... (Score:5, Insightful)

    by dahitokiri ( 1113461 ) on Friday June 13, 2008 @05:29PM (#23785141)
    Perhaps the goal should be to make the visual system BETTER than ours?
    • Re:New goal... (Score:5, Interesting)

      by spun ( 1352 ) <loverevolutionary&yahoo,com> on Friday June 13, 2008 @05:40PM (#23785307) Journal
      Something like a Mantis Shrimp? [wikipedia.org] Some species can detect circularly polarized light; each stalk-mounted eye, on its own, has trinocular vision; they have up to sixteen different types of photoreceptors (not counting the many separate color filters they also have) to our four; and the information is transmitted from the retina in parallel, not serially down a single optic nerve like ours.

      These are also the little dudes who can strike with the force of a .22 caliber bullet, fast enough to cause cavitation and sonoluminescence.

      Go Super Shrimp!
      • Re:New goal... (Score:5, Insightful)

        by CodeBuster ( 516420 ) on Friday June 13, 2008 @06:41PM (#23786183)
        You do realize that such an ocular system, which undoubtedly works well for the limited needs of the shrimp, may have accompanying disadvantages for complex land-based life forms such as humans. The human vision system, while not optimized for certain specialized uses like the aforementioned shrimp's, is nevertheless a very decent general-purpose system that has served our species well for eons. It is likely that our current system of vision, especially when weighed against the possible trade-offs for increased capabilities (less general intelligence as more of the brain and nervous system is devoted to complex autonomous image processing, for example), is fairly close to optimal given the other constraints of our bodies. Besides, for those situations where a particular aptitude is useful but not always desirable, night vision for example, human intelligence has allowed us to construct external enhancement devices that we can turn on or off at will. Animals that have developed night vision naturally as part of a nocturnal lifestyle cannot turn that feature off and are thus at a disadvantage during the daytime, whereas humans are more generally adaptable. It is fairly clear that innate intelligence is among the very best, if not the best, of the natural abilities that have developed under evolutionary pressure. How else to explain why humans have dominated the earth and essentially escaped the natural system that once controlled them?
        • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Friday June 13, 2008 @06:52PM (#23786343) Journal
          Dude, calm down. I wasn't dissing humanity by mentioning that mantis shrimp have better vision, okay?

          "Hew-mans! Hew-mans! Hew-mans! we're number one! we're number one!"

          Feel better now?
        • Re:New goal... (Score:4, Insightful)

          by thanatos_x ( 1086171 ) on Saturday June 14, 2008 @12:54AM (#23789007)
          There's little doubt that innate intelligence can overcome brute force, but the incredibly crucial part is the ability to build upon previous generations' work and to work collaboratively.

          Termite and ant colonies are examples of this. There was a group of scientists who injected a mound with concrete, and when they excavated it, the inner area measured dozens of cubic meters. Large nests can protrude 9 meters above the surface while the underground portion can extend 25 meters. The nests are climate-controlled, with ventilation, and are somehow protected against rain.

          All this from an insect that few would call intelligent. Scaled against the size of its builders, such a mound dwarfs all but the largest cities man has built. General intelligence is nice, but even if we had 10 times the processing power of our current brains yet had to learn everything from scratch each time, I doubt anyone would ever get past the iron age. There is only so much one can do in a lifetime.

          Also, it seems humans don't have a great deal of general intelligence. A great deal of our brains is dedicated to social interactions and emotions. If we ran with a simpler set of social interactions, I have no doubt the average human would make Einstein look like an idiot regarding physics. Some evidence of this can be found in individuals with certain mental 'defects', like autism, who are able to master a task well beyond what most other humans can hope to, even with intense effort.

          Finally... it really depends on what you mean by control. Vermin and bacteria spring to mind as creatures that exist nearly everywhere, despite our best efforts to eliminate many of them. Yes, we thrive with the most purpose and with the fastest increases (hence the idea of a singularity), but we are not the only species to thrive on this planet.
      • Re: (Score:3, Informative)

        by Illserve ( 56215 )
        and the information is transmitted from the retina in parallel, not serially down a single optic nerve like ours.

        Nope, not true. Practically everything our brain does is parallel, and this is definitely true of the optic nerve.

        It's certainly a major bottleneck in the system; a lot of compression gets done by the retina before the signal is transmitted, but that's because the optic nerve is long and has to move with the eyeball.

        Yes, I think any mantis shrimp capable of self reflection would consider the human eye an
        • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Friday June 13, 2008 @06:59PM (#23786437) Journal
          Christ on a fucking pogo stick, another one? What's with people who can't admit that maybe, just maybe, humans aren't the best at everything?

          Mantis shrimp don't have a blind spot, because their eyes aren't like the stupid human eyes where the optic nerve attaches to the front! Nyah nyah nyah!

          Here's the quote I was referring to:

          The visual information leaving the retina seems to be processed into numerous parallel data streams leading into the central nervous system, greatly reducing the analytical requirements at higher levels.
          As far as I know, there is only a single data stream per eye in human vision. It may be transmitted in parallel, but there is only one image created for each eye. Not so for the vastly superior mantis shrimp. We have trinocular vision in each eye, so suck it, monkey boy!

          I wouldn't, I mean, a mantis shrimp would never consider trading my, I mean his superior eyes for your puny human ones!
          • by Illserve ( 56215 )

            Christ on a fucking pogo stick, another one? What's with people who can't admit that maybe, just maybe, humans aren't the best at everything?

            Sure we're bad at lots of things. I'd hate to go one on one with a tiger, or to compete in an underwater endurance test with a halibut, but that doesn't make your comment any less wrong.

            The visual information leaving the retina seems to be processed into numerous parallel data streams leading into the central nervous system, greatly reducing the analytical requirements at higher levels.

            As far as I know, there is only a single data stream per eye in human vision. It may be transmitted in parallel, but there is only one image created for each eye. Not so for the vastly superior mantis shrimp. We have trinocular vision in each eye, so suck it, monkey boy!

            The human retina has 4 different types of receptors, each specializing in a different flavor of light. These are processed in the retina into several data streams; some specialize in rapid transitions from light to dark, some in colors, some in hi-res, some in lo-res. It is a vastly complicated river of data that squirts along that optic nerve to eventually land in your brain.

            I wouldn't, I mean, a mantis shrimp would never consider trading my, I mean his superior eyes for your puny human ones!

            You are aware, Mr. M. Shrimp, that different focal planes exist?

            I laugh at your primitive optical appendages.

            • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Friday June 13, 2008 @10:56PM (#23788311) Journal
              Focal planes? Bah! What do we need with focal planes when we have, essentially, tens of thousands of pinhole cameras and eyes divided into three separate regions? Compared to your single fovea, we have multiple bands of high-res areas stretching across the middle of each eye! Can you see circularly polarized light? Why, an octopus has better eyesight than a human!

              Now if you'll excuse me, I have to go stun a herring.
          • That post made my day; thank you again, Spun.
          • Yeah, well, this EAGLE just flew in through my window and said he read my screen from his rooftop 6 blocks away and simply can't let this stand. He said not only does he have way better eyesight than a puny shrimp, what with his UV vision, his ability to spot prey from several miles away, and his cool twin foveae (which give him kick-ass acuity at different distances); he also said he would go round to the shrimp's house and bite his head off if the shrimp doesn't shut his shrimpy mouth.
      • Re:New goal... (Score:4, Informative)

        by mikael ( 484 ) on Friday June 13, 2008 @06:46PM (#23786257)
        It's amazing, the variation in mammalian vision - some animals still have four different color receptors (the usual red, green, and blue, plus an extra one that sees into the ultraviolet range of the electromagnetic spectrum). Insects are able to see into the UV range as well as detect the polarization of sunlight.

        Liveleak has a video of the Snapping shrimp [liveleak.com]
      • Just curious what weird wikiadventure led you to that nugget. Are you a biologist who studies these things or did you wind up looking into WWI and eventually stumble on the mantis shrimp?
      • I was following you until the 'serially down a single optic nerve like ours'; our 'optic nerve' is actually a bundle that contains about 1 to 1.5 million individual nerve fibers. That's still far fewer than the actual number of sensors, but the data is not 'serialized': each fiber carries a steady-state sum from about 100 sensor cells, and the 'encoding', if you wish, is the activation level transformed into a more or less frequent excitation of the nerve.
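
        A minimal sketch of the rate coding described above, with toy numbers (the function name and constants are illustrative, not from a real retina model): each fiber pools ~100 receptors and reports their summed activation as a firing frequency rather than a serialized bitstream.

            import random

            def fiber_rate(photoreceptor_levels, max_rate_hz=200.0):
                # Toy rate code: pool ~100 receptor activations (each 0..1) and
                # map the mean onto a firing frequency.
                mean_activation = sum(photoreceptor_levels) / len(photoreceptor_levels)
                return mean_activation * max_rate_hz

            # ~1.2 million such fibers fire concurrently, one rate value each:
            receptors = [random.random() for _ in range(100)]
            print(f"{fiber_rate(receptors):.1f} Hz on this fiber")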

      • And why such complex eyes? One hypothesis is that their brains are not very advanced, so they are unable to recognise the outlines of camouflaged fish without all these enhancements to their eyes.

        Shrimp fails. Retard shrimp.
    • by LS ( 57954 )
      You gotta understand how something works before you make a better version of it....
  • by RingDev ( 879105 ) on Friday June 13, 2008 @05:30PM (#23785157) Homepage Journal
    Who the hell left colored drop lights lying all over the server room!?

    -Rick
    • Oh yeah, like you don't have a load of neons and coloured LEDs in your case mod. Nerds modding their computers are worse than chavs modding their cars in this respect.
  • by Citizen of Earth ( 569446 ) on Friday June 13, 2008 @05:33PM (#23785183)

    How long until we can simulate the entire brain?

    And when this simulation claims to be conscious, what do we make of that?

  • by Anonymous Coward
    I thought that supermodels stimulate the human visual system.
  • Visual object recognition systems have been a thorn in the side of robotics since the beginning. The other annoyance, battery power, will likely be solved by the nanowire battery, leaving 'sight' as the real final technological step for our lovely robots.

    Extrapolating further, a human-quality object recognition system will yield results which we cannot currently imagine (let's avoid some big-brother robot talk for a second, however).

    For example: I was looking at some old WWII photographs of troops boarding a boat, thousands of faces in these very high-quality photographs. To myself, I thought, 'Self, if all historical photographs could be placed in view of a recognition system, perhaps it could be found, interestingly, where certain ancestors of ours appear.'

    Throw in a dash of human-style creativity and reasoning and I'm certain some truly nifty revelations are to be found in our mountains of visual documentation currently languishing in countless vast archives.
    • Re: (Score:2, Funny)

      by notgm ( 1069012 )
      output:

      why does christopher lambert show up in all of these historical pictures?
    • Let's avoid some big-brother robot talk for a second, however

      I don't think that's wise.

      "Human #3,047,985,944, I have finished analysis of all known photographs. Congratulations, your grandfather stormed the beach at Normandy . . . NOW BOW DOWN AND WORSHIP YOUR ROBOT MASTER"
  • by Richard.Tao ( 1150683 ) on Friday June 13, 2008 @05:39PM (#23785273)
    It's nice to see progress is being made. It's scary how accurate Ray Kurzweil's predictions seem to be: he said that by early 2010 we'll have simulated a human brain (he's a technological analyst and author of "The Singularity is Near"). Today's desktops are faster than the supercomputers of the '90s. I can't wait till I'm able to get a laptop smarter than me in every way (cue joke about how stupid I am); that'll be a cool time to live in. Seems it's only a matter of decades away. Probably 20 years.
    • by 4D6963 ( 933028 ) on Friday June 13, 2008 @05:48PM (#23785439)

      It's nice to see progress is being made. It's scary how accurate Ray Kurzweil's predictions seem to be: he said that by early 2010 we'll have simulated a human brain (he's a technological analyst and author of "The Singularity is Near"). Today's desktops are faster than the supercomputers of the '90s. I can't wait till I'm able to get a laptop smarter than me in every way (cue joke about how stupid I am); that'll be a cool time to live in. Seems it's only a matter of decades away. Probably 20 years.

      OMG a super computer! It's so powerful it can probably pop up a consciousness of its own!

      Sarcasm aside, computer power and strong AI are two very distinct problems. Computer power is all about scaling up so you can do more in less time; it doesn't allow you to do anything new, only the same things faster. Strong AI is all about algorithms, and nobody can tell whether such algorithms exist. And anyone who talks about human-like strong AI is a crackpot (Kurzweil is a crackpot to me for his wacky predictions), as we have yet to see even a bug-like strong AI, and if it were just a problem of power we'd already have something working in that field.

      • Re: (Score:3, Interesting)

        by alexborges ( 313924 )
        Has it occurred to you to actually read Kurzweil? Why do you think it's positive to label someone a "crackpot" when he is looking into some possibilities for our evolution?

        More on that: how in the hell are we to keep evolving if not through technology? We won't evolve "naturally", I think that's well established, not anymore. Our social system (for ALL of us) has not, erm... evolved to be a good evolutionary system that rewards the best.

        The only way "up" is through a technological singularity. I don't think it's inevitable though; I think it's necessary, desirable.
        • by 4D6963 ( 933028 )

          Has it occurred to you to actually read Kurzweil? Why do you think it's positive to label someone a "crackpot" when he is looking into some possibilities for our evolution?

          More on that: how in the hell are we to keep evolving if not through technology? We won't evolve "naturally", I think that's well established, not anymore. Our social system (for ALL of us) has not, erm... evolved to be a good evolutionary system that rewards the best.

          The only way "up" is through a technological singularity. I don't think it's inevitable though; I think it's necessary, desirable.

          Translation: his claims are not crackpottery because you think that what he says is our only way out. That's like saying that terraforming Mars is possible just because we have no other choice.

          And we still evolve naturally; you may want to read about the recent genetic changes that let some groups of the population keep tolerating milk into adulthood. Besides, what's the necessity of the evolution you're talking about?

          Kurzweil is a crackpot despite his good intentions, because his claims are baseless, speculative and fantastical.

      • by jnana ( 519059 )

        Kurzweil is a crackpot to me for his wacky predictions

        Have you checked out how many of his past wacky predictions [wikipedia.org] have already come true? He's been making such predictions for decades now and has a pretty good success rate.

        • by 4D6963 ( 933028 )

          Kurzweil is a crackpot to me for his wacky predictions

          Have you checked out how many of his past wacky predictions [wikipedia.org] have already come true? He's been making such predictions for decades now and has a pretty good success rate.

          OMG, he predicted the downfall of the USSR, in a book that came out the next year! ZOMG genius, "he foresaw that cellular phones would grow in popularity while shrinking in size for the foreseeable future". I for one thought that by the year 2000 the very few cell phone owners would barely be able to carry theirs in the back of their pickup trucks! Who woulda thought! OMG, he predicted a number of other things that anyone else could have predicted just by extrapolating a relevant graph! OMG, he also predicted the

          • by jnana ( 519059 )

            I'm no big fan of Kurzweil's. I disagree with a lot of what he says, and I think many of his predictions are silly and/or obvious. But I am still able to distinguish between Ray Kurzweil and an actual crackpot like Dr. Gene Ray, Cubic [timecube.com].

            I'm guessing you were too lazy to take the time to do any research on what Kurzweil has done with his life before spouting off your opinions, because regardless of whether Kurzweil deserves a Nobel prize -- he does not, of course -- and regardless of whether he is right on a

    • Too Optimistic (Score:3, Interesting)

      by raftpeople ( 844215 )
      Based on reasonable extrapolations of the rate of hardware advance, we won't be able to simulate a human brain in real time until sometime in the 2020s.

      However, that is based on the assumption, now known to be incorrect, that neurons are the only kind of brain matter that is important. It is now clear that glial cells play an important role in coordinating cognition, and there are 10 times as many glial cells as there are neurons. That sets our simulation back a few years.

      I think Ray Kurzweil is way, way too optimistic.
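
      As a rough check on "sets our simulation back a few years": assuming compute per dollar doubles every 18 to 24 months (a Moore's-law-style assumption of mine, not a figure from the post), simulating 10 times as many elements costs log2(10), roughly 3.3, extra doublings.

          import math

          doublings = math.log2(10)  # 10x as many glial cells as neurons
          for months_per_doubling in (18, 24):
              delay_years = doublings * months_per_doubling / 12
              print(f"{months_per_doubling} mo/doubling -> ~{delay_years:.1f} extra years")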
  • After examining the results, the researchers 'believe they can study in real time the entire human visual cortex.'

    I'll believe it when I see it. With my own eyes, that is.

    How long until we can simulate the entire brain?

    How does 'never' sound? But more seriously, you'd need an intricate understanding of its inner workings, besides the fact that it involves creating a strong AI, whose feasibility even in the distant future falls within the realm of wild speculation.

    • Vatanen's Peak (Score:3, Insightful)

      by mangu ( 126918 )

      How does 'never' sound?

      It's funny that if you claim a mountain is impossible to climb they'll name it after you [wikipedia.org]. But try going up that same mountain in ten minutes. [youtube.com] Will they rename it after you? No way...

      It's true that we don't know how the human brain works yet, because we don't have all the tools needed to study it today. A caveman would never be able to understand the workings of a watch; you cannot study a watch with stone tools. But each time a supercomputer beats a record, we get a better tool to study the brain.

  • "How long until we can simulate the entire brain?"

    Lord knows we've got a planetful of nitwits to help out somehow. Just build a massive mesh network and several of these and we could raise the World IQ by a couple of points, at least!

    Yeah, I'm bored.
  • by Daimanta ( 1140543 ) on Friday June 13, 2008 @05:41PM (#23785329) Journal
    And we should call it Skeyenet.
  • by overtly_demure ( 1024363 ) on Friday June 13, 2008 @05:43PM (#23785359) Homepage Journal
    There are roughly 10^15 synapses in a human brain. If you put 10 GB of RAM (10^10 bytes) in a 64-bit multicore computer and simulated neuronal activation levels with a one-byte value, it would take 100,000 such computers (10^10 * 10^5 = 10^15) to pretend to have roughly the synaptic simulation power of a human brain. It is apparently now feasible, at least in principle.

    We are ignoring for the moment how the neural network simulators work, how they communicate amongst themselves, how they are partitioned, what sensor inputs they receive, how they are trained (that's a tough one), etc. This will turn out to be extraordinarily difficult unless some very clever people mimic nature in very clever ways.

    Well, at least the hardware is there.
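
    The parent's arithmetic as a quick sanity check (all figures are the parent's round numbers, not measurements):

        synapses = 10**15           # rough synapse count in a human brain
        bytes_per_machine = 10**10  # 10 GB of RAM per node
        bytes_per_synapse = 1       # one-byte activation value per synapse
        machines = synapses * bytes_per_synapse // bytes_per_machine
        print(f"{machines:,} machines needed")  # -> 100,000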

  • Why supercomputers? (Score:3, Interesting)

    by nurb432 ( 527695 ) on Friday June 13, 2008 @05:50PM (#23785473) Homepage Journal
    Why not just set up another 'distributed' project where we all donate cycles and simulate the brain?

    Should be enough of us out here, I would think.
    • I've wondered this too, but I don't think the concept of "work packets" would work too well in this type of thing... brains require real-time processing to truly be brains. Just my WAG though...
  • Unless it naturally focuses on boobies when a woman enters the room, it was made by aliens.
  • by HuguesT ( 84078 ) on Friday June 13, 2008 @06:00PM (#23785625)
    From TFA it's not very clear what this simulation achieved. The code already existed, and as far as I understand it, it was used to validate some simulation models of low-level biological vision.

    However, this simulation did not necessarily achieve computer vision in the usual sense, i.e., shape recognition, image segmentation, 3D vision, etc. That is the more cognitive aspect of the visual process, which at present requires a much higher level of understanding of vision than we possess.

    FYI, the whole brain has already been simulated; see the work of Dr. Izhikevich [nsi.edu]. It took several months to simulate about 1 second of brain activity.

    However this experiment did not simulate thought, just vast amounts of simulated neurons firing together. The simulated brain exhibited large-scale electrical behaviours of the type seen in EEG plots, but this is about it.

    This experiment sounds very similar. I'm not all that excited yet.
  • by electric joy boy ( 772605 ) on Friday June 13, 2008 @06:01PM (#23785645) Homepage
    "aiming to produce a machine that can see and interpret as well as a human."

    First I want to say that this whole level of brain modeling is really cool. However, there are, of course, different levels of "interpretation". I don't think that this computer will be able to achieve a human level of interpretation simply by modeling the visual cortex.

    1. Perception: at one level you could argue (not very effectively) that interpretation just means perception; that's an eyeball/optic-nerve/visual-cortex thing. E.g., you can perceive a face.
    2. Recognition/categorization: recognition of visual forms involves the visual cortex/occipital lobe. E.g., you can recognize whether that face is familiar.
    3. Interpretation: involves assigning meaning to a stimulus, and this involves many more parts of the brain than the visual cortex. It's obviously tied to memory, which is closely tied, physiologically, to emotion. It also involves higher-order thinking since, when most humans interpret a real-world stimulus, there are multiple overlapping and networked associations that must be processed into a meaningful whole. E.g., you can recognize how threatening that face is, why it is threatening or not (and in what substantive domains it is or is not threatening), and even what you should do about it.

    Even "interpretation" at the second level above (which it seems the "roadrunner" might be able to model) require a lot more, for humans, than just the visual cortex.

    In other words, if we were to call into existence a floating occipital lobe connected to a couple of eyes that had never been attached to the rest of a brain, we would never be able to achieve recognition/categorization, let alone interpretation. If I'm wrong, maybe some of you hardcore neuroscience types can help me out?

    • by in75 ( 1307477 ) on Saturday June 14, 2008 @12:45AM (#23788949)
      In the interest of full disclosure, let me first say that I am one of the co-authors of the model that was executed on the Roadrunner, though I had nothing to do with the actual implementation that was executed (this was done by professional computer scientists, and I am a computational neuroscientist).

      Let me clarify what was done, and what will be done in the future.

      We simulated about 1 billion neurons communicating with each other and coupled according to theoretically derived arguments, which are broadly supported by experiments, but are a coarse approximation to them. The reason is that we are interested in the principles of neural computation, which will enable us to construct special-purpose dedicated hardware for vision in the future. We are not necessarily interested in curing neurological diseases, hence we don't want to reproduce all physiological details in this simulation, but only those that, in our view, are essential to performing the visual computation. This is why we have no glia and other similar things in the model: while important in long-term changes of neuronal properties, they communicate chemically and, therefore, are too slow to help in recognition of an object in ~200 milliseconds.

      The simulation was a proof of principle only. We simulated only the V1 area of the brain, and only those neurons in it that detect edges and contours in images. But the patch of V1 we simulated was much larger than in real life, so that our total number of neurons was only a bit smaller than that of the entire human visual system. Hence we can reliably argue that we will be able to simulate the full visual cortex, almost in real time. This is what will be done in the next year or so.
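
      For readers wondering what "neurons that detect edges and contours" compute: V1 simple cells are conventionally modeled as oriented Gabor filters. The sketch below is the generic textbook version of that idea, not the PetaVision code; SciPy is assumed for the convolution.

          import numpy as np
          from scipy.signal import convolve2d  # assumed available

          def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
              # Oriented Gabor filter: the standard first-order model of a V1 simple cell.
              half = size // 2
              y, x = np.mgrid[-half:half + 1, -half:half + 1]
              xr = x * np.cos(theta) + y * np.sin(theta)  # project onto preferred orientation
              envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
              return envelope * np.cos(2 * np.pi * xr / wavelength)

          def v1_edge_map(image, orientations=8):
              # Max response over a bank of oriented filters: a crude edge/contour map.
              bank = [gabor_kernel(theta=np.pi * k / orientations) for k in range(orientations)]
              return np.maximum.reduce([np.abs(convolve2d(image, g, mode='same')) for g in bank])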

      When we talk about human cognitive power, we only mean the ability to look at images, segment them into objects, and recognize those objects. We are not talking about consciousness, free will, thinking, etc., only visual cognition. This is also why we want to match a human rather than beat him: in such visual tasks, humans almost never make errors (at least when the images are not ambiguous), while the best computer vision programs make an error in 1 of 10 cases or so (just imagine what your life would be like if you didn't see every tenth car on the road). Since the model is based mostly on theoretical arguments characterizing neuronal connectivity and neglects many important biological details, we may never be able to match a human (or maybe we will; who knows? This is why it's called research). But we have good reasons to believe that these petascale simulations with biologically inspired, if not fully biological, neurons will decrease error rates by a factor of hundreds or thousands. This is also why we are content with simulating the visual system only: some theories suggest that image segmentation and object identification happen in the IT area of the visual cortex (which we plan to simulate). While the rest of the brain certainly influences its visual parts, it seems that the visual system, from the retina to IT, is sufficiently independent of the rest of the brain that visual cognitive tasks may be modeled by modeling the visual cortex alone.

      Finally, let me add that we got some interesting scientific results from these petascale simulations and the accompanying simulations and analysis on smaller machines. But we need to verify what we found and substantially expand it before we report the results; this will have to wait till the fall, when the RR computer will be available to us again. For now, the fact that we can simulate a system the size of the visual cortex is of interest in itself.

      That's all, folks!
      • /. overlords insist comment goes here
      • I'm not too proud to ask a stupid question...

        What does having this simulation on a peta-computer do that having just a super-fast computer look at something for a longer time period does not? In other words... how did having a faster computer help you accomplish your goals when the challenges in this type of thing are mostly software-related?

        And if this type of processing power made you able to simulate something as complicated as vision now... wouldn't it be logical to assume even FASTER computers i
        • Re: (Score:3, Informative)

          by in75 ( 1307477 )

          I'm not too proud to ask a stupid question... What does having this simulation on a peta-computer do that having just a super-fast computer look at something for a longer time period not do?

           One of the goals is to simulate the cortical processing in real time, which should almost be possible with the RR. Real-time analysis allows one to process streaming video, such as from a security camera. Leaving real time aside, there was one other reason why we needed the RR: when simulating ~a billion neurons with ~30 thousand connections per neuron, the total memory required to store the connection matrix (even if the strength of connections is calculated on the fly) is just below 100 terabytes, which is more than any ordinary machine can hold.
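
           The memory figure is easy to reproduce, with one assumption of mine (the byte count per connection; the post gives only the totals):

               neurons = 10**9
               connections_per_neuron = 3 * 10**4
               bytes_per_connection = 3  # assumed: a packed target index; weights computed on the fly
               total_bytes = neurons * connections_per_neuron * bytes_per_connection
               print(f"{total_bytes / 10**12:.0f} TB")  # -> 90 TB, "just below 100 terabytes"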

  • Seeing as a human does may or may not be easy... but interpreting as a human does could be a bit more complicated. It could recognize patterns, detect movements, and more things that we take as given without thinking too much about them, OK, but things are a bit more complex than that. As the brain is not so fast at processing visual info, we somewhat anticipate the future [nytimes.com] in our perceptions. That is the basis of most optical illusions.
    Could it be useful to simulate such things, based on our limitations? Will that computer be fooled by optical illusions as well?
  • Back in times of yore, when the Beach Boys were young and Get Smart was a TV show, they used to say that a computer powerful enough to simulate the human brain would require all the electricity generated by Niagara Falls to power it, and all the water going over the Falls to cool it. So far, as our understanding of the brain's complexity grows, that estimate still stands.
  • If, per the summary, it's so easy, then why does it take 100,000+ cores and the world's first petaflop supercomputer to do it?
  • by RockoTDF ( 1042780 ) on Friday June 13, 2008 @06:24PM (#23785933) Homepage
    Machine consciousness is not something that will likely happen in our lifetime. We don't even know exactly what it is in humans, much less in a machine. Neuroscience is further ahead on consciousness issues than computer science, and even neuroscientists haven't turned up a great deal yet. Computer scientists and physicists haven't got a clue about this, and their occasional drivel about consciousness and human cognition is just embarrassing.
    • Re: (Score:3, Informative)

      by Prune ( 557140 )
      Your post is ridiculous. Research into the neural correlates of consciousness has been progressing significantly over the past decade. The explanation is coming together from research in different areas. Damasio's model, for example, is seriously backed up by neurology: http://www.amazon.com/Feeling-What-Happens-Emotion-Consciousness/dp/0156010755 [amazon.com]
      On the philosophy side, the usual objections to the reductionist approach and other philosophical nonsense like qualia are crushed by Dennett's well-thought-out arguments.
  • by videoBuff ( 1043512 ) on Friday June 13, 2008 @06:37PM (#23786125)
    Human vision and associated perception has confounded AI folks right from the beginning.

    After examining the results, the researchers 'believe they can study in real time the entire human visual cortex.' How long until we can simulate the entire brain?"

    There are researchers who believe that humans use their whole brain to "see." If that is true, the claims of these researchers are highly premature with respect to vision. Everything from stored patterns to extrapolation is used to determine what we see. Even familiarity is used in perception; that is why there is this urban myth that "foreign" people look the same. If one were to ask those foreigners, they would say all indigenous people look totally different.

  • How else to explain why they have not already been Slashdotted?
  • by Anonymous Coward on Friday June 13, 2008 @07:01PM (#23786473)
    I admit I didn't RTFA - but that sort of report cropping up in different places is really quite misleading in principle. While it may be true that the processing power exists to simulate networks on the scale of small parts of the brain in real time, the biological data to work on simply _does not exist_. The situation is somewhat better for the retina than for other parts of the nervous system, but seriously: Nobody knows the topology of neural networks in our brain to the level of detail required for simulations that would somehow reflect the real-world situation. Think about it: A neuron is small, just several micrometers in diameter, and it can form appendages of several centimeters (within the brain) in length that can connect it to several thousands of other neurons. The technology to map that kind of structure simply does not exist. It _is_ being developed, but there is nowhere near enough data to justify calling the programs these computers run "simulations of the human brain".
  • How long? (Score:3, Interesting)

    by Renraku ( 518261 ) on Friday June 13, 2008 @07:09PM (#23786563) Homepage
    How long, you ask?

    Until they can emulate the quantum/holographic methods the brain employs. Keep in mind, there are worlds-within-worlds among the physical components. Just as metal siding can form a complete circuit around a house, the nerves of the brain form multiple networks (chemical, electrical, interference patterns, etc.).
  • How long? (Score:4, Informative)

    by PHPNerd ( 1039992 ) on Friday June 13, 2008 @07:12PM (#23786585) Homepage
    I'm a PhD student in neuroscience. Don't get too excited; this is merely a piece of the visual cortex. How long until we can simulate the entire brain in real time? That's not likely for a long, long time, not because we won't have the computing power (we'll have that in about 10 years), but because we won't have the entire brain mapped to simulate. In order to accurately simulate the entire brain, we first have to understand each part's connections, how they work, and how they interact with the rest of the brain. Sadly, our knowledge of the brain is so primitive that I don't see us totally mapping it for at least another 100 years. Sound ridiculous? Ask anyone in academic neuroscience, and they'll tell you that even tenured theories are regularly thrown out when evidence to the contrary proves them wrong. There are even some who think we'll never fully understand the brain, because the best way to study it is in live humans, and scientists are severely limited in that kind of study by human rights laws.
  • Based on the results of PetaVision's inaugural trials, Los Alamos researchers believe they can study in real time the entire human visual cortex--arguably a human being's most important sensory apparatus.

    What the hell does that mean???

    I'm guessing it just means that this peta-beast has the oomph to run their model in real time. They seem to want you to assume that the model actually achieves something human-like and/or never-done-by-a-computer-before, but seeing as they don't actually come out and make that claim, I'll assume it doesn't.
    • by 4D6963 ( 933028 )

      Hallelujah! People even on Slashdot seem to think that sparks of magic come out of "supercomputers". It just runs shit faster than your PC, the exact same type of shit, only faster. It's a bit disappointing that even supposedly educated people (at least in the realm of computer technology) are so easily impressed.

  • Singularity (Score:4, Funny)

    by CODiNE ( 27417 ) on Friday June 13, 2008 @11:41PM (#23788551) Homepage
    Heh... what if they finally simulate a human brain and... he's just a normal guy. "Design a better computer for us B.O.B." "Uhhh... I don't even like computers." Or what if it turns out to be stupid? Make it 100x faster and it's just STUPID FAST. :)
  • Brains are not just massively parallel; they are also fully analog (NOT digital), and nonlinear to boot. Not to mention that they contain many billions, not a mere few hundred thousand, of elements. The processing power of the brain is many orders of magnitude beyond what this machine, or anything near its size and complexity, can ever hope to achieve.

    AND, not only is the brain a much LARGER thing to simulate than they have any hope of coming close to very soon (in numbers of neurons and sheer processing power)
  • Not Bloody Likely (Score:3, Insightful)

    by DynaSoar ( 714234 ) on Saturday June 14, 2008 @06:49AM (#23790509) Journal
    Between the rods and cones of the retina and the optic nerve are four layers/types of retinal processing cells. Unlike most neurons, these operate entirely on inhibitory processing (rather than 85% excitatory and 15% inhibitory) and entirely on slow voltage gradients (rather than storing up charge to a threshold and then firing a burst). How this accomplishes visual processing is a mystery to those of us who understand real meatware processing. It is not likely that a bunch of high-powered supercomputer geeks even know this is how the visual system operates, much less how to simulate it.
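
    A toy contrast of the two signaling styles described above (the constants are arbitrary; this is the shape of the difference, not a biophysical model):

        def integrate_and_fire(inputs, threshold=1.0):
            # Conventional neuron model: accumulate charge, emit a spike at threshold.
            v, spikes = 0.0, 0
            for x in inputs:
                v += x
                if v >= threshold:
                    spikes, v = spikes + 1, 0.0
            return spikes  # output is a discrete spike count

        def graded_inhibitory(inputs, gain=0.5):
            # Retina-style toy: output is a continuous level pulled DOWN by input.
            v = 1.0  # resting level
            for x in inputs:
                v = max(v - gain * x, 0.0)  # purely inhibitory, no spikes
            return v  # output is an analog voltage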

    They may well use their XYZflops to develop a visual processing system of some sort, but it will NOT be a simulation of a system they hardly understand at all, unlike those who understand it far better.

    If and when they get to actually trying to match the human visual system in operation (though by different processing), they'll have to figure out how to get their system to consistently guess, with fairly good accuracy, what it's going to be seeing 0.1 to 0.3 seconds in the future. Proof of that long-suspected technique appeared just within the last week or so.

    There is nothing at all "intelligent" about this; it is all automated processing. Level of "intelligence" has nothing to do with the efficacy of visual processes. Any time anyone inserts the "I" word into anything regarding computers, particularly when comparing them with the human brain, they need to define their terms. Almost certainly, those of us who have struggled for years with the insufficient and contradictory proposed definitions of "intelligence" in the human mind will be more than happy to fill them in on why their definitions have already been proven failures in humans, and why anything derived from them will not apply to a system designed to provide human-looking output via entirely different means of processing.
