
Supercomputer Simulates Human Visual System

An anonymous reader writes "What cool things can be done with the 100,000+ cores of the first petaflop supercomputer, the Roadrunner, that were impossible before? Because our brain is massively parallel, with a relatively small amount of communication over long distances, and is made of unreliable, imprecise components, it's quite easy to simulate large chunks of it on supercomputers. The Roadrunner has been up for only about a week, and researchers from Los Alamos National Lab are already reporting inaugural simulations of the human visual system, aiming to produce a machine that can see and interpret as well as a human. After examining the results, the researchers 'believe they can study in real time the entire human visual cortex.' How long until we can simulate the entire brain?"
  • by HuguesT ( 84078 ) on Friday June 13, 2008 @06:00PM (#23785625)
    From TFA it's not very clear what this simulation achieved. It was code that already existed, and as far as I understand it, it was used to validate some simulation models of low-level biological vision.

    However, this simulation did not necessarily achieve computer vision in the usual sense, i.e. shape recognition, image segmentation, 3D vision, etc. That is the more cognitive aspect of the visual process, which requires a much deeper understanding of vision than we currently possess.

    FYI, the whole brain has already been simulated; see the work of Dr. Izhikevich [nsi.edu]. It took several months to simulate about 1 second of brain activity.

    However, that experiment did not simulate thought, just vast numbers of simulated neurons firing together. The simulated brain exhibited large-scale electrical behaviour of the kind seen in EEG plots, but that is about it.

    This experiment sounds very similar. I'm not all that excited yet.
  • by klasikahl ( 627381 ) <klasikahl@@@gmail...com> on Friday June 13, 2008 @06:02PM (#23785655) Journal
    You likely suffer from mild prosopagnosia.
  • Re:New goal... (Score:3, Informative)

    by Illserve ( 56215 ) on Friday June 13, 2008 @06:42PM (#23786197)
    and the information is transmitted from the retina in parallel, not serially down a single optic nerve like ours.

    Nope, not true. Practically everything our brain does is parallel, and this is definitely true of the optic nerve.

    It's certainly a major bottleneck in the system; a lot of compression gets done by the retina before the signal is transmitted, but that's because the optic nerve is long and has to move with the eyeball.

    Yes, I think any mantis shrimp capable of self-reflection would consider the human eye an upgrade (except for the fact that it's too big for the little buggers to swim with).

  • Re:New goal... (Score:4, Informative)

    by mikael ( 484 ) on Friday June 13, 2008 @06:46PM (#23786257)
    The variation in mammalian vision is amazing - some animals still have four different color receptors (the normal red, green, and blue, plus an extra one that sees into the ultraviolet range of the electromagnetic spectrum). Insects are able to see into the UV range as well as detect the polarization of sunlight.

    Liveleak has a video of the Snapping shrimp [liveleak.com]
  • by Anonymous Coward on Friday June 13, 2008 @07:01PM (#23786473)
    I admit I didn't RTFA - but that sort of report cropping up in different places is really quite misleading in principle. While it may be true that the processing power exists to simulate networks on the scale of small parts of the brain in real time, the biological data to work on simply _does not exist_. The situation is somewhat better for the retina than for other parts of the nervous system, but seriously: nobody knows the topology of the neural networks in our brain to the level of detail required for simulations that would somehow reflect the real-world situation. Think about it: a neuron is small, just several micrometers in diameter, and it can form appendages several centimeters long (within the brain) that connect it to several thousand other neurons. The technology to map that kind of structure simply does not exist. It _is_ being developed, but there is nowhere near enough data to justify calling the programs these computers run "simulations of the human brain".
  • How long? (Score:4, Informative)

    by PHPNerd ( 1039992 ) on Friday June 13, 2008 @07:12PM (#23786585) Homepage
    I'm a PhD student in Neuroscience. Don't get too excited. This is just a piece of the visual cortex. How long until we can simulate the entire brain in real time? Not for a long, long time - not because we won't have the computing power (we'll have that in about 10 years), but because we won't have the entire brain mapped to simulate. In order to accurately simulate the entire brain, we first have to understand each part's connections, how they work, and how they interact with the rest of the brain. Sadly, our knowledge of the brain is so primitive that I don't see us completely mapping it for at least another 100 years. Sound ridiculous? Ask anyone in academic neuroscience, and they'll tell you that even tenured theories are regularly thrown out when contrary evidence appears. There are even some who think we'll never fully understand the brain, because the best way to study it is in live humans, and scientists are severely limited in such studies by human rights laws.
  • by Prune ( 557140 ) on Friday June 13, 2008 @10:44PM (#23788253)
    Your post is ridiculous. Research into the neural correlates of consciousness has been progressing significantly over the past decade. The explanation is coming together from research in different areas. Damasio's model, for example, is seriously backed up by neurology: http://www.amazon.com/Feeling-What-Happens-Emotion-Consciousness/dp/0156010755 [amazon.com]
    On the philosophy side, the usual objections to the reductionist approach and other philosophical nonsense like qualia are crushed by Dennett's well-thought-out arguments. The consciousness problem is well on its way to being solved.
  • by Anonymous Coward on Friday June 13, 2008 @11:07PM (#23788359)

    You're the stupid one, learn to read, dumb ass. It's all about strong AI, not the weak AI we commonly refer to as AI.

    You seem to have completely lost track of the conversation. It's obvious that it's strong AI I am referring to.

    And no, your assumption that a strong AI algorithm can exist is baseless since if it could be proven it would be implementable.

    Huh? Being able to prove that something exists means you can build a replica? I can prove the sun exists, does this mean it's trivial to build a replica of it?

    We know there is an algorithm for intelligence. It's codified in a sloppy bag full of chemicals and implemented around six billion times. Reverse-engineering it and building an artificial version may be difficult, but the algorithm certainly exists. It may very well turn out that it's easier to discover/invent a different algorithm instead, but that doesn't change the fact that we know at least one such algorithm exists.

  • by in75 ( 1307477 ) on Saturday June 14, 2008 @12:45AM (#23788949)
    In the interest of full disclosure, let me first say that I am one of the co-authors of the model that was executed on the Roadrunner, though I had nothing to do with the actual implementation that was executed (this was done by professional computer scientists, and I am a computational neuroscientist).

    Let me clarify what was done, and what will be done in the future.

    We simulated about 1 billion neurons communicating with each other and coupled according to theoretically derived arguments, which are broadly supported by experiments but are a coarse approximation to them. The reason is that we are interested in the principles of the neural computation, which will enable us to construct special-purpose dedicated hardware for vision in the future. We are not necessarily interested in curing neurological diseases, hence we don't want to reproduce all physiological details in this simulation, but only those that, in our view, are essential to performing the visual computation. This is why we have no glia and other similar things in the model: while important in long-term changes of neuronal properties, they communicate chemically and, therefore, are too slow to help in recognition of an object in ~200 milliseconds.

    The simulation was a proof of principle only. We simulated only the V1 area of the brain, and only those neurons in it that detect edges and contours in images. But the V1 we simulated was much larger than the real one, so that the total number of neurons was only slightly smaller than in the entire human visual system. Hence we can reliably argue that we will be able to simulate the full visual cortex, almost in real time. This is what will be done in the next year or so.
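
    [Editor's note: the post above does not give the model equations. As a rough illustration of what "neurons that detect edges and contours" compute, here is a toy Gabor filter bank, the textbook first-order model of V1 simple cells. The filter size, wavelength, and number of orientation channels are illustrative assumptions, not parameters from the Roadrunner code.]

        import numpy as np
        from scipy.signal import fftconvolve

        def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
            """Oriented Gabor kernel: a Gaussian envelope times a cosine carrier."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)   # rotate into the preferred orientation
            yr = -x * np.sin(theta) + y * np.cos(theta)
            kernel = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
            return kernel - kernel.mean()                # zero mean: flat regions give no response

        def v1_like_responses(image, n_orientations=8):
            """Half-wave-rectified responses of an oriented filter bank (toy 'simple cells')."""
            responses = []
            for i in range(n_orientations):
                theta = i * np.pi / n_orientations
                r = fftconvolve(image, gabor_kernel(theta=theta), mode="same")
                responses.append(np.maximum(r, 0.0))     # crude half-wave rectification
            return np.stack(responses)                   # shape: (n_orientations, H, W)

        # Usage: a vertical bar drives the theta = 0 channel most strongly.
        img = np.zeros((64, 64))
        img[:, 30:34] = 1.0
        print(v1_like_responses(img).reshape(8, -1).max(axis=1))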

    When we talk about human cognitive power, we only mean the ability to look at images, segment them into objects, and recognize those objects. We are not talking about consciousness, free will, thinking, etc. -- only visual cognition. This is also why we want to match a human rather than beat him: in such visual tasks, humans almost never make errors (at least when the images are not ambiguous), while the best computer vision programs make an error in 1 in 10 cases or so (just imagine what your life would be like if you didn't see every tenth car on the road). Since the model is based mostly on theoretical arguments characterizing neuronal connectivity and neglects many important biological details, we may never be able to match a human (or maybe we will -- who knows? this is why it's called research). But we have good reasons to believe that these petascale simulations with biologically inspired, if not fully biological, neurons will decrease error rates by a factor of hundreds or thousands. This is also why we are content with simulating the visual system only: some theories suggest that image segmentation and object identification happen in the IT area of the visual cortex (which we plan to simulate). While the rest of the brain certainly influences its visual parts, it seems that the visual system, from the retina to IT, is sufficiently independent of the rest of the brain that visual cognitive tasks may be modeled by modeling the visual cortex alone.

    Finally, let me add that we got some interesting scientific results from these petascale simulations and the accompanying simulations and analysis on smaller machines. But we need to verify what we found and substantially expand it before we report the results; this will have to wait till the fall, when the RR computer will be available to us again. For now, the fact that we can simulate a system the size of the visual cortex is of interest by itself.

    That's all, folks!
  • by in75 ( 1307477 ) on Saturday June 14, 2008 @09:22AM (#23790985)

    I'm not too proud to ask a stupid question... What does having this simulation on a peta-computer do that having just a super-fast computer look at something for a longer time period not do?
    One of the goals is to simulate the cortical processing in real time, which should be almost possible with the RR. Real-time analysis allows one to process streaming video, such as from a security camera. Leaving real time aside, there was one other reason why we needed the RR: when simulating ~a billion neurons with ~30,000 connections per neuron, the total memory required to store the connection matrix (even if the strength of connections is calculated on the fly) is just below 100 terabytes, which is what the RR has. Needless to say, if we had to store the matrix on HDDs and read/write it at every update, the calculation would take forever, not just slow down in proportion to the speed of the machine.
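
    [Editor's note: a quick back-of-the-envelope check of the memory figure quoted above; the bytes-per-connection values are assumptions for illustration, since the post only states that the total is just below 100 terabytes.]

        # ~1 billion neurons, ~30,000 connections each
        neurons = 1_000_000_000
        connections_per_neuron = 30_000
        total_connections = neurons * connections_per_neuron        # 3e13 connections

        # Even storing only a compact index per connection (with the connection
        # strengths computed on the fly), the table is enormous:
        for bytes_per_connection in (2, 3, 4):
            terabytes = total_connections * bytes_per_connection / 1e12
            print(f"{bytes_per_connection} bytes/connection -> {terabytes:.0f} TB")
        # 3 bytes/connection -> 90 TB, i.e. "just below 100 terabytes"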

    In other words... how did having a faster computer help you accomplish your goals when the challenges to this type of things are mostly software related?
    It's not just speed; it's also RAM, as per the above.

    And if this type of processing power made you able to simulate something as complicated as vision now... wouldn't it be logical to assume even FASTER computers in the future would make it easier to create an AI -- or at least vastly better forms of intelligent systems? Seems like a straight forward extrapolation to me.
    Faster and bigger would be required, but even that would be insufficient. The reason we are working with vision (besides the obvious practical applications) is that a lot more is known about the structure of the brain there than anywhere else. This is largely because we know what kinds of objects exist in the real world: that edges are mostly smooth, that textures are only discontinuous at edges, etc. This allows one to predict theoretically, as Steven Zucker did at Yale, what the connectivity in the visual cortex should be, up to a few global parameters, some of which we were able to fit in these first runs. I am unaware of similar arguments for other parts of the brain. Which, of course, doesn't mean that, by the time we get bigger machines, such arguments won't have been found.
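
    [Editor's note: the post does not spell out Zucker's connectivity argument. As a hedged illustration of how "edges are mostly smooth" can translate into predicted wiring between oriented units, here is a toy co-circularity compatibility function; the functional form and the sigma parameters are assumptions for illustration, not the model actually used on the Roadrunner.]

        import numpy as np

        def wrap_mod_pi(a):
            """Wrap an angle into [-pi/2, pi/2); orientations are mod-pi quantities."""
            return (a + np.pi / 2) % np.pi - np.pi / 2

        def cocircularity_weight(p1, theta1, p2, theta2, sigma_angle=0.4, sigma_dist=5.0):
            """Toy 'association field': two oriented edge elements support each other most
            when they lie on a common circle, i.e. when theta1 + theta2 = 2*phi (mod pi),
            where phi is the direction of the line joining them."""
            dx, dy = p2[0] - p1[0], p2[1] - p1[1]
            phi = np.arctan2(dy, dx)                      # direction from p1 to p2
            mismatch = wrap_mod_pi(theta1 + theta2 - 2 * phi)
            distance = np.hypot(dx, dy)
            return (np.exp(-mismatch**2 / (2 * sigma_angle**2))
                    * np.exp(-distance**2 / (2 * sigma_dist**2)))

        # Collinear elements get strong mutual support; perpendicular ones almost none.
        print(cocircularity_weight((0, 0), 0.0, (3, 0), 0.0))        # ~0.84
        print(cocircularity_weight((0, 0), 0.0, (3, 0), np.pi / 2))  # ~0.0004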
