Supercomputer Simulates Human Visual System 244
An anonymous reader writes "What cool things can be done with the 100,000+ cores of the first petaflop supercomputer, the Roadrunner, that were impossible before? Because our brain is massively parallel, with a relatively small amount of communication over long distances, and is made of unreliable, imprecise components, it's quite easy to simulate large chunks of it on supercomputers. The Roadrunner has been up for only about a week, and researchers from Los Alamos National Lab are already reporting inaugural simulations of the human visual system, aiming to produce a machine that can see and interpret as well as a human. After examining the results, the researchers 'believe they can study in real time the entire human visual cortex.' How long until we can simulate the entire brain?"
Simulate is the operative word (Score:3, Informative)
However, this simulation did not necessarily achieve computer vision in the usual sense, i.e., shape recognition, image segmentation, 3D vision, etc. These are the more cognitive aspects of the visual process, which at present require a much deeper understanding of vision than we currently possess.
FYI, the whole brain has already been simulated; see the work of Dr. Izhikevich [nsi.edu]. It took several months to simulate about one second of brain activity.
However, that experiment did not simulate thought, just vast numbers of simulated neurons firing together. The simulated brain exhibited large-scale electrical behaviours of the type seen in EEG plots, but that's about it.
This experiment sounds very similar. I'm not all that excited yet.
Re:I suck at remembering faces (Score:2, Informative)
Re:New goal... (Score:3, Informative)
Nope, not true. Practically everything our brain does is parallel, and this is definitely true of the optic nerve.
It's certainly a major bottleneck in the system; a lot of compression gets done by the retina before the signal is transmitted, but that's because the optic nerve is long and has to move with the eyeball.
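The pooling idea behind that retinal compression can be sketched in a few lines. This is a toy illustration only: the grid sizes and pooling factor below are illustrative assumptions, not physiological numbers, and real retinal processing is far richer than block averaging.

```python
# Toy illustration of retinal compression: many "photoreceptor"
# samples are pooled into far fewer "ganglion-cell" signals before
# transmission down the optic nerve. All numbers are illustrative.

def pool(image, block):
    """Average non-overlapping block x block regions of a square 2D grid."""
    n = len(image)
    out = []
    for r in range(0, n, block):
        row = []
        for c in range(0, n, block):
            vals = [image[r + i][c + j]
                    for i in range(block) for j in range(block)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

# 8x8 "photoreceptor" grid: a bright vertical stripe on a dark field.
retina = [[1.0 if c in (3, 4) else 0.0 for c in range(8)] for r in range(8)]
signals = pool(retina, 4)      # pooled down to a 2x2 "ganglion" output
ratio = (8 * 8) / (2 * 2)      # 16 input samples per transmitted signal
```

The point is only that far fewer values go down the wire than come off the sensor; the stripe survives as elevated averages in the pooled output.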
Yes, I think any mantis shrimp capable of self-reflection would consider the human eye an upgrade (except for the fact that it's too big for the little buggers to swim with).
Re:New goal... (Score:4, Informative)
Liveleak has a video of the Snapping shrimp [liveleak.com]
These programs run without meaningful data (Score:3, Informative)
How long? (Score:4, Informative)
Re:Machine Consciousness (Score:3, Informative)
On the philosophy side, the usual objections to the reductionist approach and other philosophical nonsense like qualia are crushed by Dennett's well-thought-out arguments. The consciousness problem is well on its way to being solved.
Re:The Singularity is Near (Score:2, Informative)
You're the stupid one, learn to read, dumb ass. It's all about strong AI, not the weak AI we commonly refer to as AI.
You seem to have completely lost track of the conversation. It's obvious that it's strong AI I am referring to.
And no, your assumption that a strong AI algorithm can exist is baseless since if it could be proven it would be implementable.
Huh? Being able to prove that something exists means you can build a replica? I can prove the sun exists, does this mean it's trivial to build a replica of it?
We know there is an algorithm for intelligence. It's codified in a sloppy bag full of chemicals and implemented around six billion times. Reverse-engineering it and building an artificial version may be difficult, but the algorithm certainly exists. It may very well turn out that it's easier to discover/invent a different algorithm instead, but that doesn't change the fact that we know at least one such algorithm exists.
what we did in this simulation (Score:5, Informative)
Let me clarify what was done, and what will be done in the future.
We simulated about 1 billion neurons communicating with each other, coupled according to theoretically derived arguments that are broadly supported by experiments but are a coarse approximation to them. The reason is that we are interested in principles of neural computation, which will enable us to construct special-purpose dedicated hardware for vision in the future. We are not necessarily interested in curing neurological diseases, hence we don't want to reproduce all physiological details in this simulation, but only those that, in our view, are essential to performing the visual computation. This is why we have no glia and other similar things in the model: while important in long-term changes of neuronal properties, they communicate chemically and, therefore, are too slow to help in recognition of an object in ~200 milliseconds.
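The kind of simplified spiking unit used in large-scale simulations like this can be sketched with a leaky integrate-and-fire neuron. To be clear, this is a generic textbook model, not the actual Roadrunner code, and the parameters (time constant, threshold, input current) are illustrative assumptions:

```python
# Minimal leaky integrate-and-fire neuron: integrate a membrane
# potential toward the input current, emit a spike and reset when
# it crosses threshold. Parameters are illustrative, not taken
# from the LANL model.

def simulate_lif(i_input, t_ms=200.0, dt=0.1, tau=20.0, v_thresh=1.0):
    """Euler-integrate dV/dt = (-V + I) / tau; return spike times in ms."""
    v, spikes = 0.0, []
    for step in range(int(t_ms / dt)):
        v += dt * (-v + i_input) / tau
        if v >= v_thresh:          # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = 0.0
    return spikes

# Constant suprathreshold drive makes the unit fire periodically,
# several times within the ~200 ms recognition window mentioned above.
spikes = simulate_lif(i_input=1.5)
```

Units this cheap are what make billion-neuron runs feasible: each neuron is a handful of arithmetic operations per timestep, and the expensive part is the communication between them.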
The simulation was a proof of principle only. We simulated only the V1 area of the brain, and only those neurons in it that detect edges and contours in images. But the V1 we simulated was much larger than in real life, so that our total number of neurons was only slightly smaller than in the entire human visual system. Hence we can reliably argue that we will be able to simulate the full visual cortex, almost in real time. This is what will be done in the next year or so.
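The edge- and contour-detecting behaviour of those V1 neurons can be mimicked crudely with an oriented difference filter. This is a standard image-processing analogy (a Sobel kernel), not the receptive-field model the researchers used; a closer match would use Gabor-like filters at many orientations and scales:

```python
# Toy "V1 edge detector": convolve a small image with an oriented
# difference (Sobel) kernel, roughly analogous to the edge- and
# contour-selective cells described above.

VERTICAL_EDGE = [[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]]   # responds strongly to vertical edges

def convolve(image, kernel):
    """Valid-mode 2D convolution of a square image with a square kernel."""
    n, k = len(image), len(kernel)
    out = []
    for r in range(n - k + 1):
        row = []
        for c in range(n - k + 1):
            acc = sum(image[r + i][c + j] * kernel[i][j]
                      for i in range(k) for j in range(k))
            row.append(acc)
        out.append(row)
    return out

# Image with a sharp vertical edge down the middle: flat regions give
# zero response; positions straddling the edge respond strongly.
img = [[0, 0, 0, 1, 1, 1] for _ in range(6)]
response = convolve(img, VERTICAL_EDGE)
```

Each output value plays the role of one edge-selective neuron's firing rate: silent on uniform patches, active where its preferred orientation appears in its receptive field.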
When we talk about human cognitive power, we only mean the ability to look at images, segment them into objects, and recognize these objects. We are not talking about consciousness, free will, thinking, etc. -- only visual cognition. This is also why we want to match a human, rather than to beat him: in such visual tasks, humans almost never make any errors (at least when the images are not ambiguous), while the best computer vision programs make an error in 1 in 10 cases or so (just imagine what your life would be if you didn't see every tenth car on the road). Since we rely mostly on theoretical arguments characterizing neuronal connectivity and neglect many important biological details, we may never be able to match a human (or maybe we will -- who knows? this is why it's called research). But we have good reasons to believe that these petascale simulations with biologically inspired, if not fully biological, neurons will decrease error rates by factors of hundreds or thousands. This is also why we are content with simulating the visual system only: some theories suggest that image segmentation and object identification happen in the IT area of the visual cortex (which we plan to simulate). While the rest of the brain certainly influences its visual parts, it seems that the visual system, from the retina to IT, is sufficiently independent of the rest of the brain that visual cognitive tasks may be modeled by modeling the visual cortex alone.
Finally, let me add that we got some interesting scientific results from these petascale simulations and the accompanying simulations and analysis on smaller machines. But we need to verify what we found and substantially expand it before we report the results; this will have to wait till the fall, when the RR computer will be available to us again. For now, the fact that we can simulate a system the size of the visual cortex is of interest in itself.
That's all, folks!
Re:what we did in this simulation (Score:3, Informative)