
Why We Should Build a Supercomputer Replica of the Human Brain

An anonymous reader sends this excerpt from Wired: "[Henry] Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up. ... The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety — from the molecular level all the way to the mystery of consciousness — is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that's done, once you've built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own."
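For a rough sense of the scale in those two headline numbers, the figures in the summary pin down the average fan-out per neuron; a couple of lines of Python make it concrete (both numbers come straight from the excerpt, nothing else is assumed):

    NEURONS = 86e9       # 86 billion neurons, per the summary
    SYNAPSES = 100e12    # 100 trillion connections, per the summary

    # Average fan-out implied by the two headline figures.
    print(f"average synapses per neuron: {SYNAPSES / NEURONS:,.0f}")  # ~1,163

So each simulated neuron would, on average, need to track on the order of a thousand connections.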

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Wednesday May 15, 2013 @05:42PM (#43735507)
    Comment removed based on user account deletion
  • Re:One teensy detail (Score:5, Interesting)

    by rwa2 ( 4391 ) * on Wednesday May 15, 2013 @05:55PM (#43735655) Homepage Journal

    Well, supposedly they have enough CPU power to do a pretty reasonable simulation of insect and even small mammal brains, like rats and cats.

    But supposedly there might be more going on in there than just interactions between connected neurons...
    http://discovermagazine.com/2009/feb/13-is-quantum-mechanics-controlling-your-thoughts#.UZQDe7VeZ30 [discovermagazine.com]

  • Moral objection (Score:4, Interesting)

    by girlintraining ( 1395911 ) on Wednesday May 15, 2013 @05:56PM (#43735657)

We've long established that the source of the human "soul" is in the brain. Those interconnections give rise to consciousness and self-awareness -- and sentience. If you build something that precisely models the brain, you will be creating sentience. I have to question how we can create a sentient creature simply to experiment upon it and still claim to have a shred of humanity.

    I know that this is not as dazzling and interesting as building the device to geeks like us, but we cannot simply ignore the ethical consequences of our actions. All vocations, all manner of human endeavor, must move forward with an eye towards a respect for life. This may not be human life we're creating, or even organic life, but it is no less deserving.

Someday we're going to have cybernetic life walking about. And I have to wonder -- how well will they treat us, when they find out how ethical we were in creating them?

  • by Synerg1y ( 2169962 ) on Wednesday May 15, 2013 @05:57PM (#43735659)

    The brain "develops" in humans for a very long time though, to work around /with that the mechanical brain would either need to be able to develop itself or start off in an adult state.

I have my doubts about the success of this project, but we've got to start somewhere, and we'd learn a lot along the way. It's not as if we don't already spend our country's money on wars, or on policing and giving aid to people who hate us.

  • by BlackSabbath ( 118110 ) on Wednesday May 15, 2013 @06:03PM (#43735713)

Say this actually works. We create a brain and start down the long path of "teaching" it, just like with new-born humans.
What happens when we detect that the brain is "experiencing pain"? (We already know that pain has a detectable neurological basis, right?)
    What happens when we detect the brain is experiencing depression?
    What are our responsibilities then? Is this thing a human, a lab-rat, or a machine?

  • by Rob_Bryerton ( 606093 ) on Wednesday May 15, 2013 @06:04PM (#43735723) Homepage
To put it in perspective, those 86 billion neurons would be 86 "giga-neurons"; huh, conceptually not too overwhelming. Then we have the 100 trillion connections between them, or 100 "tera-connections"? Forget it.

Not to even mention (as someone already did) the initial state, then the learning process. To even form this structure in RAM would require, what, 40-50 more Moore's Law iterations? Which I doubt is even physically possible. (A rough back-of-envelope sketch follows this comment.)

    I think this is the wrong approach, and even if possible, not in our lifetimes....
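To make the parent's RAM arithmetic concrete, here is a rough back-of-envelope in Python; the 8 bytes per synapse and the 1 TiB baseline are illustrative assumptions, not figures from the comment or the article:

    import math

    # Back-of-envelope for holding the connectome in RAM. The per-synapse
    # size and the baseline machine are assumptions for illustration only.
    SYNAPSES = 100e12                      # 100 trillion connections
    BYTES_PER_SYNAPSE = 8                  # assumed: 4-byte weight + 4-byte index
    needed = SYNAPSES * BYTES_PER_SYNAPSE  # total bytes for the bare connectome

    baseline = 2**40                       # assume a machine with 1 TiB of RAM
    doublings = math.log2(needed / baseline)
    print(f"storage needed: {needed / 2**50:.2f} PiB")        # ~0.71 PiB
    print(f"capacity doublings from 1 TiB: {doublings:.1f}")  # ~9.5

Under those bare-bones assumptions the gap is closer to ten doublings than forty; the larger estimate makes sense if each synapse has to carry molecular-level state rather than a single weight, which is what the article's "molecular level" ambition would imply.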
  • Re:Moral objection (Score:5, Interesting)

    by Intropy ( 2009018 ) on Wednesday May 15, 2013 @06:11PM (#43735783)

When you create a child, you're on the hook for raising it. You don't start out knowing everything about it, so you have to learn about it at the same time you teach it. That's moral. A new form of life is necessarily going to require more learning on our part in order to raise it well. We will make mistakes. We will hurt it. But that's life. The only realistic other option is not to create it to begin with. Better to exist imperfectly than not at all.

  • by Animats ( 122034 ) on Wednesday May 15, 2013 @06:58PM (#43736187) Homepage

    From the article:

    "There are too many things we don't yet know," says Caltech professor Christof Koch, chief scientific officer at one of neuroscience's biggest data producers, the Allen Institute for Brain Science in Seattle. "The roundworm has exactly 302 neurons, and we still have no frigging idea how this animal works."

That's the problem. Just because we can extract the wiring diagram doesn't mean the components are well understood yet. Also, if we understood the components and how to wire them up, it would be cheaper to just build hardware. Simulating neurons is slow (a toy sketch after this comment gives a feel for why). It's like running SPICE instead of building circuits. Works, but there's about a 1000x or worse speed, power, and cost penalty. GPUs are often simulated at the gate level before making an IC; NVidia uses twenty or thirty racks of servers to simulate one GPU during development.

What bothers me about claims of strong AI is that I've heard it before. Ed Feigenbaum, the "expert systems" guy at Stanford, was running around in the 1980s promising Strong AI Real Soon Now if only he could get funding for a giant national AI lab headed by him. He even testified before Congress on that. Expert systems were a dead end.

    Rod Brooks from MIT went down this road too. His COG project [wikipedia.org] had a robotic head and some arms, some facial expressions, and a lot of hype. Work ceased on that embarrassment in 2003. He'd done good artificial insect work, but the jump to human level was way too big.

    This is the hubris problem in AI. Too many people have approached this claiming their One Big Idea would lead to strong AI. So far, not even close.

    All the mammals have similar DNA and brain architecture. A mouse brain is about 1g; a human brain is about 1000g. So build a simulated mouse brain and demonstrate it works, or STFU.
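For a feel for what "simulating neurons is slow" means in practice, here is a toy leaky integrate-and-fire update in Python; it is a generic textbook model with made-up constants, not the model Markram's project (or anyone else) actually uses:

    import numpy as np

    # Toy leaky integrate-and-fire step: every neuron's state must be
    # advanced every timestep, whereas a physical neuron "computes" for free.
    N = 1_000_000              # neurons (made-up scale)
    DT, TAU = 1e-4, 20e-3      # 0.1 ms timestep, 20 ms membrane time constant
    V_THRESH, V_RESET = 1.0, 0.0

    def step(v, input_current):
        """Advance all membrane potentials by one timestep."""
        v = v + DT / TAU * (input_current - v)  # leaky integration
        spiked = v >= V_THRESH
        v[spiked] = V_RESET                     # reset neurons that fired
        return v, spiked

    # One step over a million neurons already touches megabytes of state,
    # and a real run repeats this ~10,000 times per simulated second.
    v, spiked = step(np.zeros(N), np.random.rand(N))
    print(f"{spiked.sum()} neurons spiked this step")

And this is only the cheap part: the dominant cost in a real run is propagating spikes across the synapses (100 trillion of them in the human case), which this sketch omits entirely.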
