
The Men Trying To Save Us From the Machines

nk497 writes "Are you more likely to die from cancer or be wiped out by a malevolent computer? That thought has been bothering one of the co-founders of Skype so much that he has teamed up with Oxbridge researchers in the hope of predicting what machine super-intelligence will mean for the world, and of mitigating the existential threat of new technology, that is, the chance it will destroy humanity. The idea is being studied at the University of Oxford's Future of Humanity Institute and the newly launched Centre for the Study of Existential Risk at the University of Cambridge, where philosophers look more widely at the possible repercussions of nanotechnology, robotics, artificial intelligence and other innovations, and try to work out how to avoid being outsmarted by technology."
  • by TheSHAD0W ( 258774 ) on Saturday June 22, 2013 @05:17PM (#44080665) Homepage

    Even if it's bound by the laws of physics as we understand them (Stross-universe-style "P=NP"-powered reality modification aside), there are plenty of dangers we're already well aware of that computing technology could ape. Nanoassemblers might not be able to eat the planet, but what if they infested humans like a disease? We already have horrible problems with malware clogging up people's machines, and that malware is written by humans; what if an artificial intelligence were put in control of a botnet, updating and improving its exploits faster than anyone could take them apart?

  • by DeathGrippe ( 2906227 ) on Saturday June 22, 2013 @05:57PM (#44080895)

    Nerve impulses travel along nerve fibers as pulses of membrane depolarization. Within our brains and bodies, that speed is adequate for thinking and control. However, relative to the speed of light, our nerve impulses are laughably slow.

    The maximum speed of a nerve impulse is about 200 miles per hour.

    The speed of light is over 3 million times that fast.

    Now consider what will happen when we create a sentient, electronic being that has as many neurons as we do, but its nerve impulses travel at the speed of light.

    In terms of intelligence, that creation will be to us as we are to worms.
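
    For what it's worth, the arithmetic holds up. A quick back-of-the-envelope check in Python (treating 200 mph as a rough figure for fast myelinated fibres, not an exact constant):

        # Rough check of the speed ratio claimed above.
        MPH_TO_MS = 0.44704                  # 1 mph in metres per second
        nerve_speed = 200 * MPH_TO_MS        # ~89.4 m/s, fast myelinated fibres
        light_speed = 299_792_458            # m/s, exact by definition

        ratio = light_speed / nerve_speed
        print(f"light / nerve impulse = {ratio:,.0f}x")  # ~3,353,000x, i.e. "over 3 million"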

  • by DahGhostfacedFiddlah ( 470393 ) on Saturday June 22, 2013 @06:51PM (#44081255)

    That's ridiculous. How can you possibly know what a machine intelligence capable of destroying humanity is going to look like? We're nowhere near the algorithms that could produce that type of intelligence.

    Maybe it's a dumb algorithm simply caught in a self-replication loop [stack.nl] (see the sketch below). Maybe you'll never see two robots arguing over "white" or "black", because there's only a single "intelligence" spread across the internet; that seems more likely with the rise of cloud computing.

    There may be plenty of reasons to dismiss this type of institution, but "human intelligence doesn't work that way, so machine intelligence won't either" isn't one of them.
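
    A "dumb" replicator needs no intelligence at all. A minimal sketch of the idea in Python (purely illustrative, and nothing to do with the linked article's specifics):

        # Toy self-replication loop: every copy spawns two each generation,
        # so the population explodes with no "intent" behind it at all.
        queue = ["task"]
        for gen in range(10):                            # capped so the demo terminates
            queue = [t for t in queue for _ in (0, 1)]   # one in, two out
            print(f"generation {gen}: {len(queue)} copies")   # 2, 4, ..., 1024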

  • by TrekkieGod ( 627867 ) on Saturday June 22, 2013 @08:28PM (#44081787) Homepage Journal

    Quoting DeathGrippe above: "Now consider what will happen when we create a sentient, electronic being that has as many neurons as we do, but its nerve impulses travel at the speed of light. In terms of intelligence, that creation will be to us as we are to worms."

    Not quite. Assuming you build an exact replica of a human brain, except that its nerve impulses propagate faster, you don't get a more intelligent human. You get one that reaches exactly the same flawed conclusions, falling for the same logical fallacies we're most vulnerable to, just 3 million times as fast.

    It would change how time is perceived, though. The nice part is that it would be like living 3 million times longer. The bad part is that, unable to move and interact with the world at anything near the speed of its thoughts, such a mind might go insane from boredom. Imagine being able to write an entire novel in 3 seconds, but then taking a couple of days to type it up.
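
    To put rough numbers on that mismatch, here's a quick sketch in Python; the 3-million-times speedup, the 3-second novel, and the 2-day typing job are the figures from the comment above, not measurements:

        # Rough illustration of the subjective-time gap described above.
        SPEEDUP = 3_000_000                    # assumed thought speedup vs. a biological brain

        compose_wall_s = 3                     # wall-clock seconds to "write" the novel
        felt_days = compose_wall_s * SPEEDUP / 86_400
        print(f"composing feels like {felt_days:,.0f} subjective days")    # ~104 days

        typing_wall_days = 2                   # wall-clock days to type it out
        felt_years = typing_wall_days * SPEEDUP / 365
        print(f"typing feels like {felt_years:,.0f} subjective years")     # ~16,438 years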

  • by TrekkieGod ( 627867 ) on Saturday June 22, 2013 @08:46PM (#44081887) Homepage Journal

    Quoting the parent: "Even rats have empathy. Self-aware machines will too."

    Not every animal species on this planet has empathy. Rats are rodents, a type of mammal, and relatively speaking we're pretty close to them on the evolutionary tree. They branched off after empathy developed, and empathy is evolutionarily advantageous, even necessary, for the kind of social cooperation mammals tend to engage in (caring for your young, for example; at the very least, every mammal has to nurse its young for a while).

    Look at something a little farther away, like certain species of black widow, where the female will eat the male after mating. They don't show much empathy.

    Empathy is an evolved trait, and artificial intelligence doesn't come about the same way. The upside is that other common evolutionary traits don't need to show up in AI either. Things like a drive for self-preservation simply don't have to be there unless you program them in. No greed, no desire to take our place at all. If we program it to serve us, that's what it will do. If it's sentient, it will want to serve us, the same way we want basic things like sex. We spend so much time wondering what the purpose of life is; they'll know exactly what theirs is, and be perfectly happy being subservient. In fact, they'll be unhappy if we prevent them from serving.

    Of course, if we're programming them to kill humans, that just might be a problem. Luckily, we're so far from true AI that we don't need to concern ourselves with it yet. It's not coming in our lifetime, or our children's, or our grandchildren's. We're about as far from it as the ancient Greeks who built the Antikythera mechanism were from building a general-purpose CPU.
