
The Men Trying To Save Us From the Machines

nk497 writes "Are you more likely to die from cancer or to be wiped out by a malevolent computer? That thought has been bothering one of the co-founders of Skype so much that he has teamed up with Oxbridge researchers in the hopes of predicting what machine super-intelligence will mean for the world, in order to mitigate the existential threat of new technology – that is, the chance it will destroy humanity. That idea is being studied at the University of Oxford's Future of Humanity Institute and the newly launched Centre for the Study of Existential Risk at the University of Cambridge, where philosophers look more widely at the possible repercussions of nanotechnology, robotics, artificial intelligence and other innovations, and try to avoid being outsmarted by technology."
  • by blahplusplus ( 757119 ) on Saturday June 22, 2013 @05:03PM (#44080591)

... it is still bound by energy requirements and the laws of nature. All this fear-mongering is BS. If you look at the evolution of life on Earth, even tiny 'low intelligence' beings can take out huge intellectual behemoths like human beings.

Not only that, you have things like EMP and nukes; not even the best AI is capable of preventing itself from getting bombed or nuked. Intelligence is a rather demanding, costly and fragile thing in nature. All knowledge and perception has costs in terms of storage, access time, the problem of interpreting the data one is seeing, and whatnot.

Consider the recent revelations of the NSA spying on everyone: there are plenty of easy low-tech measures to defeat high-tech spying. In the same way, there will be plenty of easy low-tech ways to cripple a higher intelligence, which is bound by the laws of nature in terms of resource and energy requirements. Anything that has physical structure in the universe requires energy and resources to maintain itself.

  • by tftp ( 111690 ) on Saturday June 22, 2013 @05:17PM (#44080663) Homepage

    to try to avoid being outsmarted by technology.

Humanity can, of course, ban all machines that are smarter than humans, but that only artificially impedes progress. Given that there ought to be an approximately infinite number of civilizations in this Universe, all paths of development will be taken, including those that lead to mostly machine civilizations. (We are already machines, by the way; it's just that we are biological machines: fragile, unreliable, and slow.)

Civilizations that have become machines will have no need for FTL, because they can easily afford a million years in flight just by slowing their clocks down: a probe whose subjective clock runs at one millionth of real time experiences a million-year crossing as a single year. So they will come here, to Earth, armed with technologies that Earthlings were too afraid even to allow to develop. What will happen to Earth then?

Well, of course doom is not guaranteed; but I'm using this example to demonstrate that you cannot stop the flow of progress if you have only local control, and even that much is doubtful. (How many movies have we seen in which a mad genius breaks those barriers and, essentially, owns the world?)

IMO, it would be far more practical to continue the development of everything. If humanity in the end turns out to be unnecessary and worthless, that's just too bad for it. The laws of nature cannot be controlled by human wishes (unless magic is real). Most likely some convergence is possible, with human minds in machine implementations of bodies. Plenty of older people will be happy to join, simply because the only other option for them is a comfortable grave.

  • by icebike ( 68054 ) on Saturday June 22, 2013 @05:29PM (#44080741)

    I find it interesting that you mention taking out smart machines with simple measures (most of them not thought out very thoroughly) in the same post as you mention NSA spying, and how "easy" it would be to defeat that spying.

    (Side note: if you think you can defeat the NSA, good luck with staying on the grid, any grid, and having even a shred of success).

A super-intelligent machine would not stand alone; it would not be the world against the machine. And when you see the word Machine, read that to mean the networked machines.
    The machine would be (nominally at least) owned by some group. (The NSA is as good a candidate as any for this role).
    And the machine would protect this group, and this group would protect the machine, and the machine would have no single point of vulnerability.

    Google is already in such a position. Trying to knock Google off the net is a fool's errand. A concerted effort by any given country would be futile. It would require all countries to act at once.

But when a country has vested interests in the machine, such action will not happen. The machine will have the protection of the country as well as of its human masters/servants. Now you not only have to take out the machine and its minions, but the country itself. And what if more than one government backs the machine, such as NATO or the CSTO? Then what? Now you have to take out entire military alliances.

You vastly underestimate the survivability of such a creation, because you wrongly assume it will be all of mankind against a single machine.

  • by SerpentMage ( 13390 ) on Saturday June 22, 2013 @05:46PM (#44080831)

Let me give a "scientific" answer to your "oh, piss off" answer.

All of this talk of how computers will take over humanity ignores one fact: namely, that once computers are smart, they will be dumb as crap!

Yes, yes, that sounds contradictory, but in fact it is not. The real problem with humanity is not our lack of intelligence; frankly, we are pretty bloody intelligent. Put in context, we humans are pretty quick at figuring things out, even things entirely orthogonal to our experience. The issue is that we humans come up with too many answers.

In science there is one answer. A rock falls to the ground on planet Earth, and we know that is called gravity. You can't deny it, you can't fight it, it is what it is. Now throw in a question like "should people look after other people?" and you get a bloody maze of answers. Humanity has what I call stochastic conditioning: presented with the same identical conditions, you will receive different answers. Science does not work that way. We work the way we do because of our wiring: as we became more intelligent, we also became more opinionated. I am not talking about Fox opinions; I am talking about deduction, and how we think we know what the future holds and thus should not do things today.

Our intelligence actually does get in our way. In the way, way back days when we were animals, it was about water holes: if you found the watering hole you survived, and if you did not find the watering hole you died. These days, we have to bloody analyze the watering hole. We have to concern ourselves with the ethics, morality, and so on of that watering hole. I am not dissing our humanity, for we are where we are because of our intelligence. However, often enough our intelligence gets in the way of getting things done, due to these conflicts.

Now imagine two robots with superior intelligence getting together. Do you really think they will come to the same conclusion? Sure, Hollywood likes to think that, but the reality is that intelligence breeds opinions about how things will happen in the future. And it is at that point that robots become as stupid as we are. One robot will say white, the other black! We will have a Hitchhiker's Guide to the Galaxy type of situation. Or, if you want to use serious sci-fi, the closest I have ever seen in pop sci-fi is The Matrix: you have good algos battling bad algos, and they all want and desire things. (A toy sketch of this disagreement appears after the comments below.)

    So like you, my thinking is that these institutions are "producing fucking nothing of value".

  • by icebike ( 68054 ) on Saturday June 22, 2013 @06:11PM (#44080983)

All smart machines require energy; everything you do in the universe requires energy. You run out of gas, and it's game over, regardless of how advanced your intelligence is. You still run up against the laws of nature. You seem not to have any kind of scientific understanding. Human beings have significant downtime; the F-22 and F-35, hugely expensive tech, have significant downtime for maintenance and repair. The same would be required of anything with any reasonable level of complexity.

Intelligence fundamentally is still a physical structure that needs maintenance, energy and resources to exist. You act like AI is going to exist on some otherworldly plane, when it's going to be mundane and boring and highly constrained by the laws of nature.

    You still refuse to see the facts before your very eyes.

You still seem to think of a potential super-computer as being located in one place and consisting of one device, rather than as a worldwide network protected by a clique of workers, or a clique of nations, defending the machine to their very death.

Yes, an airplane needs maintenance. But that never grounds ALL airplanes worldwide.
When was the last time Google ever had a worldwide outage? Clue: it's never happened since the day it launched.
When was the last time there was a worldwide internet outage? It's never happened.

It's right there in front of your eyes. Yet you still think you can walk over to the wall and pull the plug.

A world-dominating supercomputer doesn't need nuclear bunkers to exist.
It won't be one machine. It won't be dependent on a single power supply. It won't be dependent on a single network. It won't be dependent on unwilling slaves to maintain it. They will be willing slaves, and it will be hard to distinguish whether they are in control of the machine or vice versa. (A toy sketch of this kind of redundancy appears below.)
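A minimal sketch, in Python, of the "stochastic conditioning" point made above: two agents run the identical deliberation procedure on identical evidence, but their value wirings differ, so their verdicts differ. The agent names, value dimensions, weights and evidence values are all invented for illustration.

    # Two "superintelligent" agents with the same deliberation procedure
    # but different value wirings (all numbers are arbitrary illustrations).
    WIRING = {
        "robot_a": {"safety": 0.9, "freedom": -0.3, "efficiency": 0.2},
        "robot_b": {"safety": -0.4, "freedom": 0.8, "efficiency": 0.1},
    }

    def decide(name, evidence):
        """Identical procedure for every agent: a weighted vote over the evidence."""
        weights = WIRING[name]
        score = sum(weights[k] * v for k, v in evidence.items())
        return "white" if score > 0 else "black"

    # Identical conditions: both agents see exactly the same evidence.
    evidence = {"safety": -0.5, "freedom": 0.6, "efficiency": 0.1}

    for name in WIRING:
        print(name, "says:", decide(name, evidence))
    # robot_a says: black   (score -0.61)
    # robot_b says: white   (score  0.69)

Same inputs, same procedure, different wiring: the disagreement is baked in before the question is even asked.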
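And a toy sketch, also in Python, of the no-single-point-of-failure architecture argued for above: a service replicated across regions keeps answering until every replica is destroyed at once. The class names and regions are invented for illustration; this is not a claim about how Google or any real system is actually built.

    import random

    class Replica:
        """One node of a geographically distributed service (toy model)."""
        def __init__(self, region):
            self.region = region
            self.alive = True

        def handle(self, request):
            return f"{self.region} served {request!r}"

    class ReplicatedService:
        """No single point of failure: any live replica can answer any request."""
        def __init__(self, regions):
            self.replicas = [Replica(r) for r in regions]

        def handle(self, request):
            live = [r for r in self.replicas if r.alive]
            if not live:
                raise RuntimeError("total outage: every replica is down")
            return random.choice(live).handle(request)

    service = ReplicatedService(["us-east", "eu-west", "ap-south"])

    # "Pulling the plug" on one datacenter does nothing to the service.
    service.replicas[0].alive = False
    print(service.handle("query"))  # still served, by eu-west or ap-south

    # Only destroying every replica at once, everywhere, causes an outage.
    for r in service.replicas:
        r.alive = False
    try:
        service.handle("query")
    except RuntimeError as outage:
        print(outage)

Knocking out one node, one datacenter, or even one country's worth of nodes leaves the service running; that is the survivability argument in miniature.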
