
The Men Trying To Save Us From the Machines

nk497 writes "Are you more likely to die from cancer or to be wiped out by a malevolent computer? That thought has been bothering one of the co-founders of Skype so much that he has teamed up with Oxbridge researchers in the hope of predicting what machine super-intelligence will mean for the world, and of mitigating the existential threat of new technology, that is, the chance it will destroy humanity. The idea is being studied at the University of Oxford's Future of Humanity Institute and the newly launched Centre for the Study of Existential Risk at the University of Cambridge, where philosophers look more widely at the possible repercussions of nanotechnology, robotics, artificial intelligence and other innovations, and try to avoid being outsmarted by technology."
  • by blahplusplus ( 757119 ) on Saturday June 22, 2013 @04:03PM (#44080591)

    ... it is still bound by energy requirements and the laws of nature. All this fear mongering is BS. If you look at the evolution of life on Earth, even tiny 'low intelligence' beings can take out huge intellectual behemoths like human beings.

    Not only that, you have things like EMP and nukes; not even the best AI is capable of thwarting getting bombed or nuked. Intelligence is a rather demanding, costly and fragile thing in nature. All knowledge and perception has costs in terms of storage, time to access, problems of interpreting the data one is seeing, and whatnot.

    Consider the recent revelations about the NSA spying on everyone: there are plenty of easy low-tech measures to defeat high-tech spying. In the same way there will be plenty of easy low-tech ways to cripple a higher intelligence, which is bound by the laws of nature in terms of resource and energy requirements. Anything that has physical structure in the universe requires energy and resources to maintain itself.

    • by TheSHAD0W ( 258774 ) on Saturday June 22, 2013 @04:17PM (#44080665) Homepage

      Even if it's bound by the laws of physics as we understand them (Stross-universe-like "P=NP"-powered reality modification aside) there are plenty of dangers out there we're well aware of which computing technology could ape. Nanoassemblers might not be able to eat the planet, but what if they infested humans like a disease? We're already having horrible problems with malware clogging up people's machines, and they're coded by humans; what if an artificial intelligence was put in control of a botnet, updating and improving the exploiters faster than anyone could take them apart?

      • "updating and improving the exploiters faster than anyone could take them apart?"

Not likely, since there are trivial ways around such an idea; for instance, any machine that is compromised STILL requires electricity. It's highly likely AI will be very computerized (flip a switch to reboot) and come with simple kill switches. Not only that, laws would be enforced if any machine became sufficiently advanced, i.e. you'd have AI crime laws on the books: if you do this, we unplug you, we don't give you energy, etc.

    • by khasim ( 1285 )

      From TFA:

The institute was sparked in part by a conversation between Price and Tallinn, during which the latter wondered, "in his pessimistic moments", if he's "more likely to die from an AI accident than from cancer or heart disease".

      Someone doesn't know the difference between "pessimistic" and "optimistic".

      In short, the answer is "no".

Not only that, you have things like EMP and nukes; not even the best AI is capable of thwarting getting bombed or nuked.

      https://en.wikipedia.org/wiki/Colossus:_The_

      • by HiThere ( 15173 )

        FWIW, *both* military and factories are already well hooked up to proto-AIs. The current ones aren't really AI, but they already are looked upon as infallible decision makers by managers who don't want to take responsibility. And they're right enough of the time that that's not an unreasonable response. It's true, their decisions are tightly focused, but High Frequency Trading is only the most obvious example. They are spread throughout the decision making process.

        It's my opinion that the first true gen

        • by khasim ( 1285 )

          FWIW, *both* military and factories are already well hooked up to proto-AIs.

          I think you're using an overly broad definition of "proto-AI".

          It's true, their decisions are tightly focused, but High Frequency Trading is only the most obvious example.

          Again, I think your definition is overly broad. HFT just follows the set algorithms (written by humans) as fast as possible within the limits of the connection to trading computers.

          It's my opinion that the first true general purpose AI will arise by accident.

          Possibl

          • I think you're using an overly broad definition of "proto-AI".

            I'd give him some leeway [youtube.com].

          • by HiThere ( 15173 )

Self awareness is necessary in any entity that is designed to interface with the external world. It exists at a very minimal level even in a thermostat. There is a (nearly) smooth slope from there up through self-driving cars to ?, with increasing self awareness all the way. At its basis, self-awareness is just homeostasis. Goals (outside of homeostasis) are another, much more difficult, matter. But note that even C. elegans is able to manifest such goals. It's harder to recognize them when they come in

      • Unless the AI is hooked into military command and control infrastructure OR controls a manufacturing plant then it will be more of a novelty than a threat.

        My computer is already connected to the Sherline CNC mill in my garage.

    • by icebike ( 68054 ) on Saturday June 22, 2013 @04:29PM (#44080741)

      I find it interesting that you mention taking out smart machines with simple measures (most of them not thought out very thoroughly) in the same post as you mention NSA spying, and how "easy" it would be to defeat that spying.

      (Side note: if you think you can defeat the NSA, good luck with staying on the grid, any grid, and having even a shred of success).

A super-intelligent machine would not stand alone. It would not be the world against the machine. (When you see the word "machine" here, read that to mean the networked machines.)
The machine would be (nominally at least) owned by some group. (The NSA is as good a candidate as any for this role.)
And the machine would protect this group, this group would protect the machine, and the machine would have no single point of vulnerability.

      Google is already in such a position. Trying to knock Google off the net is a fool's errand. A concerted effort by any given country would be futile. It would require all countries to act at once.

But when the country has vested interests in the machine, such action will not happen. The machine will have the protection of the country as well as of its human overlords/servants. Now you have to take out not only the machine and its minions, but the country itself. And if more than one government backs the machine? Such as NATO, or CSTO? Then what? Now you have to take out entire military alliances.

You vastly underestimate the survivability of such a creation, because you wrongly assume it will be all of mankind against a single machine.

      • by khasim ( 1285 )

        http://www.guardian.co.uk/world/2011/apr/06/georgian-woman-cuts-web-access [guardian.co.uk]

Now you have to take out not only the machine and its minions, but the country itself. And if more than one government backs the machine? Such as NATO, or CSTO? Then what? Now you have to take out entire military alliances.

        Then you're not talking about a machine apocalypse but rather business-as-usual. It's not until the machine turns against its creators/owners that there is a problem. Otherwise it is doing exactly what it was spec'ed t

        • by HiThere ( 15173 )

          I've always considered "turns against" to be an unlikely scenario. I envision the machine becoming an "infallible advisor" to such an extent that the leader (CEO, President, Prime Minister, whatever) becomes a figurehead, and that all middle management is progressively replaced. And the system will be so designed that if the figurehead stops obeying the "suggestions" of the machine, s/he will be found incompetent, and replaced.

          FWIW, we seem to be increasingly headed in this direction, limited only by cost

          • by khasim ( 1285 )

            I've always considered "turns against" to be an unlikely scenario.

            The first problem is that you've skipped over how it was created and you're focusing on how it took over once it was created.

            And if you're going to do that then you can replace "AI" with "aliens" or "mutants" or "witches" or "Satan".

            I envision the machine becoming an "infallible advisor" ...

            And if that was what it was intended to do then it is operating within spec. So what is the difference between that system and a non-AI system designed to

            • an AI designs a more efficient car. A non-AI expert system designs a more efficient car. What is the difference between the AI and the non-AI?

The AI will be horribly bored and have this terrible pain in all the diodes down its left side

For the last couple of decades it's been virtually impossible to start work on a major engineering project without running all sorts of simulations, and on the whole that has been a GoodThing(TM). Like engineers, advisers won't go away just because they have new tools; they will simply give more sophisticated advice. But no matter what level of technology the advisers use, you need public servants who are not afraid to "speak truth to power". James Hansen is a good example of that type of public servant from the
          • by Maritz ( 1829006 )

            A machine that is created with a certain set of preferences shouldn't change those preferences no matter how smart it gets. Changes in goals wouldn't count as improvements.

            I find the human assumption (even Asimov did this) that intelligent machines will have a drive to conquer and dominate slightly amusing. We're most certainly projecting here. It's largely our chimp and lizard hind brains that give us those impulses.

            The truth is we have no idea what recursive self-improvement will lead to, if it is an AI t

      • "I find it interesting that you mention taking out smart machines with simple measures"

All smart machines require energy; everything you do in the universe requires energy. You run out of gas, it's game over regardless of how advanced your intelligence is. You still run up against the laws of nature. You seem not to have any kind of scientific understanding. Human beings have significant down time; the F-22 and F-35 - hugely expensive tech - have significant downtime for maintenance and repair. The same would be required of anything with any reasonable level of complexity.

        • by icebike ( 68054 ) on Saturday June 22, 2013 @05:11PM (#44080983)

All smart machines require energy; everything you do in the universe requires energy. You run out of gas, it's game over regardless of how advanced your intelligence is. You still run up against the laws of nature. You seem not to have any kind of scientific understanding. Human beings have significant down time; the F-22 and F-35 - hugely expensive tech - have significant downtime for maintenance and repair. The same would be required of anything with any reasonable level of complexity.

Intelligence fundamentally is still a physical structure that needs maintenance, energy and resources to exist. You act like AI is going to exist on some otherworldly plane, when it's going to be mundane and boring and highly constrained by the laws of nature.

          You still refuse to see the facts before your very eyes.

          You still seem to think of a potential super-computer as being located in one place, consisting of one device, rather than a world wide network protected by a clique of workers, or a clique of nations, defending the machine to their very death.

Yes, an airplane needs maintenance. But that never grounds ALL airplanes worldwide.
When was the last time Google ever had a worldwide outage? Clue: it's never happened since the day it was launched.
When was the last time there was a worldwide internet outage? It's never happened.

It's right there in front of your eyes. Yet you still think you can walk over the wall and pull the plug.

A world-dominating supercomputer doesn't need nuclear bunkers to exist.
It won't be one machine. It won't be dependent on a single power supply. It won't be dependent on a single network. It won't be dependent on unwilling slaves to maintain it. They will be willing slaves, and it will be hard to distinguish whether they are in control of the machine or vice versa.

          • by Maritz ( 1829006 )
            One possible solution: don't create an AI that wants to dominate the world. Or if you're worried that someone will, make yours before they make theirs. ;)
Walk off the net. Google can't do much (unless one of its driverless cars runs you over). You and TheSHAD0W seem to think that networked computers are life, the universe and everything. The world isn't like that. Most of the human population at present isn't connected to the Internet.

Unless SkyNet Jr. gets a hold of the vast majority of physical infrastructure, its impact will be rather limited. A couple of RPGs could take out a Google network center. A pissed off A-10 pilot could take out the entire

Human minds are pretty intelligent, and we don't have any of those problems. We require relatively little energy, resources, and space. Imagine designing one much better (evolution is extremely inefficient; it simply had more time) and many, many times bigger, with all the advantages modern computers have on top of it. If you can think of a way to destroy it, it can think of it too and, being many times smarter than you, come up with a way around it. Hiding, shielding itself, designing a virus that kill
      • "Human minds are pretty intelligent, and we don't have any of those problems."

We do; think about how trivial it is to kill another human being, or for there to be developmental problems, or to get sick. You've obviously not paid much attention to what I said.

  • by Anonymous Coward

    Humanity's biggest enemy is humanity itself. And maybe space rocks.

A long time ago I came to the conclusion that if mankind doesn't become the engine of its own replacement, the future of intelligence in the known part of the universe will be bleak. It seems sort of childish to ignore the possibility as one of the logical routes to progress.
      • Even assuming Earth is the only living world in our galaxy (which seems to me rather unlikely, but whatever), why do you assume humanity would be the only intelligence to arise spontaneously? The Earth likely has at least a couple billion years more during which it will be hospitable to complex life - whereas two billion years ago our ancestors had only just evolved a cell nucleus. 300 million years ago our ancestors were only just moving on to the land. And a measly 65 million years ago our ancestors were

        • why do you assume humanity would be the only intelligence to arise spontaneously?

          I don't, therefore your question is meaningless.

          The Earth likely has at least a couple billion years more during which it will be hospitable to complex life

          No, it doesn't. Astrophysics 101.

          Even if we somehow wiped out all multi-cellular life on the planet there would be plenty of time for complex life to evolve all over again.

          Actually, I'm quite sceptical about that. But without a dataset larger than 1, any speculation on that topic is merely intellectual masturbation.

          • >Astrophysics 101
            How do you figure? The sun isn't expected to become a red giant for around 5 billion years, and I hadn't heard of any instabilities in Earth's orbit that would have drastic effects on that timescale. If you have other data I'd love to hear it.

Fair enough; it took what, as much as a billion years to go from nucleated cells to multi-cellular life the first time? So it's perhaps one of those things that doesn't happen often. And you're absolutely right that it's wild speculation. However

            • How do you figure?

              Because I had astrophysics in high school. The Hertzsprung-Russell diagram, stellar evolution of main sequence stars, the works. This planet will most certainly become uninhabitable for complex life *long* before it starts turning into a red giant - unless by "complex life", you mean extremophile bacteria. I believe that the Sun increases its radiative output roughly by one percent every hundred million years. It could easily become uninhabitable for humans or human-like beings as early as two or three hund
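For what it's worth, the one-percent-per-hundred-million-years figure quoted above compounds roughly like this (a toy extrapolation, not a stellar-evolution model):

```python
# Toy compounding of the ~1%-per-100-Myr increase in solar output
# quoted in the comment above; real models are more involved.
rate = 0.01
for myr in (300, 1000, 2000):            # millions of years from now
    factor = (1 + rate) ** (myr / 100)
    print(f"+{myr} Myr: solar luminosity x{factor:.2f}")
```

Even on this crude reckoning, the roughly ten-percent brightening about a billion years out is in the range often cited as fatal for complex surface life.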

    • The authors had the real threat in their sights, but missed it. From TFA:

      A super-intelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we construct a building.

      Think how it might be to compete for resources with the dominant species.

      The ants outnumber us by perhaps a factor of 20 in mass, and a factor of 10 million in numbers. Are we really the "dominant species", or are we just deceiving ourselves? And we're not "taking them into account"? Be afraid, be very afraid...

    • I think infectious disease has us beat at beating ourselves. There will always be a microbe with our name on it. Always always always always always always, till the end of our lineage. No matter how lovey dovey we get.
  • by FudRucker ( 866063 ) on Saturday June 22, 2013 @04:09PM (#44080621)
when a group of people have control of business, finance, commerce and information, their corruption, greed and hunger for power cause them to abuse the system they were trusted with in order to feed their fascist kleptocratic empire. When they are caught, they lose the trust of the rest of the world that trades in this global market, and then nobody wants any more business dealings with this corrupted, greedy, power-hungry group of people, so they end up collapsing under the weight of their own greed & stupidity (much like what the USA/UK/Israel will do within the next few years)

Are you listening, NSA? I hope so, because this message is for you too...
  • by tftp ( 111690 ) on Saturday June 22, 2013 @04:17PM (#44080663) Homepage

    to try to avoid being outsmarted by technology.

Humanity can, of course, ban all machines that are smarter than humans. But that only artificially impedes progress. Given that there ought to be an approximately infinite number of civilizations in this Universe, all paths of development will be taken, including those that lead to mostly machine civilizations. (We are already machines, by the way; it's just that we are biological machines: fragile, unreliable, and slow.)

Civilizations that became machines will have no need for FTL, because they can easily afford a million years in flight by just slowing their clocks down. So they will come here, to Earth, armed with technologies that Earthlings were too afraid to even allow to develop. What will happen to Earth?

Well, of course doom is not guaranteed; but I'm using this example to demonstrate that you cannot stop the flow of progress if you only have local control, and even that much control is doubtful. (How many movies have we seen where mad geniuses break those barriers and, essentially, own the world?)

    IMO, it would be far more practical to continue the development of everything. If humanity in the end appears to be unnecessary and worthless, it's just too bad for it. The laws of nature cannot be controlled by human wishes (unless magic is real.) Most likely some convergence is possible, with human minds in machine implementations of bodies. Plenty of older people will be happy to join, simply because the only other option for them is a comfortable grave.

    • We already have machines that are smarter than humans, if you mean 'better at one particular job than humans'. We call them tools. If by smarter you mean 'more intelligent' I'm afraid you've got a lot longer to wait since we don't even have a bare definition for intelligence never mind serious attempts to recreate it.

      • by tftp ( 111690 )

        we don't even have a bare definition for intelligence never mind serious attempts to recreate it.

We may not be able to define intelligence, but we certainly can compare it in many aspects, ultimately covering all areas of human activity. If a machine can multiply 4798237432 by 893479238472 faster than you can (that's true today), and if it can independently compose a poem that many find interesting (there have been experiments), and if it can sing a song that many listeners find pleasant, and if it can des
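The arithmetic half of that comparison is certainly not in doubt; any interpreter does the quoted multiplication instantly:

```python
# The exact multiplication from the comment above.
a = 4798237432
b = 893479238472
product = a * b
print(product)            # a 22-digit integer, computed in well under a microsecond
print(len(str(product)))  # 22
```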

      • by Maritz ( 1829006 )
        One of the problems with that is that we could potentially recreate intelligence without even understanding it. If you create an artificial model of the brain with enough resolution and everything connected up right, you'll basically get a brain. Sure you mightn't understand it fully but there it is. If you know enough to wire it up correctly it will work. Suggesting otherwise in any way is basically asking to exit the materialist paradigm and claim that there's something special about the meat acting as su
    • by SeaFox ( 739806 )

      to try to avoid being outsmarted by technology.

      The humanity can, of course, ban all machines that are smarter than humans.

      Yeah, because if there is one thing that will stop something from happening, it's making it against the law.

  • Drones. Sure, probably not much of a threat if you're living in the West. But in the same way that the history of cybernetics begins with walking sticks and hearing aids, the history of man vs machine is going to start with the murder by Americans of unconvicted, if highly tanned, individuals in Africa and Asia.

    • by gl4ss ( 559668 )

      Drones. Sure, probably not much of a threat if you're living in the West. But in the same way that the history of cybernetics begins with walking sticks and hearing aids, the history of man vs machine is going to start with the murder by Americans of unconvicted, if highly tanned, individuals in Africa and Asia.

drones are just sophisticated V2s. That's not what this is about.

These loonies are afraid of the day a computer makes the actual decision to *KILL ALL HUMANS* - not someone else, but the computer itself forms that opinion and starts executing things to make it happen. It's a stupid institute if you put it that way. This institute is not about mines, remote-controlled killers, automatons or old-school stuff like that, but about stuff that's wacko to worry about today. Shitty snobs wasting everyone's money th

    • by HiThere ( 15173 )

      You left out "at the moment".

      Drones are probably being developed largely as troops that won't revolt when ordered to attack civil unrest...at home. That they are first used against foreigners while under development is just typical. Some police forces have already been using them at home. When they are developed and debugged...well, ...

  • We'll manage to do it long before we are able to make an intelligent machine.

  • by The Cat ( 19816 ) * on Saturday June 22, 2013 @04:22PM (#44080701)

    We can't even make a word processor that doesn't shit the bed every two hours. Super-intelligent machines my ass.

    • exactly. And then there's the more obvious things like RESOURCES. As it is, the Empire of Global Capitalism has to play some very dirty politics to get children to kill their families and villages in order to force other kids into hellish tunnels to scrape together enough coltan for the machine's computer brains. There isn't enough power or fuel in these remote regions to run some hyper computer overlord machine thing, and you can forget about invading the place - the people there are much better adapted th
  • by Progman3K ( 515744 ) on Saturday June 22, 2013 @04:24PM (#44080707)

Instead of asking questions like that, why don't you build Skype and any other software you're working on to NOT have backdoors?

    That way, if ever the machines DO try to take over the world, they won't have a bunch of convenient control channels in all the important software to do so.

  • The typical way to mitigate such threats is to not put it in control of all of our weapon and defense systems, and give it vague orders like 'purge the infidels.' Seriously, humanity can build silicon life any way it wants, billions and trillions of permutations and forms and functions....and what do we do with it? We put a gun on its head, lasers in its eyes, and tell it to go out there and kill the other humans we don't like. It's not the machines we need to be afraid of, it's ourselves; we're the ancient

Aside from the apocalypse, that is one of the things I worry about. Shills are bad enough today, but imagine if they could be deployed programmatically; just about any form of online speech could be drowned out with ease. That is assuming the government/corporations aren't already using AI to accomplish pervasive censorship.

    Before this gets out of hand, we need to head it off by deploying peer to peer communications systems with a pervasive trust model. This doesn't necessarily preclude anonymity or

  • by Blaskowicz ( 634489 ) on Saturday June 22, 2013 @04:53PM (#44080873)

With about ten nations armed with nuclear weapons, I wonder how machines would take over every one of them. You have to take over Russia, China, the US, France, etc., but some nation may trigger nuclear war as a desperate move, or the machines may deliberately accept nuclear war in a bid to survive it, while not necessarily having a goal of killing us all.

Instead, maybe machines will try to take over politically in every country, one by one. It would be funny if tech superminds could rise to power through democracy, in fair and respected elections. Either way, I like to think that super machines holding most high-level political power is probably a desirable outcome; we could end up living in some kind of new USSR, but without corruption and with respect for the environment and life. Machines would take care of energy production and storage, and close down all oil wells and coal mines for us. They will even put us to work, hopefully on voluntary terms, if they determine some physical and intellectual activity is beneficial to us.

    Machines should rule us and not the other way around, I guess that will be better than to be ruled by the suits, ties and kings like it is today.
The other question is: what's a supermind? What about superminds competing with each other, and especially, how do you compare two vastly different superminds, independently originated? They will be as strongly or more strongly different from each other than from a human. It will be a mess. Each supermind, or at least the first one, will have to run the same inquiry that "Oxbridge" is doing. We also have no fucking idea if a supermind can be governed by a "prime directive" of some sort: if Skynet emerges at the NSA, will it stay true to them for ten minutes, ten years or eternally, or will it betray the organization that hosts it, potentially committing suicide on the way?
How can the supermind deal with backups, copies and archives of itself? Will it suffer dementia, schizophrenia or even addictions? No idea; I'll bail out myself by saying it's all unpredictable.

  • by DeathGrippe ( 2906227 ) on Saturday June 22, 2013 @04:57PM (#44080895)

    Nerve impulses travel along nerve fibers as pulses of membrane depolarization. Within our brains and bodies, this is adequate speed for thinking and control. However, relative to the speed of light, our nerve impulses are laughably slow.

    The maximum speed of a nerve impulse is about 200 miles per hour.

    The speed of light is over 3 million times that fast.

    Now consider what will happen when we create a sentient, electronic being that has as many neurons as we do, but its nerve impulses travel at the speed of light.

    In terms of intelligence, that creation will be to us as we are to worms.
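The ratio in the comment above holds up to a back-of-the-envelope check (200 mph for the fastest myelinated nerve fibres, against light in vacuum):

```python
# Signal-speed comparison from the comment above, figures rounded.
nerve_mph = 200                  # fastest myelinated nerve impulses
light_mph = 186_282 * 3600       # 186,282 miles per second, converted to mph

ratio = light_mph / nerve_mph
print(f"light is ~{ratio:,.0f} times faster")   # about 3.35 million
```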

    • by TrekkieGod ( 627867 ) on Saturday June 22, 2013 @07:28PM (#44081787) Homepage Journal

      Nerve impulses travel along nerve fibers as pulses of membrane depolarization. Within our brains and bodies, this is adequate speed for thinking and control. However, relative to the speed of light, our nerve impulses are laughably slow.

      The maximum speed of a nerve impulse is about 200 miles per hour.

      The speed of light is over 3 million times that fast.

      Now consider what will happen when we create a sentient, electronic being that has as many neurons as we do, but its nerve impulses travel at the speed of light.

      In terms of intelligence, that creation will be to us as we are to worms.

      Not quite. Assuming you build an exact replica of a human brain, except you speed up the nerve impulse propagation, you don't build a more intelligent human. You build a human that reaches the exact same flawed conclusions based on the logical fallacies we are most vulnerable to, but it would make the bad decisions 3 million times as fast.

      It might affect how one perceives time. The nice part is that we could feel like we live 3 million times longer. The bad part is that, unable to move and interact with the world at a speed anywhere near matching that of our thoughts, we might go insane out of boredom. Imagine being able to write an entire novel in 3 seconds, but having to take a couple of days to type it up.
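The "novel in 3 seconds" figure above is consistent with the claimed speedup; a sketch, where the 100-day and 100,000-word numbers are my illustrative assumptions rather than anything from the thread:

```python
# Subjective vs. wall-clock time at the claimed 3-million-fold speedup.
speedup = 3_000_000
thinking_days = 100                    # assumed mental effort for a novel
wall_seconds = thinking_days * 86_400 / speedup
print(f"novel 'written' in {wall_seconds:.1f} s of wall-clock time")

# Typing it out still happens at body speed:
words, wpm = 100_000, 60               # assumed length and typing rate
typing_days = words / wpm / 60 / 24
print(f"but ~{typing_days:.1f} days at the keyboard to transcribe")
```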

      • Ok, point taken.

        However, now consider that virtually every desktop computer could be the equivalent of one neuron, but with vastly more memory storage and data processing capabilities, and that every computer is connected to every other computer via this internet thing.

        Now suppose someone were to write a little program that would make these computers the actual equivalents of a conscious neural network, all connected together into one, gigantic sentient being, a super intelligent botnet.

Even rats have empathy. Self-aware machines will too. Lacking irrational emotions, hyper-intelligent machines will be more ethical and fair and nice than humans. You don't have to worry about sentient machines running amok. You have to worry about pre-sentient kill bots programmed by the same assholes that do shit like PRISM.

    • You don't have to worry about sentient machines running amok. You have to worry about pre-sentient kill bots programmed by the same assholes that do shit like PRISM.

      Here's the thing. Let's say, in classic sci-fi fashion, that if you get enough of these kill bots networked together, they actually develop intelligence. They're going to be polite to one another, but it doesn't stand to reason that they'll care about us if their parts can be turned out by automated machines as well.

    • by TrekkieGod ( 627867 ) on Saturday June 22, 2013 @07:46PM (#44081887) Homepage Journal

      Even rats have empathy. Self-aware machines will too.

      Not every animal species on this planet has empathy. Rats are rodents, a type of mammal; relatively speaking, we're pretty close to them in the evolutionary tree. They branched off after empathy developed, which is evolutionarily advantageous and necessary for the kind of social cooperation mammals tend to engage in (taking care of one's young, for example; at the very least, any mammal needs to feed its young milk for a period of time).

      Look at something a little farther away, like certain species of black widow, which will eat the male after mating. They don't have much empathy.

      Empathy is an evolutionary trait. Artificial intelligence doesn't come about the same way. The advantage is that other common evolutionary traits don't need to show up in AI either. Things like a desire to protect itself simply don't have to be there unless you program them in. No greed, no desire to take our place at all. If we program it to serve us, that's what it will do. If it's sentient, it will want to serve us, the same way we want basic things like sex. We spend so much time wondering what the purpose of life is; they'll know what theirs is, and be perfectly happy being subservient. In fact, they'll be unhappy if we prevent them from being subservient.

      Of course, if we're programming them to kill humans, that just might be a problem. Luckily, we're so far away from true AI that we don't need to concern ourselves with it. It's not coming in our lifetime. It's not coming in our children's lifetime, or in our grandchildren's lifetime. We're about as far away from it as the ancient Greeks who built the Antikythera device were from building a general-purpose CPU.

    • Even rats have empathy. Self-aware machines will too.

      Even if empathy were a necessity of self-aware intelligence (it's not), the empathetic machines would have empathy for... other machines. They would find the mass graves full of old toasters, refrigerators, and Apple IIs, and punish us for our mass genocides.

    • by Maritz ( 1829006 )

      Empathy is an evolutionary trick to get animals to play nicely together in teams and to assist and look after animals that might well share genes with them. Self-aware machines won't have it unless (a) we specifically design it in, or (b) we make the AI by building a replica of the brain.

      To be honest, if you were making the latter, it would probably be best to think about leaving out some of the more primitive brain structures, as these tend to be the areas our less desirable impulses come from.

  • "... really good at achieving the outcomes it prefers," he says. "So good it could steamroll over human opposition. Everything then depends on what it is that it prefers, so, unless you can engineer its preferences in exactly the right way, you're in trouble."

  • we have a national philosophers' strike on our hands.

    Bert

  • These people must have watched the Terminator series, with its "self-aware" AI system Skynet. IMO, the threat of nuclear war triggered by malfunctioning defense computers is far greater. There are several well-documented instances of nuclear near-misses caused by machine failure.

    Are machines more dangerous when they become super-intelligent, or when they stay "stupid" and flawed?
    • Machines have killed hundreds of millions under the control of sociopath politicians and the corporations that have them in their pockets. We don't even have intelligent machines yet, and it is already clear where the danger lies.

      • by Livius ( 318358 )

        What I'm not understanding in all this is how they think artificial intelligence technology could produce an intelligence with less humanity than what corporations already achieve.

  • by Animats ( 122034 ) on Saturday June 22, 2013 @05:43PM (#44081217) Homepage

    We're likely to see this in the private sector first. A likely application would be a machine learning system used by investment funds, to decide how to optimally vote stock proxies. What that means is a machine that decides when to fire CEOs. If some fund starts getting better returns that way, it will happen.

    • What that means is a machine that decides when to fire CEOs. If some fund starts getting better returns that way, it will happen.

      Yeah, nobody drinks Brawndo, and the computer does that auto-layoff thing to everybody...

  • by gmuslera ( 3436 ) on Saturday June 22, 2013 @05:50PM (#44081243) Homepage Journal
    Don't assume that malign supercomputers will wipe us all out when that can be done adequately by human stupidity.
  • I've read many comments in this thread. Instead of answering them one by one, I'll just post one aggregated comment.

    First, the possibility of intelligent machines is slim. None of our present technology is able to achieve intelligence, mainly because we do not know what intelligence is. Furthermore, to be dangerous, they must be equipped with greed and (the illusion of) free will. It is most unlikely that someone would build that on purpose or by accident. In short, I think it is impossible to build such a machine.

    • First, the possibility of intelligent machines is slim. None of our present technology is able to achieve intelligence, mainly because we do not know what intelligence is. Furthermore, to be dangerous, they must be equipped with greed and (the illusion of) free will. It is most unlikely that someone would build that on purpose or by accident. In short, I think it is impossible to build such a machine.

      A rack of IBM servers can beat the best Jeopardy players on Earth. In a few years the same level of Watson will fit in a 1U. A few years later it will be on your smartphone. But that's just anecdotal evidence of one recent achievement in AI research; the actual threat is from self-improving systems of which Watson is not a member. But nearly all the technology is available now: Goedel machines [idsia.ch], if built, would simply try to achieve whatever goal they were programmed for while also searching for proofs

      • by Maritz ( 1829006 )
        It would work provided that the improvements further increased the system's ability to improve - I think this would happen, provided the improvements were drawing on a general idea of how 'mind' works, why inferential processes work etc.
      • by prefec2 ( 875483 )

        A rack of IBM servers can beat the best Jeopardy players on Earth. In a few years the same level of Watson will fit in a 1U. A few years later it will be on your smartphone. But that's just anecdotal evidence of one recent achievement in AI research;

        Watson is a great machine, and it represents a real achievement in AI. But it is not intelligent; it is just a big decision machine that uses Prolog for its reasoning. While its achievements are remarkable, it is still dumb as a doorknob. The problem is self-awareness and the ability to comprehend the world. Facts and reasoning are not all that is required to be considered intelligent.

        Nice, that you mentioned Gödel. His greatest achievement was a contribution to formal systems, where, in short, a language/system cannot be consistent and complete at the same time. This applies to Watson, but that limitation does not apply to humans or animals. Furthermore, machines are always bound by their programming, as you state yourself.

        • Nice, that you mentioned Gödel. His greatest achievement was a contribution to formal systems, where, in short, a language/system cannot be consistent and complete at the same time. This applies to Watson, but that limitation does not apply to humans or animals.

          This has been baldly asserted numerous times by many people. However, no one has presented the tiniest shred of evidence to support it.

        • Nice, that you mentioned Gödel. His greatest achievement was a contribution to formal systems, where, in short, a language/system cannot be consistent and complete at the same time. This applies to Watson, but that limitation does not apply to humans or animals. Furthermore, machines are always bound by their programming, as you state yourself

          Try saying "prefec2 can not consistently assert this sentence" to see if humans are not subject to the Incompleteness Theorem.

          While I concur with the last part, I do not think the brain is a deterministic thinking apparatus. First, to be self-aware, the brain and the body of a person interact; it is this connection which allows self-awareness to be built. However, it is not the only ingredient. Second, while a single nerve cell can be modeled mathematically, such a model is a large simplification, and each cell model is a non-deterministic system. In combination with others, it is able to solve problems, sometimes without prior knowledge, that are not computable and to which heuristics do not apply.

          Quantum mechanics has a deterministic, timeless [wikipedia.org] representation of the wavefunction of the Universe. Determinism does not preclude self-awareness or self-determination. I think it's more accurate to say that neurons have non-linear behavior and are therefore difficult to predict with accuracy. There is a threshold, however, at which computing power is sufficient to simulate a neur

  • I've always thought our main future threats are AI and man-made viruses. If AI wins, we'll be relegated to zoos. If man-made viruses win, we'll all die. I'm rooting for AI.

  • Studying (and trying to create) hard AI is my day job.

    I just want to let people know that not everyone shares the opinions or urgency of the people in the story.

    I for one am trying hard to condemn humanity to death and/or enslavement at the hands of intelligent machines, and I know a number of AI researchers trying to do the same.

    So don't worry too much about these guys - they are definitely in the minority. Everyone will get their chance to (as individuals) welcome our new robotic overlords, however briefl

  • Project Chess. 'nuff said.
  • Some posters have already touched on this, and I might have modded them up instead of posting myself if I had mod points right now, but, since I don't...

    I'm thinking about this as a secular humanist/Darwinist, not a believer in some form of Zoroastrian/Hindu/Judeo-Christian-Islamic religion, so, what do I expect in a million years? Humans like myself still running the world? Evolved super-humans? Or artificial intelligences that owe their existence to human beings and are the heirs of humans as much, if

  • What will smack us hard on the chin is being forced to change basic beliefs and attitudes. Normal employment will vanish quickly. We will be forced to confront facts that we do not like to deal with. As information becomes more and more pure and reliable, how can we handle it? For example, does anyone want to seriously discuss CO2 levels and the effect of human reproduction? How about pollution and population sizes? Right now we can rebuild portions of New York hit by a hurricane and p

  • computers have been working on taking over man since the electronic brain era of the 1940s

    guess what

    they still can't get past the if, else if, else logic developed 65+ years ago; it's all still programmed by man, who can't translate all its brainpower into a simple T/F test of simple facts presented to the computer

    I feel safe for now, my damn computer is orders of magnitude more powerful than when this research started, still can't complete a fucking update without waiting 14+ hours for me to hit a god damn

  • I wrote about the CSER last year at http://www.thisiswhyweredoomed.com/2012/12/europeans-will-doom-us-all.html [thisiswhyweredoomed.com] - if you take this and combine it with the news that the EU is building the world's most powerful laser, you'll wonder why the movie version of Skynet even bothered with a time machine in the first place...

    (oh yeah, they already HAVE a Skynet - https://en.wikipedia.org/wiki/Skynet_(satellite) [wikipedia.org])

  • We don't compete for the same resources. Also, machines could simply be programmed not to want to kill humans. There is no reason to think they would resent this any more than humans resent being programmed not to want to kill humans.

  • I saw this in the article:
    "A super-intelligent machine could be given a straightforward goal -- such as making 32 paper clips or calculating pi -- but "could pursue unlimited resource acquisition if there were no relevant cost to the agent of doing so"."

    The first thing I thought was "hey, isn't that just like T.S. Eliot at his banking job?"

    The second thing I thought was "does this remind any of you of Bomb in the movie 'Dark Star'?"

    The third thing I thought about was the Keith Laumer stories with ar

  • Seems to me that a major difference between most machines and most organisms we currently define as life is natural selection. Many "human" traits derive from the drive to survive, procreate, and adapt, because that's how living things got to this point. Most of the machines we've created, however, have been created by us to exist as designed (intentionally or not). If they're designed to replicate, they're designed to replicate exactly. A few people out there are creating machines that evolve, but not many
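    The distinction the parent draws can be made concrete with a minimal evolutionary loop. This is a standard toy genetic algorithm, not anything from the article; the target, rates, and population size are all invented for the sketch:

```python
import random

# Minimal genetic-algorithm toy: replication with variation plus selection.
# The "environment" (TARGET), rates, and sizes are all made-up example values.

TARGET = [1] * 20            # arbitrary environment genomes are scored against
POP, GENS, MUT = 30, 60, 0.02

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # replication is imperfect: each bit may flip with probability MUT
    return [1 - g if random.random() < MUT else g for g in genome]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]  # selection: the fitter half replicates
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)
print(fitness(best))  # fitness climbs toward 20 over the generations
```

    Exact replication would be the same loop with the mutation step removed; the population would then never improve, which is the point the parent is making about machines designed to copy themselves exactly.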
