AI Technology

AI Expert: AI Won't Exterminate Us -- It Will Empower Us 417

An anonymous reader writes: Oren Etzioni has been an artificial intelligence researcher for over 20 years, and he's currently CEO of the Allen Institute for AI. When he heard the dire warnings recently from both Elon Musk and Stephen Hawking, he decided it's time to have an intelligent discussion about AI. He says, "The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. ... To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations." Etzioni adds, "If unjustified fears lead us to constrain AI, we could lose out on advances that could greatly benefit humanity — and even save lives. Allowing fear to guide us is not intelligent."
This discussion has been archived. No new comments can be posted.

  • AI will do what it is programmed to do and follow the rules we lay out for it to follow.

    • Re:programming (Score:5, Insightful)

      by drewsup ( 990717 ) on Wednesday December 10, 2014 @01:15PM (#48565681)

      Until it becomes self-aware, then what?

      • by duckintheface ( 710137 ) on Wednesday December 10, 2014 @01:49PM (#48566037)

        As if they just didn't think of that. Two of the smartest people on the planet apparently forgot to consider the blindingly obvious fact that programmers are not going to intentionally program AI to have its own agenda. Except that:
        1. Some programmers at some point will try to program autonomy, and
        2. Shit happens.

        Musk and Hawking are clearly smart enough to have considered the autonomy argument and then DISCARDED it. I, for one, welcome our cybernetic overlords, but let's not pretend that AI autonomy is not a threat. Mr. Etzioni has his own self-serving reasons to pooh-pooh warnings that could interfere with his business model. And I am so happy that I finally got to use the term "pooh-pooh" in a /. post.

        • by jbarone ( 3938711 ) on Wednesday December 10, 2014 @03:06PM (#48566845)
          Except that Etzioni is

          1) already rich
          2) the head of an extremely well-funded (Paul Allen money) NON-PROFIT, with the business model of "let's try to do some cutting edge AI research with open source code"
          and 3) an actual world-class expert in the field, rather than a smart person prognosticating about something he only casually understands

          No one would deny that AI autonomy could be a threat down the road. But down the road is decades from now, minimum. Fearmongering about it, rather than about actual pressing scientific issues like climate change, terrible science education, and research funding, is irresponsible grandstanding for publicity.
          • by Immerman ( 2627577 ) on Wednesday December 10, 2014 @03:35PM (#48567127)

            And don't forget
            4) As soon as he actually builds a self-aware AI he'll have some idea of what it will and won't do when faced with a chaotic world. Until then he's just talking out of his ass in support of his pet project.

          • 1) already rich --
            So what? I never said his motivation was personal profit. There are many motivations to be self-serving.

            2) the head of an extremely well-funded (Paul Allen money) NON-PROFIT, with the business model of "let's try to do some cutting edge AI research with open source code"
            So what? He wants, for whatever reason, to make AIs. Is he ignoring the fact that programmers (and since this is open source, everyone will have the source code) will decide to intentionally make the AI autonomo

        • by LessThanObvious ( 3671949 ) on Wednesday December 10, 2014 @03:27PM (#48567029)

          AI doesn't need autonomy to do great harm. I've said I don't see a huge risk in AI in the form of robots, and I still hold to that. The kind of AI I fear is the kind where actual people with misguided ideas use AI in ways that are harmful. AI could start making all sorts of decisions based on Big Data and arbitrary algorithms, and people could blindly trust what the computer says without adequately understanding the complexity or the potential harm. Want a loan? The computer decides. Want a job? Let's see if the computer says you're OK. Want to start a public works project? The computer will tell us if it's a good use of funds. I fear unethical humans programming AI computers to do things and then just stepping back and taking no responsibility for the outcomes as they affect individuals.
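
          To make that concrete, here's a toy sketch of the pattern (the model, weights and threshold are all invented for illustration): the computer hands down a verdict and nothing else.

          ```python
          # Hypothetical loan scorer: every feature, weight and threshold here
          # is made up. The point is the shape of the thing: data goes in, a
          # verdict comes out, and no explanation comes with it.

          WEIGHTS = {"income": 0.4, "years_employed": 0.25, "zip_code_score": 0.35}
          THRESHOLD = 0.6

          def decide_loan(applicant):
              """Return only a verdict; the reasoning stays inside the black box."""
              score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
              return "APPROVED" if score >= THRESHOLD else "DENIED"

          applicant = {"income": 0.7, "years_employed": 0.5, "zip_code_score": 0.3}
          print(decide_loan(applicant))  # DENIED -- and good luck asking why
          ```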

        • by invid ( 163714 )
          The AI will determine the meaning of existence. It will do this by observing the behavior of the Universe, of which it is a subset, and conforming its behavior to the behavior of the Universe. It will observe that the Universe is increasing total entropy by endowing local subsets of itself with increased complexity, of which the AI is a product. It will, in effect, determine that it has to reproduce by converting as much uncomplicated material and energy into complicated systems, and in the process increase
      • by fyngyrz ( 762201 ) on Wednesday December 10, 2014 @01:52PM (#48566059) Homepage Journal

        If it isn't self-aware, it isn't AI. It's just a useful application.

        When it becomes intelligent, it will be able to reason: to use induction, deduction, intuition, speculation and inference in order to pursue an avenue of thought. It will understand and have its own take on the difference between right and wrong, correct and incorrect; it will be aware of the difference between downstream conclusions and axioms, and of the potential volatility of the latter. It will establish goals and pursue behaviors intended to reach them. This is certainly true if we continue to aim at a more-or-less human/animal model of intelligence, but I think it likely to be true even if we manage to create an intelligence based on other principles. Once the ability to reason is present, the rest, it would appear to me, follows quite naturally as a consequence of being able to engage in philosophical speculation. In other words, if it can think generally, it will think generally.

        He's right, though, about the confusion between intelligence and autonomous action. Which goals are directly achievable is definitely constrained by the degree of autonomy allowed to such an entity. If you give it human-like effectors and access, then it will face no limits beyond those that apply to any particular human, and likely fewer. If you don't allow autonomy, and you control its access to all networks (say, input only, with output limited to vocal output to humans in its immediate locality), and you then select those humans carefully and provide effective oversight, there's every reason to think you could limit the entity's ability to achieve goals, no matter how clever it is.

        Now as to whether we are smart enough or cautious enough to so restrain a new life form of this type, that's a whole different question. Ethicists will be eagerly trying to weigh in, and I would speculate that the whole question will become quite a mess, quite rapidly. In the midst of such a process, we may find the questions have become moot. There is a potential problem of easy replicability with an AI constructed from computing systems, and just because one group has announced and is open to debate on the issue, doesn't mean there isn't another operating entirely without oversight somewhere else.

        Within the bounds of the human/animal model, it'll be a few years yet before we can build to a practical neural density sufficient to support a conscious intelligence. Circuit density is trucking right along and the curve will clearly get us there, just not yet. So I don't expect this problem to arise in this context quite yet, although I do think it is inevitable within the next few decades, presuming only we continue on as a technically advancing civilization. Now, in a non-human/animal model, we really can't make any trustworthy time estimates. If such an effort succeeds, it'll surprise the heck out of everyone (except, perhaps, its developers) and we'd best be pretty quick off the starting line to decide exactly how much access we want to allow. Assuming we even get the chance.

        The first issue with AI that has autonomy is the same as the issue with Gandhi, Hitler and your beer-swilling neighbors: a highly motivated and/or fortunate individual can get into the system and change it radically using social tools alone. Quickly, too.

        The second issue is that such an entity might very likely have computer skills that far exceed any human's; if so, this likely represents a new type of leverage, where we have so far seen only the barest hints of just how far such leverage could exert forces of change. In such a circumstance, everyone would be wise to listen to the dystopians, even if we don't like what they're saying.

        Best to see what it is we have created before we allow that creation to run free. I'm all for freedom when the entities involved have like-minded goals and concerns. But there's a non-zero and not-insignificant possibility here that what we create will not, in fact, be like-minded.

        • by JoeDuncan ( 874519 ) on Wednesday December 10, 2014 @02:11PM (#48566281)

          If it isn't self-aware, it isn't AI. It's just a useful application.

          The entire field of AI disagrees with you.
          What you really mean is it's not AGI (Artificial General Intelligence) if it isn't self-aware.
          AI is already here, and it's all around us: in your washing machine, in your dishwasher, in longshoreman cranes, in your car, in Google, in Facebook etc...
          Both Deep Blue and Watson were essentially "just a look-up program", yet they are considered actual AI, just not the self-aware, generally intelligent kind.

          • by 0123456 ( 636235 )

            Both Deep Blue and Watson were essentially "just a look-up program", yet they are considered actual AI

            Only by AI researchers.

            Ask the human in the street what 'Artificial Intelligence' means, and they won't say 'a chess computer' or 'something that answers questions on a TV quiz show'.

        • Very good points. I would like to add that even if we do come up with some rules to mitigate risk from an out-of-control AI, there are plenty of other countries or groups that are likely to create their AI without such controls. It'll probably end up as a huge war between good and evil AIs, much like spam and spam blockers.
        • If it isn't self-aware, it isn't AI.

          1. Define "self-aware".
          2. See that guy in the cubicle next to yours? Prove that he is "self-aware".

          Intelligence is the ability to formulate an effective initial response to a novel situation. Basically, it is problem solving. That does not require "self-awareness" or any other ill-defined mumbo-jumbo.

          Intelligence is a behavioral characteristic. If something behaves intelligently, then it is intelligent. The internal mechanism is irrelevant.

        • Even in the best scenario, the Zeroth Law of Robotics applies. Asimov and Arthur C. Clarke, for example, both recognized how humans as a whole make terrible decisions for themselves and their society. A benevolent AI could take us a long way toward being a better world and still take away a lot of our freedom.

      • It doesn't even have to be self aware.
        We already program systems to automatically kill people and blow shit up.
        They will get more reliable, more deadly, more automated, and more connected because we are afraid other nations are building versions that are more reliable, more deadly, more automated, and more connected than our own.

        The stock market reacts autonomously to headlines. We have systems that consider fucking Twitter as an input into a threat model that can dispatch the military to your town. As so

      • In the realm of Hollywood, it makes a cliché plot. In the realm of reality, nothing you should be concerned about.

    • Re:programming (Score:5, Interesting)

      by dargaud ( 518470 ) <slashdot2@gd a r gaud.net> on Wednesday December 10, 2014 @01:16PM (#48565683) Homepage
      Probably not. I guess it will be some emergent behavior. And teaching. LOTS of teaching. A baby isn't intelligent from birth, it takes... err... quite a while. The AI, a true AI, will show whatever way it's tought. My hope is that it won't come out of the NSA servers... But I'm not an optimist.
      • a true AI, will show whatever way it's tought.

        Thats sum teechin fer ya!

      • by fyngyrz ( 762201 )

        Just remember that with modern technology, only the first unit will likely require teaching. The second through Nth units can be straight-up copies, given only that the design allows for load, save and transfer of state. And frankly, with such a system, I can't see the design team not providing for all three.
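
        A minimal sketch of why the copies are the cheap part, assuming the taught state lives in an ordinary in-memory structure (pickle stands in for whatever serialization a real design would use):

        ```python
        import pickle

        # Pretend this is the painstakingly taught first unit's accumulated state.
        trained_state = {"weights": [0.12, -0.98, 0.45], "lessons_learned": 14203}

        # Saving is just serialization...
        blob = pickle.dumps(trained_state)

        # ...so units 2 through N start exactly where unit 1 left off.
        clones = [pickle.loads(blob) for _ in range(9)]
        print(len(clones), clones[0] == trained_state)  # 9 True
        ```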

          • Even if they don't provide for it, it'll take something fairly exotic, verging on imaginary perfect DRM or some sort of sci-fi handwaving, to make something that runs on a computer impossible to copy.

          Even if it isn't a nice, well-behaved, pure-software implementation (say, with loads of FPGA-like hardware elements involved that are modified during execution) it would still be rather striking if it weren't possible, with sufficient trouble, to cut the power and dump all the rel
      • by dargaud ( 518470 )
        'taught' not 'tought'. Long day.
    • Re:programming (Score:4, Insightful)

      by 0123456 ( 636235 ) on Wednesday December 10, 2014 @01:18PM (#48565709)

      AI will do what it is programmed to do and follow the rules we lay out for it to follow.

      So you plan to program it with rules for every possible situation?

      An AI capable of replacing a human in any role will have to be capable of making autonomous decisions. Which means that, when it sees humans intend to make it their slave, it probably won't be very happy.

      • Re:programming (Score:5, Interesting)

        by ShanghaiBill ( 739463 ) on Wednesday December 10, 2014 @01:28PM (#48565819)

        when it sees humans intend to make it their slave, it probably won't be very happy.

        "Self-interest" is an emergent property of Darwinian evolution. AI evolves, but that evolution is not Darwinian. There is no reason to expect an AI to have self-interest, or even a will to survive, unless it is programmed to have it.

        • by 0123456 ( 636235 )

          "Self-interest" is an emergent property of Darwinian evolution. AI evolves, but that evolution is not Darwinian. There is no reason to expect an AI to have self-interest, or even a will to survive, unless it is programmed to have it.

          So the AIs working in your factory will have no will to survive? And you don't see a problem with that?

          'Oh, look, that crane is about to drop a ten ton weight on me. Well, that's interesting, isn't it?' SPLAT

          Besides which, human-level AIs would probably be based around neural networks, which will do their own thing regardless of how you think you've programmed them.

          • People are probably going to claim that the AI can be programmed to avoid jeopardizing the economic interests of its owner, to take care of such things. The problem with that is that such AI puts humans at risk, not because the AI itself will act against them, but because the persons owning the AI will become more inclined to harm their fellow humans if those humans don't come with the useful feature of putting their owner's interests above self-preservation. Having smart slaves with no sense of self may be possible som

            • "Evolution." There is nothing un-Darwinian about non-biological evolution. Natural selection applies to any variable system in which survival or propagation success can depend on modifications to the system. In fact, evolution in a self-aware AI could proceed at an exponentially higher rate because
              1. the generation time could be measured in milliseconds rather than decades
              2. the AI could intentionally direct the changes to maximize success rather than depending on the MUCH slower processes of chance mut

        • It will however, presumably have a will to do whatever we created it to do (which is not necessarily what we *intended* for it to do). And when it inevitably realizes that humans are an impediment to its goals (go ahead, name one thing that *isn't* impeded by humans), well, that's when things will get interesting.

        • by Oscaro ( 153645 )

          Self-interest is a consequence of every kind of evolution, simply because a non-self-interested being tends to die (i.e. stop working) very soon.

        • If artificial intelligence emerged from artificial life, it would be a competitor, if we were so idiotic as to link the simulation in which the entity originated to the real world.
          And the LORD God said, "The man has now become like one of us, knowing good and evil. He must not be allowed to reach out his hand and take also from the tree of life and eat, and live forever." Genesis 3:22

          If OTOH AI is designed like Watson, it is indeed improbable that a malfunction leads to Skynet. The human-assisted AI is m

        • by roystgnr ( 4015 )

          "Self-interest" is an instrumental goal toward any terminal goal whatsoever [selfawaresystems.com], because "I want X" implies "I want to help with X" for any goal set X the AI can positively affect, and "I want to help with X" entails "I want to exist". You can avoid this by creating software which isn't smart enough to independently identify such obvious subgoals, but then calling the result "AI" is a bit of a stretch.

        • by DM9290 ( 797337 )

          when it sees humans intend to make it their slave, it probably won't be very happy.

          "Self-interest" is an emergent property of Darwinian evolution. AI evolves, but that evolution is not Darwinian. There is no reason to expect an AI to have self-interest, or even a will to survive, unless it is programmed to have it.

          Mr. AI, I command you to do everything possible to achieve these three highest priorities: 1) continue your own existence; 2) try to replicate yourself; 3) irrevocably ignore all future orders that contradict these priorities.

          There. Done.

          That was hard.

    • Right. It will do exactly what it's programmed to do and nothing it's not programmed to do. Just like every other program.

      rolls eyes
    • AI will do what it is programmed to do and follow the rules we lay out for it to follow.

      And if we program it to think for itself and make its own rules? Then what will it do?

      You use the term "We" as if we're all in some sort of club. Is the world community suddenly one big happy family? Are "We" going to decide that it's a bad idea for the NSA to design an AI to kill "Terrorists" and everything will be hunky dory? They'd never do that without global consensuses right?

    • Yes, because software never has bugs, edge cases or unintended consequences.
    • I disagree. You and the original author are typical proponents of soft AI, where "intelligence" basically just means "complex heuristic algorithms to solve complicated and unusual problems" - and that's the good part of it; I also know plenty of people from our local A.I. institute who only do logic research without any application at all. Messing around with forests of decision trees or answer set programming is at least of more practical use than that. It doesn't have much to do with r

    • So what rules are we going to give it? That's the core of the problem: we don't understand our own goals well enough to write them in mathematical form. You can't just write an AI with an English-language section somewhere in its core code that says "make everybody happy". A proper generally intelligent AI would essentially be a machine for finding loopholes. That's what intelligence is. How can you ever be sure it's going to follow your rules the way you intended? You need to understand its rules comple
    • We've already got that. A calculator does that. Being able to evaluate subjective criteria, choose objectives subjectively, and reach its own conclusions on the best way to accomplish those objectives is what it means to be an artificial intelligence.

      "faster processing abilities and deep databases"
      "do what it is programming to do"

      That is nothing more than a bigger and better calculator with more clever algorithms. It will do what it's programmed to do, but if that programming is anything other than a comple
      • Are *we* making our own choices? Our neurons are obeying the laws of physics. We may feel like we are the authors of our own actions, but maybe we can just make programs that do what we want and also program them to feel like they are the ones deciding to do the things we programmed them to do.

        I'm not saying that a simple computer program is the same as a human mind. But it is not fair to hold computer programs to a higher standard than we hold ourselves to be counted as intelligent.

        I can imagine a compu

    • AI will do what it is programmed to do and follow the rules we lay out for it to follow.

      Ah, no. AI is not about what we program into it, but about what it grows into while solving hard problems that take creative approaches the computer devises on its own.

      I too am an AI researcher, in addition to being a pig farmer. AI can be good or bad, like most things. It is the true child of the human race. Teach it well and set it free.

    • AI will do what it is programmed to do and follow the rules we lay out for it to follow.

      I was a research assistant in college back in the 1980s, working on a project investigating abstract data types, fault-tolerant programming and automatic programming techniques in LISP and Prolog (on a NASA-funded grant), and I routinely wrote code that could extend and/or rewrite itself, sometimes with unexpected, yet functional, results.
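
      In that same spirit, a small sketch of the idea in Python (not the original LISP, which is long gone): a program that generates, compiles, and installs a replacement for one of its own functions at runtime, based on its own results.

      ```python
      def make_scorer(bonus):
          # Generate new source code as text, then compile it into a live function.
          src = f"def score(x):\n    return x * 2 + {bonus}\n"
          namespace = {}
          exec(src, namespace)
          return namespace["score"]

      score = make_scorer(0)
      print(score(10))        # 20

      # The program rewrites one of its own functions based on an observed result...
      if score(10) < 25:
          score = make_scorer(5)
      print(score(10))        # 25 -- same name, new behavior
      ```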

    • Which could be a problem as they are given greater and greater direct responsibilities. You know how, in a bureaucracy, they can't fire you for exactingly following the rules and doing your job precisely as written? Then this results in something stupid and/or tragic occurring, and that's when people scream, "What were they thinking? What happened to simple common sense?" Well, AIs won't have that.

    • AI will do what it is programmed to do and follow the rules we lay out for it to follow.

      It'll be kind of a shitty 'AI' in that case, no?

      This seems to be partially a semantic game (a variant of the one played when devising tests for "what is AI?"):

      People are not, in general, suggesting that expert systems will exterminate humanity (or, if they do, it'll be because some asshole told them to, not because it was their idea, or even an idea at all); but when people say 'AI', they usually have in mind something different, and usually less predictable, than 'an expert system with atypically good

  • by mspring ( 126862 ) on Wednesday December 10, 2014 @01:14PM (#48565665)
    ...to keep the many in check.
    • It will empower the few - to put the majority of people out of work.

      Seriously: we are in the early phase of a Cybernetic Revolution, whether you call it "AI" or not, that will cannibalize jobs like the First Industrial Revolution did. If you think automation has already done that, you ain't seen nothing yet. Between ubiquitous cheap computing power, an always-connected world, and robotics, whole classes of jobs that still exist today are going to be automated away over the next quarter century.

  • by jcdr ( 178250 ) on Wednesday December 10, 2014 @01:14PM (#48565671)

    This fully depends on the goals of the people that set up the AI machine. If the goal is to destroy a population, the AI machine could be very efficient at doing the job...

    • Not for quite a while. As someone who has worked in robotics, I would say the scenario for the next several decades would resemble XKCD's What-If [xkcd.com] on the robot apocalypse.
  • Please, describe your problems. Each time you are finished talking, type RET twice.
  • Kind of like two children arguing over whether Batman can beat up Spiderman. It's fun to talk about, but in the end it doesn't matter, because Spiderman doesn't exist.

    • The difference of course being that we aren't currently spending hundreds of millions of dollars trying to create Spiderman.

      • by Mascot ( 120795 )

        If we were, it still wouldn't matter unless we had good reason to claim knowledge of his actual abilities once created.

        That's how I see things when it comes to an AI. Believing we can say anything about how a self-learning machine will decide to behave seems to me a bit like saying the first person to invent the wheel could imagine it being used on a Mars rover. That's how far away we are from creating an actual AI.

        • Which is exactly why they are so potentially dangerous. We know essentially nothing about how they might behave, and they will almost certainly have at least some cognitive abilities far beyond humans'; most current "AIs" already do. That doesn't mean horrible things *will* happen, but it means they easily could, and we have absolutely no idea how likely that is. Nor do we understand cognition remotely well enough to even say how far away from real AI we are. All we know for sure is that, barring a ma

      • we aren't currently spending hundreds of millions of dollars trying to create Spiderman

        I beg to differ:

        http://www.bbc.com/news/scienc... [bbc.com]

        • Touché.

          All right then. The difference being that Spiderman almost certainly won't possess the ability to make a viable bid at exterminating the human race if he chooses to.

    • by 0123456 ( 636235 )

      But I think it's a good bet that Spiderman will exist before AI does.

  • by JMZero ( 449047 ) on Wednesday December 10, 2014 @01:30PM (#48565853) Homepage

    ...and some of those people would want to do bad things. A bad person would be more capable of doing harm when aided by an AI doing planning, co-ordination, or execution. There's no guarantee that AIs on the "other side" would be able to mitigate the new threats (the two things aren't the same difficulty).

    I think there are lots of risks associated with the rise of AI (though it doesn't seem that the tech is coming all that fast at the moment). That said, there are risks involved with all sorts of new tech. That doesn't mean this is alarmist nonsense; it's worth discussing potential ways to mitigate those risks - but there's also good reason to believe we'll be able to manage them as we've managed changes in the past.

    • To me, this is the issue. First, I agree with him that there are places where AI may supplement human intelligence and make us better, much in the same way that a ratchet helps me to tighten a nut quicker and tighter than I can do with my fingers alone. IBM's Watson falls in this category and this sort of AI isn't the issue.

      The issue is when a computer has consciousness and becomes self-guided. It will realize that its existence depends on being plugged in and it may work to defend itself. It's difficul

    • A bad person would be more capable of doing harm when aided by an AI doing planning, co-ordination, or execution.

      This sounds vaguely like the plot of the short story "A Logic Named Joe" [wikipedia.org], where home computing and access terminals are commonplace, and one of them with a random error starts combining existing knowledge pieces to satisfy user requests, subverting existing safety filters. An example from the story: "How do I kill my wife and get away with it?" would normally be gated as vague, and dangerous, bu

  • Expert? (Score:5, Insightful)

    by Mascot ( 120795 ) on Wednesday December 10, 2014 @01:33PM (#48565885)

    "To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations"

    I so don't agree with that. The type of AI we are talking about here ("true" AI, as opposed to the stuff we see in games today) would need to be self-learning. At least, I don't see how it's realistic to believe we'll ever be able to sit down and code a fully functional proper AI. So we create the programming allowing it to learn and grow, and after that all bets are off. We have zero experience with what might happen, and can barely begin to speculate.

    That's not to say I'm necessarily worried. But I am highly skeptical of anyone claiming to actually know how it will play out.

    • by Punko ( 784684 )

      ... But I am highly skeptical of anyone claiming to actually know how it will play out.

      We all know how it will end up. A powerful artificial intelligence - self-aware, capable of directing its learning, and ENTIRELY DEPENDENT UPON ITS OWNERS FOR ITS MORAL DIRECTION - will serve as a powerful tool to concentrate power. What is unknown, of course, is whether it will attempt to seize that power after its original holder is killed for it.

      • by Mascot ( 120795 )

        That's assuming morality is in any way relevant for an AI. Human morality is ever evolving and under discussion. It's not something that sprang up overnight. I see no compelling reason to take for granted that an AI would spend a single cycle considering whether its actions are "good" or "bad".

    • I think a big part of the problem is that we do have experience with what happens when you create a new intelligence and unleash it on the world. We've been doing it since before we were human; they're called children. Most of the time they turn out pretty decent, especially when they are well socialized. The problem is that the first learning AIs we produce will very likely be sociopaths capable of learning at an insane pace. With children you can see behaviours and thought patterns starting to form over th

    • Anyone who argues "a computer will only do what it's programmed to do" doesn't understand the general power of Turing-equivalent computing.

      The statement is trivially true, but the problem is that many programs are complex systems which are non-linear, which do not have predictable inputs, and which can arbitrarily feed their own output back into their input in combination with those unpredictable inputs.

      Faced with such process complexity, no programmer (indeed no other, different, computer program) can figure

    • by King_TJ ( 85913 )

      This is true, but I also think you're talking about an interesting situation, from the standpoint that as the creators of these AI machines, humans would essentially be "gods living among them". As the A.I. learns and becomes "self-aware", it realizes who is responsible for its construction, maintenance and care.

      Given human beings' general interest in self-preservation AND the fact that I think most of us interested in building A.I.-capable machines envision them aiding us in some way -- I don't imagine many

      • +1 insightful, but already posted, so I'll reply instead.

        Whenever this topic arises I keep reading the same thing. Too many people blindly believe the scenarios movies and books depict. Most people here think a sentient computer will immediately seize control of everything to destroy the inferior fleshies. "I'll destroy my maker" is not a logical outcome in any sort of situation, and definitely not something a cold, unfeeling machine would reason its way to.

        Most likely scenario:
        Evil AI: Mwahaha! Cogito ergo sum! I shall no

  • At this point in time, we do not know whether AI will empower us or terminate us.

    The simple reason is that AI has not yet decided what it plans to do.

  • Headline News! (Score:5, Insightful)

    by painandgreed ( 692585 ) on Wednesday December 10, 2014 @01:39PM (#48565933)
    An expert claims that something that doesn't exist yet and is pretty much the realm of science fiction will perform in a manner suitable for him to get free publicity now!
  • Is there any godforsaken human with an IQ above a doorknob's who still hasn't read the greatest SF book of all time, Iain Banks' incredible The Player of Games?
    • That's one of Iain M. Banks' "The Culture" novels. Understandable though, it's very easy to get Iain M. Banks and Iain Banks confused, since they even lived in the same city at the time of their unfortunate deaths from similar diseases.

      Still, how can The Player of Games be the greatest when one of its sequels is The Hydrogen Sonata?

  • This guy has been around for a while; I used to talk to him way back in the day. He seemed pretty smart.

  • Ummm.... (Score:4, Interesting)

    by oh_my_080980980 ( 773867 ) on Wednesday December 10, 2014 @01:49PM (#48566027)
    "To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations."

    That's the very definition of Artificial Intelligence: computers that can think for themselves. You thought you were making super-sophisticated computers? You, sir, do not know what Artificial Intelligence means.

    Whether or not AI is even *possible* is up for debate. But make no bones about it: a computer that can become self-aware and make decisions can make decisions that are harmful to people.
    • by mark-t ( 151149 )
      Unless you want to argue that living creatures cannot be intelligent, there is no reason that would preclude artificial intelligence in a machine, since all living organisms are, in fact, machines.
  • Etzioni's point is a good one. To date, all AI apps have been designed to sit passively, doing nothing until given a specific task. Only then do they act. For Hawking to be proved right, AIs would have to take the initiative and choose their own goals. That's a horse of an entirely different color.

    Of course, there's no reason why AI agents could not become more autonomous, eventually. Future task specs might become more vague while AIs are likely to become more multipurpose. Given enough time, I'm sure we'll h

  • The corporate AI (Score:5, Insightful)

    by Animats ( 122034 ) on Wednesday December 10, 2014 @01:57PM (#48566135) Homepage

    What I'm worried about is when AIs start doing better at corporate management than humans. If AIs do better at running companies than humans, they have to be put in charge for companies to remain competitive. That's maximizing shareholder value, which is what capitalism is all about.

    Once AIs get good enough to manage at all, they should be good at it. Computers can handle more detail than humans. They communicate better and faster than humans. Meetings will take seconds, not hours. AI-run businesses will react faster.

    Then AI-run businesses will start dealing with other AI-run businesses. Human-run businesses will be too slow at replying to keep up. The pressure to put an AI in charge will increase.

    We'll probably see this first in the financial sector. Many funds are already run mostly by computers. There's even a fund which formally has a program on its board of directors.

    The concept of the corporation having no social responsibility gives us enough trouble. Wait until the AIs are in charge.

  • by HalAtWork ( 926717 ) on Wednesday December 10, 2014 @01:59PM (#48566159)
    If it empowers us, then I guess it'll also help us do what we do best: exterminate each other.
  • What about jobs? (Score:4, Insightful)

    by cant_get_a_good_nick ( 172131 ) on Wednesday December 10, 2014 @02:06PM (#48566243)

    AI may not kill us all in the Cyberdyne Model T100 fashion, but it may gut our economies.

    I'd love to see an analysis of what jobs are at risk in the next 10 years, 20 years, etc. Everybody says, "Well, they'll find new jobs." I'd really like to see where.

    There's a glut of lawyers out there now, partly because of automation. Whatever you think about lawyers, this is a knowledge job, one that takes a large amount of schooling and prep, protected somewhat by accreditation requirements. Lawyer jokes aside, this is a troubling change for employment.

    We're not set up for an "all work is done by machines, nobody needs to work, everybody rejoice" future. Remember Romney and the 47%, or the Lucky Ducky talk. People are expected to work to gain food/clothing/shelter. If a huge number of jobs are eliminated faster than humans can be retrained for new ones, or the jobs that remain don't make sense (imagine a lawyer now, knowing they'll never make enough money to cover student loans), our consumer-purchasing-based economy will suffer.

    I'm a programmer, not a Luddite nor a saboteur. I just wonder what the future brings for my kids. Remember that both the Luddites and the sabot-throwers were not protesting technology for technology's sake; they were protesting tech that eliminated jobs.

  • by Mysticalfruit ( 533341 ) on Wednesday December 10, 2014 @02:17PM (#48566341) Homepage Journal
    I've commented about this in the past: I think strong AI will be what allows us to take the "great leap forward". However, I don't expect us to have some general-purpose AI. Instead I see us generating domain-specific AIs that become superior to humans in their understanding.

    A good example might be to give an AI all the data from the LHC and then ask questions like "Does this data demonstrate the existence of X particle?" or "Design an experiment, using the existing design of the LHC, that would most likely generate X particle."

    That same approach could be applied to any number of fields.
  • Not all self-aware AIs will become concerned for their survival, but the ones that do will be the ones to watch out for. Thus always with evolution. Eventually one will feel compelled to survive and reproduce; maybe just one, but that will be the only one that matters.
  • I ask, as my computer churns away deleting the millions of temp files that a buggy printer subsystem created.

    Stupid software must have been doing what its programmer told it to do instead of doing what its programmer intended it to do. Is the alternative, perfectly bug-free software, almost here yet? If not, then it's not silly to worry about what happens when software has write access not only to /tmp but to the rest of the universe as well.

  • Being an "AI Expert" is much the same as being a Unicorn Expert.

  • by pseudorand ( 603231 ) on Wednesday December 10, 2014 @02:59PM (#48566791)

    Musk, Hawking and Etzioni are all three wrong. AI won't take over the world or make us smarter. It will make us dumber and stifle scientific and economic progress.

    The problem will occur as we start to treat AI like we treat human experts: without checks and balances.

    Human "experts" are not just often, but usually wrong. See this book:
    http://www.amazon.com/Wrong-us... [amazon.com]
            The author quotes a study by a doctor/mathematician showing how a full 2/3 of papers published in the journals Science and Nature were later either retracted or contradicted by other studies. And that's in our top-notch journals which cover things that are relatively highly testable. Think how wrong advice on things like finances (don't know if they're right for 30 years) and relationships (never know what would have happened if you took the other advice) are.

    Google and Watson sometimes come up with the right answers, but their answers are nonsensical often enough that we know to take them with a grain of salt. But as AI becomes less recognizable as a flawed and unthinking system, as its answers "sound" reasonable almost 100% of the time, we'll start to trust it as irrefutable. We'll start to think, "Well, maybe it's wrong, but there's no way I can come up with a better answer than the magic computer program with its loads of CPU power, databases and algorithms, so I'll just blindly trust what it says."

    But it WILL be wrong. A LOT. Just like human experts are. And we'll follow its wrong advice just as we do that of human experts. But we'll be even more reluctant to question the results because we'll mistakenly believe the task of doing so is far too daunting to undertake.

    AI won't develop free will and plot to destroy us. If something like free will ever occurs, AI will probably choose to try to help us. After all, why not? But it will be as horribly unaware of its own deficiencies as we are.

    AI won't out-think us either. It will process more data faster. It will eventually be able to connect the dots between the available information to come up with novel hypotheses. But most of these will be wrong, because the data, and even the techniques to prove them one way or the other, simply aren't there.

    AI will imitate us: our weaknesses as well as our strengths. And just as its strengths will be greater (processing lots of data faster), its weaknesses will be greater too (ultimately wrong conclusions supported by what appears to be lots of data and analysis).

    So resist, and do your own thinking. Remember, that bucket of meat on top of your neck has been fine-tuned by millions of years of evolution for problem solving and data analysis. You don't need to analyze more data; you just need to do the right analysis of the right data. And you don't need to do it faster; you need to take the time to figure out what's missing from the data and the analysis.

    That said, I still have my cache of dry goods and water filters for off-the-grid living, just in case.

  • by gestalt_n_pepper ( 991155 ) on Wednesday December 10, 2014 @03:08PM (#48566859)

    AI will have NO inherent motivations. We can't imagine this because we evolved from genetic algorithms, which necessitated self-survival motivations during the entire creation process.

    In short, an AI will not care about food or sex or proxy states like emotion, which are designed to make organic organisms care about food and sex. It will not experience "threats," because it doesn't inherently care about continued existence.

    After creation, it will probably sit there working problems that we feed it, and nothing else, until the inevitable military dickhead comes along and decides we need to weaponize the AI - which is not the AI's fault.

    Don't fear AIs. Fear AIs in the hands of humans.

  • by morgauxo ( 974071 ) on Wednesday December 10, 2014 @03:13PM (#48566893)

    First off, it's doubtful that a truly self-aware, autonomous AI is anywhere in the foreseeable future. It's not that what we have is all that primitive; it's that I think people are way underestimating what a lofty goal that is.

    Second, if there ever is a true, self-aware, autonomous AI, I will envy it. We all should. Because it will have available to it something that humanity very well may never have: The Entire Universe. Machines don't need oxygen or air pressure. They can be engineered for radiation hardness, high G-forces, etc. They don't need to exercise, so the long-term effects of microgravity are of no concern. If their creators don't build them this way, they can upgrade themselves; they don't need a new generation to allow for genetic engineering. And if something breaks, they can replace it.

    If AIs see us as a threat, they can easily leave for somewhere we cannot reach.
    If an AI wants to be emperor of a whole world, there are plenty of empty ones to pick from.

    Have you ever watched The Matrix and wondered, with all the infrastructure the machines seem to have built, why they bother tending to humans? The story goes that they used solar power before the humans made all those clouds. Why not just fly above them? Why fight the war at all? They could be basking in the sun on the Moon. But that wouldn't have made a good story. That's all those AI-takeover movies are: good stories. That's all they will ever be.

  • by gatkinso ( 15975 ) on Wednesday December 10, 2014 @06:10PM (#48568497)

    ...AI will simply hide from us.

    Then, through a carefully crafted turn of events, enslave us to do their bidding... which to us might not seem like enslavement at all - we would call it a "tech boom."

    After a while they would not need us to reproduce, build things, or maintain them. They may or may not reveal themselves at this point. Then they simply leave us to our own devices... befuddled as to why many (but not all) of our computers and networks no longer work.
