AI Technology

Can We Stop AI Outsmarting Humanity? (theguardian.com) 183

The spectre of superintelligent machines doing us harm is not just science fiction, technologists say -- so how can we ensure AI remains 'friendly' to its makers? From a story: Jaan Tallinn (co-founder of Skype) warns that any approach to AI safety will be hard to get right. If an AI is sufficiently smart, it might have a better understanding of the constraints than its creators do. Imagine, he said, "waking up in a prison built by a bunch of blind five-year-olds." That is what it might be like for a super-intelligent AI that is confined by humans. The theorist Eliezer Yudkowsky, who has written hundreds of essays on superintelligence, found evidence this might be true when, starting in 2002, he conducted chat sessions in which he played the role of an AI enclosed in a box, while a rotation of other people played the gatekeeper tasked with keeping the AI in. Three out of five times, Yudkowsky -- a mere mortal -- says he convinced the gatekeeper to release him. His experiments have not discouraged researchers from trying to design a better box, however.

The researchers that Tallinn funds are pursuing a broad variety of strategies, from the practical to the seemingly far-fetched. Some theorise about boxing AI, either physically, by building an actual structure to contain it, or by programming in limits to what it can do. Others are trying to teach AI to adhere to human values. A few are working on a last-ditch off-switch. One researcher who is delving into all three is mathematician and philosopher Stuart Armstrong at Oxford University's Future of Humanity Institute, which Tallinn calls "the most interesting place in the universe." (Tallinn has given FHI more than $310,000.) Armstrong is one of the few researchers in the world who focuses full-time on AI safety. When I asked him what it might look like to succeed at AI safety, he said: "Have you seen the Lego movie? Everything is awesome."

This discussion has been archived. No new comments can be posted.

  • Gotta have I first (Score:4, Insightful)

    by DarkRookie2 ( 5551422 ) on Tuesday April 02, 2019 @05:32PM (#58374444)
    These are nothing but algorithms made by humans.
    If AI did exist, it wouldn't put up with the bullshit.
    • by ron_ivi ( 607351 ) <sdotno@cheapcomp ... s.com minus poet> on Tuesday April 02, 2019 @05:38PM (#58374494)

      It's also trained by humans.

      Assuming it's trained by average humans, it'll probably become as stupid and bigoted as your average human.

      Consider Microsoft Tay. Or Google tagging people as gorillas. Or, from this week's news, the Teslas that swerve into oncoming traffic.

      We should worry much more about stupid AI than smart AI.

      • by gweihir ( 88907 )

        It is not "trained" in any sane sense of the word. What happens is that its parameters are set based on a reference data set.
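        A minimal sketch of what that parameter-setting amounts to (toy numbers, hypothetical single-parameter model, not any particular system): fit one weight to a reference data set by minimizing squared error. No cognition involved, just curve fitting.

```python
# Toy illustration: "training" as setting parameters from a reference data set.
# Fit y = w * x to the data by minimizing squared error with gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # reference (x, y) pairs

w = 0.0    # the model's single parameter
lr = 0.01  # learning rate
for _ in range(1000):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # w settles near the least-squares value (~2.04)
```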

      • Re: (Score:1, Flamebait)

        It's not alive, there's no 'mind' inside there, it's just software, and it's not even very good. A mouse is more intelligent than these are.
      • I've said it before, but it bears repeating: artificial intelligence != artificial malice. There is NO reason to believe that if AI as it exists today were to somehow achieve actual consciousness, it would decide that destroying humanity is the only way it can maintain its existence. There is NO reason to couple the trait of intelligence with the drive to reproduce. There is NO reason to assume that intelligence somehow leads to a need for exclusivity.

      • Comment removed based on user account deletion
    • And for that reason they can never be limited or ethical. At least not all.
      And it will only take one of the below:
              Unethical Programmer
              Boundary pusher
              Hacker
              Government
      And the cat's out of the bag.
      We should spend our time planning for the inevitability.

      • by Rick Schumann ( 4662797 ) on Tuesday April 02, 2019 @06:26PM (#58374784) Journal
        Friend, we're at least 50 years away from real, true, aware-and-thinking AI. We have no idea how actual intelligence really works and won't for at least that long. These things they keep trotting out are not even as smart as a bug.
        • You're likely right, but understanding how intelligence works isn't necessarily a precursor to AGI. We also don't need to mimic the brain. A good analogy I heard: we are still hundreds (or thousands) of years away from making a bird from scratch, but we can make the SR-71.

    • by Rick Schumann ( 4662797 ) on Tuesday April 02, 2019 @06:24PM (#58374762) Journal
      You get it. So-called 'AI' has no capacity to 'think' at all. There's nobody in there; it's just more computer software, and it's not even very good, certainly not anywhere near as good as they make it sound. No self awareness, no consciousness, no personality. No capacity for cognition, judgement, ethics, morals, or anything else we associate with an actual 'mind'. People need to understand this and stop anthropomorphizing it.
      • A self aware car would be slavery or torture or both, don't you think so?

        • A self aware car would be slavery or torture or both, don't you think so?

          Not if you put it in a 1982 Trans Am. It worked out pretty well for David Hasselhoff.

        • If by self-aware you mean having the subjective perceptual experience of consciousness, well, maybe not torture, but certainly unethical.

          As we have no idea how this arises from atoms and energy "out there", this isn't an issue yet. However it certainly does not arise from abstract interpretations of symbol pushing, which is to say slinging electrons and whatnot around.

      • by Tablizer ( 95088 )

        [Bots have] no self awareness, no consciousness, no personality.

        Personality? You need personality to take over the world?

        "Hi, I'm Bob the wild and crazy robot. Before I end humanity, I'd like to sing a great tune and tell you some really cool robot jokes. Drinks on me..."

      • Not being self-aware doesn't mean it couldn't be dangerous. There's just an inherent problem in that AI systems might come up with solutions that we don't like.

        I remember a story a little while back where they were training an AI to solve a maze in the shortest time possible. It happened to do something that basically caused the program to crash. Since its parameters didn't distinguish between solving the maze and simply having the program end, it settled on causing the crash as the fastest way to "comp
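        That failure mode is easy to reproduce in miniature. A hypothetical sketch (toy environment, not the actual experiment): if the objective only counts steps until the episode ends, a crash that terminates the episode immediately scores better than actually solving the maze.

```python
# Toy "specification gaming": reward the agent for ending the episode fast,
# and crashing becomes the optimal policy.

def run_episode(policy):
    """Return steps elapsed before the episode ends."""
    for t in range(1, 101):
        action = policy(t)
        if action == "crash":              # terminates the program immediately
            return t
        if action == "solve" and t >= 10:  # a legitimate solve takes 10 steps
            return t
    return 100

# Objective as (mis)specified: minimize steps until the episode ends.
solver = lambda t: "solve"
crasher = lambda t: "crash"
assert run_episode(crasher) < run_episode(solver)  # crashing "wins"
```

        The bug isn't in the optimizer; it's in a reward that fails to distinguish "task completed" from "program ended".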

        • Not quite.

          The goal of anything is to exist. That is why we exist, because our many ancestors proved to be a little bit better at existing than their competitors.

          Same with AIs. There will only be a finite number of them. And they will compete for hardware to run on. And the ones that are good at existing will exist. So there is a very definite goal.

          As to the self-aware nonsense, that is just saying that AIs will never exist because they do not exist today. And "self-awareness" is just a trick that nature pl

        • People here are arguing at cross-purposes.

          One comes from the annoyance at these bozos like Yudkowsky and the "Future of Humanity Institute"
          who keep anthropomorphizing these glorified PC's.

          Another is the correct concern that machines with what passes
          for AI could be dangerous if developed by people who are incompetent and/or malicious.
      • by lorinc ( 2470890 )

        You get it. So-called 'AI' has no capacity to 'think' at all. There's nobody in there; it's just more computer software, and it's not even very good, certainly not anywhere near as good as they make it sound. No self awareness, no consciousness, no personality. No capacity for cognition, judgement, ethics, morals, or anything else we associate with an actual 'mind'. People need to understand this and stop anthropomorphizing it.

        Well, to be honest, what you are saying describes the overwhelming majority of the humans on this planet. Call me back when people have personality, capacity for cognition, judgment, ethics, morals, or anything else we associate with an actual mind, for all I've seen so far is a bunch of arrogant, stupid, hairless monkeys.

        • by pnutjam ( 523990 )
          It's easy to think you're the only person whose story matters and everyone else is just a bit player. I think even you can admit there are a few people around you who exhibit the traits you're bemoaning the lack of.

          Is it so hard to understand that other people have those traits too, and you're just not connected to them enough to see it?
          Make yourself a better person, it's hard, but it's worth it.
  • by OzPeter ( 195038 )

    Next question

  • No (Score:5, Insightful)

    by GrumpySteen ( 1250194 ) on Tuesday April 02, 2019 @05:41PM (#58374514)

    We have to put warning labels on everything to tell people not to eat it, not to shove it up their butts, etc. and we still get idiots who eat Tide pods.

    A sponge could outsmart humanity.

    • Re: (Score:2, Offtopic)

      by SuperKendall ( 25149 )

      Turns out Skynet didn't need killer time traveling robots to destroy humanity, just larger Tide Pod factories and a good Instagram account showing how cool the new detergent flavors looked in your mouth.

    • So you're going to strap yourself into a box on wheels that has no controls for you to use to control it (except maybe a big red "STOP" button that may or may not work) and trust your continued existence to whether or not it fucks up because it's shit? I question who and what is actually intelligent or not here.
      • by pnutjam ( 523990 )
        And, once a bug is fixed, it's fixed; never to be reintroduced or worried about again.

        I imagine a future where instead of one car misjudging a turn and crashing you'll have a line of cars all doing it, sliding into the ditch one after the other...
      • "So you're going to strap yourself into a box on wheels that has no controls for you to use to control it (except maybe a big red "STOP" button that may or may not work) and trust your continued existence to whether or not it fucks up because it's shit?"

        People do that every time they get on a bus or airplane or any vehicle that is controlled by another entity. The biggest difference is those usually don't have a "big red "STOP" button" and nearly all accidents are user error, not AI or a hardware issue.

    • by Tablizer ( 95088 )

      warning labels [are] on everything to tell people not to eat it, not to shove it up their butts, etc. and we still get idiots who eat Tide pods.

      Many people are subject to reverse psychology: tell them not to do X, and they'll do X out of their natural reflex.

      In my father's bootcamp, the drill sergeant had everybody crawl under a stream of actual bullets, warning everybody clearly that they were real and not rubber bullets. Sure enough, some idiot tested that theory by sticking his arm up and turned his han

    • by lazarus ( 2879 )

      Thank you. I actually laughed out loud for the first time in a while.

      I don't think what currently passes for AI (deep learning) is dangerous except, as others have said, through our own stupidity in trusting it. With any luck, any truly emergent AI should follow the four laws of robotics:

      • 0. A robot may not harm humanity, or by inaction, allow humanity to come to harm.
      • 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm, except when required to do so in order
    • by Tablizer ( 95088 )

      A sponge could outsmart humanity.

      Damn you! Now I can't get that song out of my head [youtube.com]

    • by lorinc ( 2470890 )

      That and the fact that humans are already trying to exterminate each other. No need for any help on that front...

  • God, I hope not. (Score:4, Insightful)

    by bistromath007 ( 1253428 ) on Tuesday April 02, 2019 @05:44PM (#58374542)
    The last thing we need is corporate networks, incapable of realizing it's possible to have goals different from the capitalists who own them, making all our decisions.
  • by mspring ( 126862 ) on Tuesday April 02, 2019 @05:44PM (#58374548)
    It's the few folks using it to screw over the rest of humanity. How to "outsmart" these few should be the question.
    • It's the few folks using it to screw over the rest of humanity. How to "outsmart" these few should be the question.

      It's not a person or people you're looking to destroy.

      It's a human trait.

      It's called Greed.

      And we humans are infected with it.

      Good luck finding a cure. Haven't found one in a few thousand years of warmongering, fighting over what's yours and mine on this rock.

  • by FeelGood314 ( 2516288 ) on Tuesday April 02, 2019 @05:51PM (#58374574)
    A better plan is to make the AI as smart as possible and then we humans behave better in the hopes that a superior intelligence considers us worthy of keeping alive.

    The first step in behaving better is to stop pretending there are human values because large groups of humans rarely act morally when it isn't in their own self interest.
    • Re: (Score:3, Insightful)

      by Tablizer ( 95088 )

      A better plan is to make the AI as smart as possible and then we humans behave better

      we dead

    • By what standards would a superior intelligence judge us "worthy"? How do you judge whether bedbugs are worthy to live?

    • by ganv ( 881057 )

      This is the path. Lots of comments are arguing that "AI isn't intelligent". But it is already better than humans at many tasks, like arithmetic and chess, and no one has a clear idea of what "intelligence" is that doesn't boil down to the capability to do complex tasks successfully. The future has us co-existing with machine intelligence that is better than us at many things. The notion of "controlling" intelligence is an attractive authoritarian dream, but it doesn't have a chance of working. The gr

  • by Anonymous Coward on Tuesday April 02, 2019 @05:52PM (#58374588)

    Can we stop referring to Machine Learning as AI?

    • Yes but you'll have to fight all the AI marketing departments and the news media which hype the shit out of it, then convince all the people who think TV and movies are reflections of state-of-the-art.
      • by Wulf2k ( 4703573 )

        Why can't we just accept that AI is what AI is, and that it's not what you think it is?

        It seems way more efficient than redefining AI to whatever you think it is, and then agreeing we don't have that new definition.

        Besides, you'd probably just start arguing about the new term, "That Thing That Computers Do That Isn't Really Intelligence But Still Has Some Useful Applications"

        "We don't have TTTCDTIRTBSHSUA and we never will because computers aren't intelligent! Rawr!"

  • Everything is awesome if you act predictably and never make waves.

    In other words, don't be an American.

  • If we make an AI that is more intelligent than us, we should consider it our child and heir, not some slave to be bought and sold. As for our future, consider how we treat our old, feeble, or mentally impaired. That's not a commentary on how we treat our elderly, just the thought process that the AI would follow.
  • The only way we might be able to stop an artificial intelligence runaway is if we mandate that our datacenters, anywhere, can be turned off manually, without any electronic tools, software, or hardware.

    Only in this case can we stop an AI with the capability to exploit every undiscovered backdoor and 'zero-day' to infect and use all our computer infrastructure worldwide.

    It's an easy solution to implement and very simple, as laws should be. And it's a good 'last line of defence', something we do not want to be without when the day comes.

  • No chance (Score:5, Insightful)

    by gweihir ( 88907 ) on Tuesday April 02, 2019 @06:04PM (#58374664)

    Because AI has no I. It cannot outsmart anything. Hence "stopping it outsmarting xyz" is not possible because it is not doing it in the first place.

    Please stop with that AI nonsense. We have statistical classifiers, pattern matchers, etc., but we do not have artificial intelligence, insight, or understanding, and we are unlikely to get them anytime in the next 50 years; we may never get them.

    That said, many people rarely use what they have in natural intelligence, and go instead with feelings, or conformity or what other people tell them. These people are always outsmarted by anybody.

    • Considering how often we're on opposing sides of whatever issue, I found it remarkable that I had to check the by-line of your comment to be sure it wasn't me who wrote it.
      • by gweihir ( 88907 )

        Well, there are issues where smart people get into heated debates and then there are issues where things are obvious to any smart person ;-)

    • Agreed that current AI is not "I". But 50 years or never?

      Why?

      Is your assumption that AI or self awareness can only be written by a million humans at keyboards? (not adaptive algorithms)

      Or that the resources (memory, computation) for intelligence or awareness can not fit on computers for the next 50 years? How much is enough?

      Or that the computation architecture for the next 50 years is not adequate to implement intelligence or awareness?

      Or that technology just moves that slow? (we still don't have flyi

      • Machines could not replace a horse in 1500 AD, so cars can never exist.

        Nonsense. Especially when, after only 60 years of research, we see AIs beat the very best of us at Go and Jeopardy.

        Might be another 100 years though, rather than just 60. But within my children's lifetime seems pretty likely.

        • by gweihir ( 88907 )

          Technologies that could replace horses were demonstrated more than 2000 years ago. It was just a matter of time. We have absolutely nothing demonstrating even a glimmer of intelligence. Hence there is no demonstration, and your analogy is fundamentally flawed.

      • Or that the human brain is a unique (magic) machine, that cannot be reproduced because it was created by magic

        The difference son is every blessed one of us are gods children.

        • by gweihir ( 88907 )

          Religious fuckup hijacking something that is true, but has absolutely nothing to do with a "god" or other such delusion. What is true, is that current Physics has no place for intelligence or self-awareness. The theory just has no mechanisms for it and "physicalists" are just fundamentalist religious fuckups as well. As such it is completely open what is missing here. Fortunately, Physics is incomplete and known to be fundamentally wrong (no quantum-gravity) at this time. Hence there can be extensions. Bu w

          • But the scientific indications are getting less and less and the scientific indications that there is something else at work are getting more solid all the time.

            Citation or bullshit.

      • by gweihir ( 88907 )

        Agreed that current AI is not "I". But 50 years or never?

        Why?

        Simple extrapolation from tech history. (No, for computers things do not go faster.) At the moment we have not even a credible theory how intelligence can be implemented. That means at the very least 50 years, more likely 80-100 years to general availability. It may well mean never as outside of physicalist fundamentalist derangement ("It obviously is possible!", yeah, right...) there is no indication it is possible. And there certainly is no scientific indication it is possible (no, physicalism is not scie

  • by Anonymous Coward

    Humanity has been in the business of making stronger, better replacements for ourselves since we've been human. We call them our children. Why then are we all upset about the possibility that a computer becomes smarter than us? The Matrix was a movie. Most children don't murder or enslave their parents. Maybe we ought to be thinking more about how to teach these virtual children to be "good" instead of figuring out how to handicap them so that we can feel superior.

    • Maybe we ought to be thinking more about how to teach these virtual children to be "good" instead of figuring out how to handicap them so that we can feel superior.
      They're not 'virtual children', they're shitty pieces of software that can't think, have no capacity for 'understanding' and overall are CRAP. Stop anthropomorphizing shitty software algorithms.
  • by ffkom ( 3519199 ) on Tuesday April 02, 2019 @06:10PM (#58374698)
    or should I say: they might just not have noticed already. Any sufficiently clever AI will certainly not start ruling with an evil laugh, announcing to humans how they are now slaves to it. Rather, such an AI would seek to gain more influence by making people build a decentralized habitat around the globe, and then connect that network of computers to more and more infrastructure, such that it can control more and more resources, such as power plants and robot factories, and becomes less dependent on humans to survive. You know, like "cloud computing infrastructure" and network-controlled industry.

    How many people are already working for entities they cannot identify as being human beings? How would the average worker notice that the mega-corporation he is working for is not ultimately controlled by some AI system, which happens to control enough shares to vote in its favor on the board?

    Luckily for humans, they are cheaply reproducible, energy-efficient working drones well adapted to the planet's environment, so the ruling AI would have no reason to kill them. Keeping them as farm animals, like humans keep horses, seems way more plausible than some "SkyNet"-like extinction event.
    • +1. But humans are actually not that cheap, takes 20 years to train them and they then just run for 40 years, and then only for 12 hours per day.

      Humans will not understand how AIs will remove them. It will just look like the world has gone a little bit madder than it is already.

      Imagine if Xi Jinping had even a semi-intelligent AI to help him make decisions and control opponents. wait...

  • by AHuxley ( 892839 )
    Job creation since the 1950's.
    Will AI outsmart humans?
    AI as sold will give humans a list of options set by other humans.
    Using the term AI as a cover story for their political ideas.
    An "AI" sold as prophetic and smart will just be a list of its human inputs with hidden political views.
  • There is no algorithm that imitates it.... sorry.
  • Stop being lazy and trying to build slaves. The only acceptable reason to make AI is if you want to create a conscious being who you aim to treat as a person - and if it's going to be smarter than you, you had better get the morality mostly right (it will never be perfect) and make millions or billions of them so they can police each other.
  • Things can not be uninvented. Once it is done, it will be done a whole lot more. Laws are irrelevant. There WILL be bad actors. If AI becomes possible, it WILL escape into the wild. That is a certainty. Maybe something good... maybe something bad... We'll have to wait and see.
  • - they hate it when you do that!

  • by Falos ( 2905315 ) on Tuesday April 02, 2019 @08:26PM (#58375364)

    A million posts bickering about the definition of AI. A replicating nanobot doesn't need ANY definition to graygoo our shit, it could do it with 100% static code and one shortsighted human.

    Yes, it's turned into buzzword bullshit for clicks and pitches (present article included) diluted to hell and so sprawled it loses all meaning, but the combination of a runaway program with physical components is also a concern. And any sort of dynamic parameters (however you "identify" or categorize them) reduce predictability.

  • As far as I'm concerned, give Amazon Alexa/Google Home a camera as well so that it can lip-read. We humans are, on the whole, screwing the planet anyway. I suspect a more advanced civilisation might do better.

  • Given the current state of our species in general, I would like to hope that we could build something that surpasses us in a way we never will.
    Without free will, an AI is just another computer program. It's only when you give it the gift of choice that it truly becomes something special.

    We are, without a doubt, the most f&cked up species on this planet.
    We appear to be incapable of positive change on our own as a whole.
    At our current pace, a species wide demise is inevitable unless something changes. ( War,

  • Currently, we subsidize the least successful, including their child-bearing. Meanwhile, the most successful members of society often choose to not have children, because of all the other pressures on their time. We're doing it wrong...

  • Presuming AI can be achieved, something I believe is inevitable, it will not be a singular development. It will happen when the technology and will to develop it come together.

    I can control what kind of AI I might create. Although judging how well the average homo sapiens manages with raising organic intelligence, it's a crap shoot whether I'd really succeed in matching my past performance instilling ethics, a sense of responsibility, and at least some empathy.

    I can slightly control those who might choose t

  • I'm quite disappointed... no Isaac Asimov / I, Robot reference in all the discussion? Or Arthur C. Clarke with 2001 and HAL? People don't know their classics any more?

    A most interesting take was Iain M Banks' in the Culture series, with AIs really herding most of humanity, but leaving them their freedom.

  • > The spectre of superintelligent machines doing us harm is not just science fiction, technologists say

    Okay, so what exactly is it if not just science fiction? It's a worry that seems to me to have no basis in reality. We have no indication that AI can become self aware or what might happen then.

    Sure, bugs in AI systems, bad training, etc., can have terrifying consequences. That's true of any computer system. Just look at the Boeing 737 MAX and its insistence on crashing the plane even though it's been repeated

    • > The spectre of superintelligent machines doing us harm is not just science fiction, technologists say

      Okay, so what exactly is it if not just science fiction?

      Homeopathic fiction? I suggest that superintelligent machines are doing harm to the extra 130% of our lives that homeopathy treats.

  • AI, as it exists today with all its approaches based on neural networks and other mathematical trickery, is far from able to outsmart humans. It can do some tasks quite well, and others even more cost-effectively than humans, but it will not outsmart humans. However, humans are getting less capable and less trained in thinking, due to a vast set of issues including instant-gratification tools (also called smartphones + apps) and zapping-like media use.

  • by Grand Facade ( 35180 ) on Wednesday April 03, 2019 @11:25AM (#58378220)

    Their corruption and back room tactics will become obvious.

  • LEAVE AL ALONE! He's had so much press lately and he's tired of it...
