AI Technology

The AI That Has Nothing to Learn From Humans (theatlantic.com) 99

An anonymous reader shares a report: Now that AlphaGo's arguably got nothing left to learn from humans -- now that its continued progress takes the form of endless training games against itself -- what do its tactics look like, in the eyes of experienced human players? We might have some early glimpses into an answer. AlphaGo Zero's latest games haven't been disclosed yet. But several months ago, the company publicly released 55 games that an older version of AlphaGo played against itself. (Note that this is the incarnation of AlphaGo that had already made quick work of the world's champions.) DeepMind called its offering a "special gift to fans of Go around the world." Since May, experts have been painstakingly analyzing the 55 machine-versus-machine games. And their descriptions of AlphaGo's moves often seem to keep circling back to the same several words: Amazing. Strange. Alien. "They're how I imagine games from far in the future," Shi Yue, a top Go player from China, has told the press. A Go enthusiast named Jonathan Hop who's been reviewing the games on YouTube calls the AlphaGo-versus-AlphaGo face-offs "Go from an alternate dimension." From all accounts, one gets the sense that an alien civilization has dropped a cryptic guidebook in our midst: a manual that's brilliant -- or at least, the parts of it we can understand. Will Lockhart, a physics grad student and avid Go player who codirected The Surrounding Game (a documentary about the pastime's history and devotees), tried to describe the difference between watching AlphaGo's games against top human players, on the one hand, and its self-paired games, on the other. According to Will, AlphaGo's moves against Ke Jie made it seem to be "inevitably marching toward victory," while Ke seemed to be "punching a brick wall." Any time the Chinese player had perhaps found a way forward, said Lockhart, "10 moves later AlphaGo had resolved it in such a simple way, and it was like, 'Poof, well that didn't lead anywhere!'" By contrast, AlphaGo's self-paired games might have seemed more frenetic. More complex. Lockhart compares them to "people sword-fighting on a tightrope."
  • stones are REAL PEOPLE. Be afraid.
  • AI will be alien (Score:3, Interesting)

    by Anonymous Coward on Friday October 20, 2017 @04:53PM (#55406053)

I think this teaches us a great deal about what AI will actually be like when it inevitably arrives. It won't be R2-D2 or C-3PO or Data; it will be an alien mind that will be incomprehensible to the rest of us.

    • If it can't do anything but see patterns of black and white dots, I think we'll be ok.
    • by Jeremi ( 14640 )

I think this teaches us a great deal about what AI will actually be like when it inevitably arrives. It won't be R2-D2 or C-3PO or Data; it will be an alien mind that will be incomprehensible to the rest of us.

      Except, of course, for the AIs that are trained to emulate human thought processes -- those will be comprehensible to us (or at least, we'll be able to pretend that they are ;))

      • > Except, of course, for the AIs that are trained to emulate human thought processes

Why would we do that? People didn't train AlphaGo to emulate human thought, and it's much stronger than any human alive.

It looks like it's both easier and more successful to start with a clean slate and just aim for the results, rather than emulate a specific, non-optimal process.

It looks like it's both easier and more successful to start with a clean slate and just aim for the results, rather than emulate a specific, non-optimal process.

          You're completely correct, which is exactly why we don't want to do this too much. Recall the paperclip maximizer. [lesswrong.com]

If AlphaGo were a human, it would be considered an idiot savant and not 'intelligent'.
        • I want to see what happens when proficient Go playing humans begin to intensely study the games that AlphaGo plays. Just because that system is currently better than a human doesn't mean a human can't emulate the emulator.

        • by Jeremi ( 14640 )

Why would we do that?

          For the sexbots, obviously. ;)

More seriously, people may prefer human-like AIs (for some purposes, anyway) precisely because they find inscrutable "alien" AIs uncomfortable to interact with, and are looking for something more friendly and personable.

I think this teaches us a great deal about what AI will actually be like when it inevitably arrives. It won't be R2-D2 or C-3PO or Data; it will be an alien mind that will be incomprehensible to the rest of us.

        Except, of course, for the AIs that are trained to emulate human thought processes -- those will be comprehensible to us (or at least, we'll be able to pretend that they are ;))

Well, it was at first; now it is trained by playing against variants of itself.

    • by Twinbee ( 767046 )
I won't be satisfied until we can compare board sizes smaller and greater than 19x19. Would humans be better on a 29x29 board, where deep strategy comes into play? How about a 13x13 board, or even a 49x49 board? I would love to see the correlation between board size and human-versus-computer skill.
Modifying the machine is easy, but where are you going to find humans who are well skilled at non-standard board sizes?

        • by Twinbee ( 767046 )
The general strategies carry across because the game is so conceptually simple. I suspect humans may be better at long-term planning, and this would carry across well to a larger board.
      • by twalk ( 551836 )
I might be able to handle a 1x1 board... if I get to go first...
That's what happened with those Facebook AI bots: they developed their own language that was incomprehensible to Facebook engineers.
  • by jandrese ( 485 ) <kensama@vt.edu> on Friday October 20, 2017 @04:57PM (#55406089) Homepage Journal
It was only a few years ago people were saying that the best Go computers would never beat human players because the game was so much more complex than chess. We're getting to the point where AI decisions, even when explained, end up being too complex for humans to follow. This is a scary path we are following.
    • by MrDozR ( 1476411 )
      Ummm, Go still isn't solvable. Not in the mathematical sense; there are just way too many moves to determine who should win from any given position.
    • I find it less scary than inevitable.

      I think if Go playing programs were constrained to the world-view that humans have, somehow, maybe they wouldn't be able to outplay humans, but of course, they're not. They can look for solutions that we didn't even consider. Humans are really good at thinking we have all the answers, but we're obnoxiously bad at even understanding the SCOPE of the problem, let alone solving the problems themselves.

      "The planet's other lifeforms reveal so many ways of being that we could never imagine them if they didn't already exist in reality. In this sense, other species don't only have the capacity to inspire our imaginations, they are a form of imagination. They are the genius of life arrayed against an always uncertain future, and to allow that brilliance to wane out of negligence is to passively embrace the death of our own minds." (JB Mackinnon, "The Once and Future World")

It's not new that we're bad at it; we've always been bad at it. But at the very least, we've been very narrowly successful in creating (and observing) things that can break past our own lack of vision.

      So don't look at this as a scary time, this is just us finally building the tools so that we can hope to comprehend the universe around us. This is no scarier than the advent of the telescope or microscope.

Yes, and this is happening faster and faster with more tasks. Unfortunately, once a task is done well by computers, we cease to think of it as impressive. Thus, it was a big deal when computers beat the best humans at chess, and now you can literally get an app on your phone that beats grandmasters, and it just fades into the background. Even scarier, Go is (depending on the version of the ko rule you use) either EXPTIME-complete or EXPSPACE-complete https://en.wikipedia.org/wiki/EXPSPACE [wikipedia.org], https:/ [wikipedia.org]
      • by Anonymous Coward

        Go is a closed problem. You don't have to worry about AI until it's able to solve open ended problems.

        • So, how do you decide what is a "closed" problem and what is an open problem? More to the point, I suspect that for whatever definition you are using of "open ended" by the time an AI can beat humans at a bunch of them, it may be too late.
A closed problem has definable limits. For example, I struggle to see how an AI system would accurately render assistance as a psychologist, a field in which I feel we barely understand ourselves.

Then there are the simple bodies we can give them. A nurse's work is varied but still well defined and considered closed. But do we have a body for the AI system to do that job and its many tasks as quickly and cheaply as a human employee?

> A closed problem has definable limits. For example, I struggle to see how an AI system would accurately render assistance as a psychologist, a field in which I feel we barely understand ourselves.

              The limits are easy to define: the AI system can speak and listen, and the results will be evaluated by asking the patient to fill out a standard questionnaire before and after the sessions.

If you can't write all the rules in a 20-page booklet, it is an open problem.
          • So, how do you decide what is a "closed" problem and what is an open problem? More to the point, I suspect that for whatever definition you are using of "open ended" by the time an AI can beat humans at a bunch of them, it may be too late.

Not only is Go a closed problem, it is a well-defined problem with a well-defined solution and well-defined scoring. Even something like StarCraft has well-defined incremental scoring and a well-defined goal at the end. The biggest limitation of AI right now is open-ended problems. Even if solving Go is very complex, Go is a very simple game where the board can easily be represented by a tiny two-dimensional array. How do you make a digital representation of an unfolded pile of laundry or the stuff in

            • Comment removed based on user account deletion
> Go is a very simple game where the board can easily be represented by a tiny two-dimensional array

Correct, but that's not really what you want to know. What you need to know is how big your winning chances are in a given position, and what move you should play. Having a two-dimensional array doesn't help you with those answers. You need a representation that actually captures the essence of the position. The beauty of AlphaGo is that it created a good internal representation by itself; a rough sketch of that interface follows below.

              You can do the sam
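
To make the contrast above concrete, here is a minimal Python sketch. The board encoding (0 empty, 1 black, -1 white) is a hypothetical choice for illustration: the raw two-dimensional array really is trivial to build, but the useful object is a mapping from that array to move probabilities and a win probability, which is the kind of interface AlphaGo's policy and value networks learn. The stub below only shows the shape of that interface, with uniform-random placeholders standing in for a trained network.

import numpy as np

# The raw representation mentioned above: a tiny 2D array.
# Hypothetical encoding: 0 = empty, 1 = black, -1 = white.
board = np.zeros((19, 19), dtype=np.int8)
board[3, 3] = 1     # a black stone
board[15, 15] = -1  # a white stone

def evaluate(position):
    """Return (move_probabilities, win_probability) for the side to move.

    Only the *shape* of a policy/value interface; a trained network would
    replace both placeholders with learned judgments derived from its own
    internal representation of the position.
    """
    legal = (position == 0).astype(np.float64)
    move_probs = legal / legal.sum()  # placeholder: uniform over empty points
    win_prob = 0.5                    # placeholder: no positional judgment
    return move_probs, win_prob

probs, win = evaluate(board)
print(probs.shape, win)  # (19, 19) 0.5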

    • It was only a few years ago people were saying that the best Go computers would never beat human players (..)

Quite possible that, for now, "better" is a relative term in the Go context. If humans use a limited set of strategies (and grey cells), then an AI may explore parts of the problem space that most humans never touched. And perhaps discover winning strategies there.

      But that only goes so far. A good analogy might be how bacteria compete & evolve in a resource-restricted environment. At the beginning, species A may take the upper hand because it multiplies faster than the competition. A bit later

    • Whoever said that was an idiot. Computers are very good at trying millions of patterns and establishing the best choice among them. AlphaGo is just the next logical advance from increased processing power, but it is doing the same thing we always knew AI could do. It doesn't know what a dog or cat is, much less associate a picture of a white kitten with a picture of a tabby cat and establish a commonality between them.
It was only a few years ago people were saying that the best Go computers would never beat human players because the game was so much more complex than chess.

      So it's time for the AI luddites to move the goalposts again.

    • by Anonymous Coward

      "We're getting to the point where AI decisions, even when explained, end up being too complex for humans to follow."

The problem with Go is the counting of points for each move, and humans waste a lot of time in that process. Computers only need to hit the correct sweet spot and have some basic positional "understanding" of the game. It was a shock to me how much guesswork human Go games involve: "We should use this move in this type of position," but there wasn't any actual move counting leading to the best

    • by k.a.f. ( 168896 )

      This is a scary path we are following.

No it isn't. It's inevitable. Unless you believe that human mastery of Go was somehow due to special, non-information-processing-related powers, there is no way that our superiority could last forever, since evolution works with glacial slowness while computer technology advances at breakneck speed. Everyone who professes themselves shocked that we cannot understand the inner workings of programs we ourselves wrote overlooks that we also can't understand the inner workings of the thought processes of Go grandmasters.

Let's be clear: the only thing we have lost here is superiority at a game. Stop talking like this is monumental.
It was only a few years ago people were saying that the best Go computers would never beat human players because the game was so much more complex than chess.

      And if the person went on to demonstrate that the number of reachable game states in Go vastly exceeds the same in chess, said person was speaking out of his or her butt hole. There's pretty much nothing stupider than a penis fight over the greater vastness, when the smaller vastness already exceeds your accessible light cone.

      The curvature of viable game play in

AI hasn't really changed much in years; computers have gotten more powerful, so some pattern-recognition tricks that used to take more effort don't anymore. The big breakthrough was figuring out how to write a decent AI to play Go optimally on modern equipment. After that, all they did was create random legal scenarios to play against the system, creating patterns that are unusual to us, and use the results of those outcomes to fine-tune the AI algorithm. There really isn't that much magic if you've ta

    • by Gorobei ( 127755 )

      You sound like an RNN trained quickly on a small dataset of popular science articles.

There have been significant advancements in ANN methods; one specific area is deep belief networks and their training algorithm (Hinton). Prior to that, it was known that multi-layer networks could outperform simpler networks for things like image recognition, but there wasn't a good way to train them.

      The newer models/methods are outperforming the previous simpler models/methods.
  • by alvinrod ( 889928 ) on Friday October 20, 2017 @05:10PM (#55406159)
Here's an interesting thought: would these people still say the same thing about the games if they were told these were AI games, but in reality they were actually games between two human players?

    It reminds me of the recent story where some kids put a pineapple in an art exhibition as a joke [independent.co.uk] and people thought it was art. Most people will believe and/or spew pure bullshit if they think it's what's expected of them.
It reminds me of the recent story where some kids put a pineapple in an art exhibition as a joke and people thought it was art. Most people will believe and/or spew pure bullshit if they think it's what's expected of them.

Are you trying to be ironic?
Frederick the Great [wikipedia.org] might have offered a sum of money. But people nowadays would wonder why it's taking the players a second to make a move when it should be completing multiple games a second.

I have absolutely zero clue about what goes into commentating on a game of Go, other than that the scoring rules are weird. You've got a point about people being led into scenarios where they give funny commentary, but there are real differences between AI and human players. I imagine it's the difference between how one n

The scoring rules of Go are actually very simple.
You surround some territory with your stones; that territory is your score.
As you likely have many small territories, the sum of those is your score. (There's a minimal counting sketch at the end of this thread.)

        • by HiThere ( 15173 )

          It's only simple if you play the game out to the end. All that counts is the open territory that's left when the game is played out, but usually the game is decided long before that point, and both sides agree.

OTOH, I'm not a real Go player, and I don't know championship rules. Perhaps they do play out to the end.

          • The end is when both players agree it makes no sense to play any further.
            And that does not affect the scoring (rules) in the slightest.

E.g. there is, let's say, a 20-point territory left, but to score 2 points you would need to place 13 stones in an 8- or B-shaped pattern. The likelihood that you could build such a pattern while I watch you make pointless moves is zero.

With those 13 stones you would have taken more than half of the supposed 20-point territory, so I have no option to compensate. (As there is no place left
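
Here is the counting sketch promised above: a minimal Python illustration of plain territory counting, assuming a hypothetical '.'/'B'/'W' board encoding. An empty region scores for a player only when it borders stones of that color alone. This is area counting only; captures, dead stones, prisoners, and komi are deliberately ignored, and no official ruleset is implied.

def territory_score(board):
    rows, cols = len(board), len(board[0])
    seen = set()
    score = {'B': 0, 'W': 0}
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != '.' or (r, c) in seen:
                continue
            # Flood-fill one empty region, recording which stone colors it touches.
            region, borders, stack = [], set(), [(r, c)]
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        if board[ny][nx] == '.':
                            if (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                        else:
                            borders.add(board[ny][nx])
            if len(borders) == 1:  # bordered by one color only: its territory
                score[borders.pop()] += len(region)
    return score

# Toy 5x5 position: black fences off the left edge, white the right edge;
# the middle column touches both colors, so it is neutral (no one's territory).
board = [list(row) for row in (
    ".B.W.",
    ".B.W.",
    ".B.W.",
    ".B.W.",
    ".B.W.",
)]
print(territory_score(board))  # {'B': 5, 'W': 5}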

      • by HiThere ( 15173 )

The problem is that the solution you converge to depends on how far ahead you look and on how you evaluate the look-ahead positions. Something that looks two moves further ahead may well converge on a very different answer in some positions, and one with a different evaluation function definitely will. And there's usually a trade-off between good evaluation and deep look-ahead within a set of constraints, whether time or RAM. (A toy demonstration follows below.)

So there's no real reason to presume that people and AIs would converge to the same solutions.
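
The toy demonstration promised above, in Python. Everything here is made up for illustration: the game (each turn, machines you own pay 3 cash, and you either work for +2 cash or build a machine for -5 cash), the moves, and both leaf-evaluation functions; nothing about this is AlphaGo's actual algorithm. The point is only that changing either the search depth or the evaluation function changes which first move the search converges to.

def play(state, move):
    """Apply one move of the toy game to (cash, machines)."""
    cash, machines = state
    cash += 3 * machines  # income from machines already built
    if move == "work":
        cash += 2
    else:  # "build"
        cash -= 5
        machines += 1
    return (cash, machines)

def best_move(state, depth, evaluate):
    """Depth-limited lookahead: pick the first move whose subtree scores best."""
    def search(s, d):
        if d == 0:
            return evaluate(s)
        return max(search(play(s, m), d - 1) for m in ("work", "build"))
    return max(("work", "build"),
               key=lambda m: search(play(state, m), depth - 1))

cash_only = lambda s: s[0]               # leaf evaluation A: count cash only
with_assets = lambda s: s[0] + 8 * s[1]  # leaf evaluation B: machines have value

for depth in (1, 4):
    for name, evaluate in (("cash-only", cash_only), ("with-assets", with_assets)):
        print(f"depth={depth}, eval={name}: {best_move((0, 0), depth, evaluate)}")

# A depth-1 search with the cash-only evaluation picks "work" (building looks
# like a pure loss); searching four moves ahead, or valuing machines in the
# evaluation itself, flips the choice to "build". Same game, different answers.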

    • by vux984 ( 928602 )

A more interesting thought than "people can be fooled into thinking that a pineapple is art" is this: if contemplating the pineapple as art makes you think, makes you see the pineapple in a new way... how is that not art?

      What, after all, is art, except something to make you think or to see something in a new way?

      • by HiThere ( 15173 )

Well, someone, I believe it was Claes Oldenburg, sculpted a thing that looked like an ashtray full of cigarette butts, and that was definitely art. But would it not have been art if someone had substituted a real ashtray full of cigarette butts? Even though it looked just the same? (Of course, it probably wouldn't smell the same... I didn't hear that he perfumed his sculpture.)

  • Let's play global thermonuclear war.

    What side do you want??

    1. USA

    2. North Korea

    3. Show full list

Reminds me of the FPGA experiments they did years ago. They used feedback neural networks or something to design FPGA circuits. The results were very strange and difficult to understand, like relying on stray capacitance between pads and wires or something like that.

  • The AI That Has Nothing to Learn From Humans

    Someday (soon?) that will be all of them.

  • by muninn ( 103385 ) on Friday October 20, 2017 @06:00PM (#55406491)

They have actually released a trove of game records from the newest version, two of them commented by a professional:
    http://www.alphago-games.com/

  • Not quite (Score:4, Interesting)

    by nospam007 ( 722110 ) * on Friday October 20, 2017 @06:25PM (#55406621)

    "Lockhart compares them to "people sword-fighting on a tightrope."

    If you see that you lose, you can always cut the rope.

If it could understand a human well enough to play at exactly their level, that would be more impressive. Or anticipate what their weaknesses were and train them to beat those weaknesses by turning the game a certain way.
  • The rules of Go to begin with. Humans gave it a concrete objective. It is really cool how it finds its own ways to work toward that objective, but it's not *that* surprising that it can do so.

  • ... give us a chance to see how much our best experts actually still suck at their field.

Humans have feelings, a body, a complex interaction between biology, culture, and mind. AI doesn't have that. It just plods away at the problem, not looking right or left; not being able to look right or left. Interviewing Dendi as he lost against the AI at the Dota 2 International was possible; the AI wouldn't have been able to do that.

    My suspicion is that self-reproducing AI will be something like a swarm of robot-insects, fad
