Google's AlphaGo Will Face Its Biggest Challenge Yet Next Month -- But Why Is It Still Playing? (theguardian.com)

From a report on The Guardian: A year on from its victory over Go star Lee Sedol, Google DeepMind is preparing a "festival" of exhibition matches for its board game-playing AI, AlphaGo, to see how far it has evolved in the last 12 months. Headlining the event will be a one-on-one match against the current number one player of the ancient Asian game, 19-year-old Chinese professional Ke Jie. DeepMind has had its eye on this match since even before AlphaGo beat Lee. On the eve of his trip to Seoul in March 2016, the company's co-founder, Demis Hassabis, told the Guardian: "There's a young kid in China who's very, very strong, who might want to play us." As well as the one-on-one match with Ke Jie, which will be played over the course of three games, AlphaGo will take part in two other games with slightly odder formats. But why is Google's AI still playing Go, you ask? An article on The Outline adds: Its [Google's] experiments with Go -- a game thought to be years away from being conquered by AI before last year -- are designed to bring us closer to designing a computer with human-like understanding that can solve problems like a human mind can. Historically, there have been tasks that humans do well -- communicating, improvising, emoting -- and tasks that computers do well, which tend to be those that require lots of computation -- math of any kind, including statistical analysis and the modeling of, say, a journey to the moon. Slowly, artificial intelligence scientists have been pushing that barrier. [...] Go is played on a board with a 19-by-19 grid (updated after readers pointed out it's not an 18x18 grid). Each player takes turns placing stones (one player with white, the other with black) on empty intersections of the grid. The goal is to completely surround the stones of the other player, removing them from the board. The number of possible positions, far greater than in chess thanks in part to the size of the board and the ability to play on any unoccupied intersection, is part of what makes the game so complex. As DeepMind co-founder Demis Hassabis put it last year, "There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, 000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, 000,000,000,000,000,000,000,000,000,000 possible positions."
  • Grid (Score:4, Informative)

    by Anonymous Coward on Monday April 10, 2017 @03:01PM (#54209497)

    It's a 19x19 grid, not 18x18.

    • by pahles ( 701275 )
      and there should not be a space after the 51st comma.
    • 19x19, 13x13, or 9x9. The number of possible positions is 2^361, but the number of possible moves is a lot bigger. The problem is that projecting those moves is long and difficult; you have to be able to extrapolate localized influence and identify how that impacts a global strategy in an abstract sense, or else you can't play. A computer can't track all of the possible moves, because losing a big position might be less relevant than a play elsewhere that establishes power to restrict your gains to 1/3 of the

      • by Anonymous Coward

        The number of possible positions is 2^361

        Naively I would say 3^361, since each intersection can have a black stone, a white stone, or be blank.

      • by RJBeery ( 956252 )
        Actually the number of possible positions on a 19x19 grid is 3^361 ≈ 1.74*10^172, which is very close to the number given in the summary. A given intersection can be white, black or blank.
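
        A quick sanity check of that arithmetic (a minimal Python sketch; arbitrary-precision integers make it trivial):

            # Each of the 361 intersections is blank, black, or white,
            # ignoring the rules entirely.
            positions = 3 ** 361
            print(len(str(positions)))   # 173 digits
            print(f"{positions:.2e}")    # ~1.74e+172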
        • Then you might want to count only the legal positions, which may have been computed or merely estimated, but is likely not an easy problem in itself.

          • by RJBeery ( 956252 )
            I thought about that for a bit but I'm not sure there are any illegal positions.
            • One simple example is a black stone surrounded by white stones. It may appear on the board momentarily, before the stone is removed, but I doubt it counts as a position. Include two such formations and the position is definitely illegal.

              Turns out I did once read about this, and to sum it up, only about one position in 81 is legal. It was only computed in January 2016 :)
              https://tromp.github.io/go/leg... [github.io]

              The software used for these computations is available at my github repository. Running "make" should compute L(3,3) in about a second. For running an L19 job, a beefy server with 15TB of fast scratch diskspace, 8 to 16 cores, and 192GB of RAM, is recommended. Expect a few months of running time. (...)
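
              A rough cross-check of that one-in-81 figure (a sketch; the rounded L(19,19) value is taken from the linked page, and the leading digits are enough for the ratio):

                  all_positions = 3 ** 361       # every coloring of 361 intersections
                  legal = 2.08168199382e170      # Tromp's L(19,19), rounded
                  print(all_positions / legal)   # ~83.6, the same ballpark as 1 in 81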

              • And then divide everything by 2, because flipping every piece on the board from black to white and vice versa yields a functionally identical state.
                Also remove rotations and mirrors. This isn't trivial.

                • by bentit ( 2763157 )
                  If you're going to divide by 2 for flipping black and white, don't forget to multiply by 2 again unless the position already records whether black or white moves next. Nope, it's not trivial.
                  • Nope. That's a game state, not a board state. We're talking about the number of legal positions (board states).
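
                  To see how non-trivial the folding is, here is a brute-force toy in Python (3x3 board only; an illustration, not anything Tromp's software actually does). It canonicalizes every coloring over the 8 rotations/reflections plus the optional color swap:

                      from itertools import product

                      N = 3  # toy board; 19x19 is hopeless for brute force

                      def rotate(b):
                          # 90 degrees clockwise: new[r][c] = old[N-1-c][r]
                          return tuple(tuple(b[N - 1 - c][r] for c in range(N))
                                       for r in range(N))

                      def mirror(b):
                          return tuple(tuple(reversed(row)) for row in b)

                      def swap(b):
                          # exchange black (1) and white (2); empty (0) stays put
                          f = {0: 0, 1: 2, 2: 1}
                          return tuple(tuple(f[x] for x in row) for row in b)

                      def canonical(b):
                          # smallest of the 16 equivalents: 8 spatial x optional swap
                          variants = []
                          for v in (b, swap(b)):
                              for _ in range(4):
                                  v = rotate(v)
                                  variants.append(v)
                                  variants.append(mirror(v))
                          return min(variants)

                      boards = (tuple(tuple(cells[r * N:(r + 1) * N]) for r in range(N))
                                for cells in product((0, 1, 2), repeat=N * N))
                      print(len({canonical(b) for b in boards}))  # 1444 vs 3^9 = 19683

                  Note the answer is not simply 3^9 / 16: symmetric boards coincide with some of their own transforms, which is exactly what Burnside's lemma accounts for.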

      • by GuB-42 ( 2483988 )

        AlphaGo is much more human-like than chess programs.
        It can go very deep very quickly down the game tree, possibly down to the very end, then backs up to explore other options as time allows. It uses neural networks and randomness to select moves that "look good". The neural networks are first trained on databases of human games, then refined by playing against itself. It essentially plays like a tireless human with perfect focus.
        This is in contrast to chess AIs that analyse every s

        • This is in contrast to chess AIs that analyse every single turn in order to find the optimal solution according to some predefined heuristics.

          This is essentially what AlphaGo is, except the heuristic is a trained neural network, instead of something hand-coded (and with a domain-specific priority queue of what move to investigate next).
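
          The search skeleton both of these comments describe is Monte Carlo tree search. A minimal UCT sketch in Python, with a toy Nim game standing in for Go and uniform random playouts standing in for the policy/value networks (none of this is DeepMind's actual code):

              import math, random

              class Nim:
                  # toy stand-in for Go: take 1-3 stones, taking the last one wins
                  def __init__(self, stones=15, player=1):
                      self.stones, self.player = stones, player
                  def moves(self):
                      return [n for n in (1, 2, 3) if n <= self.stones]
                  def play(self, n):
                      return Nim(self.stones - n, -self.player)
                  def winner(self):
                      # the side that just moved took the last stone
                      return -self.player if self.stones == 0 else None

              class Node:
                  def __init__(self, state, parent=None):
                      self.state, self.parent = state, parent
                      self.children, self.untried = {}, state.moves()
                      self.visits, self.wins = 0, 0.0

              def uct_child(node, c=1.4):
                  # exploitation + exploration; AlphaGo biases this choice with
                  # a policy network instead of treating all moves equally
                  return max(node.children.values(),
                             key=lambda ch: ch.wins / ch.visits
                                 + c * math.sqrt(math.log(node.visits) / ch.visits))

              def search(root_state, iters=5000):
                  root = Node(root_state)
                  for _ in range(iters):
                      node = root
                      while not node.untried and node.children:   # 1. select
                          node = uct_child(node)
                      if node.untried:                            # 2. expand one child
                          m = node.untried.pop()
                          node.children[m] = Node(node.state.play(m), parent=node)
                          node = node.children[m]
                      state = node.state                          # 3. random playout
                      while state.winner() is None:
                          state = state.play(random.choice(state.moves()))
                      w = state.winner()
                      while node is not None:                     # 4. back the result up
                          node.visits += 1
                          if w == -node.state.player:             # mover into node won
                              node.wins += 1
                          node = node.parent
                  return max(root.children, key=lambda m: root.children[m].visits)

              print(search(Nim(15)))  # converges on 3, leaving a multiple of four

          The "tireless human with perfect focus" feel falls out of step 1: visits pile up on lines whose playouts keep winning, so promising variations get read very deeply while weak ones are barely touched.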

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Indeed, also:

      The goal is to completely surround the stones of another player, removing them from the board.

      Only beginning players think that. The true goal of the game is to occupy more territory than the opponent.

      By the way, the best source of information on go is probably Sensei's Library [xmp.net]

  • The Vulcans understand what is needed to have a sentient computer.
  • Which go... (Score:3, Funny)

    by __aaclcg7560 ( 824291 ) on Monday April 10, 2017 @03:10PM (#54209561)

    Go as in... the game?

    Go as in... the programming language?

    Go as in... I had to go five minutes ago?

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Go as in... go fuck yourself

    • by shaunbr ( 563633 ) *
      Someone over at Google needs to port AlphaGo to Go so someone can play a Go-based version of Go while they go.
    • I like the proverb in your sig, but a slightly more idiomatic translation might be: "The nail that sticks out furthest gets hammered the most". The best sounding translation (to my ears) probably changes the original meaning slightly, but scans better in English: "The nail that sticks out furthest gets hammered hardest".

      • I like the proverb in your sig, but a slightly more idiomatic translation might be: "The nail that sticks out furthest gets hammered the most". The best sounding translation (to my ears) probably changes the original meaning slightly, but scans better in English: "The nail that sticks out furthest gets hammered hardest".

        This is the fourth variation of this proverb I've seen regarding my Slashdot signature. The wording of my signature was how I heard it after I became a Christian in college 25 years ago. I kept getting "hammered" by others in the ministry because I threatened to raise the low bar that the leadership was comfortable at. I've always contended that God worked from the bottom-up (fellowship) and not top-down (leadership). The leadership ultimately won when they kicked me out 13 years later, spreading rumors tha

  • number of possible positions as reported by Demis Hassabis.
  • This should lead to people being more concerned about general artificial intelligence rather than less. While it is pretty clear that the methods used in things like Alpha-Go cannot by themselves do much beyond what they are intended to do, it should also be clear that we're in a situation where many rapid improvements in AI are occurring, and some of these are tackling problems that were thought to be decades away. If it turns out that general AI requires only a few additional breakthroughs, or if it turns
    • Or maybe we just have enough power to play Go now. Whether AlphaGo is AI or not, it is a totally different problem than something like driving a car and will likely lead to very little that is practical. AlphaGo is simply repeating the same calculation to greater lengths, the point being that everything inside that computation is well understood and therefore expected. The complicated and difficult part of driving a vehicle requires a computer be able to take a situation that it did not expect and make t
      • by Qzukk ( 229616 )

        It seems to me that the obvious test of whether or not AlphaGo "understands" Go would be to have it try to play on a 21x21 grid.

        • That would be a natural extension of the algorithm. A better test would be to see if it can recognize its creator through a camera and decide to lose gracefully to them.
          • A better test would be to see if it can recognize its creator through a camera and decide to lose gracefully to them.

            Recommended if the creator is a Wookie.

    • by epine ( 68316 )

      that were thought to be decades away

      How can any thinking person in this day and age not realize that AI predictions are HARDLY EVER worth the paper they're no longer printed upon?

      Successful AI prediction is five years away—always has been, and always will be.

      And no, the worthless prediction of consensus merit was not "decades" but "about a decade". I've been consuming machine learning podcasts day in and day out since the end of January. AlphaGo is not infrequently mentioned.

      My two favourite guests

    • This isn't AI. This is game playing. Games have strict rules and boundaries. These are easy for computers to solve, whether you use brute force or some other algorithm. Calling any kind of game playing AI is complete and utter bullshit. It is just people trying to capture some VC money.
      • Calling any kind of game playing to be AI is complete and utter bullshit.

        Well, it is related to AI, but that's about all you can say for it.

    • For an excellent, detailed book on the potential gifts and perils of AGI, I recommend Bostrom's book "Superintelligence."

      Yes, if you are into baseless conjectures about something that no one understands.

  • logical fallacy? (Score:4, Insightful)

    by zlives ( 2009072 ) on Monday April 10, 2017 @03:32PM (#54209739)

    "designing a computer with human-like understanding that can solve problems like a human mind can"
    we are talking about a game here with rules so in essence its actually the reverse.

    this is an exercise where a human is trying to play like a computer that can plan out a million moves ahead... and some how is able to stay close!!

    • by Anonymous Coward

      Not really, because the computer can't plan a million moves ahead in Go. The branching is too wide. It's why attempts to adapt the techniques used for Deep Blue from chess to Go were a failure. In chess, the branching factor was low enough that you COULD just read almost every possible move deep enough. And that's precisely why many pros didn't believe it would happen. AlphaGo uses deep neural networks that are an attempt to emulate the way real neural networks in our brains work.
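
      Back-of-the-envelope numbers for that branching argument (the ~35 and ~250 average branching factors are the commonly cited estimates for chess and Go, assumed here):

          # naive game-tree growth at a few depths (plies)
          for depth in (2, 4, 8):
              print(depth, 35 ** depth, 250 ** depth)
          # at 8 plies: ~2.3e12 chess positions vs ~1.5e19 for Go,
          # a factor of roughly seven million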

      • The computer is better at go because of the number of moves it can plan through, not because it is more intelligent.
        • The computer is better because it picks a small number of good moves to plan through, not because it picks a lot of them.

          • The MiniMax algorithm (one of the simplest game algorithms) does exactly the same thing. It looks a couple of moves ahead, and picks the best to its knowledge (the value function of that state). It can't brute force all chess moves either. The art is how to calculate a correct value function for the state of the board.
            The Deep Learning approach is probably more an exercise in finding features to correctly assess the state of the board at a given time rather than "brute forcing", although I'm sure it brute force
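
            For reference, the whole algorithm fits in a few lines once that value function is given. A sketch with hypothetical moves/apply_/value callables supplied by the caller (the "art" lives entirely in value):

                def minimax(state, depth, maximizing, moves, apply_, value):
                    # depth-limited minimax; value() scores a position from the
                    # maximizing player's point of view
                    legal = moves(state)
                    if depth == 0 or not legal:
                        return value(state), None
                    pick = max if maximizing else min
                    scored = [(minimax(apply_(state, m), depth - 1, not maximizing,
                                       moves, apply_, value)[0], m) for m in legal]
                    return pick(scored, key=lambda sm: sm[0])

                # toy use: "take 1-3 stones, taking the last one wins";
                # state is (stones, side to move), +1 is the maximizer
                moves = lambda st: [n for n in (1, 2, 3) if n <= st[0]]
                step = lambda st, n: (st[0] - n, -st[1])
                value = lambda st: -st[1]  # a terminal state was won by the last mover
                print(minimax((15, 1), 15, True, moves, step, value))  # (1, 3)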
          • So you're saying if you theoretically handicapped the computer to only calculate the number of moves that a human could, then it would do just as well?
            • by zlives ( 2009072 )

              let's grant that everything you and others say is 100% correct (even when contradictory).

              my point, however, was that any game (or process) based on computational expertise should have the computer (AI, NN, DL) ahead. these games are relevant as a measure to compare a human's logical processing capability against other humans, and we as humans mostly fail at being good at them because... chemicals or distractions or boredom make us less perfect players.

              thus this is not a measure of a computer thin

      • Re:logical fallacy? (Score:4, Interesting)

        by MasseKid ( 1294554 ) on Monday April 10, 2017 @06:40PM (#54210791)
        If you've read any of the professional commentary on the play style of AlphaGo, it's nothing like a human. The crux of its ability is to calculate the status of the board to a fraction of a point and take moves that advance its position by fractions of a point. Humans, on the other hand, tend to make moves that will swing the board by several points. As a result AlphaGo will never play a kami no itte, unlike Sedol, who did in his single win against the computer. In other words, AlphaGo is better at microaggression and the humans are still better at "the perfect play". Unfortunately for humans, it takes the absolute perfect play to beat microaggression.
      • Bullshit. Computer neural networks are nothing like the brain. More hyped bullshit.
      • Go and Chess are different, but labeling the branching factor as the distinction is plain wrong.

        The main difference between the two games is that in chess the pieces move while in go the pieces do not. A go board "grows" as the game is played, while a chess board "evolves."

      In go after a few moves are played in an area it is often apparent exactly how the area is going to play out. The 'when' part of the piece play is very often unimportant, and when the order of play is important it's an easy thing for a
      • Software neural networks are an extremely simplified and idealized representation that only deals with a limited high-level subset of electrical activity we could make sense of some decades ago. There's a lot of smaller scale, or short scale stuff left out as well as all the chemical activity - if neural networks worked a bit more like a brain, we would be able to try the algorithms with caffeine, heroin, cocaine, cannabinoids, opioids and other substances, to study whether and how the algorithms are workin

    • by bidule ( 173941 )

      a computer that can plan out a million moves ahead...

      Even if a computer were "human enough" to know which 10 moves are the best, it still takes a million positions to plan your next 3 moves (10 choices per turn over 6 plies is 10^6).

      Good human players feel way deeper than that.

  • As DeepMind co-founder Demis Hassabis put it last year, "There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, 000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, 000,000,000,000,000,000,000,000,000,000 possible positions."

    When you are talking to a technical audience, it is best to avoid using scientific notation. Right?

  • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Monday April 10, 2017 @04:03PM (#54209961) Journal

    Yeah, we get it... it's a big number. But writing it out longhand like that is just being needlessly cryptic... and at worst comes across as having been written by somebody who doesn't know shit about the actual number of combinations, and just decided to put a lot of zeros after the end of a 1 to make a number that sounds big. Try 1x10^172. This is far more readable, and those that know scientific notation will be able to understand just how big this number is.

    If you really feel that this doesn't adequately describe the scale of the number to people who don't know scientific notation, and want your article to be comprehensible to those people as well, then you can also add that it is considerably greater than the number of subatomic particles in the observable universe. And to be frank, if that doesn't convey just how fucking big the number is, then explicitly writing 172 zeros after a 1 isn't liable to either.

    • by mark-t ( 151149 )
      1x10^171... sorry. When I pasted it into a command line to count the characters with wc, I knew to remove the commas with sed, but apparently there was a blank in the middle there as well.
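
      For what it's worth, a filter that ignores commas, spaces and newlines alike sidesteps both gotchas. A sketch (the reconstructed figure is an assumption: a 1 followed by 57 groups of three zeros, which is what a 172-character count implies):

          figure = "1," + "000," * 56 + "000"        # stand-in for the pasted number
          print(sum(ch.isdigit() for ch in figure))  # 172 digits -> 1 x 10^171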
    • by bidule ( 173941 )

      Wikipedia says 2.08168199382×10^170 legal positions, but the same position can occur with multiple different ko states. This means the best move can be illegal, and you'd have to evaluate 2 moves in those cases.

    • Nah, just call it what it is: one sexquinquagintillion. Everyone knows how to name numbers larger than trillions, right?
  • by Anonymous Coward

    I don't think there are any "official" rankings for this, but even unofficially, is this even remotely true?

    • by Anonymous Coward

      https://www.goratings.org/

  • This number is larger than the hundreds of stars in the universe [youtube.com].

  • I imagine that, like most Google projects when they go live, AlphaGo is still in beta.

  • by sootman ( 158191 ) on Monday April 10, 2017 @05:32PM (#54210469) Homepage Journal

    I ran my own calculations and only came up with 999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999, 999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999, 999,999,999,999,999,999,999,999,999,999.

    Wait, sorry, I started counting at zero. Yup, it checks out.
