Human Go Champion 'Speechless' After 2nd Loss To Machine (phys.org)

Reader chasm22 points to a Phys.org report on Lee Sedol's second straight loss to AlphaGo, the Go-playing program developed by Google's DeepMind unit. The human champion said he was left "speechless" after Thursday's showdown. The human-versus-machine face-off lasted more than four hours, which, to Sedol's credit, is a slight improvement over his previous match, which ended with him resigning with nearly half an hour still on the clock. "It was a clear loss on my part," Sedol said at a press conference on Thursday. "From the beginning there was no moment I thought I was leading." Demis Hassabis, who heads Google's DeepMind, said, "Because the number of possible Go board positions exceeds the number of atoms in the universe, top players rely heavily on their intuition." Sedol will battle Google's AlphaGo again on Saturday, Sunday, and Tuesday.

  • Milestone (Score:5, Insightful)

    by Lisandro ( 799651 ) on Thursday March 10, 2016 @03:30PM (#51673349)

    Having a competitive Go engine capable of beating a 9-dan player is huge. Huge.

    • by irrational_design ( 1895848 ) on Thursday March 10, 2016 @03:32PM (#51673375)

      My brother Dan is older than 9; maybe he should have a go.

    • by delt0r ( 999393 )
      Not really. Sure, it is for traditional "search AI". But this is hardly even a blip towards the "strong AI" that is going to replace everyone's jobs and make humanity its bitch. It is, however, a very interesting approach, combining two different methods.
      • Re:Milestone (Score:5, Informative)

        by Lisandro ( 799651 ) on Thursday March 10, 2016 @03:45PM (#51673487)

        This is nothing like the "regular" AI used in modern chess engines - those are useless for the essentially infinite game trees you get in Go. AlphaGo proposes moves using a machine-learning neural network and then selects the best one using classic heuristics.
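
        Roughly, that division of labor looks like this - a toy sketch in Python, where policy_net and evaluate are made-up stand-ins rather than anything from DeepMind's actual code:

          import random

          def pick_move(position, legal_moves, policy_net, evaluate, top_k=8):
              # Stage 1: a learned policy assigns a prior to every legal move;
              # keep only the most promising candidates.
              priors = policy_net(position, legal_moves)
              candidates = sorted(zip(legal_moves, priors),
                                  key=lambda mp: mp[1], reverse=True)[:top_k]
              # Stage 2: a separate evaluation ranks the survivors.
              return max(candidates, key=lambda mp: evaluate(position, mp[0]))[0]

          # Dummy stand-ins so the sketch runs; a real engine would have a
          # trained network and deep search here.
          fake_policy = lambda pos, moves: [random.random() for _ in moves]
          fake_eval = lambda pos, move: random.random()
          print(pick_move("board", ["D4", "Q16", "C3"], fake_policy, fake_eval))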

        • by KGIII ( 973947 )

          I understand that these are very different things - chess vs. Go. Why is the math so difficult for Go? Is it the math, the computational resources, the time, or something else?

          • Re:Milestone (Score:5, Informative)

            by Lisandro ( 799651 ) on Thursday March 10, 2016 @03:59PM (#51673605)

            Search space, basically - the number of moves you have to inspect before selecting the best one. Chess's game tree has about 10^120 possible games (the Shannon number), but opening books and pruning heuristics cut the moves you actually inspect down to a number that still lets you pick very strong moves. At that point it's just a matter of throwing CPU power at the problem.

            Go is a completely different beast, though. The standard 19x19 board has about 2x10^170 legal positions, and even a "small" beginner's 13x13 board has around 10^79. So no amount of CPU power, now or in the future, is going to brute-force the problem. Go is interesting because every serious engine out there uses some form of adaptive AI - AlphaGo uses a machine-learning neural net which had to be trained like a human would be, on over 30 million recorded moves.
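
            You can sanity-check those orders of magnitude with the crude upper bound that every point is empty, black, or white - 3^(n^2) positions for an n x n board:

              import math

              # log10(3^(n*n)) = n*n * log10(3); true legal-position counts sit a
              # bit below this bound, but the exponents land in the same ballpark.
              for n in (9, 13, 19):
                  exponent = n * n * math.log10(3)
                  print(f"{n}x{n}: fewer than 10^{exponent:.0f} positions")

              print("atoms in the observable universe: ~10^80")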

            • FIX: those are counts of legal board positions, not move sequences - the full 19x19 game tree is more like 10^360. Mistaken by an inch there :)

            • by KGIII ( 973947 )

              Interesting... I wonder what quantum computing (if it works out as expected/hoped) will do for this, since it can occupy both states simultaneously - though that would be more like solving the game than playing it, and with those numbers I'm not sure it can be solved. These learning/adaptive processes - I wonder if they can be pitted against each other (or themselves) and learn from playing each other? I'm guessing someone's already tried that. A quick Google didn't turn anything up, but it's kind of obvious, so I'm assuming it's been done.

          • There are more variations on the opening in Go than there are possible parallel universes, so even a quantum computer can't select between all possible outcomes and find the best. Note that "variations" doesn't mean just which move you play, but what the eventual result is; the third move of the game can have a major impact on a play made 120 moves later. The number of legal, reasonable outcomes from any early move quickly exceeds the number of quantum states in the universe raised to itself as a power.
    • Re:Milestone (Score:5, Interesting)

      by larryjoe ( 135075 ) on Thursday March 10, 2016 @04:09PM (#51673655)

      I agree that this achievement by AlphaGo is extremely significant. However, I wonder if future experiments could focus more on comparing the human and computer move-selection processes by minimizing tangential aspects - the other differences that perhaps reflect human frailties more than strength of computation. In particular, I would like to see tournament rules removed, because such rules were formulated to limit humans.

      First, I would like to see time limits for moves eliminated. The computer can be augmented with extra hardware, higher clock frequencies, etc. to render time limits inconsequential for it, but the human cannot. As was seen in the 2nd match, time pressure potentially hurt Mr. Lee not only by limiting his thinking time but, more importantly, by undermining his emotional stability.

      Second, I would like to see how the computer fares against a consensus of experts. By consensus, I'm imagining a group of 5-10 top players who discuss the best next move and then select it, ideally by consensus or at least by majority vote. Even the top players rarely maintain top play for every single move of a match; my hope is that a consensus of experts minimizes this human failing.
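
      A minimal sketch of that voting protocol, with the experts as hypothetical callables that each return a move:

        from collections import Counter

        def consensus_move(position, experts):
            # Each expert proposes one move for the current position.
            votes = Counter(expert(position) for expert in experts)
            move, count = votes.most_common(1)[0]
            if count > len(experts) // 2:
                return move                  # a true majority agrees
            return experts[0](position)      # no majority: defer to the top-rated player

        # Toy usage: three "experts" with fixed opinions.
        panel = [lambda p: "D4", lambda p: "D4", lambda p: "Q16"]
        print(consensus_move("empty board", panel))   # -> D4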

      As a side note, I think the setup of a human vs. computer experiment matters enormously. As an example, the Watson triumph in Jeopardy was entirely expected but not very significant in terms of comparing human vs. computer thought. Rather, human buzzer-pushing reflexes were compared against computer reflexes, and in that comparison the computer should never lose. In Jeopardy, the buzzer's impact is only minimized when one contestant knows more than the others; if all contestants know most of the answers, the game devolves into a buzzer contest, which was the situation with Watson, Ken Jennings, and Brad Rutter. What would have been much more interesting would have been to let Watson and the humans each respond within a fixed time window and compare the percentage of correct responses, without the impact of buzzers. Watson might still have won, but I would be shocked if the disparity were large. In other words, eliminate tournament rules that impose artificial limits on humans - rules crafted to maximize entertainment value and to magnify differences rather than measure actual skill.

      • by lorinc ( 2470890 )

        By doing so, you're just delaying the inevitable. Even with your rules the computer would eventually win - and "eventually" means sooner than you think.

      • Re:Milestone (Score:5, Insightful)

        by Shawn Willden ( 2914343 ) on Thursday March 10, 2016 @05:09PM (#51674145)

        As an example, the Watson triumph in Jeopardy was entirely expected but not very significant in terms of comparing human vs. computer thought.

        Absolutely wrong.

        Watson's win was *far* from expected, and it was very significant. Okay, sure, the machine is faster at buzzing in, but that's not what was interesting or significant. What was interesting was that Watson was able to do fairly free-form natural language processing, and able to draw on not just direct knowledge, but indirect inference, context and even metaphor. What was amazing was that Watson was able to compete on something like a level playing field against humans in this contest of very fuzzy questions, er, answers. Whether Watson won or lost didn't actually matter much. What was amazing was that it was able to compete at all.

        Most AI researchers would actually have predicted that Jeopardy was a tougher game for a computer to win than Go.

        • As an example, the Watson triumph in Jeopardy was entirely expected but not very significant in terms of comparing human vs. computer thought.

          Absolutely wrong.

          Watson's win was *far* from expected, and it was very significant. Okay, sure, the machine is faster at buzzing in, but that's not what was interesting or significant. What was interesting was that Watson was able to do fairly free-form natural language processing, and able to draw on not just direct knowledge, but indirect inference, context and even metaphor. What was amazing was that Watson was able to compete on something like a level playing field against humans in this contest of very fuzzy questions, er, answers. Whether Watson won or lost didn't actually matter much. What was amazing was that it was able to compete at all.

          Most AI researchers would actually have predicted that Jeopardy was a tougher game for a computer to win than Go.

          We're not actually in conflict. It was a huge deal that Watson could play at all and form correct responses in a Jeopardy game. It wasn't significant that it beat the humans, because given its buzzer advantage it could know less than they did and still win handily. What I'm suggesting is structuring future human-computer competitions to eliminate such tangential distractions.

        • Both of them are hard... unless you know the trick. If you only know the trick to one of them, then the other one is harder.
      • by Bender0x7D1 ( 536254 ) on Thursday March 10, 2016 @05:24PM (#51674251)

        Second, I would like to see how the computer fares against a consensus of experts. By consensus, I'm imagining a group of 5-10 top players who discuss the best next move and then select it, ideally by consensus or at least by majority vote.

        So you want a RAGE against the machine? (Redundant Array of Go Experts)

    • Re:Milestone (Score:5, Interesting)

      by bluefoxlucid ( 723572 ) on Thursday March 10, 2016 @04:17PM (#51673729) Homepage Journal

      Lee got slightly better against it; I'd wager a 10- or 20-game match would quickly see him competitive. The machine will eventually behave like an AI: human Go players will essentially learn how it thinks and counteract its particular behaviors. Lee will eventually learn to manipulate the machine; it's *very* intelligent, but not creative or insightful.

      • Re: (Score:2, Interesting)

        by Anonymous Coward
        That's not how it happened with chess. My prediction is that no human will ever beat the top Go AI again.
      • Re:Milestone (Score:4, Interesting)

        by Joce640k ( 829181 ) on Thursday March 10, 2016 @05:06PM (#51674127) Homepage

          it's *very* intelligent

        No, it isn't. It has a lot of processing power and a well-tuned set of heuristics that let it assign a score to each board position. That's not intelligence; it's data processing.

      • Re:Milestone (Score:4, Interesting)

        by beelsebob ( 529313 ) on Thursday March 10, 2016 @05:22PM (#51674235)

        Quite the opposite. Google actually expected that the AI would lose a couple of matches while it learned the kinds of moves a really top Go player makes, and then improve. I expect that no one will ever beat this AI unless they come up with a completely novel strategy for playing Go, and even then they're unlikely to win.

        • Re:Milestone (Score:5, Informative)

          by LetterRip ( 30937 ) on Thursday March 10, 2016 @06:25PM (#51674625)

          The designers at DeepMind didn't expect it to lose a couple of matches, and the version of AlphaGo playing was finalized before the start of the match, so it couldn't 'learn the kinds of moves that a really top Go player makes' mid-series. So you are wrong on all counts.

          They had a good idea of the bot's playing strength when they offered the challenge, and it was probably such that they expected at minimum a 3:2 win for the bot.

      • Why do you say that? The commentators all described AlphaGo as "creative". If you think it wasn't, I'd love to know why.

        • People describe coffee mugs with eyes and mouths as creative, intelligent, and emotional. AlphaGo knows how to execute a particularly advanced search algorithm; it doesn't combine old knowledge to create novel knowledge. A creative human will say, "Hmm, this class of tesuji is useful in these situations, but here's a different situation I can't find a good answer for. I see a few shapes and patterns that look vaguely like some familiar pattern I've used these tesuji in; perhaps I could alter the tesuji to fit."

          • Re: Milestone (Score:5, Interesting)

            by Your.Master ( 1088569 ) on Thursday March 10, 2016 @06:19PM (#51674589)

            It's a trained neural network combined with other machine-learning techniques, not just a bespoke algorithm. It operates specifically by combining old knowledge to create novel knowledge. That's the foundation of the algorithm.

            It's not obvious why this "creativity" in the context of Go is fundamentally less effective than human "creativity" in the context of Go.

          • AlphaGo updates its Policy Network and Value Network based on self-play. So yes, it does indeed combine 'old knowledge' to create novel knowledge.
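
            In schematic form, that self-play loop is something like this (toy stand-ins, not the published training setup):

              import random

              class ToyNet:
                  # Stand-in for a real policy or value network.
                  def update(self, state, **kwargs):
                      pass

              def play_one_game(policy):
                  # A real loop would sample moves from the policy until the game
                  # ends; here we fake a short game and a random winner (+1/-1).
                  return [f"state-{i}" for i in range(5)], random.choice([+1, -1])

              def self_play(policy, value, n_games=100):
                  for _ in range(n_games):
                      states, winner = play_one_game(policy)
                      for s in states:
                          value.update(s, target=winner)      # regress value toward the result
                          policy.update(s, advantage=winner)  # reinforce the winner's choices
                  return policy, value

              self_play(ToyNet(), ToyNet())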

      • Re:Milestone (Score:5, Informative)

        by ljw1004 ( 764174 ) on Thursday March 10, 2016 @06:18PM (#51674585)

        Lee will eventually learn to manipulate the machine; it's *very* intelligent, but not creative or insightful.

        Not creative? That's your opinion. Here's what other people (including serious Go professionals) think...

        "AlphaGo met Lee’s solid, prudent play with a creativity and flexibility that surprised professional commentators" - https://gogameguru.com/alphago... [gogameguru.com]

        An Youngil (8d) wrote of AlphaGo playing Black: "Black 13 was creative ... Black 37 was a rare and intriguing shoulder hit ... Black 151, 157 and 159 were brilliant moves, and the game was practically decided by 165 ... AlphaGo’s style of play in the opening seems creative! Black 13, 15, 29 and 37 were very unusual moves." -- https://gogameguru.com/alphago... [gogameguru.com]

        Redmond (9d) wrote "I was impressed with AlphaGo’s play. There was a great beauty to the opening ... It was a beautiful, innovative game. ... AlphaGo started with some very unusual looking moves ... It played this shoulder-hit here which was a very innovative move ... I really liked the way it played in the opening, because I wasn't so impressed about the orthodox October games, but now it's playing a much more interesting exciting game." -- http://googleasiapacific.blogs... [blogspot.com]

        Anders Kierulf (3d, creator of SmartGo) wrote: "The peep at move 15: This is usually played much later in the game, and never without first extending on the bottom. AlphaGo don’t care. It adds 29 later, and makes the whole thing work with the creative shoulder hit of 37 ... AlphaGo don’t care, it just builds up its framework, and then shows a lot of flexibility in where it ends up with territory." -- http://www.smartgo.com/blog/al... [smartgo.com]

        Maybe you start from the philosophical axiom that "a computer can by definition never be considered creative". That's fair enough, but it's not the way the Go-playing community uses the word "creative".

      • The machine will eventually behave like an AI: human Go players will essentially learn how it thinks and counteract its particular behaviors.

        Except that the machine is able to learn from past experiences as well. So games against a "creative" player can be fed back into the learning algorithm and, like the Borg, assimilated. It's not as if AlphaGo has a hardcoded playing style. I'm sure the developers don't even know exactly how it works, since much of what a neural network learns internally is pretty opaque. If anything, AlphaGo should be able to 'emulate' the playing styles of different players by weighting inputs from the past games of specific players.

  • Welcome to (Score:5, Funny)

    by Avarist ( 2453728 ) on Thursday March 10, 2016 @03:32PM (#51673371)
    something something.... overlords... something something....
    • by Anonymous Coward

      Robots will be having debates on whether or not those pesky bald monkeys actually created them. They will be digging up old electronic waste and claiming that they evolved from the iPad and iPhone and the assembly line robots.

      There will be debates about what to download to their children.

      There will be the "Save the Humans" organizations to keep robots from indiscriminately killing the bald monkeys that inhabit their attics and basements. Human traps will be available at Robo*Mart.

      And I have been watching w

  • Date clarification (Score:5, Informative)

    by Anonymous Coward on Thursday March 10, 2016 @04:24PM (#51673781)

    Sedol will battle Google's AlphaGo again on Saturday, Sunday, and Tuesday.

    Note that for many people in the Western Hemisphere, those days are actually Friday, Saturday, and Monday.

    Live streams are here. [youtube.com]

  • by blindseer ( 891256 ) <blindseer.earthlink@net> on Thursday March 10, 2016 @04:26PM (#51673793)

    While the loser of the match was struck silent by the defeat, the computer just... will... not... stop... talking. GAWD! How annoying.

    Does the computer not know either pity or remorse for its opponent?

  • No, Mr. Lee, it is the computer that is speechless.
  • the problem (Score:3, Interesting)

    by Triklyn ( 2455072 ) on Thursday March 10, 2016 @05:00PM (#51674083)

    He's playing against it like it's a human opponent; he's playing against it like he's a Go champion. He needs to play against it like he's a programmer. I'd be curious how it deals with mirror play, or with wildly suboptimal plays. I wonder if it's overfit to Go played well.

    • Re:the problem (Score:5, Interesting)

      by LateArthurDent ( 1403947 ) on Thursday March 10, 2016 @05:25PM (#51674257)

      He's playing against it like it's a human opponent; he's playing against it like he's a Go champion. He needs to play against it like he's a programmer. I'd be curious how it deals with mirror play, or with wildly suboptimal plays. I wonder if it's overfit to Go played well.

      Ever tried that with a chess program? It doesn't go over well. A wildly suboptimal play just makes the tree search look really good for the computer. It doesn't get emotionally distraught because it thinks you've seen something it didn't; it just sees better-valued moves.

      This Go algorithm is even more complex. It's a neural-network algorithm combined with tree search (I don't play Go, but as I understand it, the number of permutations is so high that tree search alone isn't feasible). The network was trained on games from previous tournaments, on games against expert players, and on games against itself. I don't think you can throw anything at it that will break it. Computers have officially become better than humans at Go. In a decade or so, when really good Go programs can run on your phone, you'll see the same kinds of cheating attempts that currently plague chess competitions.
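
      Reduced to a skeleton, the combination looks something like this (every helper name here is invented):

        import random

        def search(state, depth, value_net, legal_moves, play):
            # Depth-limited negamax: where exhaustive search would have to keep
            # going, a learned value function estimates the outcome instead.
            moves = legal_moves(state)
            if depth == 0 or not moves:
                return value_net(state)   # score from the side-to-move's view
            return max(-search(play(state, m), depth - 1,
                               value_net, legal_moves, play)
                       for m in moves)

        # Toy stand-ins so it runs: a tiny counting "game" and a fake value net.
        legal = lambda s: [] if s >= 4 else [1, 2]
        print(search(0, 3, lambda s: random.uniform(-1, 1), legal,
                     lambda s, m: s + m))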

      • If you take a chessboard and randomize the pieces - a truly statistically random placement - it levels the playing field for humans a ton. Masters perform much closer to inexperienced players, because one of the things humans rely on is recognizing familiar patterns and working from them, and that no longer happens. Chess programs, however, do just fine: they can still simulate all the moves a good number of turns ahead and statistically pick the more optimal ones.
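
        As a toy illustration with the python-chess library - a material-only, fixed-depth search, nowhere near a real engine, and the scattered positions carry no legality guarantees:

          import random
          import chess

          VALUE = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                   chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

          def random_board(n_extra=6):
              # Both kings plus a few random pieces scattered on an empty board.
              board = chess.Board(None)
              squares = random.sample(list(chess.SQUARES), n_extra + 2)
              board.set_piece_at(squares[0], chess.Piece(chess.KING, chess.WHITE))
              board.set_piece_at(squares[1], chess.Piece(chess.KING, chess.BLACK))
              for sq in squares[2:]:
                  kind = random.choice([chess.QUEEN, chess.ROOK,
                                        chess.BISHOP, chess.KNIGHT])
                  board.set_piece_at(sq, chess.Piece(kind, random.choice([True, False])))
              return board

          def material(board):
              # White-positive material count - the only "knowledge" this engine has.
              return sum(VALUE[p.piece_type] * (1 if p.color else -1)
                         for p in board.piece_map().values())

          def negamax(board, depth):
              if depth == 0 or board.is_game_over():
                  return material(board) * (1 if board.turn else -1)
              best = -float("inf")
              for move in board.legal_moves:
                  board.push(move)
                  best = max(best, -negamax(board, depth - 1))
                  board.pop()
              return best

          # The search neither knows nor cares that the position is unfamiliar.
          print(negamax(random_board(), depth=2))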

    • by Tablizer ( 95088 )

      he's playing against it like it's a human opponent ... [instead] he needs to play against it like he's a programmer

      You mean download a dodgy API from the Web that allegedly does close to what the customer needs, shoehorn it into the app, hand it to the naive intern to debug, and leave early for the day?

  • What if you gave him time to get used to his opponent - say, six months? A human can learn and adapt. Would this "AI" adapt at least as well? What would happen after six months? I predict the man would definitely win.

  • AlphaGo 'Speechless' After 2nd Win vs. Human Go Champion
  • by JustAnotherOldGuy ( 4145623 ) on Thursday March 10, 2016 @06:40PM (#51674701) Journal

    My chess computer beat me every &^#@! time, but it was no match for me at kickboxing.

  • Well, it might be good at Go, but I wonder if it can play this game [youtube.com]. Or a good game of chess?
  • My understanding from 20 years ago was that the geometric progression of possible game permutations was so large that you couldn't possibly brute-force search very many moves ahead, so AI players used book openings and brute-force lookahead for the endgame but were pretty useless for the middle part of the game. How did they conquer the law of large numbers and solve this? Human players see patterns composed of large numbers of pieces rather than individual pieces; I think that's how they handle the complexity.
    • You don't need to map the entire landscape if you combine knowledge of the major maxima and minima with rules for searching out local detail. It is this combination - a map of rule parameters rather than a complete map of the territory - that makes the difference.
