Human Go Champion 'Speechless' After 2nd Loss To Machine (phys.org)
Reader chasm22 points to a Phys.org report about the second straight loss of Lee Sedol to AlphaGo, the program developed by Google's DeepMind unit. The human Go champion said he was left "speechless" after the showdown on Thursday. The human-versus-machine face-off lasted more than four hours, which, to Sedol's credit, is a slight improvement over his previous match, which had ended with him resigning with nearly half an hour still remaining on the clock.
"It was a clear loss on my part," Sedol said at a press conference on Thursday. "From the beginning there was no moment I thought I was leading." Demis Hassabis, who heads Google's DeepMind, said, "Because the number of possible Go board positions exceeds the number of atoms in the universe, top players rely heavily on their intuition."
Sedol will battle Google's AlphaGo again on Saturday, Sunday, and Tuesday.
Milestone (Score:5, Insightful)
Having a competitive Go engine capable of beating a 9-dan player is huge. Huge.
Re:Milestone (Score:5, Funny)
My brother dan is older than 9, maybe he should have a go.
Re: Milestone (Score:5, Funny)
You, sir, broke my parser.
Re: (Score:2)
Fruit flies like an arrow. Time flies - well, they're partial to kiwi or pineapple.
Are you sure it is not in karate ? (Score:2)
I hear anything Go-es there when it comes to a dan number 9.
Anyway, as a professional player, he is a Go-ner !
Bazinga !
Re: (Score:3)
Re:Milestone (Score:5, Informative)
This is nothing like the "regular" AI used in modern chess engines - those techniques are useless for an essentially infinite game tree like the one Go presents. AlphaGo proposes moves using machine-learned neural networks and then selects among them with a Monte Carlo tree search.
Re: (Score:2)
I understand that these are very different things - Chess v. Go. Why is the math so difficult for Go? Is it math or computational resources or time or?
Re:Milestone (Score:5, Informative)
Search space, basically, and the number of moves you have to inspect before selecting the best one. Chess has about 10^120 possible games (Shannon's estimate), but you can prune this using opening books and heuristics down to a sensible number that still lets you pick very strong moves. At that point it is just a matter of throwing CPU power at the problem.
Go is a completely different beast, though. Even a "small" 13x13 board has around 10^80 legal positions, the standard 19x19 board has about 2x10^170, and a 21x21 board is somewhere around 10^210. So even for small, beginner-sized boards, no amount of CPU power, now or in the future, will brute-force the problem. Go is interesting because every serious engine out there uses some form of adaptive AI - AlphaGo uses machine-learned neural networks that had to be trained the way a human would be, on over 30 million recorded moves.
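For what it's worth, the famous 10^170 figure is the legal-position count for the standard 19x19 board. A quick back-of-the-envelope sketch (Python assumed; it counts raw three-colourings of the board, an upper bound rather than the exact legal-position count) shows how fast the space grows with board size:

```python
# Rough upper bound on Go positions: every point is empty, black, or white,
# so an n x n board has at most 3^(n*n) colourings. Legality rules (captures,
# ko) remove some of these, but the growth rate is the point.
import math

def position_upper_bound(n):
    return 3 ** (n * n)

for n in (9, 13, 19, 21):
    digits = len(str(position_upper_bound(n)))
    print(f"{n}x{n}: at most ~10^{digits} board colourings")
```

Chess, by comparison, has something like 10^47 reachable positions by most estimates, which is why exhaustive-ish search plus pruning works there and not here.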
Re: (Score:3)
FIX: counting move sequences rather than positions, 21x21 is more like 10^976 (roughly 441!). Mistaken by an inch there :)
Re: (Score:2)
Interesting... I wonder what (if as expected/hoped) quantum computing will do for this, as it's able to assume both states simultaneously - though, that'd be more like a solve vs. play. With those numbers, I'm not sure if it can be solved. These learning/adaptive processes, I wonder if they can be paired against each other (or themselves) and learn from playing each other? I'm guessing someone's already tried that. I did a quick Google and I didn't find anything about it. It's kind of obvious so I'm assumin
Re: (Score:2)
Quantum computing doesn't work like that. Pet peeve - sorry for the interruption.
Re: (Score:2)
Good call, thanks. A standard 19x19 board still gives over 10^700 possible game sequences (361! is roughly 10^768) to consider, though...
Re: (Score:2)
Re: (Score:3)
And then some Anonymous hacker will steal a copy and make it self-interested for the lulz.
Re:Milestone (Score:5, Insightful)
Impossible. It won't be a single program. It won't run on a single machine. It will require multiple racks of a high-powered data center.
Ha ha, you're funny! This is more or less the exact same thing they said about computers in general only a few short decades ago.
In 10 or 20 years I wouldn't be a bit surprised if a powerful AI was able to easily fit into a toaster-sized box, or phone-sized, or watch-sized.
Seriously, your average musical greeting card or child's toy has more processing power and memory than the entire Department of Defense had in ~1950. Your phone probably has a million times as much, if not more.
-
An anonymous hacker won't have near the resources needed just to boot the thing up.
And no one will ever own a gigabyte of RAM or a terabyte of hard drive space. Never ever!
Re:Milestone (Score:5, Interesting)
I agree that this achievement by AlphaGo is extremely significant. However, I wonder whether future experiments could focus more directly on comparing human and computer move selection by minimizing tangential aspects. By tangential aspects, I mean the other existing differences that reflect human failings more than strength of computation. In particular, I would like to see tournament rules removed, because such rules were formulated to limit humans.
First, I would like to see time limits for moves eliminated. The computer can be augmented with extra hardware, higher clock frequencies, etc., to render time limits inconsequential for it, but the human cannot. As was seen in the second match, time pressure potentially hurt Mr. Lee not only by limiting his thinking time but, more importantly, by decreasing his emotional stability.
Second, I would like to see how the computer fares against a consensus of experts. By consensus, I'm imagining a group of 5-10 top players who discuss the best next move and then select the next move hopefully by consensus or at least by majority vote. Even the top players rarely maintain top play for every single move of a match. My hope is that a consensus of experts minimizes this human failing.
As a side note, I think that the setup of a human vs. computer experiment is extremely significant. As an example, the Watson triumph in Jeopardy was entirely expected but not very significant in terms of comparing human vs. computer thought. Rather, human buzzer-pushing reflexes were compared against computer reflexes, and in that comparison the computer should never lose. In Jeopardy, the buzzer's impact is only minimized when one contestant knows more than the others. If all contestants know most of the answers, then the game devolves into a buzzer contest, which was the situation with Watson, Ken Jennings, and Brad Rutter. What would have been much more interesting would have been to allow Watson and the humans to each respond within a certain time window and to compare the percentage of correct responses without the impact of buzzers. Watson might have still won, but I would be shocked if the disparities were large. In other words, eliminate the tournament rules that impose artificial limits on humans - rules crafted to maximize entertainment value - and magnify differences beyond actual skill.
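The buzzer point can be made concrete with a toy simulation (all parameters invented for illustration; real Jeopardy buzzer dynamics also involve a lockout window):

```python
# Toy Monte Carlo: if every contestant knows the answer, the clue goes to
# whoever buzzes first. Reaction times are modeled as normal distributions
# with made-up means and deviations, in milliseconds.
import random

def machine_buzz_rate(trials=100_000, human=(250.0, 50.0), machine=(20.0, 5.0)):
    rng = random.Random(0)
    wins = sum(rng.gauss(*machine) < rng.gauss(*human) for _ in range(trials))
    return wins / trials

print(f"machine buzzes first on ~{machine_buzz_rate():.0%} of clues")
```

Under any plausible reaction-time parameters the machine wins essentially every buzz, which is exactly why equal knowledge devolves into a reflex contest.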
Re: (Score:2)
By doing so, you're just delaying the inevitable. Even with your rules, the computer would eventually win. And "eventually" means sooner than you think.
Re:Milestone (Score:5, Insightful)
As an example, the Watson triumph in Jeopardy was entirely expected but not very significant in terms of comparing human vs. computer thought.
Absolutely wrong.
Watson's win was *far* from expected, and it was very significant. Okay, sure, the machine is faster at buzzing in, but that's not what was interesting or significant. What was interesting was that Watson was able to do fairly free-form natural language processing, and able to draw on not just direct knowledge, but indirect inference, context and even metaphor. What was amazing was that Watson was able to compete on something like a level playing field against humans in this contest of very fuzzy questions, er, answers. Whether Watson won or lost didn't actually matter much. What was amazing was that it was able to compete at all.
Most AI researchers would actually have predicted that Jeopardy was a tougher game for a computer to win than Go.
Re: (Score:3)
As an example, the Watson triumph in Jeopardy was entirely expected but not very significant in terms of comparing human vs. computer thought.
Absolutely wrong.
Watson's win was *far* from expected, and it was very significant. Okay, sure, the machine is faster at buzzing in, but that's not what was interesting or significant. What was interesting was that Watson was able to do fairly free-form natural language processing, and able to draw on not just direct knowledge, but indirect inference, context and even metaphor. What was amazing was that Watson was able to compete on something like a level playing field against humans in this contest of very fuzzy questions, er, answers. Whether Watson won or lost didn't actually matter much. What was amazing was that it was able to compete at all.
Most AI researchers would actually have predicted that Jeopardy was a tougher game for a computer to win than Go.
What we're saying is not conflicting. It was a huge deal that Watson could play and form correct responses in a Jeopardy game. It wasn't at all significant that it beat humans, because given its buzzer advantage, it could know less than the humans and still win handily. What I'm suggesting is to structure future human-computer competitions to eliminate the tangential distractions.
Re: (Score:2)
Re:Milestone (Score:5, Funny)
Second, I would like to see how the computer fares against a consensus of experts. By consensus, I'm imagining a group of 5-10 top players who discuss the best next move and then select the next move hopefully by consensus or at least by majority vote.
So you want a RAGE against the machine? (Redundant Array of Go Experts)
Re: (Score:2)
Re: (Score:2)
Number of players: Zero
Re:Milestone (Score:5, Interesting)
Lee got slightly better against it; I'd wager a 10- or 20-game match would see him immediately competitive. The machine will eventually behave like an AI, and human Go players will essentially learn how it thinks and counteract its particular behaviors. Lee will eventually learn to manipulate the machine; it's *very* intelligent, but not creative or insightful.
Re: (Score:2, Interesting)
Re: (Score:2)
I didn't think they had computed every possible move and end state. But I think they've reduced it to probability trees or some such and can effectively rule out whole trees of possible moves as being unproductive towards a winning end state.
Re: (Score:2)
No. Not by a long shot, I might add. There are extensive studies of chess openings and endgames, but they don't cover even a fraction of the game.
Re:Milestone (Score:4, Interesting)
it's *very* intelligent
No it isn't. It has a lot of processing power and a well-tuned set of heuristics that let it assign a score to each board position. That's not intelligence, it's data processing.
Re:Milestone (Score:5, Interesting)
Food for thought: couldn't you argue the same about a professional Go player?
Re:Milestone (Score:4, Interesting)
Quite the opposite. Google actually expected that the AI would lose a couple of matches while it learned the kinds of moves that a really top go player made, and then improve. I expect that no one will ever beat this AI unless they come up with a completely novel strategy for how to play Go, and even then, they're unlikely to win.
Re:Milestone (Score:5, Informative)
The designers at DeepMind didn't expect it to lose a couple of matches, and the version of AlphaGo playing was frozen before the start of the match, so it couldn't 'learn the kinds of moves that a really top Go player made'. So you are wrong on all counts.
They had a good idea of its playing strength when they offered the challenge, and that strength was probably such that they expected at minimum a 3:2 win for the bot.
Re: Milestone (Score:2)
Why do you say that? The commentators all described AlphaGo as "creative". If you think it wasn't, I'd love to know why.
Re: (Score:2)
People describe coffee mugs with eyes and mouths as creative, intelligent, and emotional. AlphaGo knows how to execute a particularly advanced search algorithm; it doesn't combine old knowledge to create novel knowledge. A creative human will say, "Hmm, this class of tesuji is useful in these situations; but here's a different situation I can't find a good answer for. I see a few shapes and patterns that look vaguely like some familiar pattern I've used these tesuji in; perhaps I could alter the tesuji
Re: Milestone (Score:5, Interesting)
It's a trained neural network combined with other machine-learning techniques, not just a bespoke algorithm. It operates specifically by combining old knowledge to create novel knowledge; that's fundamental to how the algorithm works.
It's not obvious why this "creativity" in the context of Go is fundamentally less effective than human "creativity" in the context of Go.
Re: (Score:2)
The human player is basically a trained neural network too.
Re: (Score:2)
AlphaGo updates its Policy Network and Value Network based on self play. So yes it does indeed combine 'old knowledge' to create novel knowledge.
Re:Milestone (Score:5, Informative)
Lee will eventually learn to manipulate the machine; it's *very* intelligent, but not creative or insightful.
Not creative? That's your opinion. Here are what other people (including serious Go professionals) think...
"AlphaGo met Lee’s solid, prudent play with a creativity and flexibility that surprised professional commentators" - https://gogameguru.com/alphago... [gogameguru.com]
An Youngil (8d) wrote of AlphaGo playing Black: "Black 13 was creative ... Black 37 was a rare and intriguing shoulder hit ... Black 151, 157 and 159 were brilliant moves, and the game was practically decided by 165 ... AlphaGo’s style of play in the opening seems creative! Black 13, 15, 29 and 37 were very unusual moves." -- https://gogameguru.com/alphago... [gogameguru.com]
Redmond (9d) wrote "I was impressed with AlphaGo’s play. There was a great beauty to the opening ... It was a beautiful, innovative game. ... AlphaGo started with some very unusual looking moves ... It played this shoulder-hit here which was a very innovative move ... I really liked the way it played in the opening, because I wasn't so impressed about the orthodox October games, but now it's playing a much more interesting exciting game." -- http://googleasiapacific.blogs... [blogspot.com]
Anders Kierulf (3d, creator of SmartGo) wrote: "The peep at move 15: This is usually played much later in the game, and never without first extending on the bottom. AlphaGo don’t care. It adds 29 later, and makes the whole thing work with the creative shoulder hit of 37 ... AlphaGo don’t care, it just builds up its framework, and then shows a lot of flexibility in where it ends up with territory." -- http://www.smartgo.com/blog/al... [smartgo.com]
Maybe you start with a philosophical axiom that "a computer can by definition never be considered creative". That's fair enough, but it's not the way that the Go playing community use the word "creative".
Re: (Score:2)
The machine will eventually behave like an AI, and human Go players will essentially learn how it thinks and counteract its particular behaviors.
Except that the machine is able to learn from past experience as well. So games against a "creative" player can be fed back into the learning algorithm and, like the Borg, assimilated. It's not as if AlphaGo has a hardcoded playing style. I'm sure the developers don't even really know exactly how it works, since a lot of the internal neural-network learning is pretty opaque. If anything, AlphaGo should be able to 'emulate' the playing styles of different players by weighting inputs from past games of speci
Re: (Score:2)
I predict he indeed could learn to beat this particular machine with some experience against it, but ultimately it's a cat-and-mouse push/pull where each would, or could, learn the other's adjustments.
While it's reasonable to expect he could perhaps eventually master this particular machine with practice, in the end game AI will only keep getting better with time, while humans will not, at least not significantly.
Humans are near their plateau, while AI most likely is not; historically, AI play has gotten b
Re: (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org] ?
Re:Milestone (Score:4, Insightful)
What's interesting is that the machine should win easily at poker with any number of players, because the statistical analysis at any given point is trivial for a modern processor, and the processor has no "tells" and can't be fooled by the meatbags' misdirection. It would lose individual hands to bad cards, but in the long run it would win by playing the odds to a T.
EXCEPT: if one human is much better than the other humans, they might be able to take the other humans' money faster than the computer can, by reading their body language etc. Speed of chip acquisition is not part of the game in and of itself, so it generally would not be included in the optimal computer program, but by quickly consolidating the human chip counts, a strong human can build an advantage going into a head-to-head. Then the computer's slight edge in accurate statistical evaluation might be overwhelmed by the human's chip advantage going into the final confrontation.
I would bet in a game of 6 person poker where 5 are computers programmed to "perfect play" (discounting that you might play more perfectly by taking advantage of others' weaknesses), and the sixth is an expert human, the expert would win less than 1/6 of the time. And in 6 person poker where 5 are humans, exactly one of whom is an expert, the human probably wins more than 1/6 of the time. Undecided on the case of 6 person poker, 5 humans, all of whom are equally skilled experts.
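The "trivial statistical analysis" half of that claim is easy to show; here is a sketch of the basic pot-odds rule with hypothetical numbers:

```python
# Pot-odds rule: calling is profitable when your win probability exceeds
# the fraction of the final pot that your call represents.
def should_call(pot, to_call, win_probability):
    pot_odds = to_call / (pot + to_call)
    return win_probability > pot_odds

# 100 already in the pot, 20 to call: you need better than ~16.7% equity.
print(should_call(100, 20, 0.25))   # -> True  (calling is correct)
print(should_call(100, 20, 0.10))   # -> False (folding is correct)
```

Estimating `win_probability` against unknown holdings is where the real game theory lives; this rule alone is exactly the predictable "playing the odds to a T" style the replies below criticize.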
Re: (Score:3)
There's much more to poker than just computing probabilities. Someone wrote a much better explanation in the /. story about the first AlphaGo win against Sedol but, in a nutshell, how you play matters a great deal. If you play the perfect probabilities alone you'll lose, because after a while no one will call your bets.
Re: (Score:2)
The "perfect probabilities" are a bit more complicated than people seem to be giving them credit for. This is sort of Nash equilibrium/game theory space, not "do the best move" simple game space. To elaborate:
You can't build an "optimal" Poker agent in the sense that you return a certain move for a given position (because you could game against that), but you can build a composite strategy that no player can win against. To be clear, your composite strategy would say that, for a given position, you shoul
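One way to see what such a composite (mixed) strategy looks like: fictitious play on a tiny made-up zero-sum bluffing game, where each side repeatedly best-responds to the other's empirical mixture and the running averages approach a mixed Nash equilibrium. The payoff numbers below are invented purely for illustration.

```python
# Row player picks bluff or value-bet, column player picks call or fold;
# entries are the row player's payoff. Any pure strategy here is exploitable,
# which is why the equilibrium mixes both actions.
payoff = [[-1.0, 1.0],
          [1.0, -0.5]]

row_counts, col_counts = [1.0, 1.0], [1.0, 1.0]
for _ in range(200_000):
    row_mix = [r / sum(row_counts) for r in row_counts]
    col_mix = [c / sum(col_counts) for c in col_counts]
    # Each side best-responds to the other's empirical mixture so far.
    row_vals = [sum(payoff[i][j] * col_mix[j] for j in range(2)) for i in range(2)]
    col_vals = [sum(payoff[i][j] * row_mix[i] for i in range(2)) for j in range(2)]
    row_counts[row_vals.index(max(row_vals))] += 1
    col_counts[col_vals.index(min(col_vals))] += 1

row_eq = [round(r / sum(row_counts), 2) for r in row_counts]
print("row's equilibrium mixture is roughly", row_eq)
```

With these payoffs the exact equilibrium has each side playing its first action 3/7 of the time; deviating in either direction hands the opponent an edge, which is the formal version of "you can't just bet your odds".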
Re: Milestone (Score:2)
Chip gain speed is important to winning, so why on earth would you consider it not part of an optimal program?
It's necessary, therefore is absolutely part of an optimal program as without it you're guaranteed a loss.
Playing probabilities is basically the worst way to play because it's so incredibly predictable you can't get big wins or losses. You often will lose to the blinds before you get anywhere.
Fundamental misunderstanding of poker even at the skilled non professional level.
Re: (Score:3)
Re: (Score:3)
Knowing the probabilities is useful, but what's really useful is being able to figure out what people are holding and/or thinking from their bids. If poker were a matter of getting the winning hand as often as possible, it would be easy to computerize. However, the value of a winning hand is how much other players have been induced to bet on it, and the cost of a losing hand is what the player has bet on it (plus antes etc.). A player that bets in proportion to how likely the player is to win the hand w
Re: (Score:2)
Welcome to (Score:5, Funny)
In the year 2216 if man is to survive.... (Score:2, Interesting)
Robots will be having debates on whether or not those pesky bald monkeys actually created them. They will be digging up old electronic waste and claiming that they evolved from the iPad and iPhone and the assembly line robots.
There will be debates about what to download to their children.
There will be the "Save the Humans" organizations to keep robots from indiscriminately killing the bald monkeys that inhabit their attics and basements. Human traps will be available at Robo*Mart.
And I have been watching w
Date clarification (Score:5, Informative)
Sedol will battle Google's AlphaGo again on Saturday, Sunday, and Tuesday.
Note that for many people in the western hemisphere, the days are actually Friday, Saturday, and Monday.
Live streams are here. [youtube.com]
The computer on the other hand... (Score:5, Funny)
While the loser of the match was struck silent by the defeat the computer just... will... not.. stop... talking. GAWD! How annoying.
Does the computer not know either pity or remorse for its opponent?
Re:The computer on the other hand... (Score:4, Funny)
While the loser of the match was struck silent by the defeat the computer just... will... not.. stop... talking. GAWD! How annoying.
Oh it runs on Windows 10?
No, Mr. Lee (Score:2)
the problem (Score:3, Interesting)
He's playing against it like it's a human opponent; he's playing against it like he's a Go champion. He needs to play against it like he's a programmer. I would be curious how it deals with mirror play, or with wildly suboptimal plays. I wonder if it's overfit to well-played Go.
Re:the problem (Score:5, Interesting)
He's playing against it like it's a human opponent; he's playing against it like he's a Go champion. He needs to play against it like he's a programmer. I would be curious how it deals with mirror play, or with wildly suboptimal plays. I wonder if it's overfit to well-played Go.
Ever tried that with a chess program? It doesn't go over well. A wildly suboptimal play just makes the tree search look really good for the computer. It doesn't get emotionally distraught because it thinks you've seen something it didn't; it just sees better-valued moves.
This Go algorithm is even more complex. It's a neural-network algorithm combined with tree search (I don't play Go, but as I understand it, the number of permutations is so high that tree search alone isn't feasible). The neural network was trained on games from previous tournaments, on games against expert players, and on games against itself. I don't think you can throw anything at it that will break it. Computers have officially become better than humans at Go. In a decade or so, when really good Go programs can run on your phone, you'll see the same kind of cheating attempts that currently plague chess competitions.
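To make "tree search guided by simulations" less abstract, here is a toy Monte Carlo tree search for a much smaller game, Nim (take 1-3 stones; taking the last stone wins). Plain random rollouts stand in for AlphaGo's policy and value networks, so this sketches only the search half, not AlphaGo itself:

```python
import math
import random

def mcts_move(stones, iters=30_000, c=1.0, seed=0):
    """Pick a move by UCB1 over the root's children, with random rollouts."""
    rng = random.Random(seed)
    candidates = [m for m in (1, 2, 3) if m <= stones]
    wins = {m: 0 for m in candidates}
    plays = {m: 0 for m in candidates}
    for t in range(1, iters + 1):
        # Selection: UCB1 balances observed win rate against exploration.
        move = max(candidates, key=lambda m: float("inf") if plays[m] == 0
                   else wins[m] / plays[m] + c * math.sqrt(math.log(t) / plays[m]))
        # Rollout: both sides play uniformly at random to the end of the game.
        left, mover_is_root = stones - move, False
        root_won = True   # root just moved; if nothing is left, it took the last stone
        while left > 0:
            left -= rng.choice([m for m in (1, 2, 3) if m <= left])
            root_won = mover_is_root
            mover_is_root = not mover_is_root
        plays[move] += 1
        wins[move] += root_won
    return max(candidates, key=lambda m: plays[m])  # most-visited move, as in AlphaGo

print("from 21 stones the search suggests taking", mcts_move(21))
```

Even with dumb rollouts the search finds correct moves in small positions; AlphaGo's innovation was replacing the random rollouts with learned networks so the same idea scales to Go's board.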
In fact (Score:3)
If you take a chessboard and randomize the pieces - a truly statistically random placement - it levels the playing field among humans a ton. Masters perform much closer to inexperienced players, because one of the things humans rely on is recognizing patterns they've seen before and working from them, and that doesn't happen with random positions. Chess programs, however, do just fine: they can still search the possible moves a good number of turns ahead and statistically pick the more optimal ones.
Re: (Score:2)
You mean download a dodgy API from the Web that allegedly does close to what the customer needs, shoe-horn it into the app, give it to the naive intern to debug, and leave early for the day?
Even if he does lose.. what about... (Score:2)
What if you gave him time to get used to his opponent over six months? A human can learn and adapt. Would this "AI" adapt at least as well? What would happen after six months? I predict the man would definitely win.
In related news (Score:2)
My chess computer (Score:5, Funny)
My chess computer beat me every &^#@! time, but it was no match for me at kickboxing.
It's not like Go is the ultimate game (Score:2)
How did they do it? (Score:2)
Re: (Score:2)
Re:What about Magic the Gathering? (Score:5, Funny)
They tried to program a computer to play Magic The Gathering, but the computer immediately received a wedgie, and was stuffed into a locker.
Re: (Score:3, Funny)
Wow, that's closer to passing the Turing test than I realized.
Re: (Score:2)
I was going to bring up Magic the Gathering as the next frontier as well.
Chess and Go's complexity arise from sheer combinatorics. But all the information about the current game state is disclosed; and the initial setup is a known quantity.
MtG tosses all that out the window. Not only do you have to play YOUR deck well, and adapt to whatever your opponent is doing. You also have to construct a deck from all the possible playing pieces.
And the rules themselves are subject to change as the game plays; as the c
Re: (Score:2)
Poker is often cited as an example of an "imperfect information" game, where odds calculation alone will not help you win. There's already a fair amount of research on it.
Re: (Score:2)
Poker is also orders of magnitude simpler than Magic the Gathering. For starters the deck is a known quantity, and the rules are essentially static.
If an opponent brings out a "winter orb" for example, it doesn't directly threaten you, and it affects both players equally... but it changes the mana curve; and presumably his deck is built to be effective with that more limited mana curve. So now you have to evaluate whether you can adapt to the reduced mana, or whether you should expend artifact removal to g
Re: (Score:2)
What you mention happens in poker as well. The rules are simpler, yes, but those nuances are still present - for example, you have to be careful when you bet because you're both guessing what the rest of the players are up to and potentially revealing your intentions in the process. Bluffing is not easy to model in an AI.
Re: (Score:2)
There's elements of partial information in both, but the specific information GP was referring to was deck construction. In poker, you know all 52 cards in the deck at all times. In Magic, you don't "know" what 60 cards your opponent has, although you can usually make some assumptions based on what other cards they've played.
For instance, if my opponent's first turn consists of shocking in a blood crypt and suspending a rift bolt, I can tell you essentially every card they'll be running because rakdos burn
Re: (Score:2)
MTG in my mind is pretty limited. Your deck is going to have a finite size, and hand size is limited. Some cards can recycle used cards to prolong play but there is a relatively predictable end of play in terms of turns. Most deck builds will have a key strategy or two for winning which establishes a simple order of play. The only thing that really makes MTG difficult to play is the same factors that are at play in other card games where players hold a hand, namely luck of the draw and bluffing. As a result
Re: (Score:2)
MTG in my mind is pretty limited.
You might want to revise that view: http://www.toothycat.net/~holo... [toothycat.net]
In the discussion on this site I assemble a Universal Turing Machine from Magic: the Gathering cards.
Re: (Score:2)
"One of the things that actually disappointed me the last time I played MTG was the prevalence of cards apparently designed with the intention of ending a game in under half a dozen turns. Maybe it's my rose tinted glasses but I don't remember that being as common when I played as a kid."
In the first official Magic tournament, fully half of the decks entered were able to win before the other player had even taken a turn (this led to several early rule changes). Out of the three major 60-card formats, the f
Re: (Score:2)
I didn't play until Fallen Empires, I think. But I was aware of the problems with the first edition, Black Lotus, Moxs and such. Those issues were seemingly mostly fixed by the time I started playing and I don't remember them cropping up much for the few years I played. I do seem to remember some kind of infinite mana combination that was resolved for tournament play by insisting that the player actually manipulate and declare each card/action for every step and iteration of the loop, thus limiting how infi
Re:What about Magic the Gathering? (Score:5, Funny)
AI's will also never best you at sitting on the couch in your parents' basement eating cheetos and watching anime. Your skillz are safe.
Re:What about Magic the Gathering? (Score:5, Funny)
As long as they never take our waifus.
Re: (Score:2)
Go itself is... diverting. I usually make statements about Go when solving universal problems like poverty or education because it's a good frame-of-reference for literally everything.
Re: (Score:2)
This is why games like Go and Chess pale in comparison to more modern games that rely upon language and some amount of randomness. Both Chess and Go are incredibly boring (to me, of course) and more recent games, especially euro-style board games offer much more in types of complexity and more importantly, fun.
Indeed. And I'll just go ahead and say it now, a computer will never master Tic-Tac-Toe.
Re: (Score:2)
An interesting difference between MTG and games like Go or Chess is that since MTG is a card game, there's "luck" involved.
Many games that involve chance also have a substantial number of tactical choices, and until recently humans played them better than computers did. Backgammon, for example.
Poker is also hard for computers, as they cannot read the player nearly as well as a human can.
Re: (Score:2)
Top humans playing top humans at poker play GTO - game-theoretically optimal - poker. The best computer players can already beat the best humans at heads-up limit, and probably will soon be able to beat the best humans at heads-up no-limit.
The best computer players can absolutely crush anyone but the top 10 or so human players, and the next couple of years will be able to beat and maybe even crush the remaining players.
Re: (Score:2)
Luck, and also partial information.
In chess (and I think in go, although I only skimmed the rules), both players know the entire state of the game at all times. Not so for MtG - there is knowledge both players know (cards on the battlefield, in a graveyard or in exile), only one player knows (contents of your own hand), and knowledge neither player knows (order of cards in the library). And, being Magic, there are ways to gain partial knowledge of even the zones you normally know nothing about (scrying your
Re: (Score:2)
Fully agreed, but it actually goes even further than that. The order of cards in the library is unknown to both, but the specific cards in it are fully known to one player (but usually not the other; in some situations you'll know the opposing deck already). As you play against an opponent in multiple games, you'll learn their deck, which will give you some knowledge of what they have. As you watch their play, you can observe some of their strategy and therefore be able to predict things about the rest of t
Re: (Score:2)
It would be pretty easy to have an AI that can win at MTG (or variants such as Hearthstone).
1) get lists of example decks that are used. Do millions of self play with each deck. Also do random decks. Find which combinations of cards work well together.
2) From this you can then begin predicting, from seen/played cards, what will be in the villain's deck, and design custom decks.
3) Also you can cluster decks by how they play (Zerg/fast decks, slow decks, etc.)
4) Then based on the deck group pick your optimal co
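The deck-prediction step (2) might look like this in miniature. The archetype card lists here are made up for illustration, and a real system would use card frequencies learned from a large game corpus rather than hand-written sets:

```python
# Naive deck-archetype inference: score each known archetype by how many of
# the opponent's revealed cards belong to it, and predict the best match.
ARCHETYPES = {
    "burn":    {"Lightning Bolt", "Rift Bolt", "Lava Spike", "Mountain"},
    "control": {"Counterspell", "Wrath of God", "Island", "Plains"},
}

def likely_archetype(seen_cards):
    scores = {name: len(cards & seen_cards) for name, cards in ARCHETYPES.items()}
    return max(scores, key=scores.get)

print(likely_archetype({"Mountain", "Rift Bolt"}))   # -> burn
```

This is essentially the human reasoning described upthread ("shocking in a blood crypt and suspending a rift bolt" tells you the whole deck), reduced to set intersection.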
Re: (Score:2)
Until the clocks run out, presumably.
Re: (Score:2)
How long would a game last with AlphaGo playing against itself?
Actually, it's already been tried. To create AlphaGo, researchers first had the machine study tons and tons of human games. The neural net then continued to learn by playing against itself a few million times.
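A toy version of that self-play loop, shrunk from Go with deep networks to Nim (10 stones, take 1-3, taking the last stone wins) with a plain tabular learner. Everything here is illustrative; it is not how AlphaGo actually represents knowledge:

```python
import random

rng = random.Random(42)
Q = {}  # (stones, move) -> learned value for the side about to move

def moves(n):
    return [m for m in (1, 2, 3) if m <= n]

def greedy(n):
    return max(moves(n), key=lambda m: Q.get((n, m), 0.0))

# Self-play: the same evolving policy plays both sides, with 20% random
# exploration; each finished game nudges the values of the moves it used.
for _ in range(50_000):
    n, history = 10, []
    while n > 0:
        m = rng.choice(moves(n)) if rng.random() < 0.2 else greedy(n)
        history.append((n, m))
        n -= m
    value = 1.0                      # the side that took the last stone won
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[state, move] = old + 0.1 * (value - old)
        value = -value               # alternate winner/loser back up the game

print("learned move at 5 stones:", greedy(5))
```

After enough games the table converges on correct play for this tiny game (for instance, taking 1 from 5 stones to leave the opponent a losing 4); AlphaGo's self-play does the same thing with neural networks standing in for the table.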
Re: (Score:2)
It is impossible to calculate all possible Go moves, even for Google. In fact, it is impossible to do so for even an infinitesimal fraction of them.
Re: (Score:2)
Re: (Score:2)
They're not. AlphaGo is basically an AI making very educated guesses and then calculating moves.
Re: (Score:2)
Is it really that amazing that a computer could be better at a game that has so many possible moves that it defies the human mind, but one that can be calculated entirely?
The whole point of the exercise is that Go is a problem space that cannot be calculated entirely (at least not efficiently enough to win a game in a reasonable amount of time). Cracking the problem required advanced machine learning techniques (what some people call AI).
Re: (Score:3)
Draws are impossible in Go under the scoring method being used (the fractional komi means the final scores can never be equal). Specific openings don't matter too much; they give similar chances regardless of the specific opening.