Computer Beats Go Champion
Koreantoast writes: Go (weiqi), the ancient Chinese board game, has long been held up as one of the more difficult, unconquered challenges facing AI scientists... until now. Google DeepMind researchers, led by David Silver and Demis Hassabis, developed a new algorithm called AlphaGo, enabling the computer to soundly defeat European Go champion Fan Hui in back-to-back games, five to zero. On a 19x19 board, Go players have hundreds of possible moves per turn to consider, creating a huge number of potential scenarios and a tremendous computational challenge. All is not lost for humanity yet: DeepMind is scheduled to face off in March with Lee Sedol, considered one of the best Go players in recent history, in a match compared to the Kasparov-Deep Blue duels of previous decades.
Re:The Future! (Score:4, Interesting)
What makes this especially interesting is that the victory was not achieved with the sort of brute-force approach used by Deep Blue in chess. This one used a deep neural net and algorithms similar to how we believe humans think. Last time I heard about this, they could consistently beat humans on a 9x9 board, and were working on 13x13. I was surprised to hear it can already win on a full-sized 19x19 board. I thought that was still a few years away. This is amazing progress.
Re:The Future! (Score:5, Informative)
It doesn't quite use a "brute-force" approach, but it certainly does use significant, and intelligently designed, Monte Carlo searches which are informed by well-trained neural networks. The neural-network-only approach, without any Monte Carlo search during play, is not as strong, though it does appear to equal a state-of-the-art conventional Go program. See Figure 4b.
And the training of the neural networks and construction of their training sets certainly did need quite a bit of 'brute force' as well as 'efficiently wielded force in large quantity'.
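For anyone curious what "Monte Carlo search informed by neural networks" looks like in outline, here is a minimal sketch of the generic PUCT-style select/expand/backup loop. This is not DeepMind's code; policy_net and the leaf evaluation are hypothetical stand-ins for whatever networks you have trained.

```python
# Sketch of a neural-net-guided Monte Carlo tree search (generic PUCT-style rule).
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a): move probability from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # move -> Node

    def q(self):
        # Mean value Q(s, a) of this action so far
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child maximising Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    total_visits = sum(child.visits for child in node.children.values())
    def score(child):
        exploration = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visits)
        return child.q() + exploration
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def expand(node, state, policy_net):
    """Create children with priors from the (hypothetical) policy network."""
    for move, prob in policy_net(state):   # policy_net yields (move, probability) pairs
        node.children[move] = Node(prior=prob)

def backup(path, value):
    """Propagate a leaf evaluation (e.g. from a value network) up the search path."""
    for node in reversed(path):
        node.visits += 1
        node.value_sum += value
        value = -value                      # flip perspective each ply
```

The point of the exploration term is exactly the "informed search" idea: moves the policy network likes get visited early, but visit counts eventually let the search override a bad prior.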
Re: (Score:3, Funny)
You read...the...paper?!
I don't know how they do things wherever you come from, but this is Slashdot.
Next time, just read the headline and skim the summary, then spout off whatever pops into your head.
Informed commentary, sheesh!
Re:The Future! (Score:4, Insightful)
And the training of the neural networks and construction of their training sets certainly did need quite a bit of 'brute force' as well as 'efficiently wielded force in large quantity'.
To be fair, it'd take a fair bit of brute force training for a human to beat Fan Hui too - you aren't exactly going to rock up, read a pamphlet explaining the rules and win 5-0 on your first ever attempt at the game.
Re: (Score:2)
What makes this especially interesting is that the victory was not achieved with the sort of brute-force approach used by Deep Blue in chess. This one used a deep neural net and algorithms similar to how we believe humans think.
Mainly due to the different goals of Chess and Go. In Chess you can have as much material and positional advantage as you want, but it's worthless if your opponent can mate, which means you have to calculate the ways that could happen. A blunder and a sacrifice might look the same unless you look deep enough. In Go, from what I've understood, there are just stones; it's not like cornering one king stone turns the game. It seems their key breakthrough was being able to evaluate the position and find winning pat
Re: (Score:1)
Re: (Score:2)
[Cum grano salis alert: extremely amateur go player here. ]
Also, Go often has many local "battles" going on simultaneously. If you've figured out that you are losing in one corner, you switch to a different corner. If the opponent finishes you in the first corner, that gives you one or more free moves in the new corner. So, the opponent will generally follow you to the new corner. What you can then try to do is build from the new position towards the old in the hope of rescuing that position. But this means
Re: (Score:1)
What makes this especially interesting is that the victory was not achieved with the sort of brute-force approach used by Deep Blue in chess.
Wake me when a computer can beat a human champion while using 100 W of power or less - about the equivalent power consumption of a human. Actually, the brain uses about 20% of this, but let's be generous.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
people typically learn object recognition with far fewer examples than a computer requires using a deep learning approach.
Single Sample Face Recognition using Deep Learning Autoencoders [ieee.org].
IMHO deep learning is just the latest fad, popular with AI programmers who otherwise don't have a clue.
How many world class Go players have you defeated?
Beats? Danger Danger! (Score:2)
When I first read the headline, I pictured a robot's arms flailing about, whacking its human competitor upside the noggin. "So, A.I. finally got the emotion thing down."
Re: (Score:2)
Here is a relevant Gaston Lagaffe comic : http://a407.idata.over-blog.co... [over-blog.com]
It is in French so here is an approximate translation
1.
- What is this strange thing Gaston??!?
- Wonderful isn't it! I bought it in a flea market, it is an automaton that plays chess!!
2.
- I am curious to see this!!
- It is very strong said the seller...
3.
- I played! What will the champion do?
- It seems like it always wins!
5.
- You see, you see!!
Re: (Score:2)
Re:This doesn't surprise me at all (Score:5, Insightful)
No, this is not an accurate understanding of Go strategy or how it is played at the highest level.
In fact, if the game were played the way you describe, previous computer algorithms were quite good at analyzing the local interactions of pieces, yet were roundly defeated by top-level amateurs, even with handicaps. The reason is that at more sophisticated levels of play, one's skill level is correlated with how one perceives and evaluates the entire board. There is a sort of "gestalt" of Go that good players seem to grasp in ways that are very difficult to describe objectively, and sometimes a stone placement can seem arbitrary but become pivotal many, many moves later. This is reflective of a deep and global strategy that computer algorithms--at least until now, it seems--have had tremendous difficulty in emulating.
Re: (Score:2)
This is one of the worst comments ever. While it's true that its failings belong to the developers, so do all of its successes.
It's asinine to believe a computer or algorithm has abilities or brilliance that are not the creator's, in the same way it would be asinine to praise an automobile itself.
This is a fallacy. With a completely *prescriptive* algorithm where the learning itself is baked into the code you might be right, but with more modern "ai-like" algorithms, it is often the algorithm + training and not the developer that is responsible for the abilities/brilliance of the machine.
That's like a parent taking all the credit/blame for the brilliance of their offspring when mostly all they did was pass down the "algorithm". The teachers, peers, and the rest of the environment are likely deserve
Re: (Score:1)
Hmm... You'd not praise a car for being a good car? I'm not sure that I understand why you'd think the creation was not deserving praise. Much like you praise good art, the art itself, yet still have praise for the artist, so too can you praise (or damn) a device for its inherent qualities. It's not like there is a value of praise where there's none left for the creator or user of a tool. I dare say, we humans praise a whole host of tools all the time. Everything from programming languages to operating syst
Re:This doesn't surprise me at all (Score:4, Interesting)
Re: (Score:2, Insightful)
We can't (and the existing AI Chess players don't try to) brute force every move in Chess. The compute resources needed would be quite extreme, and if you could do it at all, you'd _solve_ Chess and we'd be able to say confidently "White always wins" or, perhaps as likely, "It's a draw" when players know what they're doing. For example, we have solved Tic-Tac-Toe and Connect Four: with perfect play Tic-Tac-Toe is a draw, and in Connect Four the first player wins regardless of what their opponent does.
AI Chess players begin wit
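As a toy illustration of what "solving" a small game means, here is a minimal negamax sketch (my own, not from any paper) that exhaustively searches Tic-Tac-Toe and confirms the perfect-play value:

```python
# Toy sketch: exhaustively solve Tic-Tac-Toe with negamax.
# With perfect play from both sides the value is 0 (a draw).
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def negamax(board, player):
    """Value of `board` from the point of view of `player` ('X' or 'O') to move."""
    other = 'O' if player == 'X' else 'X'
    if winner(board) == other:          # previous move won the game
        return -1
    if '.' not in board:                # board full, no winner: draw
        return 0
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            best = max(best, -negamax(child, other))
    return best

print(negamax('.' * 9, 'X'))   # prints 0: perfect play is a draw
```

Doing the same thing for Chess or Go is conceptually identical but computationally out of reach, which is the whole point of the heuristics discussed above.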
Re: (Score:2)
"There is significant strategy involved in the game, and the number of possible games is vast (10^761 compared, for example, to the estimated 10^120 possible in chess)"
And from the Computer Chess page:
"The Nalimov tablebases, which use state-of-the-art compression techniques, require 7.05 GB of hard disk space for all five-piece endings. To cover all the six-piece endings requires approximately 1.2 TB. I
Re: (Score:2)
tic-tac-toe is trivial to solve (I did so as a child -- there aren't very many moves) and, yes, with perfect play the first player never loses. Unless they're you, I guess.
A strange game. (Score:2)
tic-tac-toe is trivial to solve (I did so as a child -- there aren't very many moves) and, yes, with perfect play the first player never loses. Unless they're you, I guess.
The only winning move is not to play...
Please watch the WarGames scene [youtube.com]...
Re: (Score:2)
Almost certainly this has never been true. Chess may be better known in Europe and America - approaching a billion people, being generous - but Go has been by some margin the most popular board game in China, Japan, the Koreas, Taiwan and other East Asian countries, currently tota
Re: (Score:1)
" I suspect what really slowed down Go progress was that Chess was simply more popular
Almost certainly this has never been true"
OK, let's rephrase it: I suspect what really slowed down Go progress was that Chess was simply more popular... among those that mattered.
While Go seems to stress different skills than Chess, and it's even acceptable to say it's more difficult for a computer to approach, it also seems true that Chess was chosen because it was popular among those in computer science. Go may be play
Re: (Score:3)
I never claimed that Go was popular in the west. Or even known. I had heard about it before I came to university - that was very unusual. But then I hit the proselytization trail for the next 3 years.
Significantly, I had to do far less explanation amon
Re: This doesn't surprise me at all (Score:2)
Well, chess is giant in India, Europe, and North America, and generally more popular in South America.
And at least in Japan, while Go is popular, in all my years living there I've only ever met shogi players (and the Chinese I have interacted with quite regularly don't play at all).
But oddly every Chinese person I know has played chess.
Without a doubt, both in absolute terms and relative to computing professionals, chess is more popular.
Really, it's more likely that brute force is easy to program compared to neural networks and, in general, the
Re: (Score:2)
I'm surprised that you didn't meet any Go players in Japan. You may not have known that some people you knew were players, but that's a different point. When I had a weekend in Seoul, I was keeping my eyes open for the characters for a "go club", but I didn't find one. It's difficult to find things in a culture you don't know.
Re: (Score:2)
You are correct, and the three other clowns that responded to you don't understand your post.
In Go, symmetrical board states are identical, and thus, for the first several dozen moves of a game, the effective search space is much, much smaller than a naive approach considering all 361 positions would suggest.
The popularity of Chess over Go in the west is absolutely why it was the focus of early publicity stunts and man-vs-machine matches. The corporations building and programming the machines were all western and f
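To put a number on the symmetry point (my own back-of-the-envelope sketch, not from the paper): on an empty 19x19 board, the eight rotations and reflections collapse the 361 possible first moves down to 55 genuinely distinct ones.

```python
# Sketch: count distinct opening moves on an empty 19x19 board, treating
# positions that differ only by rotation/reflection as identical.
import numpy as np

N = 19

def symmetries(board):
    """All eight rotations/reflections of a board array (the dihedral group)."""
    out = []
    b = board
    for _ in range(4):
        b = np.rot90(b)
        out.append(b)
        out.append(np.fliplr(b))
    return out

seen = set()
distinct = 0
for x in range(N):
    for y in range(N):
        board = np.zeros((N, N), dtype=np.int8)
        board[x, y] = 1
        key = min(s.tobytes() for s in symmetries(board))  # canonical form
        if key not in seen:
            seen.add(key)
            distinct += 1

print(distinct)   # 55 distinct first moves instead of 361
```

Of course the benefit shrinks quickly once the board loses its symmetry, which is why the later comments about global evaluation matter more than this trick.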
Re: (Score:2)
If by enough storage you mean more bits than there are atoms in the universe, and by time you mean longer than the life of our solar system, you are correct. However, constructing and powering your computer does not seem a trivial task.
Not AI (Score:1)
Re: (Score:1)
Come on, Siri would be quite impressive if you showed it to a researcher from the 1950s. Same with the IBM chess win. Our expectations have simply increased and/or it "looks simple" after you see how it's done.
Re: (Score:2)
The first computers were very cool, and could calculate faster than people. We have a name for things that are cool, and do intellectual tasks faster than humans: it's weak AI.
In this case, the computer is still using the Monte Carlo approach to finding a move.....which is roughly "choose a bunch of moves at random and choos
Re: (Score:2)
In this case, the computer is still using the Monte Carlo approach to finding a move.....which is roughly "choose a bunch of moves at random and choose the best one." It's one way to prune the tree, and it is surprisingly effective in the case of Go. But it's not how humans think.
We don't know that. It certainly isn't how players usually characterize what they consciously do. However, it is certainly possible that dozens or even hundreds of candidate moves are tested and discarded unconsciously, and a few "good" moves, selected by various heuristics such as studied joseki, are bubbled up for conscious consideration. We only ever hear about the conscious portion, and even then it's jumbled. It's hard to know what's getting them to that point, and once past it even good players ca
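As an aside, here is roughly what the quoted "pick a bunch of random moves and keep the best one" idea looks like in its crudest, flat form. This is just a toy sketch with hypothetical game hooks, not the real engine, which combines a proper tree search with trained networks.

```python
# Toy "flat" Monte Carlo move selection: for each legal move, finish the game
# many times with random moves and keep the move with the best average result.
# legal_moves, play, and random_playout are hypothetical game hooks.

def flat_monte_carlo(state, legal_moves, play, random_playout, n_playouts=100):
    best_move, best_score = None, float('-inf')
    for move in legal_moves(state):
        total = 0
        for _ in range(n_playouts):
            # random_playout returns +1 if the side that just played `move`
            # goes on to win the randomly finished game, else 0.
            total += random_playout(play(state, move))
        score = total / n_playouts
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```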
Re: (Score:2)
We don't know that.
That is exactly the problem.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Eliza gave mostly fake, canned responses with the user's keywords sprinkled in. It was not practical.
I will agree that Siri's technology existed in labs by the late 1970s, for the most part, but moving from lab/garage/experiment to common use quite often takes a couple of decades. Same with TV, cars, transistor radios, and others.
You can ask Siri about type-of-food restaurant locations, current weather, appointments, and most of the other typical smart-phone services, all in a little box. It's not as flexibl
Re: (Score:1)
Correction, it probably should be "from a little box" instead of "in a little box". Smart-phones often rely on servers they network with for many types of queries.
Re:Not AI (Score:5, Insightful)
There is a name for this "not AI" comment: The AI effect. Basically, whatever can be done with a machine is automatically considered "not AI", because it's no longer magical, just engineering.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
> Calculating all of the possible moves then picking the best one is hard to call intelligent.
> The book on AI we use at our university disagrees with you.
That's because most AI researchers are quite happy to call anything involving a machine doing something "artificial intelligence" even if it's just the speaking clock.
Re: (Score:2)
That's because most AI researchers are quite happy to call anything involving a machine doing something "artificial intelligence" even if it's just the speaking clock.
That's because most researchers have to deal with undergraduates, so they've had to relax their definition of "intelligence".
Re: (Score:2)
Recreating human intelligence is already trivial
Well it certainly didn't work in your case.
Re: (Score:2)
Re: (Score:2)
It's not AI like you see threatening the world in movies. No one has ever made a general artificial intelligence, and this particular example does not bring us any closer to it. It is, however, one more thing that computers shall forever be better than humans at.
Re: (Score:2)
If you use a trick to solve the problem, then good job, but it's not AI.
Re: (Score:3)
That's because AI has a real definition: "computers that think like humans."
I dispute that. What about a computer that definitely thinks, but not like a human? What if we could develop a computer that thought like a dolphin? Would that not be AI?
Re: (Score:2)
What if we could develop a computer that thought like a dolphin? Would that not be AI?
That would be cool, too.
Right now, humans and dolphins are closer to each other than either is to computers.
Re: (Score:2)
That's because AI has a real definition: "computers that think like humans." If you use a trick to solve the problem, then good job, but it's not AI.
Except that it is beginning to look like human intelligence is also just 'tricks'. Your brain takes short-cuts, makes assumptions, 'fills things in' both perceptually and conceptually, and forms a consciousness that is largely made up from evolutionary history and previous memories. Yes, it's wet and it evolved, but it's just a bundle of ad hoc solutions that combine to form your mind.
Re: (Score:2)
Re: (Score:2)
"There is no such thing as AI currently."
The fact that it's currently pretty stupid doesn't make it any less Artificial Intelligence.
Re: (Score:2)
As someone put it, a real AI would spend much of its time wondering whether to kill itself.
Ex Machina was quite nice actually, for the whole question of how to test whether a thing is sentient.
But I'd guess that the "brain-machine" is what produces/structures any phenomena/data, like being able to recognise a tree amongst all the patterns of colour, or the right moves in a game, whereas sentience is that which experiences that data — so artificial intelligence can be any clever data processing, suffic
Re: (Score:2)
This isn't AI.... (Score:1)
Evaluating every board combination to some search-tree depth isn't intelligence. If anything, it's a parlor trick that shows that a system with *absolutely no intelligence of its own whatsoever* can be designed to play a game with sufficient apparent skill that it can beat a human player.
When you are evaluating so many orders of magnitude more board combinations than a human could ever hope to, it seems inevitable that you will eventually find a tipping point that overwhelms human capacity to d
Re: (Score:2)
Re: (Score:2)
Chess is brute forced; this is not.
Oddly enough, I'm currently in an AI class and brought up Go just yesterday... The improved algorithm and neural net are one thing, but I wouldn't be surprised if they still tossed more computing power at it than Deep Blue had.
Re: (Score:2)
So intelligence is 'doing things the way humans do'? There can't be any other type of intelligence?
If a problem requires intelligence to solve, then any solution to that problem on a computer is artificial intelligence regardless of what 'parlour tricks' are used. And yeah, humans are really good at pattern recognition while computers are really good at arithmetic so I would expect artificial intelligence to differ significantly from human intelligence.
PS: This AI evaluates significantly fewer moves than
Re: (Score:2)
Re: (Score:2)
The most a computer will ever be is an algorithm with some clever programming. Are you saying that AI is impossible?
Take Go, for example. Let's say hypothetically that I develop this mega-awesome heuristic for evaluating Go positions. So good that without search (1 ply) I can play a mean game. That heuristic evaluation function is either me encoding knowledge about how to play the game into the program, or it is 'learned' through random manipulation of data on a computer which is rewarded when it wins ga
Re: (Score:2)
Re: (Score:2)
The most a computer will ever be is an algorithm with some clever programming. Are you saying that AI is impossible?
It all depends how meta you want to go. I would say that as a minimum an AI should be able to come up with its own algorithms and solution strategy. Like if you hand it a book of chess rules, it should be able to work out by itself that an opening book is useful, maybe an end-game database, maybe some brute-force search, some positional analysis, Monte Carlo searches, neural nets, whatever. Not just finding the right parameters/weights or crunching through someone else's algorithm; it has to be able to funda
Re: (Score:2)
Like if you hand it a book of chess rules, it should be able to work out by itself that an opening book is useful, maybe an end-game database, maybe some brute-force search, some positional analysis, Monte Carlo searches, neural nets, whatever.
Did you figure these things out on your own, or read them somewhere, or someone told you?
For anyone who wants to enjoy playing chess, I don't think memorizing a book full of openings is going to be "obvious."
Re: (Score:2)
I've never seen a discussion of AI that REQUIRED them to mimic humans. In fact, the unintelligent neural network approach to antenna design (artificial, with NO implication of intelligent search for a solution) produces designs explicable to physics, which work (your cell phone may well rely on one), but no-one knows how they get from this design to that. Their non-intelligence is different to human i
Re: (Score:3)
Good luck with that. I mean, people have only been trying that for ...
The first computer-Go programmer I met and played against was in 1984. He was trying to do that, on IIRC a BBC Micro with 128 KB of RAM.
I've been following the subject since then. Shockingly, you describe EXACTLY the process that most people have tried to implement. It was only about a decade
Re: (Score:2)
It isn't doing that.
They trained some strong neural networks to first predict move probabilities from 30 million expert moves and positions. That was just the start. Then they used 'reinforcement learning', where the network played games against itself and the final outcome (game won/lost) was propagated back all the way through the net, to improve the learning toward the correct outcome (game won vs lost) vs matching expe
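A heavily simplified sketch of that "propagate the final outcome back through the net" step, written as a REINFORCE-style update in PyTorch. This is my own toy illustration, not the actual training code; policy_net and game_record are hypothetical stand-ins.

```python
# Sketch: REINFORCE-style self-play update. Every move a player made during
# one self-play game is nudged up or down according to the final outcome
# z in {+1, -1} for that player.
import torch

def reinforce_update(policy_net, optimizer, game_record, z):
    """game_record: list of (state_tensor, move_index) pairs for one player.
    z: +1 if that player won the game, -1 if it lost.
    Assumes policy_net(state) returns a 1-D logits vector over the 361 points."""
    optimizer.zero_grad()
    loss = 0.0
    for state, move in game_record:
        logits = policy_net(state)
        log_prob = torch.log_softmax(logits, dim=-1)[move]
        loss = loss - z * log_prob        # gradient ascent on z * log pi(move|state)
    loss.backward()
    optimizer.step()
```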
Re: (Score:2)
From the announcement's lack of associated bullshit, I take it that deGroot has been thoroughly walked past. Good. In the same way that he walked over other people's work after misappropriating it.
Whichever way it works, it's an improvement to previous programmes. Unfortunately it remains computer-Go, so I'm unlikely to ever sit down a
Wake me... (Score:2)
Re: (Score:2)
This is worth waking up for. It wasn't long ago when Go programs couldn't beat most amateurs. They're improving fast.
Re: (Score:2)
I'll still prefer the click of stone on wood though. Computer Go just doesn't do it for me. If that means that a busy year is a half-dozen games ... well that still beats playing Go on the computer, even against a human.
Go Champion (Score:2)
Impending Doom (Score:2)
Using well-known and solid techniques along with vast computing power, Google has finally broken into the majors of Go. The next question is whether a home computer can run the neural network now that it's been trained... or whether the CPU and RAM requirements still place this level of play in the corporate-only bracket.
Once we can run our own purpose-designed expert systems on commodity hardware, that's when the social change AI will bring [youtube.com] will be nigh. Whether it's beneficial to everyone, to just the 1%, or
Re: (Score:2)
Using well-known and solid techniques along with vast computing power, Google has finally broken into the majors of Go. The next question is whether a home computer can run the neural network now that it's been trained... or whether the CPU and RAM requirements still place this level of play in the corporate-only bracket.
You can easily run the neural network and the other parts of AlphaGo on a home computer, but you'll get worse performance than they do on one of their beefy machines (48 CPUs, 8 GPUs). They also have a cluster version (1202 CPUs, 176 GPUs), which is much stronger.
Re: (Score:2)
The best computer Go programs of a decade ago all required Beowulf clusters to run on. Every one of about 5 competing designs (with several implementations of each approach). You might b
The Algorithm (Score:1)
1) Train a deep neural net to predict a human player's move. It predicted the correct move 57% of the time, trained on 30 million samples; the previous best was 44%.
2) Create a second deep neural net to estimate the value of a board, i.e. whether you're winning or losing (a toy sketch of both nets follows below).
3) Use the two networks as the heuristics in a tree search.
4) Let the computer play itself to get better (basic reinforcement learning).
5) Have excellent hardware to run the tree search during a real game.
This is all standard AI stuff. Here's a quote from
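To make steps 1 and 2 concrete, here is what a toy convolutional policy/value pair could look like in PyTorch. This is only an illustrative sketch at a fraction of the real networks' size, with made-up feature planes, not the architecture from the paper.

```python
# Toy policy and value networks over a 19x19 board with a few input feature planes.
import torch
import torch.nn as nn

BOARD = 19
PLANES = 4   # e.g. own stones, opponent stones, empty, turn indicator (illustrative)

class PolicyNet(nn.Module):
    """Maps a board tensor to logits over the 361 points (step 1)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(PLANES, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, x):                 # x: (batch, PLANES, 19, 19)
        return self.body(x).flatten(1)    # (batch, 361) move logits

class ValueNet(nn.Module):
    """Maps a board tensor to a single win estimate in [-1, 1] (step 2)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(PLANES, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * BOARD * BOARD, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh(),
        )

    def forward(self, x):
        return self.body(x)

# Usage sketch: probs = torch.softmax(PolicyNet()(board), dim=-1); v = ValueNet()(board)
```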
Misleading press release (Score:4, Interesting)
I googled Fan Hui: one source says he's 8 dan amateur, another that he's 2 dan pro. That's only a little better than Go programs have been for several years, and much weaker than the best professional players. If he's a top player in Europe, that mostly says that Go isn't played at a very high level in Europe. I think the progress that has been made on Go software is really great, but the claim to have beaten a 'Go champion' seems a bit of spin.
Re: (Score:1)
The player in question gained professional status when he was in China, which would put him in the top 1000 in the world.
Re: (Score:3)
These two statements are not incompatible. A 6 to 8 dan difference between amateur and professional ranks sounds perfectly reasonable. They are on different scales.
Yes. And tell us something we don't know?
Almost every high-grade European player has had to travel to the Orient to improve, because they simply cannot get the oppositi
Re: (Score:1)
Is he better than rust, ruby, python, or c++?
Re: (Score:3)
I don't see the problem. The guy has won European tournaments in the last three years. He is not the world champion, but he is certainly no joke. Beating him shows significant improvement in the quality of the machine playing Go. And it is certainly an important step in getting Asian players to agree to play against the machine.
5-dan pros have been beaten in the past (Score:2)
Computers have beaten higher-ranked players (Catalin Taranu, 5-dan pro) on the 9x9 board. Computer Go is nowhere near computer chess, where humans don't stand a chance against the top engines like Komodo, which is rated over 3300 Elo.
I cannot help but notice that Google are advertising their AI system, after IBM pushed Watson for years, and Microsoft have recently open-sourced their system:
https://github.com/Microsoft/CNTK/ [github.com]
I am curious though about the result against a 9-dan pro, and what will such a player say a
let's play global thermonuclear war (Score:4)
let's play global thermonuclear war
System Shock (Score:3)
If the computer could beat a 2-dan professional, then it's clearly even smarter than SHODAN!
Videos (Score:4, Informative)
Videos [deepmind.com] are available.
Re:Don't fool yourself, poker will be solved easy (Score:4, Informative)
Poker is a game of incomplete knowledge - you don't know what cards are in the other players' hands.
Go is a game of complete knowledge. As is chess. And draughts.
The two classes are completely different.
Re: (Score:1)
Of course you cannot "solve" poker or other games with randomness or hidden info in the sense of guaranteeing a win.
But that doesn't mean that such games cannot be solved in a more general optimization sense, e.g. maximizing your probability of winning a single match, or maximizing your expected monetary gain in poker.
E.g. consider a typical late endgame situation in Backgammon. You cannot "solve" it in the sense of guaranteeing a win. But clearly there is a "best move" in the sense of maximizing your proba
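To make the "maximize expected value" idea concrete, here is a tiny sketch with completely invented numbers (not real Backgammon analysis): the optimal move is simply the one with the highest win probability averaged over the dice distribution.

```python
# Toy illustration with invented numbers: the "best move" in a dice game is the
# one with the highest expected winning probability over all 36 equally likely rolls.
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # all 36 two-die outcomes

def expected_win(win_prob_for_roll):
    return sum(win_prob_for_roll(r) for r in rolls) / len(rolls)

# Hypothetical per-roll winning probabilities for two candidate moves:
move_a = lambda roll: 1.0 if sum(roll) >= 8 else 0.4   # invented numbers
move_b = lambda roll: 0.7                              # invented numbers

candidates = {"A": move_a, "B": move_b}
best = max(candidates, key=lambda name: expected_win(candidates[name]))
print(best, expected_win(candidates[best]))   # picks B here (0.7 vs 0.65)
```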