Computer Defeats Human At Japanese Chess
Calopteryx writes "A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time. As New Scientist reports, computers have beaten humans at western chess before, but that game is relatively simple, with only about 10^123 possible games that can be played out. Shogi is a *bit* more complex, offering about 10^224 possible games."
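For scale, the gap between those two counts can be checked with a couple of lines of Python (the exponents 123 and 224 are just the rough figures quoted above):

```python
# Rough game-tree sizes quoted in the summary
chess_games = 10 ** 123
shogi_games = 10 ** 224

# Shogi's game tree is larger by a factor of 10^101
ratio = shogi_games // chess_games
print(f"Shogi has ~10^{len(str(ratio)) - 1} times as many possible games")
```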
When a computer program can... (Score:3, Interesting)
... design and write another computer program to beat a human at chess or shogi - THEN I'll be worried.
First move (Score:3, Interesting)
Chess has a natural limit since the number of pieces monotonically decreases during the game. Shogi lets you drop (add) pieces that you capture, so a game can go on for a long time.
Re:*yawn*. Call me when we lose at Go. (Score:3, Interesting)
Re:*yawn*. Call me when we lose at Go. (Score:5, Interesting)
I spent a summer once working for a professor who has spent his life trying to develop an AI for Go!
In particular, I was compressing read-only hash tables of end states. He was basing his approach on the work of someone who had developed an AI for checkers, but I think it's obvious that Go is a rather bigger problem.
(To be specific: http://lie.math.brocku.ca/twolf/home/publications.html#3 [brocku.ca])
Re:Nice headline (Score:5, Interesting)
Re:Same Old Song And Dance (Score:1, Interesting)
As long as the algorithm is rather simple and the result is achieved just with more brute force (i.e. more processing power), it's all rather pointless. Things get interesting when computers analyze the problems and the rules themselves, come up with their own approach, and learn and modify their strategies. That is what I'd call true AI.
Forget Shogi - The real story is this (Score:4, Interesting)
If you bother to read the article:
"IBM say they have improved artificial intelligence enough that Watson will be able to challenge Jeopardy champions, and they'll put their boast to the test soon, says The New York Times. "
Do you realize what this means? Ken Jennings versus robots. They could make an entire new show out of this and I'd watch it religiously.
Re: chess on steroids (Score:1, Interesting)
Look up Bughouse.
Re:*yawn*. Call me when we lose at Go. (Score:2, Interesting)
If a game like shogi or chess were extended to a 19x19 board, it would be vastly harder for a computer.
The difference is that nobody would want to play a chess game on a board that size. Go grew to 19x19 by player preference, not as some artificial limit to make it hard to beat the computer.
What makes Go hard isn't anything particularly neat about the game.
Concepts and patterns are more important in Go. There isn't a simple piece count that dominates the evaluation.
Re:*yawn*. Call me when we lose at Go. (Score:4, Interesting)
What makes Go hard isn't anything particularly neat about the game.
Incorrect. There are many things that make Go difficult for a computer to play: positional evaluation is tough; the branching factor is huge (unlike in chess and similar games, the number of available moves in a given board configuration is very large, since a stone can be played virtually anywhere on the board); life-and-death is difficult to calculate; there are interactions between local and global play...
Go's board size is certainly a factor, yes, but if it were the only one, computers should excel at 13x13 or 9x9 games, and yet they don't.
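A quick sanity check on what board size alone contributes (this counts raw board configurations as a simple upper bound, ignoring legality, so the true numbers of legal positions are smaller fractions of these):

```python
# Each point on an n x n Go board is empty, black, or white,
# giving 3**(n*n) raw configurations as an upper bound.
for n in (9, 13, 19):
    states = 3 ** (n * n)
    print(f"{n}x{n}: 3^{n * n} ~ 10^{len(str(states)) - 1}")
```

Even the "small" 9x9 board has on the order of 10^38 configurations, so board size alone can't be the whole story.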
Arimaa: the next 8x8 programming challenge (Score:3, Interesting)
See Arimaa [arimaa.com], a new game [wikipedia.org] played with a board and set similar to chess's, *but* with rules specifically designed to be difficult for a computer and easy for a child.
How many options do you have when it's your turn to play in chess? The average branching factor in a game of chess is about 35, whereas in Arimaa it is about 17281!
This is why a computer that can search to a depth of eight turns for each player in chess can only search about three turns deep for each player in Arimaa...
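That depth collapse follows directly from the branching factors. A back-of-the-envelope sketch (using the averages of 35 and 17281 quoted above, and treating search cost as roughly b**d nodes for depth d):

```python
import math

# Average branching factors quoted above
CHESS_B = 35
ARIMAA_B = 17281

# A brute-force search to d plies visits roughly b**d nodes.
# Eight turns per player in chess is 16 plies:
chess_nodes = CHESS_B ** 16

# With the same node budget, how deep can an Arimaa search go?
# Solve ARIMAA_B**d = chess_nodes  =>  d = log(chess_nodes) / log(ARIMAA_B)
arimaa_plies = math.log(chess_nodes) / math.log(ARIMAA_B)

print(f"Chess nodes at 16 plies: {chess_nodes:.2e}")
print(f"Equivalent Arimaa depth: {arimaa_plies:.1f} plies")
```

The result comes out just under 6 plies, i.e. about three turns per player, which matches the claim above.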
This game is the new challenge for AI: easy for a child, difficult for a computer. An average human player wins against the best programs.
Re:Arimaa: the next 8x8 programming challenge (Score:3, Interesting)
This game is the new challenge for AI: easy for a child, difficult for a computer.
I looked at Arimaa a long time ago and keep tabs on its progress occasionally. It's still very much a niche game after all these years. The two biggest problems with it:
An average human player wins against the best programs.
Actually, the top programs these days are already at expert level, but still far behind master-level players.