Google's AlphaGo Will Face Its Biggest Challenge Yet Next Month -- But Why Is It Still Playing? (theguardian.com) 115
From a report on The Guardian: A year on from its victory over Go star Lee Sedol, Google DeepMind is preparing a "festival" of exhibition matches for its board game-playing AI, AlphaGo, to see how far it has evolved in the last 12 months. Headlining the event will be a one-on-one match against the current number one player of the ancient Asian game, 19-year-old Chinese professional Ke Jie. DeepMind has had its eye on this match since even before AlphaGo beat Lee. On the eve of his trip to Seoul in March 2016, the company's co-founder, Demis Hassabis, told the Guardian: "There's a young kid in China who's very, very strong, who might want to play us." As well as the one-on-one match with Ke Jie, which will be played over the course of three games, AlphaGo will take part in two other games with slightly odder formats. But why is Google's AI still playing Go, you ask? An article on The Outline adds: Its [Google's] experiments with Go -- a game thought to be years away from being conquered by AI before last year -- are designed to bring us closer to designing a computer with human-like understanding that can solve problems like a human mind can. Historically, there have been tasks that humans do well -- communicating, improvising, emoting -- and tasks that computers do well, which tend to be those that require lots of computations -- like math of any kind, including statistical analysis and the modeling of, say, a journey to the moon. Slowly, artificial intelligence scientists have been pushing that barrier. [...] Go is played on a board with a 19-by-19 grid (updated after readers pointed out it's not an 18x18 grid). Each player takes turns placing stones (one player with white, the other with black) on empty intersections of the grid. The goal is to completely surround the stones of another player, removing them from the board. The number of possible positions, far greater than in chess thanks in part to the size of the board and the ability to play on any unoccupied intersection, is part of what makes the game so complex. As DeepMind co-founder Demis Hassabis put it last year, "There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000 possible positions."
Grid (Score:4, Informative)
It's a 19x19 grid, not 18x18.
Re: (Score:3)
19x19, 13x13, or 9x9. The number of possible positions is 2^361, but the number of possible moves is a lot bigger. The problem is that projecting those moves out is long and difficult; you have to be able to extrapolate localized influence and identify how that impacts a global strategy in an abstract sense, or else you can't play. A computer can't track all of the possible moves, because losing a big position might be less relevant than a play elsewhere that establishes power to restrict your gains to 1/3 of the
Re: (Score:1)
The number of possible positions is 2^361
Naively I would say 3^361, since each intersection can have a black stone, a white stone, or be blank.
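For scale, the two figures are easy to compare directly in Python:

    # Each of the 361 intersections can be empty, black, or white, so the
    # naive upper bound on positions is 3**361; 2**361 would only count
    # two states per point. Legal positions are a subset of the 3**361.
    print(f"2**361 = {2**361:.3e}")  # about 4.70e+108
    print(f"3**361 = {3**361:.3e}")  # about 1.74e+172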
Re: (Score:2)
Then you might want to count legal rather than merely possible positions, which may have been computed or estimated, but is likely not an easy problem in itself.
Re: (Score:3)
One simple example is a lone black stone surrounded by white stones. It may appear on the board momentarily, before the stone is removed, but I doubt it counts as a position. Have two such occurrences and the position is definitely illegal.
Turns out I did once read about this, and to sum it up, only about one in 81 of the possible positions is legal. It was only computed in January 2016 :)
https://tromp.github.io/go/leg... [github.io]
The software used for these computations is available at my github repository. Running "make" should compute L(3,3) in about a second. For running an L19 job, a beefy server with 15TB of fast scratch diskspace, 8 to 16 cores, and 192GB of RAM, is recommended. Expect a few months of running time. (...)
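His page also gives exact values for small boards, which you can reproduce by brute force. A minimal Python sketch (not Tromp's transfer-matrix method; it just checks that every group of stones has a liberty, and is only feasible for tiny boards), which should print 57 and 12675, matching his table:

    # Brute-force count of legal Go positions on a tiny n x n board.
    # A position is legal when every connected group of same-colored
    # stones has at least one liberty (an adjacent empty point).
    from itertools import product

    def legal_positions(n):
        points = [(r, c) for r in range(n) for c in range(n)]

        def neighbors(r, c):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= r + dr < n and 0 <= c + dc < n:
                    yield (r + dr, c + dc)

        count = 0
        for colors in product((0, 1, 2), repeat=n * n):  # 0=empty, 1=black, 2=white
            board = dict(zip(points, colors))
            legal, seen = True, set()
            for p in points:
                if board[p] == 0 or p in seen:
                    continue
                # Flood-fill the group containing p, watching for a liberty.
                group, stack, liberty = {p}, [p], False
                while stack:
                    q = stack.pop()
                    for nb in neighbors(*q):
                        if board[nb] == 0:
                            liberty = True
                        elif board[nb] == board[q] and nb not in group:
                            group.add(nb)
                            stack.append(nb)
                seen |= group
                if not liberty:
                    legal = False
                    break
            count += legal
        return count

    print(legal_positions(2))  # 57
    print(legal_positions(3))  # 12675, about 64% of 3**9 = 19683

Note that on 3x3 about 64% of arrangements are legal; the one-in-81 ratio only emerges on the full 19x19 board.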
Re: (Score:2)
And then divide everything by 2, because flipping every piece on the board from black to white and vice versa yields a functionally identical state.
Also remove rotations and mirrors. This isn't trivial.
Re: (Score:2)
Nope. That's a game state, not a board state. We're talking about the number of legal positions (board states).
Re: (Score:2)
AlphaGo is much more human-like than chess programs.
It can go very deep very quickly down the game tree, possibly down to the very end. It eventually goes back up to explore other options as much as time allows. It uses neural networks and randomness to select moves that "look good". The neural networks are first trained on databases of human games, then improved through self-play. It essentially plays like a tireless human with perfect focus.
This is in contrast to chess AIs that analyse every single turn in order to find the optimal solution according to some predefined heuristics.
Re: (Score:2)
This is in contrast to chess AIs that analyse every single turn in order to find the optimal solution according to some predefined heuristics.
This is essentially what AlphaGo is, except the heuristic is a trained neural network, instead of something hand-coded (and with a domain-specific priority queue of what move to investigate next).
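A sketch of the prioritization step such a search typically uses (a PUCT-style selection rule; the Node fields here are illustrative stand-ins, not AlphaGo's actual code):

    import math

    # Minimal sketch of a PUCT-style selection step, as used in
    # AlphaGo-like MCTS. The hypothetical Node fields: `prior` would
    # come from a policy network, `value_sum` from backed-up
    # value-network estimates, `visits` from the search itself.
    def select_child(node, c_puct=1.5):
        total_visits = sum(child.visits for child in node.children)

        def score(child):
            # Q: average backed-up value; U: prior-weighted exploration
            # bonus that shrinks as the child is visited more often.
            q = child.value_sum / child.visits if child.visits else 0.0
            u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visits)
            return q + u

        return max(node.children, key=score)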
Re: (Score:2, Informative)
Indeed, also:
The goal is to completely surround the stones of another player, removing them from the board.
Only beginning players think that. The true goal of the game is to occupy more territory than the opponent.
By the way, the best source of information on go is probably Go Sensei [xmp.net]
My mind to your mind, my thoughts to your thoughts (Score:1)
Re: (Score:2)
When researchers talk about AI, they usually mean weak AI. Because they have no clue how to achieve strong AI.
Re: (Score:2)
the real nature of Go is that the opponent is almost irrelevant: it's a game that one plays where the main opponent is one's self.
That's a very Eastern way of looking at it, which I believe may contain much truth at the very highest levels of the game. At my infinitely more humble level, I get only the briefest glimpse into this, but it's clearly an evolving process.
The application to computer Go, though? The computer does not have personal foibles or blind spots. I don't think the computer has to be self-aware to master Go, and it looks like AlphaGo is very far along the path to mastery.
That sort of begs the question: Can self-awaren
Re: (Score:2)
If the computer, or more precisely the algorithm, has no way to reflect on itself, introspect, store and recall 'previous thoughts', and contemplate them, then it cannot be self-aware, because it is not aware of anything it is doing.
Re:In Go you play against yourself, not the oppone (Score:5, Insightful)
Go is a struggle against one's fears and doubts; one's desires and ambitions; one's ability to control one's urges and anxieties.
Go is more like a martial art, where the battle with one's opponent is secondary to the battle with one's self.
What a load of crock. It's a game, with simple rules and high complexity. And yes, much as with the martial arts, Eastern philosophies and superstitious mumbo-jumbo have become part of the culture around it, to the point where players let masters win, just as in martial arts, because a master not winning would be unthinkable.
But really, it's a game. When playing online against unknown opponents, none of this comes into play, and it's just a question of thinking ahead and strategies.
Re: (Score:1)
In martial arts the opponents actually develop the ability to read each other's mind and foresee their moves. Once this ability is perfected, the player essentially becomes invincible. All this means is that while we have better computers and programs, our players have lost the subtler skills needed for a martial art.
Re:In Go you play against yourself, not the oppone (Score:4, Insightful)
In martial arts the opponents actually develop the ability to read each other's mind and foresee their moves. Once this ability is perfected, the player essentially becomes invincible.
Anime is not real life.
Re: (Score:1)
And if you lose a game of Go, you actually win. Namaste.
Re: (Score:2)
So... in some sense the only way to win is not to play?
Would you prefer a nice game of Chess?
Which go... (Score:3, Funny)
Go as in... the game?
Go as in... the programming language?
Go as in... I had to go five minutes ago?
Re: (Score:2, Funny)
Go as in... go fuck yourself
Re: (Score:2)
Go as in... go fuck yourself
No, no, no. That's too boring.
Re: (Score:2)
I like the proverb in your sig, but a slightly more idiomatic translation might be: "The nail that sticks out furthest gets hammered the most". The best sounding translation (to my ears) probably changes the original meaning slightly, but scans better in English: "The nail that sticks out furthest gets hammered hardest".
Re: (Score:2)
I like the proverb in your sig, but a slightly more idiomatic translation might be: "The nail that sticks out furthest gets hammered the most". The best sounding translation (to my ears) probably changes the original meaning slightly, but scans better in English: "The nail that sticks out furthest gets hammered hardest".
This is the fourth variation of this proverb I've seen regarding my Slashdot signature. The wording of my signature was how I heard it after I became a Christian in college 25 years ago. I kept getting "hammered" by others in the ministry because I threatened to raise the low bar that the leadership was comfortable at. I've always contended that God worked from the bottom-up (fellowship) and not top-down (leadership). The leadership ultimately won when they kicked me out 13 years later, spreading rumors tha
More than a google but less than a googleplex (Score:2, Interesting)
This should lead to more concern about AI (Score:2)
Re: (Score:2)
It seems to me that the obvious test of whether AlphaGo "understands" Go would be to have it try to play on a 21x21 grid.
Re: (Score:2)
A better test would be to see if it can recognize its creator through a camera and decide to lose gracefully to them.
Recommended if the creator is a Wookie.
Re: (Score:2)
How can any thinking person in this day and age not realize that AI predictions are HARDLY EVER worth the paper they're no longer printed upon?
Successful AI prediction is five years away—always has been, and always will be.
And no, the worthless prediction of consensus merit was not "decades" but "about a decade". I've been consuming machine learning podcasts day in and day out since the end of January. AlphaGo is not infrequently mentioned.
My two favourite guests
the Mike Tung exemplar (Score:2)
Okay, I can't shut up. He's the other half.
Mike is a smart guy. I like him. But watch him blow his own foot off and gnaw upon the bloodied remains starting at about 7m50 into the following presentation:
Text By the Bay 2015: Mike Tung, Turning the Web into a Structured Database [youtube.com]
"What about this frontal lobe?"
He wants to tackle this "next" project by some loose, unspecified analogy with recent surprising progress in computer vision and computer robotics.
Of course, how could the frontal lobe be different tha
Re: (Score:2)
Calling any kind of game playing AI is complete and utter bullshit.
Well, it is related to AI, but that's about all you can say for it.
Re: (Score:2)
Nope, fuck you. Binary guy is correct (shockingly).
A simulation of intelligent behavior must appear to be intelligent. Playing Go doesn't require intelligence. Its complexity comes primarily from the size of the board.
No one is going to interact with AlphaGo and think "Gee golly, that's an intelligent entity on the other end!".
Show me an AI that can play Go and then learn (or create) a new, unknown game without any intervention from a programmer.
Show me an AI that can potty train a toddler.
Show me an AI
Re: (Score:2)
So, chimps are not intelligent because they can't explain a Shrek movie?
Re: (Score:2)
'A simulation of intelligent behavior must appear to be intelligent.'
Does it? I would say a simulation of intelligent behavior must appear to be behavior. That's like teaching dogs to speak maybe, and either AI will never be more than that or we'll never get AI. Even a self-driving car is a system tuned for the goal of "Drive from A to B without killing or maiming anyone" and will never be able to do anything else - save for additional functions like running an inference engine to play Go on the car's GPUs,
Re: (Score:2)
For an excellent, detailed book on the potential gifts and perils of AGI, I recommend Bostrom's book "Superintelligence."
Yes, if you are into baseless conjectures about something that no one understands.
logical fallacy? (Score:4, Insightful)
"designing a computer with human-like understanding that can solve problems like a human mind can"
We are talking about a game with rules here, so in essence it's actually the reverse.
This is an exercise where a human is trying to play like a computer that can plan out a million moves ahead... and somehow is able to stay close!!
Re: (Score:1)
Not really, because the computer can't plan a million moves ahead in Go. The branching is too wide. That's why attempts to adapt the techniques used for Deep Blue from chess to Go were a failure: in chess, the branching factor was low enough that you COULD just read almost every possible move deep enough. And that's precisely why many pros didn't believe it would happen. AlphaGo uses deep neural networks that are an attempt to emulate the way real neural networks in our brains work.
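Rough arithmetic with the commonly cited average branching factors (about 35 legal moves per turn in chess, about 250 in Go) shows how fast the gap explodes:

    # Nodes in a naive full-width search grow as branching**depth.
    for game, b in (("chess", 35), ("go", 250)):
        print(f"{game}: about {b**10:.1e} positions at depth 10")
    # chess: about 2.8e+15 -- borderline feasible with pruning
    # go:    about 9.5e+23 -- hopeless without narrowing the search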
Re: (Score:2)
The computer is better because it picks a small number of good moves to plan through, not because it picks a lot of them.
Re: (Score:2)
The Deep Learning approach is probably more an exercise in finding features to correctly assess the state of the board at a given time rather than "brute forcing", although I'm sure it brute force
Re: (Score:2)
Let's grant that everything you and others say is 100% correct (even when contradictory).
My point, however, was that any game (or process) that is based on computational expertise should have the computer (AI, NN, DL) ahead. These games are relevant as a measure to compare a human's logical processing capability against other humans, and we as humans mostly fail at being good at them because... chemicals or distractions or boredom make us less perfect players.
Thus this is not a measure of a computer thin
Re: (Score:2)
The part of a good Go player's brain that recognizes whether a position is good isn't Turing complete. It's just matching patterns.
Re: (Score:2)
Inspired is the correct term, even if there are differences. And differences aren't bad: the brain is most likely sub-optimal because it is severely limited by the materials and processes it has to work with. I'm sure that we can do better with artificial networks.
Re: (Score:2)
The brain does seem to utilize what it has better than computers do, e.g. in computer chips, memory cells store bits and communicate a binary signal. This could be all analog and exploit the voltage range better instead. (Flash memory cells now store a charge which can be at four, eight, or sixteen (experimental) levels and translate that to two, three, or four bits; that's why you can have a cheap 32GB or 64GB memory card.)
A large part of chips is spent ferrying data around and worrying about the clock (although
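The levels-to-bits mapping above is just levels = 2^bits:

    # A flash cell distinguishing 2**k charge levels stores k bits.
    import math
    for levels in (4, 8, 16):
        print(f"{levels} levels -> {int(math.log2(levels))} bits per cell")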
Re: (Score:2)
The one on the "hype train" is you and only you.
Everyone who has the slightest grasp of the topic knows: artificial neural networks were invented some 70 years ago. So... there goes your idiotic idea of a hype.
Then again: while artificial neural networks obviously don't work like natural morons (oops), the way they simulate their behaviour is, in a mathematical sense, close enough.
As we all know, you are super good at insulting fellow /.ers but obviously completely incompetent at using google and ed
Re: (Score:2)
The main difference between the two games is that in chess the pieces move while in go the pieces do not. A go board "grows" as the game is played, while a chess board "evolves."
In go, after a few moves are played in an area, it is often apparent exactly how the area is going to play out. The 'when' part of the piece play is very often unimportant, and when the order of play is important it's an easy thing for a
Re: (Score:2)
Software neural networks are an extremely simplified and idealized representation that only deals with the limited, high-level subset of electrical activity we could make sense of some decades ago. There's a lot of smaller-scale or shorter-timescale stuff left out, as well as all the chemical activity; if neural networks worked a bit more like a brain, we would be able to try the algorithms with caffeine, heroin, cocaine, cannabinoids, opioids and other substances, to study whether and how the algorithms are workin
Re: (Score:2)
a computer that can plan out a million moves ahead...
Even if a computer were "human enough" to know which 10 moves are the best, it takes a million moves to plan your next 3 moves (10 candidates per ply over six plies -- your three moves and the replies between them -- is already 10^6 sequences).
Good human players feel way deeper than that.
The correct way to write a number. (Score:2)
As DeepMind co-founder Demis Hassabis put it last year, "There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, 000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, 000,000,000,000,000,000,000,000,000,000 possible positions."
When you are talking to a technical audience, it is best to avoid using scientific notation. Right?
Re: (Score:2)
1 × 10^147 if not mistaken
Re: (Score:2)
Yo mamma took them all and put them into one nerd [youtube.com].
Geeze, use scientific notation already! (Score:5, Informative)
Yeah, we get it... it's a big number. But writing it out longhand like that is just being needlessly cryptic... and at worst comes across as having been written by somebody who doesn't know shit about the actual number of combinations, and just decided to put a lot of zeros after the end of a 1 to make a number that sounds big. Try 1x10^172. This is far more readable, and those that know scientific notation will be able to understand just how big this number is.
If you really feel that this doesn't adequately describe the scale of the number to people who don't know scientific notation, and want your article to be comprehensible to those people as well, then you can also add that it is considerably greater than the number of subatomic particles in the observable universe. And to be frank, if that doesn't convey just how fucking big the number is, then explicitly writing 172 zeros after a 1 isn't liable to either.
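If you'd rather settle the zero count than argue about it, strip the separators and count digits (the NUM string below is a stand-in; paste the actual quoted figure):

    # Count the zeros in a longhand figure like the one in the summary.
    NUM = "1,000,000"  # placeholder -- paste the full quoted number here
    digits = NUM.replace(",", "").replace(" ", "")
    print(f"{len(digits) - 1} zeros, i.e. roughly 1e{len(digits) - 1}")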
Re: (Score:2)
Wikipedia says 2.08168199382×10^170 positions, but the same board position could occur with multiple different ko situations. This means the best move can be illegal, and you'd have to evaluate 2 moves in those cases.
Ke Jie is world number 1? (Score:1)
I don't think there are any "official" rankings for this, but even unofficially, is this even remotely true?
Re: (Score:1)
https://www.goratings.org/
To really get an idea of the scale (Score:2)
This number is larger than the hundreds of stars in the universe [youtube.com].
Ready, set ... (Score:2)
I imagine that, like most of Google's projects when they go live, AlphaGo is still in beta.
Are you sure about that math? (Score:5, Funny)
I ran my own calculations and only came up with 999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999, 999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999, 999,999,999,999,999,999,999,999,999,999.
Wait, sorry, I started counting at zero. Yup, it checks out.
Re: (Score:2)
It learns the meaning of futility.