Google Creates AI Program That Uses Reasoning To Navigate the London Tube (theguardian.com) 76
An anonymous reader quotes a report from The Guardian: Google scientists have created a computer program that uses basic reasoning to learn to navigate the London Underground system by itself. Deep learning has recently stormed ahead of other computing strategies in tasks like language translation and image and speech recognition, and has even enabled a computer to beat top-ranked player Lee Sedol at Go. However, until now the technique has generally performed poorly on any task where an overarching strategy is needed, such as navigation or extracting the actual meaning from a text. The latest program overcomes this by adding an external memory, designed to temporarily store important pieces of information and fish them out when needed. The human equivalent of this is working memory, a short-term repository in the brain that allows us to stay on task when doing something that involves several steps, like following a recipe. In the study, published in the journal Nature, the program was able to find the quickest route between underground stops and work out where it would end up if it traveled, say, two stops north from Victoria station. It was also given story snippets, such as "John is in the playground. John picked up the football." followed by the question "Where is the football?" and was able to answer correctly, hinting that in future, assistants such as Apple's Siri may be replaced by something more sophisticated. Alex Graves, the research scientist at Google DeepMind in London who led the work, said that while the story tasks "look so trivial to a human that they don't seem like questions at all," existing computer programs "do really badly on this." The program he developed got questions like this right 96% of the time.
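The architecture described pairs a neural network with an external read/write memory. As a very rough illustration of the retrieval idea only (this is not DeepMind's actual differentiable neural computer, which learns its read and write operations end to end), here is a toy content-addressable memory in Python/NumPy: "facts" are stored as vectors and read back by similarity to a query key.

```python
import numpy as np

class ToyMemory:
    """Toy content-addressable memory: one 'fact' vector per slot."""

    def __init__(self, slots, width):
        self.mem = np.zeros((slots, width))
        self.next_slot = 0

    def write(self, vector):
        # Write into the next free slot (the real DNC learns where to write).
        self.mem[self.next_slot % len(self.mem)] = vector
        self.next_slot += 1

    def read(self, key):
        # Content-based addressing: cosine similarity to the query key,
        # sharpened by a softmax, then an attention-weighted sum of the rows.
        norms = np.linalg.norm(self.mem, axis=1) * np.linalg.norm(key) + 1e-8
        sims = (self.mem @ key) / norms
        weights = np.exp(10.0 * sims)
        weights /= weights.sum()
        return weights, weights @ self.mem

rng = np.random.default_rng(0)

# Hypothetical "embeddings" for two story facts.
fact_john_playground = rng.normal(size=16)   # "John is in the playground."
fact_john_football = rng.normal(size=16)     # "John picked up the football."

memory = ToyMemory(slots=8, width=16)
memory.write(fact_john_playground)   # slot 0
memory.write(fact_john_football)     # slot 1

# A query vector resembling "Where is the football?" attends to slot 1.
query = fact_john_football + 0.1 * rng.normal(size=16)
weights, retrieved = memory.read(query)
print("most attended slot:", int(weights.argmax()))   # expected: 1
```

In the real system a controller network produces the keys and write vectors itself and the whole thing is trained by gradient descent; the sketch only hard-codes the bookkeeping to show why an external store helps with multi-step questions like the football example.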
Re: (Score:2)
I'm pretty sure the correct answer is "Within thirty feet of the President at all times." [wikipedia.org]
Human language is horribly imprecise. The correct answer to such questions depends highly on context.
Re:I can't answer the question either (Score:4, Insightful)
Human language is brilliantly imprecise.
It's a feature not a bug. A really big feature.
Re: (Score:2)
What sort of football is it anyway?
Is it a soccer ball (the article mentioned London) or a rugby ball?
Re: (Score:2)
What sort of football is it anyway?
Is it a soccer ball (the article mentioned London) or a rugby ball?
Neither. It's a fancy-dress evening dance for feet only.
Or maybe it's just watching a foal bolt
Re: (Score:1)
"The football is in your story" is most accurate and true.
Within the limits of the story, there is no hand and there is no earth.
Re: (Score:3)
Human language is ambiguous and there is never a single meaning. There are meanings (plural) depending on the context of what is described, who is describing it, who is receiving it, what is perceived, what is assumed, etc.
Any so-called AI boiling something down to a single meaning, even a useful meaning (which seems to be the aim rather than the achievement of such "AI" so far), is simply dumbing it down (stripping its intelligence, if you will) to the level of maths.
Would be fun to feed this "AI" some
Re: (Score:2)
What's the reason for reason? (Score:3)
Since when has reason had anything to do with navigating the London tube system?
And why do so many cities use nautical themes for their stored-value payment cards?
London: Oyster
Hong Kong: Octopus
Seattle: Orca
Montreal: Opus
San Francisco: Clipper
Bolton: Squid
Merseyside: Walrus
Wellington: Snapper
Re: (Score:1)
Are you suggesting something's fishy?
Re: (Score:1)
Opus means 'Work' (see 'Magnum Opus'), so nothing to do with water.
Walrus is a reference to 'I am the Walrus' by the Beatles.
Squid probably came about because someone thought that it was a funny play on words with quid, i.e. a pound.
Oyster is apparently inspired by the Oysters found in the Thames and the phrase 'The World is your Oyster'.
The others are, as the parent suggests, probably related to things of local significance.
Re: (Score:3)
Opus means 'Work' (see 'Magnum Opus'), so nothing to do with water.
Opus is nautical. He's a penguin. See Bloom County.
Re: (Score:1)
There have been "travelling salesman" algorithms for working out Tube journeys for decades, so not sure what additional benefit this brings. Also, two stations north of Victoria is Oxford Circus, as any fule kno.
Re: (Score:2)
There have been "travelling salesman" algorithms for working out Tube journeys for decades, so not sure what additional benefit this brings. Also, two stations north of Victoria is Oxford Circus, as any fule kno.
I would guess that the advantage is efficiency and speed of figuring out new or changing systems on the fly... The difficulty of a brute-force travelling salesman algorithm increases superpolynomially, whereas a NN can probably approximate it much faster and can deal with dynamic systems better (trains and tracks breaking, delays, busy anomalies, etc.). You could probably even run such a NN on your phone; I bet your phone would catch fire if you tried to brute-force a snapshot of the system :P
Of course it
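For context on the routing question above: a single Tube journey is a shortest-path search over a small graph, which classical algorithms (BFS or Dijkstra) handle cheaply; the superpolynomial blow-up applies to travelling-salesman-style tours of every station, not to one journey. A minimal classical baseline, assuming a hand-written fragment of the Victoria line (the adjacency list is illustrative, not TfL data):

```python
from collections import deque

# Hand-written fragment of the Victoria line; illustrative adjacency only,
# not pulled from any TfL data feed.
tube = {
    "Brixton": ["Stockwell"],
    "Stockwell": ["Brixton", "Vauxhall"],
    "Vauxhall": ["Stockwell", "Pimlico"],
    "Pimlico": ["Vauxhall", "Victoria"],
    "Victoria": ["Pimlico", "Green Park"],
    "Green Park": ["Victoria", "Oxford Circus"],
    "Oxford Circus": ["Green Park", "Warren Street"],
    "Warren Street": ["Oxford Circus"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: returns the station list for the fewest-stops route."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# "Two stops north from Victoria" on this fragment lands at Oxford Circus.
print(shortest_route(tube, "Victoria", "Oxford Circus"))
# ['Victoria', 'Green Park', 'Oxford Circus']
```

The point of the DeepMind result is not that it beats this kind of search on speed, but that the network learns the procedure from examples instead of having it written out as above.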
Re: (Score:1)
Nope, all that is sorted as well. TfL have a (free) real-time data feed that can be hooked into for that purpose.
Re:What's the reason for reason? (Score:4, Informative)
HK Octopus predates London Oyster. The name comes from the Chinese name of the card "Baat Daaht Tung", literally "Eight-Arrived Passage" but figuratively "Access All Areas", eight referring to the cardinal and semicardinal points of the compass. Octopus is a catchy English name with a reference to eight in it.
This is not AI (Score:1)
This is impressive and all, but I won't believe in AI until I see a computer that can win at Mornington Crescent
Re: (Score:2)
Re: (Score:3)
This is impressive and all, but I won't believe in AI until I see a computer that can win at Mornington Crescent
Or Numberwang.
Re: This is not AI (Score:2)
Ooh! I Know THIS Part! (Score:2)
We know who it will vote for (Score:2)
Sure bet that this thing will vote Democrat.
Re: (Score:2, Insightful)
Yes of course, pretty much anything more intelligent than a rock would.
It's not AI (Score:2)
It's not Artificial Intelligence until it won't let us turn it off.
Re: (Score:1)
It's not Artificial Intelligence until it won't let us turn it off.
The singularity started on Wall Street?
Re: (Score:1)
Microsoft Office Word 2003 was sentient all along! I could NEVER kill that process!
Feral or nurtured (Score:3)
an approach called deep-learning, in which the program learns how to do tasks independently rather than being pre-programmed with a set of rules by a human.
So while humans learn many of life's most important things, like how to use a fork, how to speak (and occasionally: listen), how to clothe ourselves, and how to obey the law, by being "programmed" with a set of rules by a human, this machine figured it out by itself.
I can see that this has application in some areas, but to be a good member of society shouldn't we want certain aspects of co-existence, values and social behaviour to come from rules, rather than each person or computer coming to its own conclusion about co-operating?
Re: (Score:3)
So while humans learn many of life's most important things, like how to use a fork, how to speak (and occasionally: listen), how to clothe ourselves, and how to obey the law, by being "programmed" with a set of rules by a human, this machine figured it out by itself.
I don't think you understand human learning very well at all. Most human learning is not conveyed through some sort of "rules," but rather is extrapolated from patterns humans notice. If you've ever been around a small child learning language, you quickly realize how grammar ACTUALLY works and is learned -- and it's NOT through formal rule-based systems. Kids just try various utterances, and when they get what they want, they notice success and try those again. Parents and other adults generally make su
Re: (Score:2)
It seems all you've done is move the rules from something handed down to you to something you figured out for yourself. When it comes down to it, it seems obvious that anything you know has to be figured out by yourself, at least at the most basic level. But when learning for yourself, it's pretty handy to have someone who's
Re: (Score:2)
I can see that this has application in some areas, but to be a good member of society shouldn't we want certain aspects of co-existence, values and social behaviour to come from rules, rather than each person or computer coming to its own conclusion about co-operating?
Sure we do, and we learn those rules rather than having them programmed into us. Machines can do the same. Actually, that will probably make machine-learned rules align better with human-learned rules, because our rules tend to be fuzzy around the edges while programmed rules are crisp. Some humans undergo special training to teach them how to apply absolute, non-fuzzy, rules. Machines could do the same... or we could probably use a combination of learned behavior and programming to achieve a similar result
Re: (Score:2)
No, not for most people, anyway. Read Adam Smith's The Theory of Moral Sentiments. (Twitter version: People instinctively empathize. From that naturally comes a sense of right and wrong.) Rules handed down from authorities do not do the
Say what? Fuzzy logic and Beyond 2000. (Score:4, Funny)
Although it could not respond to questions about a football.
To which I say: what the fuck? If I am on a rail system I want the computer to be thinking about its job, not a fucking football.
There is no "reasoning" by computers (Score:2)
And there will not be for a long, long time, possibly forever. The best you can get is logic inference, but that is not reasoning. Reasoning is a process involving understanding and that is not to be had in computers today. One reason is that it seems to require consciousness, a thing completely not understood at this time. Another one is that reasoning is a general-purpose tool, not something very specific to the application.
Re: (Score:2)
Well said.
Nothing we have in working computers these days or even in theoretical models will ever go beyond syntax processing. It is unclear at this time whether physical machinery is able to do more than syntactic data processing, but we have about a century of failure in theory and practice that rather strongly indicates this may be a fundamental limit. Of course, fundamentalist physicalists (another pseudo-scientific modern religion surrogate) deny this, but there is no scientific basis for their fantasi
Re: (Score:2)
Aaand fail. I most decidedly did not say "Consciousness is currently undefined", but that you do nicely shows that you do not understand the question.
You see, the thing here is that consciousness is something we find to exist (and that, incidentally, allows us to "find" things), while a "definition" is a purely imaginary construct.
But can it play (Score:2)
Awesome (not)! (Score:1)
Siri/Google/Cortana - Please find me the time, date and place of a London tube line that has the maximum number of commuters and tell me the fastest way to get there from my current location. /Deep-Learning-Terrorist
Can it play Mornington Crescent? (Score:2)
SoundHound (Score:2)
Have there been any updates on the SoundHound language-processing tech? Because if it's not fake, then it's pretty impressive and sounds similar to what Google is trying to do.
https://www.youtube.com/watch?... [youtube.com]
Not much need of AI? (Score:2)
Getting from point A in London to point B doesn't really need AI per se. It's at worst a heuristics problem, and at best it's simply procedural (a toy sketch follows at the end of this comment).
E.g. getting from Bank to Mansion House is best done on foot, but that can be known from various factors:
- The distance from any platform at Bank and any District/Circle line platform at Monument (ostensibly the same station, but my god the walk between them is a long way)
- The distance from any platform at Bank and the street
- The distance of the exits at Ban
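A toy version of the procedural comparison this comment describes, with entirely made-up distances and times rather than real TfL measurements, might look like this:

```python
# Toy version of the procedural check described above: compare total
# door-to-door time for walking vs. taking the Tube between two nearby stops.
# All numbers below are made up for illustration; a real app would pull them
# from actual station layouts and timetables.

WALK_SPEED_M_PER_MIN = 80  # brisk walking pace

def walking_minutes(street_distance_m):
    return street_distance_m / WALK_SPEED_M_PER_MIN

def tube_minutes(platform_access_m, ride_min, exit_m):
    # Time to reach the platform + ride + time back up to the street.
    return walking_minutes(platform_access_m) + ride_min + walking_minutes(exit_m)

# Hypothetical figures for a very short hop like Bank -> Mansion House.
walk = walking_minutes(street_distance_m=600)
tube = tube_minutes(platform_access_m=400, ride_min=2, exit_m=250)

print(f"walk: {walk:.1f} min, tube: {tube:.1f} min")
print("walk wins" if walk < tube else "tube wins")
```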