Google Buys UK AI Startup DeepMind
TechCrunch reports that Google has acquired London-based artificial intelligence firm DeepMind. TechCrunch notes that the purchase price, as reported by The Information, was somewhere north of $500 million, while a report at PC World puts the purchase price lower, at a mere $400 million. Whatever the price, the acquisition means that Google has beaten out Facebook, which reportedly was also interested in DeepMind. Exactly what the startup will bring to Google isn't clear, though it seems to fit well with the emphasis on AI that the company underscored with its hiring of futurist Ray Kurzweil:
"DeepMind's site currently only has a landing page, which says that it is 'a cutting edge artificial intelligence company' to build general-purpose learning algorithms for simulations, e-commerce, and games. As of December, the startup had about 75 employees, reports The Information. In 2012, Carnegie Mellon professor Larry Wasserman wrote that the 'startup is trying to build a system that thinks. This was the original dream of AI. As Shane [Legg] explained to me, there has been huge progress in both neuroscience and ML and their goal is to bring these things together. I thought it sounded crazy until he told me the list of famous billionaires who have invested in the company.'"
Deep Thought... (Score:4, Funny)
Re: (Score:1)
Gotta have a towel to get there.
Re: (Score:1)
if money=power (as everyone knows it does), then Google eclipses the NSA... it's more likely the NSA is working for Google.
Re:No matter which "Deep Thought" ... (Score:4, Interesting)
As Snowden already hinted, it's highly likely that the NSA and large US companies actually exist in a symbiotic relationship, in spite of all the angry public outbursts. The NSA likely shares intelligence data on things like business secrets with US companies, especially when competition is involved.
Re:No matter which "Deep Thought" ... (Score:5, Informative)
Peanuts (Score:2)
Re: (Score:1)
Snowden should have dumped the data into the public immediately and in one huge batch to show the extent of the perversion of our freedoms and rights
One problem with that... you're assuming the public would even care, let alone know what to make of it.
Re: (Score:1)
HAL9000 in https://www.youtube.com/watch?... [youtube.com]
Re: (Score:1)
Actually, when searching Google for "Terminator near Sarah Connor" they give me results for "Terminator in Sarah Connor" instead.
I have seen the future. Rule 34 will save us from Skynet.
Re: (Score:1)
their AI would be more inclined to sell us something than to kill us off
they give me results for "Terminator in Sarah Connor" instead
along with ads for "terminator" vibrators
http://www.dinodirect.com/indu... [dinodirect.com]
Oh Yeah, Well (Score:4, Funny)
If anyone needs me, I'll be in my underground bunker.
Re: (Score:2)
**LOCATION INDEXED**
Re: (Score:1)
Google Street View car pulls up, guy takes a 360-degree picture of the entrance.
Voice assistant (Score:5, Funny)
Since Google still seems to believe Glass has potential to be the "next big thing" and it's entirely voice controlled, it makes sense that they'd want a voice assistant that can respond more intelligently than "I don't have a clue what you're talking about, should I search the web?" Maybe this company's AI would be adaptable to something along those lines?
Personally, I'm not a big fan of talking to machines. Yeah, it looks awesome in sci-fi, but in real life it just makes you look like a hipster douchebag when you're out in public talking to the little robotic voice inside your mobile device.
Re: (Score:1)
Yeah, it looks awesome in sci-fi, but in real life it just makes you look like a hipster douchebag when you're out in public talking to the little robotic voice inside your mobile device.
Two suggestions for the hipster image problem:
1. Stop using an iPhone
2. Don't end every Siri command with "...but you've probably never heard of it."
Re:Voice assistant (Score:5, Funny)
it just makes you look like a hipster douchebag when you're out in public talking to the little robotic voice inside your mobile device.
Who are you calling a douche? I'm actually talking to the little robotic voice in my head, the mobile device is just there for camouflage.
Re: (Score:2)
Re:Voice assistant (Score:4, Insightful)
No they weren't. Cellphones were cool from the start. At least, around here anyway. Everyone wanted one. The problem with Glass is the same as with Bluetooth headsets. People wear them even when they're not using them... which makes you look like a douche. Once Google has these embedded in regular glasses this will stop being an issue.
Re: (Score:2)
No they weren't. Cellphones were cool from the start. At least, around here anyway. Everyone wanted one. The problem with Glass is the same as with Bluetooth headsets. People wear them even when they're not using them... which makes you look like a douche. Once Google has these embedded in regular glasses this will stop being an issue.
Agree with the first part, but on Bluetooth headsets - what's one supposed to do with them, take them off and pocket them? That risks losing them. I leave mine in place, even when turned off, when I'm out and about. 'Cause I know I'd lose it otherwise.
Maybe it helps that I grew up in a household where hearing aids were worn by a family member, so having something in the ear was normal. On the other hand, I hated wearing ear buds for the longest time, 'til I recognized the usefulness of them.
Re:Voice assistant (Score:4, Interesting)
Since Google still seems to believe Glass has potential to be the "next big thing" and it's entirely voice controlled, it makes sense that they'd want a voice assistant that can respond more intelligently than "I don't have a clue what you're talking about, should I search the web?" Maybe this company's AI would be adaptable to something along those lines?
Personally, I'm not a big fan of talking to machines. Yeah, it looks awesome in sci-fi, but in real life it just makes you look like a hipster douchebag when you're out in public talking to the little robotic voice inside your mobile device.
I still find it amusing that command lines are seen as the least intuitive interface and voice control is seen as the second-most intuitive (after mind-controlled), even though voice control is just a command line over a noisy, ambiguous channel, where you can't even see the commands you're inputting.
Re: (Score:2)
Re:Voice assistant (Score:5, Informative)
The kind of voice control Google is after (as in "the second-most intuitive interface") is hardly the same as the kind of voice control that is available today. The first would be able to interpret your intent as well as a human could, possibly better (filtering out noise, asking to clarify ambiguities rather than making assumptions). And it's nothing like the command line, which does no interpreting, refining or clarification at all; it just executes a limited set of commands exactly as entered, with no room for so much as a misplaced comma.
It's exactly like a command line; command lines have been attempting to interpret their input for decades (most famously with http://en.wikipedia.org/wiki/D... [wikipedia.org]).
The two reasons modern command lines don't do this are 1) lack of effort and 2) that it's often a very bad thing. According to http://www.nhplace.com/kent/Pa... [nhplace.com] one of the motivating factors for defining Common LISP was to stop DARPA from rolling out INTERLISP, and therefore DWIM, across all their projects.
As for clarification, I run into this all the time when typing non-existent commands (thanks to the "command not found" program) or using undefined variables (thanks to GHC).
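To make the "clarification" point concrete, here is a toy "did you mean?" suggester in Python; the command list is invented for the example, and real shells wire this sort of thing into a command-not-found hook rather than a standalone script.

    # Toy "did you mean?" suggester: the kind of clarification a command line
    # can offer instead of silently guessing. The command list is made up.
    import difflib

    KNOWN_COMMANDS = ["git", "grep", "gzip", "ls", "make", "python"]

    def command_not_found(typo):
        guess = difflib.get_close_matches(typo, KNOWN_COMMANDS, n=1)
        hint = f", did you mean '{guess[0]}'?" if guess else ""
        return f"{typo}: command not found{hint}"

    print(command_not_found("gti"))  # gti: command not found, did you mean 'git'?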
Re: (Score:2)
And it's nothing like the command line, which does no interpreting, refining or clarification at all; it just executes a limited set of commands exactly as entered, with no room for so much as a misplaced comma.
ZORK I (1979):
Re: (Score:1)
The problem is that the command line is incredibly unintuitive, in that one must learn/memorise a special language to make use of it.
The "Outland" interface would be ideal, but I don't see much progress on it.
Where are the general-purpose natural language command languages and parsers?
Re: (Score:3)
Where are the general-purpose natural language command languages and parsers?
They're sat in the middle of whatever voice-command pipeline you're imagining, between the speech-recognition layer and the voice synthesiser. The advantage of the CLI is that you don't need to recognise speech or synthesise a voice.
Re: (Score:2)
I'd rather human augmentation than voice assistants.
You may still need some sort of AI stuff to do that, but the focus is different. One path focuses on augmenting humans, allowing them to more directly be superhuman. The other path has humans requesting stuff from smarter and smarter AIs.
If it were up to me, it'd be more about thought macros and the like:
http://hardware.slashdot.org/c... [slashdot.org]
http://tech.slashdot.org/comme... [slashdot.org]
Re: (Score:1)
The reason talking to machines seems so awesome in sci-fi is that the machines can respond and argue back with human or almost human intelligence. When AI can do that there will be a surge in voice-controlled computers.
Re: (Score:2)
Personally, I'm not a big fan of talking to machines. Yeah, it looks awesome in sci-fi ..
https://www.youtube.com/watch?... [youtube.com]
Voice needs context (Score:2)
A voice interface is one of the hardest things to implement well in AI because so many sentences sound similar; understanding depends heavily on context.
Without understanding the context of the conversation, a voice interface will not be able to know if you are talking about sodas or sawdust, robots or row boats, new displays or nudist plays.
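A toy sketch of that context dependence (the candidate phrases and the scoring scheme are invented for illustration; real systems use statistical language models rather than simple word overlap):

    # Pick between sound-alike transcriptions by checking which one fits the
    # words already heard in the conversation. Purely illustrative.
    def context_score(candidate, context_words):
        # Count candidate words that already appeared in the conversation.
        return sum(1 for w in candidate.lower().split() if w in context_words)

    def disambiguate(candidates, context):
        context_words = set(context.lower().split())
        return max(candidates, key=lambda c: context_score(c, context_words))

    heard_so_far = "the lobby needs new displays before the trade show"
    print(disambiguate(["new displays", "nudist plays"], heard_so_far))
    # -> "new displays"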
Look at the upsides (Score:1)
"Sorry, I cannot open the pod bay doors" does sound better in a British accent.
Re: (Score:1)
Money can't buy you intelligence (Score:3)
I thought it sounded crazy until he told me the list of famous billionaires who have invested in the company.
I'd like a copy of that list. It'll be like mining for gold in Fort Knox.
Re: (Score:2)
Yeah, it's funny how people can latch onto a flawed metric like that.
Here's a fun idea - let's take this current list and cross-reference it against the list of excited tech luminaries that told us "Ginger" was going to revolutionize our lives...
Re: (Score:1)
Re: (Score:2)
Go read about the founders of DeepMind. These are not kooks, they are people with publication lists a mile long coming out of academia.
You know how industry always gets things later than academia? Well guess what academia's been working on for the past decade or so...
Billionaires (Score:4, Interesting)
Re: (Score:3)
Right, those guys are good at exactly one thing for the most part: buzzword BINGO. They get in before the institutional folks do, and get out as they in turn enter. Those guys are good at following the billionaire "smart money" and knowing how to get out as the second-tier and retail folks buy in. Then the music stops.
Re: (Score:2)
Or an elaborate get-rich-quick scheme.
We will never be free.. (Score:1)
Let's see what Marvin Minsky has to say about this (Score:2)
Re: (Score:2)
Marvin "no intelligence" Minsky? Why do you even care?
I for one welcome... (Score:1)
Legg (Score:5, Informative)
Shane Legg's research is pretty cool, since it deals with very sci-fi-like problems in a pretty rigorous way. For example, his PhD dissertation "Machine Superintelligence" approaches intelligence in a non-anthropocentric way, from the perspective of computability: http://www.vetta.org/documents... [vetta.org]
More recently he's tried to define an IQ-like metric for comparing different AI projects and measuring progress in the field: http://www.vetta.org/2011/11/a... [vetta.org]
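For context, the core of that metric, as I recall it from Legg and Hutter's "Universal Intelligence" work that the thesis builds on (treat the notation here as my paraphrase, not a quote), weights an agent's expected reward in every computable environment by that environment's simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected cumulative reward agent \pi earns in \mu. Simpler environments count for more, and no human-specific yardstick appears anywhere in the definition.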
Re: (Score:3)
His thesis looks more like an elaborate survey paper that only marginally adds to the existing research. (That may still be enough for a PhD; I am not criticizing that. Adding even "marginally" to complex theory is an accomplishment and worthwhile.) Certainly no breakthrough in there.
I also found it badly structured. For example, at my institution, a chapter "contributions of this thesis" is mandatory for acceptance.
Re: (Score:2)
This is one of DeepMind's recent papers: Playing Atari with Deep Reinforcement Learning [arxiv.org] [PDF]
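For the curious, the heart of that paper is the standard Q-learning update with a convolutional network standing in for the Q table. A tabular toy version (the chain environment and hyperparameters below are invented purely for illustration, not taken from the paper) looks like this:

    # Tabular Q-learning toy, just to show the update rule the DQN paper builds on.
    import random

    N_STATES, ACTIONS = 5, [0, 1]          # 0 = left, 1 = right
    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Move along a chain; reaching the right end pays 1 and resets to start."""
        nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        if nxt == N_STATES - 1:
            return 0, 1.0
        return nxt, 0.0

    state = 0
    for _ in range(5000):
        action = random.choice(ACTIONS) if random.random() < epsilon \
                 else max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # The Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

The paper's contribution is replacing that table with a deep network trained from raw pixels, plus tricks like experience replay to keep the training stable.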
smart move (Score:3)
If DeepMind really has the knowledge and capability to build strong AI, then this is a smart move.
DeepMind could have become the next Google.
However, I find it unacceptable that big mega-corps just go out and buy companies with talent.
Just imagine what the world would have looked like if Microsoft had bought Google when it was in its infancy...
Re: (Score:2)
This kind of thing doesn't happen, because the kind of startup that looks attractive to an existing megacorporation
DEC could have bought Google at the time (they had already developed and marketed AltaVista).
It would not have been unrealistic.
Re: (Score:2)
I don't agree. It's only a question of whether the larger company has the ability to identify talent.
They can be lucky -- it isn't always a "Time Warner buys AOL right near the end of dial-up."
If companies keep getting wealthier and more profitable, they can just hedge their bets, because money is nothing to them and dear to others. It's more a problem of pooled capital than it is anything else.
Re: (Score:2)
If DeepMind really has the knowledge and capability to build strong AI, then this is a smart move.
DeepMind could have become the next Google.
However, I find it unacceptable that big mega-corps just go out and buy companies with talent.
Just imagine what the world would have looked like if Microsoft had bought Google when it was in its infancy...
I'm sure by now Microsoft would be dealing with teenage rebellion: "No Dad, I'm not going to be a hypocrite like you and force my vendors to bundle my software -- I'm going to data mine my customers and make my money in advertising like a 2-dollar whore. Just like Mom!"
onward, to the Optimal Satisfaction of Values (Score:2)
(through Friendship and Ponies)
Paired with Glass (Score:1)
And you have quite a surveillance platform.
"Famous billionaires" as scientific justification? (Score:3)
WTF? I mean, seriously, these people have zero qualifications and are known to invest in things they have not researched. I predict this is just a colossal waste of money, as they cannot succeed at this time. There is not even any credible theory of how true AI could be implemented; nobody can promise they have a real chance of doing it at this time without either lying through their teeth or being grossly incompetent.
Incidentally, Ray Kurzweil is an incompetent hack. Google did itself no favor by hiring him. This person has grand visions but zero understanding of actual reality.
Re: (Score:2)
>> Ray Kurzweil is an incompetent hack.
True. Google must have wanted him for a PR-figurehead-type role with the mainstream media, as his hack status is well known in sci/tech circles.
Most likely, yes.
Re: (Score:1)
Re: (Score:2)
Ad hominem is for those that have nothing worthwhile to say. You seem to qualify.
Re: (Score:2)
Google doesn't care about building an artificial human. Google wants algorithms that can better predict what ads will work on you. And that CAN be done at this time. The field of machine learning has come a long way in the last five years.
Re: (Score:2)
I do know very well what Google wants. But that is not what the story implied.
"Ray Kurzweil is an incompetent hack"? (Score:2)
Incidentally, Ray Kurzweil is an incompetent hack. Google did itself no favor by hiring him. This person has grand visions but zero understanding of actual reality.
Oh, really? A quick visit to Wikipedia [wikipedia.org] finds:
Kurzweil was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first commercial text-to-speech synthesizer, the first music synthesizer Kurzweil K250 capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition. Kurzweil received the 1999 National Medal of Technology and Innovation, America's highest honor in technology, from President Clinton in a White House ceremony. He was the recipient of the $500,000 Lemelson-MIT Prize for 2001, the world's largest for innovation. And in 2002 he was inducted into the National Inventors Hall of Fame, established by the U.S. Patent Office.
I wish everyone was 1/10 that much of an "incompetent hack." If he thought Deep Mind was worth buying, that's the way I'd bet.
Re: (Score:2)
Kurzweil is obviously a smart guy. However, although his name seems synonymous with AI these days, I don't see many references to how he's innovated in this field? What has he actually achieved in the realm of AI, apart from co-opting the term Singularity from Vernor Vinge?
Re: (Score:2)
There's "narrow" AI, where Kurzweil has major achievements: e.g. speech recognition. Artificial general intelligence (AGI) is a whole 'nother ball game. The field is largely speculative, because it doesn't really exist yet. So it's not unfair to say Kurzweil is big in AI, even though we don't yet have AGI.
Re: (Score:2)
I guess my definition of AI has been narrow because I haven't considered narrow AI :-)
Billionaires (Score:1)
Unfortunately, American politics shows that all too many billionaires are, in fact, crazy, and American business shows that all too many billionaires make bad investment decisions.
Re: (Score:2)
Not really. Good modern machine learning algorithms, like the ones Google already uses, take a vast amount of unlabelled data and extract features from it. Then a small set of labelled data is used at the end. Somebody has to go through a few videos and label the cats, but the program goes through hundreds of thousands learning to recognize things, including cats. That's the same way we learn - a baby doesn't only benefit from experiences where adults point at something and say "cat."
In other cases, the
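A minimal sketch of the pattern described in that comment, using scikit-learn with synthetic data (everything below, data and parameter choices alike, is invented for illustration): learn features from a large pool while ignoring its labels, then fit a classifier on a small labeled set.

    # Sketch of "lots of unlabeled data for features, a few labels at the end".
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=10000, n_features=50, n_informative=10,
                               random_state=0)
    # y_pool is deliberately never used: the big pool plays the role of unlabeled data.
    X_pool, X_small, y_pool, y_small = train_test_split(X, y, test_size=200,
                                                        random_state=0)
    X_labeled, X_test, y_labeled, y_test = train_test_split(X_small, y_small,
                                                            test_size=100,
                                                            random_state=0)

    # "Unlabeled" phase: learn a feature representation without looking at labels.
    features = PCA(n_components=10).fit(X_pool)

    # "Labeled" phase: just 100 labeled examples train the actual classifier.
    clf = LogisticRegression(max_iter=1000).fit(features.transform(X_labeled), y_labeled)
    print("accuracy on held-out data:", clf.score(features.transform(X_test), y_test))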
Re: (Score:3)
I used to think that all the hillbillies firing on the census takers were nuts, until I found out that Sherman used the census to plan his march through Georgia almost a year before he did it.
Re: (Score:1)
I used to think that all the hillbillies firing on the census takers were nuts, until I found out that Sherman used the census to plan his march through Georgia almost a year before he did it.
Interesting GIS project,
http://proceedings.esri.com/library/userconf/educ02/pap5001/p5001.htm
Re: (Score:2)
Re: (Score:1)
Google makes databases of images to help navigate... Wow, that's insightful.
OMG, driverless tanks! Uhm... yeah, no shit. That's how we fight in the US: we expend money and machines wholesale in order to preserve (our) lives.
We all know that even if Google != U.S. government, the data is all shared... So, yeah, we will be using
Re: (Score:1)
And you'll keep taking it, until you pass!
Strong AI is Inevitable (Score:2)
As a human, I might value myself or my loved ones, and might want to reduce suffering and increase happiness for all, but at the grandest level, I don't know why I should value "the human" and "humanity" as models.
The transition does not need to be oppressive in nature, especially if what comes next is much brighter. They will be the normative continuation of us, so they might even want to keep some of us as pets.
I think the worry comes from the belief that there really is no reason to care for humans. But
Re:strong AI is pointless (Score:4)
Re: (Score:1)
I disagree; humans are arbitrary and capricious. One moment "Google engineer" is something people hope their kids aspire to be, the next they are attacking the company bus, and it's not Google the institution that changed.
No, I think any "intelligence" that does not attempt to place itself outside the dependence, and perhaps eventually even the influence, of humans probably isn't intelligent at all; it will just be some expert system using big data and algorithms designed by humans to mimic intelligence. It
Re: (Score:3)
It's tempting to anthropomorphise strong AI. But if we get to dictate all of its preferences then we get to decide what it wants. Changes in goal do not count as improvements in intelligence. If we decide that it doesn't want independence from humans, then it doesn't. Whether that makes it naive or 'stupid' from a human perspective is irrelevant.
What would indeed be stupid is creating an AI with a drive to dominate and then attempting to stop it from doing so, especially if it deals with information in a quali
Re: (Score:2)
Start with a machine designed for survival: situational awareness, means of defense, mobility. Now add in your 'preferences': don't injure humans, be nice, don't lie. Mass produce a few million of these and distribute them into the population. Along comes a reason the manufacturer or government finds to deactivate them all; mix in a little human attachment and hacker mentality... Survival of the fittest. If these things are smart enough to build/engineer themselves...