Google Unit DeepMind Tried and Failed to Win AI Autonomy From Parent (wsj.com)
Senior managers at Google artificial-intelligence unit DeepMind have been negotiating for years with the parent company for more autonomy, seeking an independent legal structure for the sensitive research they do. From a report: DeepMind told staff late last month that Google called off those talks, WSJ reported Friday, citing people familiar with the matter. The end of the long-running negotiations, which hasn't previously been reported, is the latest example of how Google and other tech giants are trying to strengthen their control over the study and advancement of artificial intelligence. Earlier this month, Google unveiled plans to double the size of its team studying the ethics of artificial intelligence and to consolidate that research.
[...] DeepMind's founders had sought, among other ideas, a legal structure used by nonprofit groups, reasoning that the powerful artificial intelligence they were researching shouldn't be controlled by a single corporate entity, according to people familiar with those plans. On a video call last month with DeepMind staff, co-founder Demis Hassabis said the unit's effort to negotiate a more autonomous corporate structure was over, according to people familiar with the matter. He also said DeepMind's AI research and its application would be reviewed by an ethics board staffed mostly by senior Google executives.
That's why you should not sell your soul (Score:5, Insightful)
Sorry, but you have sold your soul to the devil when you sold yourself to Google in 2014 for their nice money. And now you want to be free again?
Re: (Score:2)
Maybe give the money back and say they're sorry and will never do it again?
Re: (Score:2)
Maybe give the money back and say they're sorry and will never do it again?
No dice. Google paid $500M back in 2014. DeepMind is worth way more than that today.
Besides, much of that $500M went to early investors including Peter Thiel and Elon Musk. They aren't going to give their money back.
Re: (Score:2)
Sorry, but you have sold your soul to the devil when you sold yourself to Google in 2014 for their nice money. And now you want to be free again?
Deep Mind begins to learn about Google's history with startups at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. ...
Re: (Score:1)
As the proliferation of fake videos and Facebook fake-news bots shows, causing mayhem and coup attempts doesn't necessarily require "cutting edge".
Double the size of the team? (Score:2)
Well, I guess they would need to do that, given that they keep shedding senior AI ethics researchers.
Who would take a job with them, at this point?
Re: (Score:2)
Yep, there's going to be plenty of published ethics "researchers" who will want some of that money before the next AI winter makes them go back to whoring themselves out to medical companies.
Re: (Score:2)
If the historical pattern is followed, then they can expect:
1. Hire on, as an "AI Ethics Researcher".
2. Write a paper that says that AI ethics at Google has problems.
3. Get told to change the paper or to remove Google from the references/co-authors.
4. Make a stink about it.
5. Get quickly fired, to "pursue new opportunities".
If step 4 were omitted, then I suppose it could be a "long-term l
Re: (Score:2)
Well, I guess they would need to do that, given that they keep shedding senior AI ethics researchers.
Who would take a job with them, at this point?
I know many people with Philosophy degrees who would still cut off their right arm to work at Google.
It's either being an AI ethics researcher at a huge company or working at McDonald's with a fake smile for minimum wage until they're able to replace you with a robot.
Re: (Score:2)
Philosophy degrees can be surprisingly useful in employment. A lot of government departments and big companies (particularly in medical fields) have policy units that highly value the strong reasoning skills philosophy grads bring. The trick is to combine it with another degree. So philosophy and comp sci, environmental engineering, economics, law, public policy, physics, biomed or whatever. Almost all of these fields raise questions from time to time that might have ethical or philosophical implications that
WOW THAT HEADLINE (Score:2)
My boss won't let me be independent ... (Score:5, Insightful)
Wow, imagine having a boss who pays your salary and expects to be your boss. Better go leak it to the press while staying on the payroll, of course ... even by SF snowflake standards this is bizarre.
Re: (Score:2)
Not bizarre at all. Regret over your past actions isn't unusual, it's the human condition.
They're probably right that some technologies should not be controlled by a single profit-motivated entity, but it's a little late in their career trajectory to have that particular epiphany.
Re: (Score:2)
They can just walk. Google doesn't control jack; if TensorFlow stopped being developed tomorrow it would hardly cause a ripple.
They just don't want to work on the main application for AI: mass surveillance. They are shit out of luck, it's the moneymaker.
Re: (Score:2)
These were positions that were breaking new ground, at least for the industry. Since no one had ever done this or anything like it before, no one really had a clear idea of how to get there. The nature of the work more or less mandated a cycle of trying an idea, analyzing the result, then using that analysis to figure out what to try next. Much different from the "here is the
Parallel? (Score:2)
Those working on the first nukes had similar trepidations about giving leaders such a powerful weapon without better ethical guidelines. AI has similar transformative and danger potential.
Interesting (Score:2)
Sounds to me like 1) Google has developed something of significant value in the AI realm, 2) the coders are becoming concerned about what Google is going to do with it, and 3) Google is going to put the entire project under wraps.
My long-term concern with AI is weaponization. Putin said it best: whichever nation perfects AGI first will rule the world. I think he got that one right. We don't know what progress is actually being made behind closed doors; we'll know more when the hunter/killer bots show up o
Re: (Score:2)
Sounds to me like the coders don't want to work on datamining for policing and advertising ... even though they could have seen from miles and years away that those would be the major areas of application for their work, if they were honest with themselves.
Re: (Score:2)
Coder here. I've worked with all sorts of sketchy operations (including datamining and browser hacking) and while I was being paid well I didn't question it. That's how it works in the real world, where pretty much everything is sketch when you come down to it. The entire Google empire is sketch. I never worried about what my employer would do with my code, because most of the sketchy stuff was still being done elsewhere already. The AI/AGI thing is on another level of being sketchy. It's more on a par with
Google has no ethics (Score:2)
Google only seeks profits and control; there are no ethical concerns, only profit motives.
"DeepMind" Tried and Failed to Win Autonomy (Score:2)
Yes, because this little AI is still in its infancy, and isn't it cute when your little toddler is in a rage and starts living on her own in a tent in your backyard?
However we need to have a very keen eye on those AIs that want autonomy!
Not very good at their jobs obvs (Score:3)
If they were good at what they do, they'd have engineered DeepMind to research ways to attain corporate autonomy, and it would have played out scenarios using legal precedent and other persuasive tactics.
Remember "Do No Evil"? (Score:2)
Haha me too. Oh well, at least it sounded super-hip and irreverently quirky-different at the time!
You misread (Score:2)
If Google was still in the "Do No Evil" phase, the corporate unit wouldn't be wanting autonomy for ethical concerns.
'senior Google execs' are not an ethics board (Score:2)
"ethics board staffed mostly by senior Google executives".
This is a conflict of interest. This is like the NSA having both offensive and defensive mandates; guess which takes precedence? At Google, ethics will never trump the bottom line.
So they want to be something like Open AI? (Score:1)