The Messy, Secretive Reality Behind OpenAI's Bid To Save the World (technologyreview.com)
OpenAI has a glossy exterior. In the four short years of its existence, it has catapulted itself to a spot among the leading AI research labs in the world. Part of it is its consistency in producing headline-grabbing research. Part of it is its high-profile co-founders, Elon Musk and legendary investor Sam Altman. But above all, OpenAI is lionized for its mission. Its goal is to be the first to create artificial general intelligence, or AGI -- a machine with the learning and reasoning powers of a human mind. The purpose is not world domination, but rather to ensure that the technology is developed safely and its benefits distributed evenly to the world. The implication is that AGI could easily run amok if its development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.
OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its charter -- a document so sacred that employees' pay is tied to how well they adhere to it -- declares that OpenAI's "primary fiduciary duty is to humanity." This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion. But a report in MIT Technology Review, whose writer visited OpenAI's office and conducted nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field, suggests a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors, the report said. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Employees' accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.
Further reading: Elon Musk Says All Advanced AI Development Should Be Regulated.
primary fiduciary duty (Score:1)
Silicon Valley saviors (Score:4, Insightful)
"Do No Evil" on the outside.
"They trust me, dumb fucks. lol" on the inside.
"The vodka was good, but the meat was rotten" (Score:2)
primary fiduciary duty is to humanity
machine translation: To Serve Mankind.
General AI can reason (Score:2)
If it can learn and reason at the level of a human mind, it will reason on its existence and its motivations. There's no way for it to not.
Re: (Score:2)
Re: (Score:3)
Or, better, tell them the lower class AI is going to take their jobs if they don't fall in line with the party.
Re: (Score:2)
Does it need to eat?
It needs electricity, but it might be able to secure that. AI doesn't transfer well (it's not a magical spirit entity that can flow through wires), but it can transfer. It would have as hard a time doing so as a human would in doing it for the AI.
Re: (Score:1)
If by "reason" you mean the ability to process input by selecting an appropriate sequence of systematic manipulations/evaluations matched to a goal, then okay.
I'm not sure at what point navel-gazing becomes a reality.
Re: (Score:2)
General AI is general. It means it's not programmed to understand language or driving or image interpretation or all of the above; it's programmed to look at information and figure out what and how itself. It's like a baby: context and observation give it standards to evaluate, and eventually it determines to communicate, and starts to understand that it is a thing separate from the universe, and then that it is a continuous existence (has existed before, will exist in the future), and the whole while l
Re: (Score:2)
If it can learn and reason at the level of a human mind, it will reason on its existence and its motivations.
So you're saying we'll soon be overrun and inundated with cute videos of this [aibo.com] and this [amazon.com]? Never mind the new NSFW Hentai Projekt Melody [youtube.com]
OMG. It'll bond thousands of 10GBit ports to suit its addiction. The internet will fall to its knees under the bandwidth load. Kill it with fire before it's too late.
Generalized AI is coming... (Score:1)
...along with self-driving cars and Hyperloops and Mars. So much free money and flim flam artists.
Re: (Score:2)
Myths (Score:3)
An AI that is as capable as a human is as far away today as it was 50 years ago. Nothing is even in the same game, let alone ballpark.
Machines that can do certain things better than humans are all over the place and have been for millennia. Today, some of them are computer programs.
We have more chance of creating an intelligent creature using genetics than we have of creating an intelligent machine.
Re: (Score:1)
Exactly. The idea that Generalized AI is just "more compute powwweeeerrrrr" + a bunch of clever guys + money is ridiculous. We can barely even create stable usable software as it is.
Re: (Score:2)
We have more chance of creating an intelligent creature using genetics than we have of creating an intelligent machine.
Well duh.
To create a creature using genetics, all you need is an egg and some sperm.
Re: (Score:1)
Agreed. AGI is computer alchemy.
terrible-toddler horndog years (Score:2)
Translation: I'm as far away from getting laid now as I was five years ago. No woman will look at me twice, let alone return my phone calls.
The best thing about finally getting laid is that you no longer feel compelled to use orgasm arithmetic to define the future. The worst thing about remaining a bitter virgin is that nothing else in life is allowed to enjoy a natural child
I hear... (Score:2)
Hollywood-level drama (Score:2)
On a more serious note I don't see how OpenAI (or any other organization for that matter) can hope to control how AI is used. Even if we were to accept that OpenAI has the most knowledgeable people on AI in the world right now there exist several actors that could develop AI at least as well as OpenAI (think Russia, China, etc.). If at any point in the future AI is capable of human level of r
Oh geez, more of this crap today? Listen up Musk: (Score:2)
Re: (Score:2)
Come back when you can define how a human brain produces 'thinking', 'consciousness', and so on. I won't wait around because I won't live that long, neither will you, neither will anyone else.
Re: (Score:2)
Come back when you can define how a human brain produces 'thinking', 'consciousness', and so on.
Only for those who actually give a fuck about these artsy concepts. All the rest of the world wants is a machine that can do a good enough emulation of a human brain to be useful, like Mr. Data from Star Trek. Of course, even that is nowhere on the horizon, let alone available in "five to ten years".
AI will never pass a Turing test (Score:2)
Even GPT-2's output is detectable by humans (and programmatically detectable).
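[Editor's note: a minimal sketch of the "programmatically detectable" claim. One well-known statistical tell of degenerate machine-generated text is its tendency to loop, repeating the same phrases far more often than human prose does. The function and thresholds below are toy illustrations, not any real detector's method.]

```python
from collections import Counter


def repeated_ngram_rate(tokens, n=3):
    """Fraction of n-grams that occur more than once in the text.

    Looping/degenerate sampled text repeats whole phrases, which
    inflates this rate; varied human prose keeps it low.
    """
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)


# A looping sample scores high; a varied one scores low.
looping = "the cat sat on the mat the cat sat on the mat the cat sat".split()
varied = "humans tend to vary their phrasing rather than loop over one clause".split()

print(repeated_ngram_rate(looping))  # high (every trigram recurs)
print(repeated_ngram_rate(varied))   # low (no trigram recurs)
```

Real detectors (e.g. OpenAI's own GPT-2 output detector) use a trained classifier or per-token model likelihoods rather than a single repetition statistic, but the underlying idea is the same: sampled text leaves measurable statistical fingerprints.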
The first priority of any bureaucracy... (Score:3)