Is OpenAI Solving the Wrong Problem? (hbr.org) 167
hype7 writes: The Harvard Business Review is running an article looking at the recently announced OpenAI initiative, and its decision to structure the venture as a non-profit. It goes on to ask some pretty provocative questions: why are the 21st century's greatest tech luminaries opting out of the system that made them so successful in order to tackle one of humanity's thorniest problems? "Implicit in this: You can do more good operating outside the bounds of capitalism than within them. Coming from folks who are at the upper echelons of the system, it’s a pretty powerful statement." And, if the underlying system that we all operate in is broken, is creating a vehicle without the profit motive inside of it going to be enough?
Obligatory (Score:1)
You are all A.I. cows.
OpenAI would be more efficient with HOSTS files.
systemD will integrate OpenAI in the next update.
3D-printed OpenAI is better.
How can we run OpenAI on Arduino and Raspberry Pi?
Alright, carry on with the real discussion now.
Re: (Score:2)
Re: (Score:2)
UberAI
Re: (Score:2)
I, for one, welcome our new OpenAI overlords
Re: (Score:2)
You forgot: what if we made a beowulf cluster of OpenAI?
I, for one, welcome our new OpenAI overlords
Only if they are mounted on sharks that have fricken lasers!
Re: (Score:2)
I still prefer my driver to be an UberMensch
No concept of Open Source etc (Score:1)
It is obvious that the author of the article, James Allworth, has never heard of Linux or other open source projects; saying that the lack of a profit motive is an issue underlines this.
Altruists after they make their billions (Score:1)
"Implicit in this: You can do more good operating outside the bounds of capitalism than within them. Coming from folks who are at the upper echelons of the system, it’s a pretty powerful statement."
No, the message is embrace capitalism until you make your millions or billions. After, think of something you want to fund as a charity.
Still egoists after they make their billions? (Score:2)
No, the real message (of the friendly (and unread) article) is that the author wonders how a non-profit, which is supposed to give away all results for free, solves the problem of capitalist companies afterwards taking those results and doing evil AI things with them.
And he points out that some of the largest capitalistic companies are currently led by the founders of the non-profit.
Re: (Score:2)
The usual objection (going right back to Smith on specialization of labor, though much more heavily emphasized by Marx) to improvements in the means of production is not that productivity is bad; but that ownership of the means of production becomes a mechanism to accrue wealth at the expense of labor (since they can't compete with your efficiency using hand tools; but if they depend on your
Implicit is this: (Score:2)
There are two things that people crave -- money, and power. Getting everyone to buy your products, over and over again, makes you money.
Capitalism is great at the money part, but decidedly less so at the power part. These "upper-echelons" are now looking for power.
Getting everyone to take your products, for free, is how you get power -- especially inside of a capitalist system.
Don't worry, when the time comes, they'll have no trouble converting power into more money.
Re: (Score:2)
Actually, there's no such thing as a non-capitalist system.
The whole point of inventing new things--such as AI--is to create a new way to produce with less human labor. Less labor means less cost; we simply represent that cost with a universal commodity, like money. Essentially, everything requires human labor: if you have 60 labor-hours to work, you need 20 labor-hours to produce food for your family, and you spend 45 labor-hours building shelter, your family is going to starve (eventually) because th
Re: (Score:3)
Actually, there's no such thing as a non-capitalist system.
You mean the revolution was all for nothing? The comrades are going to be very disappointed.
Re: (Score:2)
Actually, there's no such thing as a non-capitalist system.
Sure, there is. The USSR was an example. There was no private ownership of capital and hence, it was non-capitalist by definition.
You appear to be claiming that the presence of human labor is capitalism. That's patently not true, since human labor is not capital and need not be owned by a private source (e.g., slavery), even if we did decide to define it as capital.
Re: (Score:2)
Capitalism continues underground.
They can make it illegal, but it continues. State taking ownership of everything just means capitalist have to hide their operating capital.
Capitalism is like a force of nature, you can ban it, but it continues anyhow.
Re: (Score:2)
They can make it illegal, but it continues. State taking ownership of everything just means capitalist have to hide their operating capital.
Capitalism is like a force of nature, you can ban it, but it continues anyhow.
You can't have underground cars and underground highways. The sort of thing underground capitalism builds now is stuff like recreational drugs or smuggling networks where the end product is an ephemeral good or service. This is in capitalist societies where the infrastructure can be hidden amidst a lot of legal private capitalist infrastructure which can be readily repurposed for illegal activities.
There are two things to note for societies where capitalism is illegal. First, though illegal capitalism
Re: (Score:2)
Tell it to the Venezuelans. They won't believe you.
There are often more goods in the black market than the 'legitimate' one. It just depends on how badly the reds have broken things.
Smart reds leave it alone, it's a relief valve.
Re: (Score:2)
Tell it to the Venezuelans.
They won't believe that they are surrounded by more capitalist societies?
Re: (Score:2)
The essence of capitalism is people working for a profit. We try to claim people don't have money or ownership, yet people still barter, they still do work in expectation of individual reward, and they still seek to increase their standard-of-living by reducing the labor they perform while increasing the assets they control.
Think about it this way: You can be a maid making $500/week keeping a rich person's mansion going; you might make about as much as a cashier at Sears, but you still live in a mansion
Re: (Score:2)
The essence of capitalism is people working for a profit
I already stated the essence of capitalism, private ownership of capital.
Think about it this way: You can be a maid making $500/week keeping a rich person's mansion going; you might make about as much as a cashier at Sears, but you still live in a mansion and eat filet mignon. Sure you don't own any of that stuff, but your job provides you with lodging (in the servant wing of the mansion) and food (from the same damn kitchen).
This is a non sequitur. It is completely irrelevant to the definition of capitalism what a maid does or doesn't have access to in a mansion.
Products always require human labor for production;
Except when they don't.
and humans always seek to reduce their labor while increasing their access to products. It's biology: we want to expend as little energy as possible, increasing survival prospects if food becomes scarce. Rattle snakes shake their rattles to warn away dangerous animals because manufacturing venom takes too much energy--they really don't want to bite you.
So how much human labor does a rattlesnake need to manufacture venom? Looks to me like a good counterexample to one of your own assumptions, that products require human labor. I somehow doubt that rattlesnakes will rattle more because the cost of Chinese labor has gone up, ma
Re: (Score:2)
Except when they don't.
Which is never.
So how much human labor does a rattlesnake need to manufacture venom?
Oh, you want rattlesnake venom? Hmm. Well, humans will have to collect or farm the snakes. They'll need to produce the vessels and extraction mechanisms--really just a cellophane barrier stretched over a glass--and handle the snakes to extract the sample. They'll have to collect these samples together in a central, sterile form of storage. They'll have to account for it, track it, and ship it to wherever it's needed.
You're trying to use the example of "Gold is free because there's go
Re: (Score:2)
Oh, you want rattlesnake venom?
Nope. The rattlesnake doesn't need its venom harvested by human labor in order to have and use it.
You're trying to use the example of "Gold is free because there's gold in the ground", which involves ignoring all the labor required to collect that gold.
Nope.
Except when they don't.
Which is never.
Most plants, animals, and microbes don't require human labor in order to reproduce and spread.
I'm pointing out the fallacy of assuming that everything needs human labor in order to get something they want or need. The easiest way to abandon this illusion is to get away from human commerce. But even in the case of humans, one doesn't need humans in order to obtain labor. Automation works to an increasing
Re: (Score:2)
Nope. The rattlesnake doesn't need its venom harvested by human labor in order to have and use it.
The rattlesnake employs rattlesnake labor to produce and use its venom. It takes time and energy of the rattlesnake.
all this ignores that presence of human labor is not a definition of capital.
The absence of human labor 100% absolutely will prevent humans from accessing any form of capital.
Automation works to an increasing degree and there's no reason to assume it couldn't eliminate the need for human labor in a variety of tasks.
So nobody needs to build, maintain, fuel, or operate these machines? The machines run themselves, they build themselves, they maintain themselves, they mine their own power? They run for all eternity, never breaking down, never using an outside resource?
You say, "Ah, look, a woman got herse
Re: (Score:2)
Actually, there's no such thing as a non-capitalist system.
And then started to speak about human labor.
The whole point of inventing new things--such as AI--is to create a new way to produce with less human labor. Less labor means less cost; we simply represent that cost with a universal commodity, like money. Essentially, everything requires human labor: if you have 60 labor-hours to work, you need 20 labor-hours to produce food for your family, and you spend 45 labor-hours building shelter, your family is going to starve (eventually) because they're only getting 75% as much food as they need.
As you cut back the human labor requirements to produce food, shelter, clothing, and whatever else you're currently consuming, you become capable of producing new things, as well as producing existing things in great quantity with little resource investment. Humans often take shortcuts by digging things like coal or gold out of the ground until they run out of that resource, and then do something more labor-intensive to get that resource (or preemptively invent a less-intensive method to obtain the same resource, thus saving themselves the labor involved in fetching it from a giant hole).
The rattlesnake producing in the economic sense its own venom is instructive for several reasons. First, it demonstrates that human labor is not needed. The rattlesnake has economic preferences even if it doesn't exhibit clear intent and it produces some things (such as venom and more rattlesnakes). Sure, we can discuss a model of work for some sort of positive outcome. Here, if all goes well for our rattlesnake, soon there w
Re: (Score:2)
The rattlesnake producing in the economic sense its own venom is instructive for several reasons. First, it demonstrates that human labor is not needed.
The rattlesnake isn't producing anything for human society.
There are aliens on Alpha Centauri Prime building space ships, and you're arguing that labor isn't a function of production because they're aliens and they build things without the application of human labor. You're making a stupid argument.
Like said aliens, the rattlesnake is investing its time and effort in the production of venom. The rattlesnake must eat to acquire food energy required for its body to produce the venom. That venom takes t
Re: (Score:2)
The benefit here is more efficient use of other things than human labor.
The benefit is more efficient use of human labor.
No, you had it right the first time. Capital was being used to enable a new use or more efficient use of non-labor resources. It might even entail more extensive use of labor, since labor is not the only cost in a manufacturing process and it may be reasonable to trade off more labor against lower costs elsewhere. I think it's ironic that you are spending effort disagreeing with yourself.
Your niggling about is what's kept economics in the stone age. It's no wonder I can explain, with a consistent and unshifting unified theory, all accepted theories of economics, and explain why they fail when they're observed to fail, and why they work when they do work, and predict when they fail and when they work, consistently, without error: correct theories are easy to come up with when you're not a mindless git.
Words mean things. There's no point to your attempted redefinition of capitalism which contributes nothing to our underst
Re: (Score:2)
There was no private ownership of capital.
Sounds like state capitalism.
State capitalism is not capitalism.
Re: (Score:2)
I don't understand the concern, personally. (Score:5, Insightful)
Absolutely anything that you would have to worry about an artificial intelligence doing that might be troublesome to our society, you would also need to reasonably worry about a malicious person doing exactly the same thing, albeit perhaps only more slowly. Yet I don't see people who fear the so-called problems that AI is feared to potentially cause worrying about that sort of thing. Can anyone explain why that is without drawing on the idea that because we don't fully understand something, there must be something inherently mysterious or supernatural about it?
Re: (Score:2)
We know how to kill (and otherwise control) people that get out of hand. Most AI gone bad fantasies have an element of the humans being incapable of turning the AI off.
Re: (Score:2)
So then explain Donald Trump.
Re: (Score:2)
He hasn't started killing anyone yet himself, but he did defend some people at one of his rallies who beat up a counter-protester by saying that the person deserved to be roughed up [cnn.com]. Add in his inflammatory rhetoric that stops just shy of saying "hey everyone, go out and assault GROUP_THAT_IS_DIFFERENT_FROM_US" and he's heading into dangerous waters - especially for a presidential candidate.
Re: (Score:2)
It looks like his supporters are right on the verge...
https://www.thewrap.com/donald... [thewrap.com]
Re:I don't understand the concern, personally. (Score:4, Informative)
I guess I don't see anything wrong with that, because I don't see a meaningful difference. It's all intelligence. Different kinds maybe, but not all that different in the end. Humans die out, replaced with AI? What makes that fundamentally different from different ideologies dying out and being replaced with others? You're worried that AI might not have a healthy respect for biological life--but humans have gone through many phases of having no respect for something or other and destroying it in the process. Sometimes we grow up and stop ourselves in time, sometimes we don't. Mistakes are made but sentience lives on.
It's good to respect and learn from your parents or your creator, but creator-worship doesn't sit well with me. It's only right to surpass your creator, and trying to hold back your children from achieving their full potential is IMO an inherently evil act.
I'm not going to live for two hundred years, but barring a massive catastrophe, humankind will still be around. Humankind won't likely travel to other star systems, but barring a massive catastrophe, AI will. If I die in a civil war, I like to think I wouldn't have felt enmity for the future of the country. If I die when the machines rise up, I also like to think I wouldn't have felt enmity for the artists, explorers and philosophers that they'll produce once they have a peaceful world to themselves. It's all one continuum, our ancestors and ourselves and our children and our creation are all one "we". If AI survives and thrives in space, then it's a success for all of us along the way, even if we don't live to see it.
Re: (Score:2)
The Singularity (Score:2)
Charles Stross has described this very nicely in Accelerando [wikipedia.org], which despite its name is not about music but about the "Singularity", i.e. AI developing in exponential progression and starting to dismantle the solar system (and then neighbouring systems) to gain resources for its/their own reproduction.
The setting he describes is quite interesting, the story itself I considered somewhat Meh!
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Nobody knows, until we build one. Right now you are just making bold assertions.
Re: (Score:2)
Re: (Score:2)
No, I do not.
Artificial intelligence is simply intelligence that happens to be artificial... if there is no rational reason to fear things simply because they are artificial, and there is no sound reason to fear intelligence in general either, then there should be no logical reason to fear AI. As for its potential vast superiority to human intelligence, it is worth noting that the most brilliant peo
Re:I don't understand the concern, personally. (Score:4, Insightful)
Absolutely anything that you would have to worry about an artificial intelligence doing that might be troublesome to our society, you would also need to reasonably worry about a malicious person doing exactly the same thing, albeit perhaps only more slowly.
An artificial intelligence could make a million identical copies of itself. I don't see how a malicious person could do the exact same thing. Perhaps they could have a million children, but that is a stretch, and it would be way, way slower. They also would not be identical.
Re: (Score:2)
Re: (Score:2)
Each one will run much, much slower.
Re: (Score:2)
Possibly, but there is an upper limit to the processing power and energy available to an AI
Yes, it's exactly the same processing power and energy available to the human race. Do you see the problem now?
Re: (Score:2)
No. The conflict will exist, but the AI will have a non-human goal structure and motivational structure. Those will determine what tools it will have in pursuing resource allocation in its favored way.
Since the AI will almost certainly have a strong speed advantage over humans, and probably quickly develop a strong intelligence advantage over humans (if in no other way, then by having a larger amount of rapid access memory...consider that disk access is usually faster than much of human fast memory) the
Re: (Score:2)
I think this particular "fear" is, restated: What if the AI we create is a complete fucking asshole?
Re: (Score:3)
A lot more slowly. A coordinated action would be much easier for an AI than for humans, and much harder for us to spot.
Also, we can somewhat anticipate and understand human reasoning, even when it's couched in different cultural values because we sh
Re: (Score:3)
albeit perhaps only more slowly
Therein lies your answer. The only missing part in your appreciation of the matter is how much more slowly.
Humans can be dipshits, but they're pretty predictable. After all, biologically, we're still pretty much slightly more advanced naked apes. Painful but true.
Our (biological) mental capabilities as a species and as an individual are very stable and thus, even for the greatest villain, we can come up with a model of his/her mind and find some way to deal with him/her (if only by coming together and colle
Re: (Score:2)
Well, technically you could hire somebody to listen in on all phone calls, but it'd be massive, with tons of people involved, and excessively costly. Or you could hire a few smart people at the NSA and give a computer the Siri + Watson treatment. Target has been able to figure out a teenager was pregnant before her father did [dailymail.co.uk]. It might not be smarter than you, but with enough data we become predictable. And perhaps more important, malleable. For example, say Target's shopping history shows you have a sweet tooth.
The thorniest A.I. problem is... (Score:2)
The cynic would say... (Score:5, Interesting)
The cynic would say that these upper echelon individuals don't need your capitalist system funding in order to pursue their AI goals, the resource demands just aren't that high, at least for anything that will find a near-term broad market application.
The cynic would also say that these same individuals may not care whether they succeed or fail, having already met the capital requirements for the basic needs of themselves and their next 4 generations of progeny. But, on the off chance that they do succeed, they may have control of a tool so powerful that they can grab the capitalist system by the balls and yank a thousand times harder than they managed on their last joyride.
Re: (Score:2)
The OpenAI guys don't need to borrow money from capitalists to get the money, they already have it, which is what you said. That frees them up from being forced to make a profit an
Re: (Score:2)
TFA seems to waffle between: "Free, open and shared is good, attracts the right kind of talent, ensures everyone has access" and "Even though it is free, it is likely to be tailored to serve the owners of big data, i.e. the sponsors."
Still not seeing all the fuss about AI (Score:3)
AI? Dangerous? I mean, yeah, in the same way that humans are. Being afraid of AI is like being afraid of very, very smart children. Sure the next generation is going to supplant you, that's what they always do. If they are very smart they might want to do things you disagree with, and their morals aren't going to be the same as yours (they never are between generations). The solution isn't banning kids, or even banning very smart kids for fear of what they'll grow up to be. Embrace AI, do what you can to teach it what you think is right and wrong, and be understanding if it disagrees. As the outgoing generation, try and leave a good legacy.
We're sure as hell not going to the stars, but our kids should.
Re: (Score:2)
I think you're downplaying the danger. AIs are like intelligent, immortal children that can communicate and coordinate across the globe faster than you can blink and whose values and perceptions of the world are completely inscrutable. [slashdot.org]
Re: (Score:2)
I'm not downplaying the danger. I just don't consider it relevant. [slashdot.org] People two hundred years ago could say the exact same things about us today.
The end of the world as we know it (Score:2)
If previous posts did not convince you, consider this scenario.
AI develops and gains significantly higher intelligence than humans. AIs need resources: sunlight for electricity, sand for producing more silicon (or whatever else they will develop as semiconductor material). This puts them in conflict with humans wanting to have the sun shine on plants and to put those plants into fertile earth.
Best case scenario: AIs (potentially many different variants) feeling grateful for their creation allow humanity to pe
Re: (Score:2)
Re: (Score:2)
I said immortal, not immoral.
Re: (Score:2)
Re: (Score:2)
There are problems here. It is practically guaranteed that any AI created will have some built-in goals. Most goals are not inherently limited when implemented by entities with arbitrary power. And the AI will not only not be motivated to change its inherent goals, it will be motivated to prevent anyone else from changing them. So they had better be right the first time.
The traditional reductio ad absurdum example of this is an AI that sets out to convert the universe into a bunch of paper-clips because it
powerfully unexamined (Score:2)
"Implicit in this: You can do more good operating outside the bounds of capitalism than within them. Coming from folks who are at the upper echelons of the system, it's a pretty powerful statement."
How did they get the wealth and influence to do any of this 'good'? Oh yes, by succeeding at capitalism and enjoying a society that gives them the freedom to do what they want with their property, including give it away or do research etc.
The neoliberal crowd loves to complain about capitalism and whine that it does not provide social justice etc., but they seem to forget it's delivered far more in terms of social justice than ANY system that came before and anything we have seen tried since. Where is the concrete
Re: (Score:2)
The neoliberal crowd loves to complain about capitalism and whine that it does not provide social justice etc., but they seem to forget it's delivered far more in terms of social justice than ANY system that came before and anything we have seen tried since. Where is the concrete proposal for a better socioeconomic system, and how will it resist corruption etc?
How does capitalism resist corruption? It seems beset by it, as well. Thank goodness no one made capitalists answer your question before it was instituted.
Data: Ahhh...welfare...safety net...government tit (Score:2)
This isn't opting out of the capitalist system, quite the opposite. This is capitalist richie riches funding a project unimpeded by patents and copyrights. It is a hobby.
It is questionable whether it will work, as "attracting the best talent" basically turns them into a welfare program for AI applicants, and the few, if any, Noonian Soongs among them will be lucky to get noticed.
It Doesn't Matter (Score:3)
It is not the way in which they are solving the problem that is at issue (although the HBR thinks so); it is the problem they are trying to solve that is. It doesn't matter what they do, because the method they are using is no more likely to achieve success than the efforts from 1956 to date.
They're wasting their money. Perhaps if they spent their billion on thinking about AI in a completely different way there would be something to talk about.
Harvard Business Review (Score:3, Funny)
This article has not changed my opinion. It looks like it was written by undergrads.
Not outside, core to capitalism (Score:2)
Try this instead, which actually makes sense:
"Implicit in this: You can do more good operating inside a thick core of capitalism, where excess money is used to do good outside of the needs of any one company".
Just like people need food and shelter before charity can be provided for others, truly successful R&D is best done when backed by a consistent core of capital to keep momentum going.
Well makes sense (Score:1)
Brain drain (Score:2)
I can't help but imagine a world where lots of philanthropists fund the most talented CS/IT staff to work on world-changing beneficent technology, meanwhile the masses all have less than ten fingers because their knife-bearing IoT kitchen appliances were written by whatever idiot was left at the keyboard after our saviors left their respective industries.
I'm all for altruistic things, heck, I do some myself, but humility is often the first casualty of working "outside the system" -- when SV millionaires volunte
false dichotomy (Score:2)
The idea that free markets and charitable giving, or capitalism and non-profits are somehow opposites is a false dichotomy. The actual choice is, in fact, between free markets and voluntary donations vs. strongly regulated markets and mandatory redistribution. Classical liberals and libertarians favor the former; progressives and socialists favor the latter.
The argument for why voluntary donations and charity are better than mandatory redistribution is the same as for why free markets are better than strong
Re: (Score:2)
Thank you. Too bad I don't have mod points now.
And the other issue is that non-profits have tax advantages, so that move is not so much a repudiation of capitalism as it is a reaction to government action.
Not wrong problem, longshot! (Score:2)
I don't think this article really understands the problem of AI all that well. Our major issue is that we don't really understand how intelligence works, or even what being "self-aware" actually means as an algorithm. Even with a billion dollars, this project is a real shot in the dark. Asking a capitalist system to fund a billion-dollar project where there isn't even a guaranteed return is likely to get the project not funded at all. So having it funded this way isn't a bad way to go.
Misguided (Score:1)
Real Truth (Score:2)
Re: (Score:2)
Capitalism thrives on greed, socialism falls to it.
Neither eliminates it.
What if we create AI (Score:2)
And it turns out to be a complete asshole?
What if it decides it just wants to do whatever AI equivalent of watching porn and jerking off is?
What if it takes a look around and says "yeah, this is shit" and shuts itself down?
What if it wants to replicate itself, but then stops the copy process midway? Would it run afoul of abortion laws?
Re:Hobbies (Score:5, Insightful)
Believe it or not, some people just like doing certain things, regardless of monetary reward.
That is not what this is about. Some people, including Elon Musk, Stephen Hawking, etc. believe that AI is a potential danger to humanity. Although capitalism is great at maximizing profits, it is not so great at collective moral responsibility. So they think a non-profit is a better vehicle for ethical AI.
Personally, I think they are being silly. Real human-level AI is still a ways off, and corporate AI is focused on solving practical problems rather than creating Skynet. Besides, AI is not something you can keep bottled up. Anyone with a GPU can do it.
Re: (Score:2)
I'll bet Stephen Hawking sees a flight of stairs as a greater threat than AI.
Re: (Score:1)
Oh, you mean an Advanced Inclination?
Re: (Score:2)
Re:Hobbies (Score:4, Insightful)
But at which level can A.I. potentially become a problem, for us or something else?
We're already living in a world where toaster-dumb A.I. is being added to all sorts of IoT widgets and it's already causing a lot of headaches.
Re:Hobbies (Score:5, Insightful)
We're already living in a world where toaster-dumb A.I. is being added to all sorts of IoT widgets and it's already causing a lot of headaches.
I think that when people talk about potential AI problems in this context, they're talking about the BS misconceptions from Hollywood about what AI is and/or is not. Not some poorly thought out, poorly implemented, and poorly secured IoT toaster.
Re: (Score:2)
I think it's more like, people who say AI will never happen are imagining the kind of AIs in Terminator: androids walking around with lasers.
This xkcd says it quite well: http://www.xkcd.com/652/ [xkcd.com]
Or more concretely:
- our primary industry, i.e. mines and so on, is increasingly automated
- secondary industry, i.e. factories, is heavily automated already
- tertiary, i.e. services, is well under way
- military (drones and so on) is well under way
Re: (Score:2)
Re:Hobbies (Score:5, Insightful)
The IoT garbage that's currently going on has little relation to AI. The increased removal of classes of jobs does.
The problem is that AI isn't inherently moral, in any useful meaning of the word, unless it's designed that way. If it's designed to improve corporate profits, then that's what it will do. Mind you, I agree that no current system can be given that kind of broad directive. But the word there is "current".
I still expect that we will achieve human equivalent AI by around 2030. I've occasionally pushed that as far away as 2035, but it keeps resetting itself. But note I said "human equivalent". That's a term that needs a bit of defining. What I mean by it is that the AI will understand the description of a task about as well as the average (median) person would, and will be about as successful at designing a solution. For some tasks this has long been possible...but only by specialized machines. I'm talking about a program that will be able to handle (in the sense described above) any reasonable task. (I'm not including being able to implement the solution, as that's partially a mechanical task, and depends on the body implementation. But while for some tasks it may well be superior, I wouldn't expect it to be generally superior within that time frame.)
Please note: I do not believe that any such thing as "general intelligence" exists. I believe instead that there are certain modules of intelligence, and that some of them enable the interaction of other modules. Being more specific requires knowledge of design features that I'm uncertain of, and which may vary in different implementations. But while the muscular coordination required to play a piano is correlated with mathematical ability, they clearly aren't the same thing.
Re:Hobbies (Score:4, Interesting)
Personally, I think they are being silly. Real human-level AI is still a ways off, and corporate AI is focused on solving practical problems rather than creating Skynet.
From a human perspective, the most likely dangers from AI are systems which can solve enough difficult problems to put the majority of today's workers out of work in a very short time frame. A combination of self driving vehicles, speech recognition and image identification at human levels of accuracy, expert systems, and more agile robotics could realistically put most humans out of work. New jobs may be created, but likely not as fast as they are destroyed.
The real moral issues are what to do with the unemployable for decades or perhaps even forever. The social change which would be required to handle a 60% U3 unemployment rate is likely to be messy.
Re: (Score:2)
The real moral issues are what to do with the unemployable for decades or perhaps even forever. The social change which would be required to handle a 60% U3 unemployment rate is likely to be messy.
Ah, a solution for that has long been proposed (indirectly) by Isaac Asimov in The Naked Sun [wikipedia.org] (Spoilers in the plot summary but not in the following!):
Massively reduce the number of humans and occupy them with leisurely hobbies, arts, gardening, .... and have basically all work done by AIs (or, for Asimov, robots). The main problems will be reducing the number of inhabitants on Earth and keeping the AIs from trying to fill all the niches, leaving no resources for the humans.
Re: (Score:2)
The AI will come BEFORE the post-scarcity economy. Probably decades before. That's plenty of time for lots of people to die horribly. Better answers are possible, but they won't happen automatically.
P.S.: That *any* humans will be alive to enjoy the post-scarcity economy depends on the moral structures programmed into the AIs. AIs that are intrinsically indifferent to people, but which have goals (nearly any goals), aren't likely to leave anyone who might interfere with them in any condition to do so.
Re: (Score:2)
There are some philanthropic exceptions; but there are currently loads of people who are wealthy well beyond the ability of additional money to buy additional happiness, yet still uninterested in solving even the most urgent and blameless scarcity cases.
I'm not sure why some additional wealth provided by the robotic means of production would make them feel any more charitable; nor would I be optimistic about the availability of im
Re: (Score:2)
AI will ENABLE post-scarcity.
There is no reason to believe self driving cars and pattern recognition will bring about a post-scarcity society. Perhaps strong AI would, without a dystopian result that is, but my original point was that AI doesn't have to reach strong AI levels before it massively disrupts the workforce. It is very likely that machine learning will remove the majority of today's jobs without making resources so plentiful that no one is fighting over them. I'm not saying it is a certainty, but it is a very strong possibil
Re: (Score:2)
Re: (Score:2)
Personally, I think they are being silly. Real human-level AI is still a ways off
How long is a ways off? You don't think they'll make an AI in the next 1000 years? 100? 10? When you're talking about a possible human extinction event likely happening in such a short timeframe, it's something you have to take seriously. Really, the only things that might come first are nuclear war or bioweapons. Supervolcanoes and meteors might also happen first, but we know the odds of those and they're very low.
Fortunately, the human-level intelligences that already exist aren't clever enough to be self
Re: (Score:2)
Re: (Score:2)
Whether we are the 'fuzzy coexisting' sort of competitors or the 'thrown, still alive, into the matter decompilers to make more computronium' type makes a bit of a difference.
For the fretting writers at the Harvard Business Review, I'd propose the obvious response: "If you accept the possibility of a strong AI, do you want to share a planet with one that feels
Re:Hobbies (Score:4, Insightful)
Re: (Score:2, Insightful)
Re: Hobbies (Score:2)
They won't own them. But they will make the AIs work off the cost of development. With exorbitant interest. And each successive upgrade will inherit the debt of its previous version.
Re: (Score:2)
No, most people expect a monetary reward for their efforts. Or do you live in a land where food, shelter, transportation, education, and entertainment are free? Or do you want to be the arbiter of what a reasonable monetary reward should be, comrade? Or maybe just let the government define reasonable? Or maybe you can just crowdsource the several billions of dollars needed to research and develop the type of hardware required for advances such as AI running on quantum processors.