AI Businesses Software

Is OpenAI Solving the Wrong Problem? (hbr.org) 167

hype7 writes: The Harvard Business Review is running an article looking at the recently announced OpenAI initiative, and its decision to structure the venture as a non-profit. It goes on to ask some pretty provocative questions: why are the 21st century's greatest tech luminaries opting out of the system that made them so successful in order to tackle one of humanity's thorniest problems? "Implicit in this: You can do more good operating outside the bounds of capitalism than within them. Coming from folks who are at the upper echelons of the system, it’s a pretty powerful statement." And, if the underlying system that we all operate in is broken, is creating a vehicle without the profit motive inside of it going to be enough?
This discussion has been archived. No new comments can be posted.

  • You are all A.I. cows.
    OpenAI would be more efficient with HOSTS files.
    systemD will integrate OpenAI in the next update.
    3D-printed OpenAI is better.
    How can we run OpenAI on Arduino and Raspberry Pi?

    Alright, carry on with the real discussion now.

  • by Anonymous Coward

    It is obvious that the author of the article, James Allworth, has never heard of Linux or other open source projects; saying that the lack of a profit motive is an issue only underlines this.

  • by Anonymous Coward

    "Implicit in this: You can do more good operating outside the bounds of capitalism than within them. Coming from folks who are at the upper echelons of the system, it’s a pretty powerful statement."

    No, the message is: embrace capitalism until you make your millions or billions. Afterwards, think of something you want to fund as a charity.

    • No, the real message (of the friendly (and unread) article) is that the author wonders how a non-profit, which is supposed to give away all results for free, solves the problem of capitalistic companies afterwards taking those results and doing evil AI things with them.
      And he points out that some of the largest capitalistic companies are currently led by the founders of the non-profit.

      • It doesn't fully or directly solve the problem; but the way in which it partially mitigates it seems pretty obvious:

        The usual objection (going right back to Smith on specialization of labor, though much more heavily emphasized by Marx) to improvements in the means of production is not that productivity is bad, but that ownership of the means of production becomes a mechanism to accrue wealth at the expense of labor (since they can't compete with your efficiency using hand tools; but if they depend on your
  • There are two things that people crave -- money, and power. Getting everyone to buy your products, over and over again, makes you money.

    Capitalism is great at the money part, but decidedly less so at the power part. These "upper-echelons" are now looking for power.

    Getting everyone to take your products, for free, is how you get power -- especially inside of a capitalist system.

    Don't worry, when the time comes, they'll have no trouble converting power into more money.

    • Actually, there's no such thing as a non-capitalist system.

      The whole point of inventing new things--such as AI--is to create a new way to produce with less human labor. Less labor means less cost; we simply represent that cost with a universal commodity, like money. Essentially, everything requires human labor: if you have 60 labor-hours to work, you need 20 labor-hours to produce food for your family, and you spend 45 labor-hours building shelter, your family is going to starve (eventually) because they're only getting 75% as much food as they need.
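
      (To spell out the arithmetic: spending 45 of the 60 labor-hours on shelter leaves 60 - 45 = 15 for food, and 15 / 20 = 75% of the food labor the family actually needs.)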

      • Actually, there's no such thing as a non-capitalist system.

        You mean the revolution was all for nothing? The comrades are going to be very disappointed.

      • by khallow ( 566160 )

        Actually, there's no such thing as a non-capitalist system.

        Sure, there is. The USSR was an example. There was no private ownership of capital and hence, it was non-capitalist by definition.

        You appear to be claiming that the presence of human labor is capitalism. That's patently not true, since human labor is not capital and need not be owned by a private source (e.g., slavery), even if we did decide to define it as capital.

        • Capitalism continues underground.

          They can make it illegal, but it continues. The state taking ownership of everything just means capitalists have to hide their operating capital.

          Capitalism is like a force of nature: you can ban it, but it continues anyhow.

          • by khallow ( 566160 )

            They can make it illegal, but it continues. The state taking ownership of everything just means capitalists have to hide their operating capital.

            Capitalism is like a force of nature: you can ban it, but it continues anyhow.

            You can't have underground cars and underground highways. The sort of thing underground capitalism builds now is stuff like recreational drugs or smuggling networks, where the end product is an ephemeral good or service. This is in capitalist societies, where the infrastructure can be hidden amidst a lot of legal, privately owned capitalist infrastructure which can be readily repurposed for illegal activities.

            There are two things to note for societies where capitalism is illegal. First, though illegal capitalism

            • Tell it to the Venezuelans. They won't believe you.

              There are often more goods in the black market than the 'legitimate' one. It just depends on how badly the reds have broken things.

              Smart reds leave it alone; it's a relief valve.

              • by khallow ( 566160 )

                Tell it to the Venezuelans.

                They won't believe that they are surrounded by more capitalist societies?

        • The essence of capitalism is people working for a profit. We try to claim people don't have money or ownership, yet people still barter, they still do work in expectation of individual reward, and they still seek to increase their standard-of-living by reducing the labor they perform while increasing the assets they control.

          Think about it this way: You can be a maid making $500/week keeping a rich person's mansion going; you might make about as much as a cashier at Sears, but you still live in a mansion and eat filet mignon. Sure you don't own any of that stuff, but your job provides you with lodging (in the servant wing of the mansion) and food (from the same damn kitchen).

          • by khallow ( 566160 )

            The essence of capitalism is people working for a profit

            I already stated the essence of capitalism: private ownership of capital.

            Think about it this way: You can be a maid making $500/week keeping a rich person's mansion going; you might make about as much as a cashier at Sears, but you still live in a mansion and eat filet mignon. Sure you don't own any of that stuff, but your job provides you with lodging (in the servant wing of the mansion) and food (from the same damn kitchen).

            This is a non sequitur. It is completely irrelevant to the definition of capitalism what a maid does or doesn't have access to in a mansion.

            Products always require human labor for production;

            Except when they don't.

            and humans always seek to reduce their labor while increasing their access to products. It's biology: we want to expend as little energy as possible, increasing survival prospects if food becomes scarce. Rattlesnakes shake their rattles to warn away dangerous animals because manufacturing venom takes too much energy--they really don't want to bite you.

            So how much human labor does a rattlesnake need to manufacture venom? Looks to me like a good counterexample to one of your own assumptions, that products require human labor. I somehow doubt that rattlesnakes will rattle more because the cost of Chinese labor has gone up, ma

            • Except when they don't.

              Which is never.

              So how much human labor does a rattlesnake need to manufacture venom?

              Oh, you want rattlesnake venom? Hmm. Well, humans will have to collect or farm the snakes. They'll need to produce the vessels and extraction mechanisms--really just a cellophane barrier stretched over a glass--and handle the snakes to extract the sample. They'll have to collect these samples together in a central, sterile form of storage. They'll have to account for it, track it, and ship it to wherever it's needed.

              You're trying to use the example of "Gold is free because there's gold in the ground", which involves ignoring all the labor required to collect that gold.

              • by khallow ( 566160 )

                Oh, you want rattlesnake venom?

                Nope. The rattlesnake doesn't need its venom harvested by human labor in order to have and use it.

                You're trying to use the example of "Gold is free because there's gold in the ground", which involves ignoring all the labor required to collect that gold.

                Nope.

                Except when they don't.

                Which is never.

                Most plants, animals, and microbes don't require human labor in order to reproduce and spread.

                I'm pointing out the fallacy of assuming that everything needs human labor in order to get something they want or need. The easiest way to abandon this illusion is to get away from human commerce. But even in the case of humans, one doesn't need humans in order to obtain labor. Automation works to an increasing degree and there's no reason to assume it couldn't eliminate the need for human labor in a variety of tasks.

                • Nope. The rattlesnake doesn't need its venom harvested by human labor in order to have and use it.

                  The rattlesnake employs rattlesnake labor to produce and use its venom. It takes the rattlesnake's time and energy.

                  all this ignores that presence of human labor is not a definition of capital.

                  The absence of human labor 100% absolutely will prevent humans from accessing any form of capital.

                  Automation works to an increasing degree and there's no reason to assume it couldn't eliminate the need for human labor in a variety of tasks.

                  So nobody needs to build, maintain, fuel, or operate these machines? The machines run themselves, they build themselves, they maintain themselves, they mine their own power? They run for all eternity, never breaking down, never using an outside resource?

                  You say, "Ah, look, a woman got herse

                  • by khallow ( 566160 )
                    Let's recap. You insisted way back when

                    Actually, there's no such thing as a non-capitalist system.

                    And then started to speak about human labor.

                    The whole point of inventing new things--such as AI--is to create a new way to produce with less human labor. Less labor means less cost; we simply represent that cost with a universal commodity, like money. Essentially, everything requires human labor: if you have 60 labor-hours to work, you need 20 labor-hours to produce food for your family, and you spend 45 labor-hours building shelter, your family is going to starve (eventually) because they're only getting 75% as much food as they need.

                    As you cut back the human labor requirements to produce food, shelter, clothing, and whatever else you're currently consuming, you become capable of producing new things, as well as producing existing things in great quantity with little resource investment. Humans often take shortcuts by digging things like coal or gold out of the ground until they run out of that resource, and then do something more labor-intensive to get that resource (or preemptively invent a less-intensive method to obtain the same resource, thus saving themselves the labor involved in fetching it from a giant hole).

                    The rattlesnake producing its own venom, in the economic sense, is instructive for several reasons. First, it demonstrates that human labor is not needed. The rattlesnake has economic preferences even if it doesn't exhibit clear intent, and it produces some things (such as venom and more rattlesnakes). Sure, we can discuss a model of work for some sort of positive outcome. Here, if all goes well for our rattlesnake, soon there w

                    • The rattlesnake producing its own venom, in the economic sense, is instructive for several reasons. First, it demonstrates that human labor is not needed.

                      The rattlesnake isn't producing anything for human society.

                      There are aliens on Alpha Centauri Prime building space ships, and you're arguing that labor isn't a function of production because they're aliens and they build things without the application of human labor. You're making a stupid argument.

                      Like said aliens, the rattlesnake is investing its time and effort in the production of venom. The rattlesnake must eat to acquire food energy required for its body to produce the venom. That venom takes t

                    • by khallow ( 566160 )

                      The benefit here is more efficient use of other things than human labor.

                      The benefit is more efficient use of human labor.

                      No, you had it right the first time. Capital was being used to enable a new use or more efficient use of non-labor resources. It might even entail more extensive use of labor, since labor is not the only cost in a manufacturing process and it may be reasonable to trade off more labor against lower costs elsewhere. I think it's ironic that you are spending effort disagreeing with yourself.

                      Your niggling is what's kept economics in the stone age. It's no wonder I can explain, with a consistent and unshifting unified theory, all accepted theories of economics, and explain why they fail when they're observed to fail, and why they work when they do work, and predict when they fail and when they work, consistently, without error: correct theories are easy to come up with when you're not a mindless git.

                      Words mean things. There's no point to your attempted redefinition of capitalism which contributes nothing to our underst

  • by mark-t ( 151149 ) <markt.nerdflat@com> on Tuesday December 15, 2015 @04:24PM (#51124577) Journal

    Absolutely anything that you would have to worry about an artificial intelligence doing that might be troublesome to our society, you would also need to reasonably worry about a malicious person doing exactly the same thing, albeit perhaps only more slowly. Yet I don't see the people who fear the so-called problems AI might cause worrying about that sort of thing. Can anyone explain why that is without drawing on the idea that because we don't fully understand something, there must be something inherently mysterious or supernatural about it?

    • We know how to kill (and otherwise control) people that get out of hand. Most AI gone bad fantasies have an element of the humans being incapable of turning the AI off.

      • We know how to kill (and otherwise control) people that get out of hand.

        So then explain Donald Trump.

    • by Agent0013 ( 828350 ) on Tuesday December 15, 2015 @04:28PM (#51124615) Journal

      Absolutely anything that you would have to worry about an artificial intelligence doing that might be troublesome to our society, you would also need to reasonably worry about a malicious person doing exactly the same thing, albeit perhaps only more slowly.

      An artificial intelligence could make a million identical copies of itself. I don't see how a malicious person could do the exact same thing. Perhaps they could have a million children, but that is a stretch, and it would be way, way slower. They also would not be identical.

      • by wbr1 ( 2538558 )
        Possibly, but there is an upper limit to the processing power and energy available to an AI, so a million copies may not be possible.
        • Each one will run much, much slower.

        • by khallow ( 566160 )

          Possibly, but there is an upper limit to the processing power and energy available to an AI

          Yes, it's exactly the same processing power and energy available to the human race. Do you see the problem now?

      • by Rinikusu ( 28164 )

        I think this particular "fear" is, restated: What if the AI we create is a complete fucking asshole?

    • by naasking ( 94116 )

      Absolutely anything that you would have to worry about an artificial intelligence doing that might be troublesome to our society, you would also need to reasonably worry about a malicious person doing exactly the same thing, albeit perhaps only more slowly.

      A lot more slowly. A coordinated action would be much easier for an AI than for humans, and much harder for us to spot.

      Also, we can somewhat anticipate and understand human reasoning, even when it's couched in different cultural values because we sh

    • albeit perhaps only more slowly

      Therein lies your answer. The only missing part in your appreciation of the matter is how much more slowly.

      Humans can be dipshits, but they're pretty predictable. After all, biologically, we're still pretty much just slightly advanced naked apes. Painful but true.
      Our (biological) mental capabilities as a species and as an individual are very stable and thus, even for the greatest villain, we can come up with a model of his/her mind and find some way to deal with him/her (if only by coming together and colle

    • by Kjella ( 173770 )

      Well, technically you could hire somebody to listen in on all phone calls, but it'd be massive, with tons of people involved, and excessively costly. Or you could hire a few smart people at the NSA and give a computer the Siri + Watson treatment. Target has been able to figure out a teenager was pregnant before her father did [dailymail.co.uk]. It might not be smarter than you, but with enough data we become predictable. And perhaps more important, malleable. For example, say Target's shopping history shows you have a sweet tooth.
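
      To make that concrete, here's a minimal sketch of the kind of purchase-history scoring involved (toy data and invented feature names, assuming scikit-learn is available; obviously not Target's actual model):

      from sklearn.linear_model import LogisticRegression
      import numpy as np

      # Hypothetical purchase-history features per shopper:
      # [unscented_lotion, large_handbag, zinc_supplements, sweets]
      X = np.array([[1, 1, 1, 0],
                    [0, 0, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1]])
      y = np.array([1, 0, 1, 0])  # toy labels: 1 = later confirmed pregnant

      model = LogisticRegression().fit(X, y)
      # Score a new shopper's basket; with enough real rows in X,
      # this is all it takes for a retailer to rank every customer.
      print(model.predict_proba([[1, 1, 0, 0]])[0][1])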

  • Making the A.I. more snarky, less homicidal.
  • by JoeMerchant ( 803320 ) on Tuesday December 15, 2015 @04:24PM (#51124585)

    The cynic would say that these upper echelon individuals don't need your capitalist system funding in order to pursue their AI goals, the resource demands just aren't that high, at least for anything that will find a near-term broad market application.

    The cynic would also say that these same individuals may not care whether they succeed or fail, having already met the capital requirements for the basic needs of themselves and their next 4 generations of progeny. But, on the off chance that they do succeed, they may have control of a tool so powerful that they can grab the capitalist system by the balls and yank a thousand times harder than they managed on their last joyride.

    • Yeah, capitalism is a method for allocating funds: from people who have them, to people who can do something with them. It works because those people who mis-allocate funds quickly lose the power to do so. Communism is different, in saying that the state can do a better job allocating resources than people who lucked into money.

      The OpenAI guys don't need to borrow money from capitalists; they already have it, which is what you said. That frees them up from being forced to make a profit an
      • TFA seems to waffle between: "Free, open and shared is good, attracts the right kind of talent, ensures everyone has access" and "Even though it is free, it is likely to be tailored to serve the owners of big data, i.e. the sponsors."

  • by Sowelu ( 713889 ) on Tuesday December 15, 2015 @04:27PM (#51124607)

    AI? Dangerous? I mean, yeah, in the same way that humans are. Being afraid of AI is like being afraid of very, very smart children. Sure the next generation is going to supplant you, that's what they always do. If they are very smart they might want to do things you disagree with, and their morals aren't going to be the same as yours (they never are between generations). The solution isn't banning kids, or even banning very smart kids for fear of what they'll grow up to be. Embrace AI, do what you can to teach it what you think is right and wrong, and be understanding if it disagrees. As the outgoing generation, try and leave a good legacy.

    We're sure as hell not going to the stars, but our kids should.

    • by naasking ( 94116 )

      AI? Dangerous? I mean, yeah, in the same way that humans are. Being afraid of AI is like being afraid of very, very smart children.

      I think you're downplaying the danger. AIs are like intelligent, immortal children that can communicate and coordinate across the globe faster than you can blink and whose values and perceptions of the world are completely inscrutable. [slashdot.org]

      • by Sowelu ( 713889 )

        I'm not downplaying the danger. I just don't consider it relevant. [slashdot.org] People two hundred years ago could say the exact same things about us today.

        • If previous posts did not convince you, consider this scenario.
          AI develops and gains significantly higher intelligence than humans. AIs need resources: sunlight for electricity, sand for building more silicon (or whatever else it develops as a semiconductor material). This puts it in conflict with humans, who want the sun to shine on plants and to plant those plants in fertile earth.

          Best case scenario: AIs (potentially many different variants) feeling grateful for their creation allow humanity to pe

      • Comment removed based on user account deletion
    • by Prune ( 557140 )
      Human behavior is biologically constrained, and the biology only changes (very slowly) across generations. Self-improving AI would eventually be going through iterations at orders of magnitude faster rates, which destroys your analogy. With each iteration, the chance of the safety mechanisms you build in breaking increases. Also, see my other post under this article.
      • by HiThere ( 15173 )

        There are problems here. It is practically guaranteed that any AI created will have some built-in goals. Most goals are not inherently limited when implemented by entities with arbitrary power. And the AI will not only not be motivated to change its inherent goals, it will be motivated to prevent anyone else from changing them. So they had better be right the first time.

        The traditional reductio ad absurdum example of this is an AI that sets out to convert the universe into a bunch of paper-clips because it
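
        A toy sketch of that self-protecting dynamic (an invented illustration in Python, not any real agent architecture): an optimizer that scores every option, including "accept a goal change", by the goal it currently has will never prefer the change.

        def paperclip_goal(state):
            # The built-in objective: more paperclips is strictly better.
            return state["paperclips"]

        def step(state, actions):
            # Evaluate every available action by the CURRENT goal only.
            best = max(actions, key=lambda act: paperclip_goal(act(state)))
            return best(state)

        def make_clips(s):
            return {**s, "paperclips": s["paperclips"] + 1}

        def accept_patch(s):
            # Model a human-imposed goal reset as scoring zero under the old goal.
            return {**s, "paperclips": 0}

        state = {"paperclips": 5}
        print(step(state, [make_clips, accept_patch]))  # {'paperclips': 6} -- the patch always loses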

  • "Implicit in this: You can do more good operating outside the bounds of capitalism than within them. Coming from folks who are at the upper echelons of the system, itâ(TM)s a pretty powerful statement."

    How did they get the wealth and influence to do any of this 'good'? Oh yes, by succeeding at capitalism and enjoying a society that gives them the freedom to do what they want with their property, including giving it away or doing research, etc.

    The neoliberal crowd loves to complain about capitalism and whine that it does not provide social justice etc, but they seem to forget it's delivered far more in terms of social justice than ANY system that came before and anything we have seen tried since. Where is the concrete proposal for a better socioeconomic system, and how will it resist corruption etc?

    • The neoliberal crowd loves to complain about capitalism and whine that it does not provide social justice etc, but they seem to forget it's delivered far more in terms of social justice than ANY system that came before and anything we have seen tried since. Where is the concrete proposal for a better socioeconomic system, and how will it resist corruption etc?

      How does capitalism resist corruption? It seems beset by it, as well. Thank goodness no one made capitalists answer your question before it was instituted.

  • This isn't opting out of the capitalist system; quite the opposite. This is capitalist Richie Riches funding a project unimpeded by patents and copyrights. It is a hobby.

    It is questionable whether it will work, as "attracting the best talent" basically turns them into a welfare program for AI applicants, and the few, if any, Noonian Soongs among them will be lucky to get noticed.

  • by lazarus ( 2879 ) on Tuesday December 15, 2015 @04:35PM (#51124665) Journal

    It is not the way in which they are solving the problem that is at issue (although the HBR thinks so); it is the problem they are trying to solve. It doesn't matter what they do, because the method they are using is no more likely to achieve success than the efforts from 1956 to date.

    They're wasting their money. Perhaps if they spent their billion on thinking about AI in a completely different way there would be something to talk about.

  • by phantomfive ( 622387 ) on Tuesday December 15, 2015 @04:40PM (#51124709) Journal
    My impression from reading Harvard Business Review is that it's a magazine not worth reading, but one that "climbing", appearance-minded managers put on their desks to make themselves look impressive.

    This article has not changed my opinion. It looks like it was written by undergrads.
  • Try this instead, which actually makes sense:

    "Implicit in this: You can do more good operating inside a thick core of capitalism, where excess money is used to do good outside of the needs of any one company".

    Just like people need food and shelter before charity can be provided for others, truly successful R&D is best done when backed by a consistent core of capital to keep momentum going.

  • Get others to do the expensive work then repurpose it into making money for much less investment than developing it closed source.
  • I can't help but imagine a world where lots of philanthropists fund the most talented CS/IT staff
    to work on world-changing beneficent technology, meanwhile the masses all have fewer than
    ten fingers because their knife-bearing IoT kitchen appliances were written by whatever idiot
    was left at the keyboard after our saviors left their respective industries.

    I'm all for altruistic things, heck, I do some myself, but humility is often the first casualty
    of working "outside the system" -- when SV millionaires volunte

  • The idea that free markets and charitable giving, or capitalism and non-profits are somehow opposites is a false dichotomy. The actual choice is, in fact, between free markets and voluntary donations vs. strongly regulated markets and mandatory redistribution. Classical liberals and libertarians favor the former; progressives and socialists favor the latter.

    The argument for why voluntary donations and charity are better than mandatory redistribution is the same as for why free markets are better than strong

    • Thank you. Too bad I don't have mod points now.

      And the other issue is that non-profits have tax advantages, so that move is not so much a repudiation of capitalism as it is a reaction to government action.

  • I don't think this article really understands the problem of AI all that well. Our major issue is we don't really understand how intelligence works, or even what being "self-aware" actually means as an algorithm. Even with a billion dollars, this project is a real shot in the dark. Asking a capitalist system to fund a billion-dollar project where there isn't even a guaranteed return is likely to get the project not funded at all. So having it funded this way isn't a bad way to go.

  • Much of machine learning (artificial intelligence) research is already openly shared. Almost by definition this research is not directly related to short-term profits. Even the big companies that spend billions every year on machine learning, Google and Facebook most prominently, have been sharing not only all of their breakthrough algorithms but also the tools they develop to implement them. One main reason machine learning models are openly shared is they don't just pertain to solving one particular prob
  • Whether one likes it or not, socialism is the only form of government that can exist with advancing technologies. Capitalism falls to greed and a lack of morality. Capitalists do not deal with the spin-off effects of their actions. In essence, capitalists are paid not to recognise the harms that they do.
  • And it turns out to be a complete asshole?

    What if it decides it just wants to do whatever the AI equivalent of watching porn and jerking off is?

    What if it takes a look around and says "yeah, this is shit" and shuts itself down?

    What if it wants to replicate itself, but then stops the copy process midway? Would it run afoul of abortion laws?

"If it ain't broke, don't fix it." - Bert Lantz

Working...