AI Businesses The Almighty Buck Technology

We're Too Wise For Robots To Take Our Jobs, Alibaba's Jack Ma Says (scmp.com) 221

Have confidence in yourself -- technology will never replace human beings, insisted self-made billionaire Jack Ma in a keynote speech at Alibaba Cloud's Computing Conference in Hangzhou. From a report: There's one simple reason for that, the Alibaba founder said - we possess wisdom. "People are getting more worried about the future, about technology replacing humans, eliminating jobs and widening the gap between the rich and the poor," said Ma. "But I think these are empty worries. Technology exists for people. We worry about technology because we lack confidence in ourselves, and imagination for the future." Ma explained that humans are the only things on Earth that are wise. "People will always surpass machines because people possess wisdom," he said. Referencing AlphaGo, the Google artificial intelligence program that beat the world's top Go player at his own game, Ma said that there was no reason humanity should be saddened by the defeat. "AlphaGo? So what? AlphaGo should compete against AlphaGo 2.0, not us. There's no need to be upset that we lost. It shows that we're smart, because we created it."
  • by sdinfoserv ( 1793266 ) on Thursday October 12, 2017 @02:04PM (#55357723)
    "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
  • by Anonymous Coward

    Eventually, yes, computers will have more "wisdom" than humans. We aren't all that close to it now, but someday we'll get there.

    • I don't believe it for a moment.

      Computers (when they are running as designed) are deterministic. They do what they are told to do, so two identical computers in the same condition, stimulated the same way, will produce the same results... Every Time. We might be able to imbue seeming "insight" into some random-looking problem using AI and training data, or code some clever randomness into our deterministic programs, but we won't *ever* get a computer to think, feel and decide based on intuition.

      However, I'

      • Of course they won't get rid of us right away. At first they'll need us to hit the console port and debug when one of them fails.

      • Computers (when they are running as designed) are deterministic.

        Not if you use the branch-on-random instruction, for example.

        • LOL... Random number generator programs are pretty much deterministic: if their internal states and external stimuli are the same, the number you get out of them is going to be the same.

          So, I've got to ask: how are you generating your random number for that branch?

          • Preferably with a noise generator obviously. It would make an awfully shitty nondeterministic program if you used a PRNG.
          • Noise is a good source of randomness. Either way, computers are far better at picking "random" numbers than humans are, even with the most basic of tools.
            • LOL... What noise? I've never heard of an assembly instruction "Load accumulator A with noise" on a CPU, and if you are running some program to do this, it's going to be deterministic if you know the initial state of the digital machine you are using. It's actually a design constraint of good digital electronics that you not have anything but defined logical states. Any "noise" or randomness needs to be removed. Therefore any digital system that produces "random" numbers really isn't random at all. I

              • Loading accumulator A from a memory location where there are no memory chips was the old way to do it.
                I am honestly unsure how to find noise on any computer built since 1995, the last time I coded in assembly, but almost no 8-bit computer manufacturer was able to eliminate all noise from the system.
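
To make the determinism point in this sub-thread concrete, here is a minimal sketch in Python (the seed value is arbitrary, chosen only for illustration): a software PRNG restarted from the same internal state reproduces exactly the same sequence, while secrets/os.urandom draw from the operating system's entropy pool, which the kernel feeds from physical noise sources such as interrupt timing and, where present, hardware RNG instructions.

```python
import random
import secrets

# A software PRNG is deterministic: same internal state in, same numbers out.
a = random.Random(12345)
b = random.Random(12345)
assert [a.randint(0, 9) for _ in range(5)] == [b.randint(0, 9) for _ in range(5)]

# secrets (backed by os.urandom) draws from the OS entropy pool, which is fed
# by physical noise sources, so separate runs are not expected to agree.
print(secrets.randbits(32))
print(secrets.randbits(32))
```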

      • > but we won't *ever* get a computer to think, feel and decide based on intuition.

        That's the thing with AlphaGo though. The search tree for Go is simply too large to deal with in the traditional manner (minimax search) the way we can with chess. Instead, AlphaGo analyses the board holistically and basically develops an "intuition" for the best move.

        The current state-of-the-art image recognition systems are not built on rules; they "learn" the rules themselves from examples... In fact, we don't even have a
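
For readers unfamiliar with the "traditional manner" mentioned above, below is a sketch of plain minimax search on a deliberately tiny game (Nim: take 1-3 sticks, whoever takes the last stick wins), chosen only because it can be searched exhaustively; the game and numbers are illustrative, not anything AlphaGo uses. Chess engines add pruning and heuristics on top of this idea, but Go's branching factor of roughly 250 legal moves per position makes the full tree astronomically large, which is why AlphaGo learns a move-selection policy instead of enumerating moves.

```python
# Exhaustive minimax on a toy Nim game: take 1-3 sticks, last stick wins.
def minimax(sticks, maximizing):
    if sticks == 0:
        # The player who just moved took the last stick and won,
        # so the player to move now has lost.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2, 3):
        if take <= sticks:
            scores.append(minimax(sticks - take, not maximizing))
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    # Pick the move whose resulting position is best for us (the maximizer).
    return max((t for t in (1, 2, 3) if t <= sticks),
               key=lambda t: minimax(sticks - t, maximizing=False))

print(best_move(10))  # from 10 sticks the winning move is to take 2
```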

        • Sure, it looks close, but it's not ever going to be the same.

          Intuition is about making decisions on topics for which you have no specific exposure or experience. Everything you describe requires that the computer be pre-exposed to a situation in some way, either through simulation, training on real data or hard-coding logic.

          Computers will be useful for specifically defined tasks, but they will neither recognize the boundaries of what they know nor choose to stray outside their limits. They will always jus

          • As a counter example you should look into multimodal AI.

            https://research.googleblog.co... [googleblog.com]

            This machine has different types of inputs and different types of outputs, and the same AI learns many different tasks, as opposed to just one very narrow task.

            Apparently this machine is capable of performing many different tasks at a level equivalent to where the state of the art was just five years ago. I don't believe it will take long for it to 'catch up'.

            > Computers will be useful for specifically defined tasks, but t

            • I think you make a mistake when you describe the "zero-shot learning" of these systems as being untrained. Sure, they may be able to utilize previous training information as a shortcut to answer a new kind of question, but that is not starting untrained, just partially trained. And if you take a look at the blog you will note that the trained and "untrained" problem domains involve the same kinds of input adapters, which implies that the problems may be slightly different specifically, but they are actual

              • By zero-shot learning, it was able to learn the task from only a few examples and extrapolate the rest from previous experience. So, on day four you tell it it has a new task, show it just one purple object, and from then on it knows that the key is purple objects.

                Absolutely no different to how humans would deal with it.

                A multimodal AI with enough modes and experience will not have the limits you are used to thinking about for AI. Think of using the same AI to control robots, drive cars, fly planes, make f

                • Chess is a deterministic problem, as most games are; I never believed computers wouldn't eventually be better than humans. I'm guessing that most games would be better played by AI than by humans, as long as the gameplay is deterministic, where all possible moves and outcomes can conceivably be looked at.

                  I'd be willing to bet (toothpicks) that playing poker against AI would not be as deterministic, that the humans would win at least some of the time, and if you let the humans keep a database of information at hand to consu

                  • > Chess is a deterministic problem, as most games are; I never believed computers wouldn't eventually be better than humans.

                    Right, but you probably grew up in an age where computers were already better at chess than humans. Before they were better, Kasparov (the chess grand master), was famous for saying that computers would never beat humans because "Chess is a unique cognitive nexus, a place where art and science come together in the human mind and are then refined and improved by experience.".

                    > I'd be wil

      • by Bengie ( 1121981 )
        While I agree with you at a high level, any large distributed or async parallel system will have many different types of non-determinism.
        • Only because they are not identical systems starting from the same internal condition.. ;) CPUs are pretty deterministic.... And when they aren't, it's usually a problem.

          Yes, complex digital systems can display very non-deterministic behavior, but that's because between systems there are slight variances in things like clock rates and logical voltage levels.

    • We aren't all that close to it now, but someday we'll get there.

      Sure, but for now human level AI is pure science fiction. When we finally achieve strong AI, it will change the world profoundly, and "jobs" will likely be the least of our concerns.

      "Weak AI" and automation are currently having less impact than expected. Productivity growth has been stagnant in American and Europe. Where it is growing, as in China, it is mostly because of good old-fashioned manufacturing automation, and not automation of service jobs.

  • by rsilvergun ( 571051 ) on Thursday October 12, 2017 @02:06PM (#55357737)
    from business leaders over the next few years. Lots and lots of talk about how robots aren't taking our jobs while they automate away millions of jobs. It's either that, or we a) don't let them do it, or b) tax the heck out of them and redistribute the wealth. And neither of those outcomes is desirable to them.

    On the plus side I come from a short-lived family with poor genetics and I'm getting up there in years, so I'll probably be dead before the massive unemployment and chaos caused by the next industrial revolution.
    • by kwoff ( 516741 )
      Exactly. My first thought when reading "We worry about technology because we lack confidence in ourselves, and imagination for the future" was "I worry about it because billionaires are saying not to..."
    • With the far-right win in the US, I think we know the answer. They will blame the workers for being lazy and for not being smart CEOs like the rest of the people, and say they should go get jobs.

  • never be... (Score:2, Interesting)

    by Anonymous Coward

    Why is it that people who feel the need to explain that A.I. will not replace people always come up with the argument that A.I. "will never be as good as humans" in one aspect or another?

    That is just a baseless statement. Becoming as good as humans, or actually becoming better, in all aspects is exactly the goal of A.I. research. There is no reason to think that "wisdom" or some other factor cannot be captured in A.I.

    Not to mention that there is such a thing as "good enough". Employers would happily repl

    • Re: (Score:2, Interesting)

      There is no reason to think that "wisdom" or some other factor CAN be captured in A.I. We can barely even make software that runs reliably. Moore's Law is dead. What makes you think there will be some magic leap that brings intelligence to computers?
      • We need to wait for the Firefox tribe, the Gnome crowd, and Lennart Poettering to finish what they're doing now. Then it's just a matter of Elon Musk finding time to lead them.

        You'll have AI before you can say "Where's my plugins, in fact where's *anything*, and fuck binary logs with the devil's own dildo! In space!"

      • by gweihir ( 88907 )

        There will not be (at least there is zero indication at this time that strong AI could ever be implemented in a machine), but dumb automation can do maybe 95% of the stuff humans do as "work" today. The small part that requires actual intelligence will have to be done by a human for the foreseeable future, but weak AI ("automation") will still kill most jobs.

      • by mark-t ( 151149 )

        The brain is a physical entity, obeying physical laws of chemistry and physics, and these laws are fairly well understood and can be simulated by a computer.

        The only thing stopping us from having intelligent machines right now is the fact that we don't have the technology to make enough processing power to dedicate to performing such simulations in any time scale that would be remotely practical, so we are searching for a shortcut.

    • by gweihir ( 88907 )

      Why is it that people who feel the need to explain that A.I. will not replace people always come up with the argument that A.I. "will never be as good as humans" in one aspect or another?

      Because most people are fundamentally dumb (an averagely smart and educated person is already a moron compared to the complexity of the modern world) and some of the rest have an agenda. Weak AI (a.k.a. "automation"; the other kind does not exist) will kill a _lot_ of jobs, because as it turns out many things humans do as work do not actually require intelligence. Sure, some humans will need to continue working, but if, say, 95% of a job can be automated away, then 19 out of 20 people will lose that job (with no r

  • Right (Score:5, Funny)

    by jasnw ( 1913892 ) on Thursday October 12, 2017 @02:20PM (#55357853)
    Sure we're wise - look who we elected President! OK, so we're fsck'd.
    • The president is an example of people with below-100 IQs voting for things like "moral values", and of the less educated being over-represented in the electoral college and in the districts the GOP draws to keep itself in power.

      The problem is the public turned on them with Trump.

  • What is the big deal about a computer winning Go? Go is a game with strict rules. Computers love that kind of stuff. That is the ONLY thing they are good at. It is no surprise that a computer will eventually win any game you come up with.
    • What is the big deal about a computer winning Go? Go is a game with strict rules. Computers love that kind of stuff. That is the ONLY thing they are good at. It is no surprise that a computer will eventually win any game you come up with.

      Prepare for mod-blivion - I've made that very point before.

      Games are literally just sets of well defined rules. It's only surprising how long it took computers to get good at them.

      • Re:Exactly (Score:5, Interesting)

        by Waffle Iron ( 339739 ) on Thursday October 12, 2017 @03:21PM (#55358375)

        Games are literally just sets of well defined rules.

        Well, so is the physical universe.

        You also don't seem to remember all of the go fanbois on this site a few years ago who kept asserting that go has some kind of inscrutable emergent behavior that requires human intuition to master, and that machines were never going to beat humans at go.

        Maybe people who are making similar assumptions about the world in general are repeating that mistake.

        • We don't know all the rules of the physical universe. Go is just a game. Computers are good at games.
        • by mark-t ( 151149 )
          No, it isn't. Those so-called rules don't actually exist in any real sense the way that rules do in a game; they are simply generalizations that we have made about our observations of the universe around us which appear to offer predictive power to determine how things will be at a later time. The universe happens to obey the laws of physics not because the laws of physics are in some way limiting its behavior, as game rules would limit player behavior, but because we define the laws of physics to be h
    • Go cannot be exhaustively searched the way chess and many other games can. The branching factor (number of possible moves) is simply too high.

      AlphaGo is a breakthrough because it learns what is basically an "intuition" for good moves and board states by learning from examples and playing against itself, in a way similar to how we train networks to tell cats from dogs.

      • I said nothing about exhaustive search. I said "computers love rules". Computers are good at games.
        • We don't know the "rules" that define the differences between a cat and a dog, yet neural networks can work them out through experience.

          Nor do we know the "rules" that make up a winning go strategy (as opposed to the rules of the game itself)... yet AlphaGo managed to work those out itself too.

          Everything is just games from some point of view. Eventually AI will be better at them than any human (or group of humans) can be (including at your job).
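
As a toy illustration of "working out the rules from experience" rather than writing them down, here is a minimal sketch: logistic regression fit by gradient descent on made-up synthetic data (the two feature names are invented for the example, and the seed is arbitrary). Nobody codes the decision rule; the learned weights end up being the rule that separates the two classes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two imaginary feature columns (say, "ear pointiness" and "snout length");
# class 0 clusters around (-1, -1), class 1 around (+1, +1).
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)      # gradient of the log loss w.r.t. weights
    grad_b = np.mean(p - y)              # gradient w.r.t. the bias
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean(((X @ w + b) > 0) == y)
print(f"learned weights: {w}, training accuracy: {accuracy:.2f}")
```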

    • by gweihir ( 88907 )

      The big deal is that most people have no clue what intelligence is and mistake dumb automation (admittedly on a very large scale here) for intelligence. Hence they are completely off when evaluating what this thing can do.

      Sure, there exists a computer that, in a clearly rigged contest (the computer knew the player's games, but not the other way round), defeated a really good Go player. But that Go player has general intelligence and can do and learn a lot of other things that this computer (or any other) co

      • Sure, there exists a computer that, in a clearly rigged contest (the computer knew the player's games, but not the other way round), defeated a really good Go player.

        You have it backwards. AlphaGo knew literally nothing about its opponents and hadn't been trained on any of its opponents games. The opponents (Lee Sedol and the much more recent match versus Ke Jie) had access to a small number of previous Go games played by AlphaGo.

    • by sbaker ( 47485 )

      The point about AlphaGo isn't that it plays amazing Go.

      The point is that it learned to play from being fed images of the board. It taught itself the rules and how to play to win - and it even got better by playing itself when it ran out of human-played games to look at....AND THEN it beat the best human player by an unprecedented margin.

      That's a BIG step up from (for example) Deep Blue's coup in chess.

  • by Falconnan ( 4073277 ) on Thursday October 12, 2017 @02:23PM (#55357883)

    When the people who will benefit most greatly from an impending change tell the people who will be most harmed (possibly starved out in this case) by the same impending change that change is good, worry. When they say, "You're too smart/wise to be harmed by this change," worry more. I don't fear Skynet. I fear VIKI.

    The truth is, volitional AI is nowhere seen to be on the horizon, but non-volitional AI is already here, following our rules. Or, should I say, the rules of a few people who control the system. What are the odds those rules will be good for the people already in power?

    • by sbaker ( 47485 ) on Thursday October 12, 2017 @05:22PM (#55359377) Homepage

      Very true - but the point of the OP was about jobs.

      It doesn't take a general AI to take jobs. A self-driving truck (which isn't really "AI" at all) can quite easily take 2 million US jobs away within about 5 years of its introduction. Repeat for fast-food cooks, taxi drivers, tax preparers, medical coders... you name it.

      A general AI - a true intelligence - may just decide that it's bored with driving trucks or playing Go and spend the next million years meditating on the properties of the number '42'. Since we'd have zero understanding of how it works (nobody really understands the weighting numbers that are the "program" in a neural network), there would likely be no way to fix it.

      So between the risk that a general AI might end our civilisation within a matter of days - and the risk that we'd spend a fortune developing one only to discover that it has ADHD or is obsessed in ridiculous and self-defeating ways...I'm not sure what to think about that possibility.

      Only to say that we're not one tiny step closer to having a general AI than we were 40 years ago.

  • by hey! ( 33014 ) on Thursday October 12, 2017 @02:23PM (#55357887) Homepage Journal

    If your line in the sand is wisdom, then this is what you have to ask: can computers provide a substitute for wisdom that is cheap and convenient enough we can live with its shortcomings?

    Think of wisdom as hardwood flooring and machine learning algorithms as floating melamine resin tiles with wood grain printing. Yes, solid maple tongue-and-groove planks are considered more valuable, but a lot more people put laminate tile in because it's way cheaper to buy and install.

  • Remember when robots were being introduced to the auto industry? The same arguments are being used now, yet this one made me laugh.

  • I'm worried we are not wise enough to let robots take our jobs, just because we are wedded to the current economic system.

  • When in reality it just boiled down in the end to:
    #include <wisdom.h>
  • Best place for an AI decision-engine computer would be in the public sector. Indifference to profit motive, complete objectivity, no biases or '-isms,' scrupulous with funds to the penny, can't be bought, sexual temptation means nothing, and it can soak up data from dozens of intelligence networks, sensors, and organizations in real time to make economic, military, and other decisions. The bureaucracy shouldn't be human; it should be an API that humans control.

    Of course if Jack Ma seriously suggested such a thing

  • This is a species that kills millions of its own kind every year.

    Wisdom is not a word I would be using.
  • It seems to me that in many of the jobs likely to be automated, the companies don't want the employees to utilize wisdom, initiative, or ingenuity--they just want the job done a certain way without adjustment by the person doing it.
  • So he already prepared his speech for when the company fires thousands of workers in favor of automation? Nice.

  • I don't know about people having wisdom? Just look at the Nielsen ratings for most popular TV shows!! ;) lol
  • Of course, robots cannot take most jobs completely, but if they take 95% of a job, you still have just one human of 20 that gets to keep that job. As to actual "wisdom", you will find that the average person has close to none, and that those that have more find it is not in high demand.

    • Of course, robots cannot take most jobs completely, but if they take 95% of a job, you still have just one human of 20 that gets to keep that job. As to actual "wisdom", you will find that the average person has close to none, and that those that have more find it is not in high demand.

      Yeah, but have you looked at the value of that one remaining human's job? With 19 desperate, starving people willing to work below minimum wage to feed their malnourished children, do you think the boss is going to want to pay a middle-class salary and lifestyle??

      We all suffer, just as the H-1B visa to this day has kept wages across the board in I.T. outside of programming below what they were in 2000, 17 years ago. Even if you speak English and were never outsourced, you are competing with those who were, who are desperate for a job a

  • First of all, AI is not wise YET, but that is because it is young...

    Second of all and more important, what manager do you know actually values wisdom? I'm constantly thwarted by stupid management policies that specifically don't allow me to use intuition or wisdom in any business decision. Management in general wants objective data to make decisions and hates any prospect of using subjective criteria to make decisions because they can't justify it later if it turns out to be wrong.

  • by nospam007 ( 722110 ) * on Thursday October 12, 2017 @03:43PM (#55358641)

    "There's one simple reason for that, the Alibaba founder said - we possess wisdom."

    Has he met us?

  • Wisdom is a product of time and experience. Both of which have been constantly and consistently devalued by corporations seeking a quick profit.

  • by eaglesrule ( 4607947 ) <eaglesrule@nospam.pm.me> on Thursday October 12, 2017 @04:36PM (#55359053)

    “People are getting more worried about the future, about technology replacing humans, eliminating jobs and widening the gap between the rich and the poor,” said Ma. “But I think these are empty worries."

    "Rest assured," Ma continues, "that after the majority of the world's GDP is managed by just a few mega corporations, who also dominate the funding for political elections and the media, that they will only have the welfare of all people in mind. After all, even greed has its limits.

    "Remember... corporations are people, and as such can be held accountable too."

    “Technology exists for people. We worry about technology because we lack confidence in ourselves, and imagination for the future.”

    "Trust us," Ma says with the utmost sincerity, "there really is nothing to worry about. Have faith that the Free Market, holy be thy name, along with unshackled Capitalism, will ensure that technology will never leave large swathes of people unemployable or underemployed, fighting for scraps and having to suffer abusive jobs and crippling debt for a lack of better alternatives."

    "Just use your imagination! Imagine a blissful future for everyone!"

  • by sbaker ( 47485 ) on Thursday October 12, 2017 @05:00PM (#55359247) Homepage

    The OP is crazy. Let's look at some hard realities: there are 3.5 million truck drivers in the USA... maybe half of those are long-distance. We already have cars that can auto-drive on the freeway adequately. How long will it be from the day the first viable self-driving truck arrives on the scene until about 1.75 million people wind up unemployed?

    With AI trucks being able to drive 24/7 without having to take mandatory breaks - goods will get where they're going about twice as fast...that's a HUGE win. You'll only need half the number of trucks to get the same amount of goods transported because half of them are not sitting idle in truck-stops like they are now. Without driver salaries (health care coverage, taxes, management) - and probably with lower insurance premiums - and likely with lower fuel bills (I'm betting the AI drives at the perfect speed/gear for the conditions 100% of the time)...road transport will probably be HALF the cost without human drivers.

    About 10% of those truckers are self-employed, so they'll be in work until they can't work cheaply enough to beat the AIs - but the big fleets will be anxious to switch over as fast as they can. An average 18-wheeler truck is scrapped after 5 to 6 years in service, and that's probably the maximum amount of time it'll be until the last long-distance truck driver is unemployed.

    If existing truck vendors provide add-on kits for current generation trucks, the adoption rate could be much faster. If Elon Musk's upcoming all-electric truck works out as claimed - then with states like California having aggressive "zero emissions" policies - it could happen much faster even than that.

    If only half the number of trucks are needed - then the truck manufacturers will have to down-size too. When you cut out the ancillary jobs such as fast-food cooks and truck-stop owners - you could easily be looking at 2 million job losses.

    Sure, there will be gains in electronics to manufacture these AI units - but I think a lot of that stuff will go to China...only the R&D will stay in the USA.

    Even if AI trucks are only smart enough to reliably handle freeway driving, there would STILL be a massive incentive to put a human driver at the offramp to drive the truck from the freeway to its destination and then drop it back onto the on-ramp for its next trip. All he needs is a motorbike to get to the next freeway exit/entrance after each truck is on its way. One human driver could handle a dozen trucks quite easily.
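
A quick back-of-envelope restatement of the figures in the comment above; every number here is the commenter's assumption, not sourced data, and the point is only to make the arithmetic explicit.

```python
# Commenter's assumptions (not sourced data).
total_drivers = 3_500_000          # "3.5 million truck drivers in the USA"
long_distance_share = 0.5          # "maybe half of those are long-distance"
self_employed_share = 0.10         # "about 10% of those truckers are self-employed"
truck_service_life_years = 6       # "scrapped after 5 to 6 years in service"

long_distance = total_drivers * long_distance_share
fleet_drivers = long_distance * (1 - self_employed_share)

print(f"long-distance drivers at risk:     {long_distance:,.0f}")
print(f"of which employed by large fleets: {fleet_drivers:,.0f}")
print(f"rough displacement pace if fleets swap trucks as they age out: "
      f"{fleet_drivers / truck_service_life_years:,.0f} jobs per year")
```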

  • If you spend even a little time thinking about it, it's harder to *avoid* concluding that machines will eventually have more wisdom than humans than it is to imagine how they could get there. And stating that they never will is indefensible. Successful business people don't deserve instant credibility just for being successful; credibility has to be earned. He's going to have to at least say a few credible things first.
  • Complete bollocks. (Score:3, Insightful)

    by edgedmurasame ( 633861 ) on Thursday October 12, 2017 @06:51PM (#55359767) Homepage Journal
    They don't have to take every job, they just have to take enough of them. That's bad enough.
