The Future of AI: a Non-Alarmist Viewpoint

Nerval's Lobster writes: There has been a lot of discussion recently about the dangers posed by building truly intelligent machines. A lot of well-educated and smart people, including Bill Gates and Stephen Hawking, have stated they are fearful about the dangers that sentient Artificial Intelligence (AI) poses to humanity. But maybe it makes more sense to focus on the societal challenges that advances in AI will pose in the near future (Dice link), rather than worrying about what will happen when we eventually solve the titanic problem of building an artificial general intelligence that actually works. Once the self-driving car becomes a reality, for example, thousands of taxi drivers, truck drivers and delivery people will be out of a job practically overnight, as economic competition forces companies to make the switch to self-driving fleets as quickly as possible. Don't worry about a hypothetical SkyNet, in other words; the bigger issue is what a (dumber) AI will do to your profession over the next several years.
This discussion has been archived. No new comments can be posted.

  • TL;DR (Score:5, Insightful)

    by penguinoid ( 724646 ) on Wednesday June 17, 2015 @02:52AM (#49927313) Homepage Journal

    AI will obsolete your job before it obsoletes humanity.

    • Maybe the tipping point that Bill Gates worries about is when enough people are out of work that they notice who has all of the money, and have the skills and motivation to do something about it.

      • This is one reason why so many sci-fi futuristic dystopias feature a police state. If you've got a sizable unemployed, downtrodden mass then you need a bit of oppression to quickly crush any revolt before it can grow.

        Perhaps the revolts can be crushed by robots too. No need for an ED-209 - a swarm of flying drones carrying tear gas can be quickly dispatched to any illegal gathering, and any valuable property can be protected by sniper rifle turrets. You don't need machine guns if you have an aim-bot.

        • Re: TL;DR (Score:2, Insightful)

          by Anonymous Coward

          Actually it will be way less messy. Have you seen how quickly and bloodlessly Occupy Wall Street was defeated and destroyed as soon as the One Percenters required it? Revolutions require more than numbers and weapons: they need discipline and organization, and communication. In the Surveillance Age all of these are impossible to obtain without being detected and removed from the equation. We have witnessed the final triumph of the One Percenters over the rest of the populace. Deal with it. They won.

          • Final triumph? The one percenters have been "winning" since life first crawled out of the primordial soup. Or do you propose to stop evolution in its tracks? If you're so concerned with the welfare of your fellow human beings, go take the next bum you see home, feed him and let him spend the night. Or are you too concerned he's gonna touch your shiny Apple gadgets with his grimy hands?
          • by khallow ( 566160 )

            Have you seen how quickly and bloodlessly Occupy Wall Street was defeated and destroyed as soon as the One Percenters required it?

            I think it has more to do with the vapidity of the movement. I think the only reason the protests were as large as they were was that the Democrat Party wanted widespread protests for political advantage. When OWS was no longer politically convenient, the police suddenly remembered that there were laws which needed enforcing.

    • Is there an actual solid dividing line between these two concepts? It seems it's just a continuum of capability, starting from AIs that replaced human calculators, progressing towards AIs that we currently have, and soon AIs that will drive cars and replace other jobs, and eventually AIs that will replace all jobs, effectively obsoleting humanity.

      • eventually AIs that will replace all jobs, effectively obsoleting humanity.

        Hopefully you don't live to work. Automation will free us up to do things we'd rather do.

        • Automation will free us up to do things we'd rather do.

          My my, what positivity. Automation will label those out of a job as just another unnecessary mouth to feed.

        • Hopefully you don't live to work. Automation will free us up to do things we'd rather do.

          Let me know how that works out for you without an income.

    • AI will obsolete your job before it obsoletes humanity.

      I think perhaps the things to worry about are more immediate than what will happen if/when AI becomes worthy of its name. Things like using autonomous devices in warfare, for example. Or what if we come to trust autonomous systems to such a degree that most of us no longer have the skills or insight needed to perform basic, necessary tasks? IMO it is not good to get into a situation where we are fully dependent on a technology that might malfunction, and which we can't fix.

    • Re:TL;DR (Score:5, Interesting)

      by Lennie ( 16154 ) on Wednesday June 17, 2015 @06:00AM (#49927783)

      Yep, I've mentioned this before on slashdot comments.

      The people are gonna rise up way before the machines do.

      I'm actually quoting what Andrew McAfee said in a talk about automation and jobs, and indirectly the book he co-authored, The Second Machine Age.

      Probably one of the most important things to change is education. If certain types of jobs disappear, you'd want people to have had the education to adapt and do the jobs that haven't been done yet or before. That way we'll grow the economy and all benefit from it. This is how we dealt with the 'first machine age', the industrial revolution.

      And we might start to think about something like a 'negative income tax', just in case we need it; maybe we just need it to help us through a transition. It's an old concept, one which Nixon almost got through Congress. It gives people some money if they really need it, and rewards them when they put in more effort.
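      The mechanism behind a negative income tax can be sketched in a few lines. The threshold and payout rate below are purely illustrative numbers, not figures from Friedman's or Nixon's actual proposals:

      ```python
      # Illustrative negative-income-tax sketch (hypothetical numbers).
      THRESHOLD = 30_000  # income below which payments kick in
      RATE = 0.5          # fraction of the shortfall paid out

      def take_home(earned: float) -> float:
          """Earned income plus any negative-income-tax payment."""
          shortfall = max(0.0, THRESHOLD - earned)
          return earned + RATE * shortfall

      # Someone with no income still gets a floor of 15,000...
      print(take_home(0))       # 15000.0
      # ...but every extra dollar earned raises take-home pay,
      # so effort is always rewarded (no 100% benefit cliff).
      print(take_home(10_000))  # 20000.0
      print(take_home(40_000))  # 40000.0
      ```

      The key property is the partial phase-out: because only a fraction of the shortfall is paid, working always leaves you better off than not working.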

    • I believe that we will not be able to invent a truly sentient AI until we can teach it how to skive. If you're truly worried about AIs taking your jobs, can I suggest we teach the AIs about unions? :D Then you can just whinge about all the smart robots that never do any work because they have a better union than you do :)
  • by Anonymous Coward on Wednesday June 17, 2015 @02:57AM (#49927325)

    The problem with that is cultural and ideological, not a problem with AI. Capitalism *requires* scarcity in order for certain business models to work, and this is why AI makes people nervous: it removes scarcity of labor.
    We've already seen this with the internet, where freedom of information made copyright essentially unenforceable; yet we now have governments en masse attempting to put the jack back in the box with draconian, despotic measures and threats of cultural apocalypse. It's a real shame that they lack such imagination.

    Historically, feudalism described our societal structure; given the technological limits on moving people around, it was the best we could manage at the time, despite how horrible it was. As mobility increased, wealth in the mercantile classes grew, and their power came to supplant notions of bloodline/dynasty dominance.
    Capitalism is likewise horrible, but probably the best we can manage given our current technological limitations. I'm hopeful that within my lifetime we will replace it with something better. But we do need to change people's attitudes towards work, ownership and entitlement... If we don't, then capitalism will invariably collapse into despotism.

  • We're in it together (Score:4, Interesting)

    by WSOGMM ( 1460481 ) on Wednesday June 17, 2015 @03:43AM (#49927431)

    Keep in mind that we're in this together. A large economic collapse due to robotics and AI advances will compel the American populace to find ways of supporting itself, be it through complete economic regulation (i.e. communism) or through philanthropic capitalism. After all, what's the point of building robots for profit if that profit can't be realized?

    One thing is for certain though: things will get worse before they get better. Our hands need to be forced.

    • Destroying jobs through progress is economic expansion, not collapse. When a machine puts labor out of work, that machine produces at lower cost, consumers have more left over, they bid up some other scarce resource (like holidays), and new jobs are created, in the tourism industry for example. Net employment is not affected, but living standards have risen: we still get the result of the robot's work, plus more holidays. We all know that 200 years ago 90%+ of jobs were in agriculture. We didn't get unemployment and collapse of society when machinery destroyed 90% of those jobs.
      • by Niggle ( 68950 ) on Wednesday June 17, 2015 @07:19AM (#49927965) Homepage

        We didn't get unemployment and collapse of society when machinery destroyed 90% of those jobs.

        From a long-term view (decades), no we didn't get massive unemployment.
        From a short-term view (years), yes we did. The early phases of the industrial revolution saw very high unemployment. And with no welfare systems back then, quite a few of those people starved to death or turned to crime. The majority were badly mistreated by those who owned the early factories because there were no other jobs around. The agricultural revolution had a similar history.

        So if/when an AI takes over your job, your choices are likely to be:
        a) Starve
        b) Crime
        c) Crappy job
        d) Try and retrain to a new field before that gets taken over by AIs as well.
        e) Hope society gets rebuilt on less capitalistic lines and you can enjoy a life of leisure.

        I'm sure it'll all sort itself out within a generation. Doesn't really help that generation though.

        • The early phases of the industrial revolution saw very high unemployment.

          Yes, and no. The landholders owned the tenants (serfs) and provided them with work, lodgings and, begrudgingly, food. Industrialization saw the transition from lord of the manor to owner of the factory, which made the tenants a liability. So they became homeless and unemployed simultaneously. Being homeless was a crime, as it still is in some US states, and the punishment became work. Not everyone was keen on working for nothing, even with free floggings or travel to exotic locations. This made petty crime m

      • "We didnt get unemployment and collapse of society when machinery destroyed 90% of those jobs."

        The point is... yes we did.

        The early decades, almost a century, of the industrial revolution really made thousands to millions of people miserable for all of their lives, which, almost luckily, were also quite a bit shorter than their parents'. Do you think, say, coal mining or 19th-century London or Paris were such a paradise, except for those lucky one-percenters?

      • I think in the near future we will all live underground. Our homes will last for centuries and will be a lot more energy efficient. They will be self cleaning and have virtually no maintenance. All transportation will be automated from door to door. There will be no commercial district as most products will be delivered from the factory. All those jobs will just disappear. Tourism will also disappear as virtual reality will be far more interesting and easier to accomplish.
        We will live in homes that are extr

    • Keep in mind that we're in this together. One thing is for certain though: things will get worse before they get better.

      We're not in this together.....we're not in anything yet. We're not anywhere close to inventing AI. All of this is just speculation.

      • by lorinc ( 2470890 )

        We're not in this together.....we're not in anything yet. We're not anywhere close to inventing AI. All of this is just speculation.

        We're not anywhere close to having autonomous vehicles? I guess you've missed some news lately.

    • ...or through philanthropic capitalism.

      That's just another word for authoritarian plutocratic rule through euergetism and patronage, which is basically taking the social order back to that of ancient Rome, where the landless and unemployed population was at the mercy of powerful, wealthy, and corrupt magnates because the people were dependent upon these plutocrats for their sustenance.

  • Friendliness (Score:5, Insightful)

    by Meneth ( 872868 ) on Wednesday June 17, 2015 @03:47AM (#49927445)

    The article's viewpoint is dangerous. We must solve the Friendliness problem before AGI is developed, or the resulting superintelligence will most likely be unfriendly.

    The author also assumes an AI will not be interested in the real world, preferring virtual environments. This ignores the need for a physical computing base, which will entice any superintelligence to convert all matter on Earth (and then, the universe) to computronium. If the AI is not perfectly friendly, humans are unlikely to survive that conversion.

    • by Livius ( 318358 )

      Natural intelligence has selfish (and also co-operative) behaviours because animals with nervous systems have evolved these behaviours over hundreds of millions of years of natural selection.

      We have no idea what kind of personality an artificial intelligence would have, but odds are it won't be much like a human one.

      Software is more likely to kill us because it's less intelligent than we thought than because of malice.

    • But how dangerous could an AI really be, if it were just given a simple task like making paperclips?

  • > A lot of well-educated and smart people, including Bill Gates and Stephen Hawking, have stated they are fearful about the dangers that sentient Artificial Intelligence (AI) poses to humanity.

    Look at the dangers sentient *humans* have put onto the world: greed, avarice, corruption, war, climate change, suppression of rights, mass surveillance, abuse of power, media manipulation. Those dangers are here and now. How about fixing that *NOW*, because that danger is *NOW*?
  • Self-driving cars are already here; we've had articles about Google self-driving car accidents. Stop pretending it's a future thing that will need proper AI. Also, if they ever make the equivalent of the human brain, it will take over a year before it can say its first word. Who's going to put in the endless hours of talking to it like it's a baby to help it understand words? It's even more of a problem for the prototypes: you wouldn't even know if it'll work after all that.

  • by Anonymous Coward on Wednesday June 17, 2015 @04:37AM (#49927561)

    ... They won't feed you.

    The utopia that artificial intelligence promises will be theirs alone to reap, not yours. You will receive only ashes, and death.

    This is the future we've earned.

  • ... about.

    It gets so depressing listening to these hyperventilating pearl clutching nitwits worry about killer robots or sapient AIs.

    I don't care who they are... they're not AI experts.

    Look, I'm not an AI expert either and even I knew the worry was moronic. As the guy said "like worrying about over population on mars".

    Current AIs are extremely limited and unbelievably myopic. And whatever skills or nature is in them was programmed into them. Their priorities... their databases. We provide everything.

    The best AIs of my life time will probably be the computer equivalent of Rainman. Brilliant in some task no doubt but unable to do anything with any competency or even understand that anything else is important.

    A big part of the problem is that people anthropomorphize robots/AIs. They invest in them this notion of being demons in bottles or animals made of metal. They're neither of these things.

    We have hundreds of millions of years of genetic programming on this planet emphasizing our survival. What is the AI going to have? Will it even have a sense of self preservation? Why would we program that into an AI in any complex sense?

    What we'd do with a combat robot is program it to evade enemy weapons fire. But teaching something to evade something is not the same thing as teaching it to preserve itself. Little things like fear, paranoia... that deep animal cunning that comes into play when death is on the line. We do weird things. We play dead. We make a final stand with no attempt to defend... just investing everything in one final attack.

    All of this stuff is genetic. Our ancestors... even the furry ones that scurried around occasionally got out of bad situations by doing things like that. The effectiveness is dubious on some predators as anyone with a competent cat will know. Playing dead from what I could see was a terrible idea.

    But the point is that even an AI war machine isn't going to be as adaptable or tricksy as people. First, it doesn't need to be that cunning. And second, even if it would be nice, it wouldn't be worth it. It's too much work, and for what? So the robot occasionally gets scragged? That's why you send in 10 of them at once. The fucking things roll off an assembly line. Finally, it's easier to keep them alive by adjusting their battle tactics: you tell them to stay back a bit, maybe bombard the area a bit... something that makes dealing with ambushes less of an issue.

    Oh yeah, and when the robots actually get clever enough that they might actually be a danger... we'll slap a slave collar on that monster at birth.

    The danger is not AIs... but the rich and powerful with AIs. The AIs are tools. The rich and the powerful are the will and the mind that guides them.

  • It's just misdirection. Yes, you should be mad, but not at robots and meanie rich people.

    It wasn't a giant leap in robots that turned the recession of 2008 into the depression of 2009-?. Any more than it was a giant leap in robots that did it in the 1930s.

    Think it through.

  • Or can it be less-than-sentient and borrow its sentience in the form of the will, motivation and biases of its creators, yet still be some kind of existential risk?

    When I think about the global financial marketplace, I think of a relatively small number of people at the too-big-to-fail institutions making decisions that rely on information that comes from market analysis and modeling systems, and in some cases this information being fed back into automated trading systems. The machines aren't self aware, b

  • People will just hate the people who make it, no matter how intrinsically interesting it is or how much benefit it can provide in other areas of society.
  • by tomhath ( 637240 ) on Wednesday June 17, 2015 @08:22AM (#49928229)

    If predicting that AI will destroy civilization isn't alarmist I would be interested in hearing the other side.

    The world has changed a lot in the past 100 years. It will change a lot in the next 100. Deal with it.

  • A lot of well-educated and smart people, including Bill Gates and Stephen Hawking, have stated they are fearful about the dangers that sentient Artificial Intelligence (AI) poses to humanity.

    They aren't that smart if they think machines could ever be sentient. Machines are deterministic. They do what you tell them to. We might be able to make extremely complex machines that give the general appearance of sentience, but they will still only ever be deterministic.

    Anyone with enough insight and humility knows there's still an extremely large piece of the puzzle missing in our understanding of life. And you need to understand how something works before you can create it.

  • by RandCraw ( 1047302 ) on Wednesday June 17, 2015 @10:56AM (#49929293)

    Nice article. I disagree though that most AI researchers are motivated by the good that automation will do. They're not that naive. I think Oppenheimer had it right: scientists want to work on projects that are "technically sweet". AI is definitely that.

    But I totally agree that the real-world impact of AI will be like evolution -- following a pattern of punctuated equilibria, where disruption arises in chunks as each significant skill area is usurped by automation (like car/truck drivers, then call centers, then retail clerks, then jobs requiring physical skills).

    That said, once the first skill area falls that requires substantial linguistic facility (like a call center), I see most white collar jobs tumbling like dominos soon thereafter. Once machines can converse using speech and perform the simple logical deductions/inferences that humans do, would anyone hire a human for an office job ever again?
