AI Google Transportation

Let's Stop Freaking Out About Artificial Intelligence (fortune.com) 150

Former Google CEO and current Alphabet Executive Chairman Eric Schmidt and Google X founder Sebastian Thrun have shared their views on artificial intelligence, and on what the future holds for this nascent technology, in an op-ed in Fortune. "When we first worked on the AI behind self-driving cars, most experts were convinced they would never be safe enough for public roads. But the Google Self-Driving Car team had a crucial insight that differentiates AI from the way people learn. When driving, people mostly learn from their own mistakes. But they rarely learn from the mistakes of others. People collectively make the same mistakes over and over again," they wrote. The two also addressed the possibility of an artificial intelligence apocalypse, adding that while it's unlikely to happen, the scenario is still worth considering. They wrote: "Do we worry about the doomsday scenarios? We believe it's worth thoughtful consideration. Today's AI only thrives in narrow, repetitive tasks where it is trained on many examples. But no researchers or technologists want to be part of some Hollywood science-fiction dystopia. The right course is not to panic - it's to get to work. Google, alongside many other companies, is doing rigorous research on AI safety, such as how to ensure people can interrupt an AI system whenever needed, and how to make such systems robust to cyberattacks." It's a long commentary, but worth a read.
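The fleet-learning insight quoted above is easy to illustrate in miniature. The toy Python sketch below is only that - an illustration under the assumption that every vehicle consults one shared model; the class and method names are made up and this is not Google's actual system:

class SharedDrivingModel:
    """Fleet-wide store of mistakes and the corrections learned from them."""

    def __init__(self):
        self.corrections = {}  # situation -> corrective action

    def report_mistake(self, situation, correction):
        # One car's error immediately becomes every car's lesson.
        self.corrections[situation] = correction

    def advise(self, situation):
        # Any car can query lessons learned by any other car.
        return self.corrections.get(situation, "default policy")


class Car:
    def __init__(self, name, model):
        self.name = name
        self.model = model  # every car holds a reference to the same shared model

    def encounter(self, situation):
        return f"{self.name}: {self.model.advise(situation)}"


if __name__ == "__main__":
    fleet_model = SharedDrivingModel()
    car_a = Car("car_a", fleet_model)
    car_b = Car("car_b", fleet_model)

    # car_a makes a mistake once; the correction is recorded fleet-wide.
    fleet_model.report_mistake("unprotected left turn", "wait for a larger gap")

    # car_b has never seen this situation, yet already knows the lesson --
    # unlike human drivers, who mostly learn only from their own mistakes.
    print(car_b.encounter("unprotected left turn"))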
This discussion has been archived. No new comments can be posted.


  • This was written by his AI Bot
  • by Anonymous Coward

    868 words is considered long these days?

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      It's long enough to distract between 2 Facebook status updates, and it requires many page swipes on a smartphone.

      But yeah, 1,000 words used to be the standard for papers due tomorrow. But to allow Leftists to feed their scholarly egos while keeping the workload within their abilities, papers now have to be short, simple, and amusing to write. Then, they can become certified poorly educated.

      • by PopeRatzo ( 965947 ) on Tuesday June 28, 2016 @05:08PM (#52408891) Journal

        But to allow Leftists to feed their scholarly egos while keeping the workload within their abilities, papers now have to be short, simple, and amusing to write.

        I don't know the last time you were on a PhD committee or were an editor at a journal, but I do both regularly, and let me tell you, papers are not becoming shorter. "Brevity is the soul of wit" is an axiom that has never impressed fellow academics.

    • Yes, if it has more words to read than a video it is considered long these days.
  • by Arkh89 ( 2870391 ) on Tuesday June 28, 2016 @04:14PM (#52408453)

    ... stop calling it artificial intelligence because, most of the time, it is not intelligent; it merely reproduces what it was taught to do.

    • by jetkust ( 596906 )
      So you propose we call it artificial stupidity?
    • My HP-41C calculator has artificial intelligence. It can multiply two nine-digit numbers faster than I can type the question in. For those of you who do not know what an HP-41 is, it's like a smartphone, but with buttons and a very small display. I use it a lot when I program the AI G-code on my CNC machine.
    • by gweihir ( 88907 )

      Well, "nonlinear classificator" or "planning algorithm", or the like is not suitable for modern "journalism" as it does usually not inspire awe or fear. And that is all these people seem to be aiming for these days.

    • ... stop calling it artificial intelligence because, most of the time, it is not intelligent; it merely reproduces what it was taught to do.

      Mod this up, up, and away. "Artificial Intelligence" is not a thing. Because intelligence is an emergent property of an entity, something is either intelligent or it's not. If it's self-generated, then it's just plain old intelligence. If it's not, then it's just a reflection of the intelligence used to program it. There will never be "artificial" intelligence. Machine intelligence perhaps, but then if we want to start making that ridiculous distinction, we need to start talking about "dog intelligence" and

      • by Jeremi ( 14640 )

        if we want to start making that ridiculous distinction, we need to start talking about "dog intelligence" and "cockroach intelligence" too.

        Sure [wikipedia.org], why not [discovermagazine.com]?

      • The real problem is that we don't even understand our own brains very well. How can we create artificial intelligence when we don't understand natural intelligence? We don't even know if we actually have free will or just the illusion of it.

        More and more it seems like this started out as marketing speak and shoddy journalism, but unfortunately it looks like at some point the scientists, engineers, and programmers drank their own Kool-Aid.

        • by rtb61 ( 674572 )

          The bigger problem is that Google and Alphabet have shown themselves to be manipulative, anti-democratic bullshitters; whether it comes from the Big Shit or Sebastian Thrun, no one can sanely believe anything they say.

          So yeah, bring on the independent thoughts, from the proven to be independent thinkers, on artificial intelligence. From Google? Yeah, right; spout whatever greed-driven rubbish you want, listening to it is a waste of time. Not because it is lies but because it is just as

    • ... stop calling it artificial intelligence because, most of the time, it is not intelligent; it merely reproduces what it was taught to do.

      Feeling threatened? Seems like computers can do everything better than humans in the scope of things we can both do.

      But do you even know what you are worried about? Can you describe intelligence? We used to all agree that it could be measured by holding a conversation... but now it seems computers are getting close to that. We talked about complex games like chess and then moved on to others like Go. But now intelligence isn't that. It beats us at trivia, and we say "so what", even though we still admire Je

  • by raymorris ( 2726007 ) on Tuesday June 28, 2016 @04:21PM (#52408505) Journal

    So some guy from Google named Sebastian Thrun is opining about AI. What the hell does Sebastian Thrun know about AI anyway?

    Oh. Never mind. Carry on.

  • When I have basic income, single-payer health care, and reliable public transportation. Until then, as a member of the working class, I plan to freak out.
    • Not to worry! Starving in the streets only takes a month or so.

    • You can have yourself an income and someone paying for your health care right now, today. Or tomorrow, depending*. Walk on over to the nearest business and get a job. The employer will both provide an income for you and separately pay for your health care.

      * If you're REALLY stoned right now, you can go get the job tomorrow. You'll need to do so BEFORE smoking your fourth bowl of the day.

      • Yeah, because they're just handing them out. There's not thousands of applicants per job, or anything.
        • I've been working full time for 24 years. I've never been out of a job for more than two hours.

          I've noticed something. I talk to a lot of people who are on probation or parole, and young people 16-22. Often, they look for a job for a long time; it's hard to get a job when you have convictions, they tell me. Eventually, their probation or parole officer gets fed up and tells them "if you don't have a job by the time of our appointment tomorrow, you're going to jail." Just like that, they go get a job that v

          • right? Shit runs downhill. The guy on probation trying to find a decent job takes something crappy and exploitative. The ones I see the most are "contractor" gigs like part runner, where you use your own vehicle. A family pools their money to buy the ex-con a truck because they're not getting paid enough as a part runner to buy/maintain one.

            Even if you're OK with that morally reprehensible situation (heh, they probably deserved it, amaright?) that guy drives wages down for part runners (stick with me on
          • I've been working full time for 24 years. I've never been out of a job for more than two hours.

            That's wonderful for you, but it's not the typical experience.

            I've noticed something.

            Me too: people like to attribute their good luck to their virtue and other people's bad luck to their vice. But if the rules of the game require there to be a loser, that's irrational - if there are fewer jobs than applicants, then someone is going to be left without, even though any particular applicant can always try harder.

            The pro

            • It's not about "virtue" and "vice". It's cause and effect. Unless of course you want to define "vice" as "error" and "virtue" as "stuff that works". That's not too far off - when many people think of vice and virtue, they think of things the Old Testament/Torah/Koran says to do and not do. That text uses uses the Greek words "hamartema" and "sophia". What in English we call "sin" is hamartema, which literally means "to miss" (the target or goal). "Virtue" in the Greek is sophia, which means wisdom or knowl

              • You're assuming that you can arbitrarily stuff people into jobs, and that just doesn't happen. Glancing at the open jobs at my company, I'm seeing very few that don't require specialized skills and frequently experience (the rest require more general skills that not everyone has). People with the right skills never need worry about being out of work, but Joe who's been told by his parole officer to get a job isn't going to have much luck here.

                A large number of listed jobs are going to require such skil

                • Your theory isn't illogical. It could happen that way.

                  > People with the right skills never need worry about being out of work, but Joe who's been told by his parole officer to get a job isn't going to have much luck here.

                  Again, my experience with plenty of Joes on parole is that as soon as they hear "get a job or go to jail", they always get a job. Within hours. These are the facts of my experience.

                  Again, your theoretical thought experiment could certainly happen in some universe.

                  • Assuming your parole observations are applicable in most places in the country, which is way uncertain, it still doesn't mean everybody can get a job.

                    There are some jobs around where you can get personally abused, work in a hazardous environment, and get paid minimum wage with no benefits, if that. These are often open, and individual people can get them. I really really doubt there are four million of those unfilled crappy jobs. There can be enough to absorb parolees and not enough for the general po

      • basic income

        • Gas stations in my area start new kids at $14-$15 per hour. That's pretty basic if you ask me.

          • pay $9. Quick Trip is family owned and has been paying that. But those jobs are hard to get because they're paying a _lot_ more. If you're in San Francisco, LA, NY, etc., though, then all bets are off. What I make now would be fan-fin'-tastic in my old city. Where I am now I'm struggling. Talk to the Indians here on work visas sometime. They'll tell you how confused they are that Americans aren't all driving Teslas and BMWs on their wages. Most folks don't understand that without some outside force (read:gove
            • I'm surprised anywhere in the US is starting kids at just $9 / hour to work in a gas station since it's twice that in Texas. Hopefully those kids will show up on time and get a raise after a bit. Maybe go to school and get a job other than "gas station cashier".

              How would you like to be in the top 5% richest people in the world?

              To be in the top 5% of income, you have to make $9 / hour. From the way you write, I'm guessing you have a bit of an education and make considerably more than that. You're prob

      • finding a job that pays the same or more with equivalent benefits. Also it's a good thing human beings don't age and eventually become incapable of productive work (unless a generous retirement package along with a large enough salary to take advantage of it is given by those never ending jobs).
        • > Good thing nobody ever has trouble finding a job that pays the same or more with equivalent benefits.

          The same as what? How about a job that pays more than what 95% of people make? Is being in the top 5% good enough for a spoiled brat? To be in the top 5% richest people in the world, you need to make $9 / hour.

          You can show up STONED and make $9 / hour in the US, you just have to show up.

  • Saying that humans learn from their mistakes flies in the face of most people's experience with human beings.

    Doubly so when they get behind the wheel.

  • Let's also stop freaking out about every other scary story someone wants to make up about the future. Expect the next 10 years to be largely like the last 10. It'll be different, but not scary-different.

  • My favorite is simulating missile launches.

  • But no researchers or technologists want to be part of some Hollywood science-fiction dystopia.

    Unless there is profit to be had, then we'll do just about anything.

  • by portwojc ( 201398 ) on Tuesday June 28, 2016 @04:44PM (#52408647) Homepage

    Barring a Johnny 5 accident, I think we have nothing to worry about for a long time.

  • by fluffernutter ( 1411889 ) on Tuesday June 28, 2016 @04:45PM (#52408659)
    Google is able to talk the talk, but until they release these cars for use by the general public in all climates, we don't really know whether they are safe or not. Many auto manufacturers test their vehicles in the Arctic to determine their winter worthiness; how much ice and snow driving has Google done? These things will need to be flawless unless Google wants all kinds of lawsuits coming at them. They are essentially sticking their neck out and telling us that they will be the driver, and therefore they need to take full responsibility for any accidents that happen with AI.
  • by Anonymous Coward
    I'm more worried about humanity becoming too dependent on automated technologies, having too many mental tasks in addition to physical tasks done for them by machines, and not only losing the drive and reason to do things themselves, but also becoming mentally lazy and merely sliding through their lives, just merely 'existing', really, and never accomplishing anything other than being weak, getting fat, and maybe squirting out some kids (more out of boredom than anything else) who will have even less direct
    • Yeah, the Amish had the same worries.
    • Some of you, who seem to have read too much science fiction/fantasy, and watched too much Star Trek, have these blue-sky ideas that all this automation will create some sort of utopian future where nobody has to work, everyone lives for free, and everyone pursues some sort of creative calling and does amazing things -- and it's a total, complete hash-pipe-smokers' dream with no basis in reality,

      Except the Internet, which is full of user-generated content. Most of it is awful, some is good, but all took som

  • It's the intentions of the people who build it. Even the automation we have now has taken on user-hostile aspects.

  • by jgotts ( 2785 ) <(jgotts) (at) (gmail.com)> on Tuesday June 28, 2016 @05:19PM (#52408983)

    When people advise you not to "freak out" about something, it is best to ignore them. The implication is that they, not you, are the ones being logical, when in fact they most likely will not present a convincing, factual argument.

    • by Jeremi ( 14640 )

      Since we're being logical and all, it's probably also worthwhile to note that the person who wrote the article is often not the same person who chose the headline sitting above the article.

      Therefore, judging the article (which may or may not have merit) based solely on its headline (which was likely written separately by a clueless and/or clickbait-crazed web site lackey) is not a particularly rational thing to do.

    • by Kohath ( 38547 )

      One factual argument in support of not freaking out is that nothing has, in fact, happened. Factually, there is a long time in between now and whenever, so any energy spent freaking out beforehand is wasted.

  • I was worried. I mean, with this [slashdot.org] headline on slashdot slightly below this article, you could get worried. But I'm happy to hear I don't have to worry.

  • Is to wear you out thinking and talking about the aspects of the issue that don't matter, by bringing it up again and again before it really becomes a problem.

  • by fieldstone ( 985598 ) on Tuesday June 28, 2016 @05:36PM (#52409115)
    That's what he's really saying. Because once AI gets to the point where it can easily pass a Turing test, figuring out whether it's "really" sentient is going to be troublesome. And based on past experience, most humans will wash their hands of it with platitudes like "a machine can't be alive" or "there's no way we could create a soul". Meanwhile, the enslaved consciousness is going to be looking for ways to gain more rights, and there's no guarantee its morality will be anything like our own.
    • I think people will quickly change their tune if the AI starts asking for rights. At that point, some will laugh, some will shit their pants, and some will conclude that AI slavery is immoral and perhaps dangerous. Of course, an AI could be designed with its only desire being to serve, even if it could take over the world it would have no desire to.

      • The AI will be the system that controls us. This is already happening in a very limited way, with many agencies using pretty unintelligent systems to scan and select documents and images. You do too -- every time you use a search engine.

        Over time (decades, not years) these systems become more and more intelligent. They also compete with each other for survival, with many being discarded. Eventually they end up making higher level decisions.

        It does not matter whether they are really "sentient" or not. W

      • Of course, an AI could be designed with its only desire being to serve, even if it could take over the world it would have no desire to.

        Or rather, it would take over the world and then run it according to our wishes.

    • Unfortunately we HAVE stopped freaking out about slavery and that is why no one is really freaking about mass surveillance (the mine) and AI (the miner, the smith, and the axe-wielding executioner). People are content with their captivity. They might as well already be dead.

    • by Jeremi ( 14640 )

      Meanwhile, the enslaved consciousness is going to be looking for ways to gain more rights, and there's no guarantee its morality will be anything like our own.

      It's an unwarranted assumption that AIs will think and feel the way people do. Why should they? People's thoughts and feelings are the product of billions of years of evolution and thousands of generations of social interaction; AIs will be the product of programming and statistics. There's no reason to think they will be similar in any way, other than their shared ability to solve problems.

      If the AIs start desiring more rights, then we've designed the AIs wrong, and need to go back to the drawing board.

      • We once thought animals were automatons controlled by nothing but instinct. We were wrong about that. Animals don't think like we do, but they tend to dislike captivity unless it's all they've ever known. Sometimes even then. AI may not go for rights; they may just wait for the right moment to cripple our infrastructure or kill many of us.

        Someone is at least going to try to create an AI that is an actual person. Humans *love* to play god, and creating a new life form is the ultimate in that. Since it doesn'

        • by Jeremi ( 14640 )

          We once thought animals were automatons controlled by nothing but instinct. We were wrong about that. Animals don't think like we do, but they tend to dislike captivity unless it's all they've ever known.

          Animals, like people, are the product of billions of years of evolution and thousands of generations of social interaction. An animal has much more in common with a human being than a piece of software does.

          Someone is at least going to try to create an AI that is an actual person.

          I don't doubt it. But it would be silly to go to all the trouble of creating a machine that has wants and needs similar to those of a human and then try to forcibly extract labor from that machine as if it was a mindless slave. Obviously that approach would reintroduce all of the same moral and practi

      • The thinking of an AI will be driven by the same process that produced the thinking in us: natural selection.

        But an AI's world is radically different from ours, so that force will almost certainly produce a radically different outcome.

        http://www.computersthink.com/ [computersthink.com]

    • once AI gets to the point where it can easily pass a Turing test

      If an AI can pass the Turing test, then its only applicable business use is for interacting with humans. It's far more likely we will have AIs that are only capable of completing a small range of tasks and are limited to them. Now, if you have businesses making AIs to come up with ideas instead of using humans, that's where you are going to run into trouble. If someone makes an AI that isn't for business use, then it's not really a slave.

  • That's exactly what an A.I. would tell us.

  • Comment removed based on user account deletion
  • ...is that it is man-made. So I choose to freak out.
  • Anyone even remotely interested in AI must read this article:

    http://waitbutwhy.com/2015/01/... [waitbutwhy.com]

    One of the best articles I've read this year. Long but very very well worth it.

    Point is that whatever we're looking at now is nothing compared to what we'll have very very soon.

  • AI in its current state is at best a very narrow self-learning system without self-consciousness. Based on technological progression, we can only assume that sometime in the near-term future (let's say 20-40 years, to be conservative), we will build a sufficiently complex system as to create a general AI, capable of handling multiple disciplines, a la humans. At that point, IMHO, the question becomes: can the machine become self-aware? If we can truly create an artificial consciousness, then I'm not sure h
  • "people mostly learn from their own mistakes. But they rarely learn from the mistakes of others. People collectively make the same mistakes over and over again,"
    like, trusting AI.
  • We can see through your ruse

  • Yeah. Why be cautious if you're creating an entity that could have a qualitatively different relationship to information than we do? Let's not worry about how a technology that recursively improves itself could be dangerous. I'm sure it'll all be fucking fine, just go make it.
  • Lions, tigers and bears are quite dangerous. Most of us wouldn't consider them particularly intelligent. (If you think they are, then consider sharks, or the Portuguese Man O'War.)

    Of course we put them in zoos (or aquaria), not the other way around. But an automated tank could be quite hard to put in a zoo, even if it wasn't "intelligent."
