Google AI Privacy

Google Executive Addresses Horrifying Reaction To Uncanny AI Tech (bloomberg.com) 205

The most talked-about product from Google's developer conference earlier this week -- Duplex -- has drawn concerns from many. Duplex is an experimental service that lets Google's voice-based digital assistant make phone calls and write emails. In a demonstration on stage, the Google Assistant spoke with a hair salon receptionist, mimicking the "ums" and "hmms" of human speech. In another demo, it chatted with a restaurant employee to book a table. But outside Google's circles people are worried, and Google appears to be aware of the concerns. From a report: "Horrifying," Zeynep Tufekci, a professor and frequent tech company critic, wrote on Twitter about Duplex. "Silicon Valley is ethically lost, rudderless and has not learned a thing." As in previous years, the company unveiled a feature before it was ready. Google is still debating how to unleash it, and how human to make the technology, several employees said during the conference. That debate touches on a far bigger dilemma for Google: as the company races to build uncanny, human-like intelligence, it is wary of any missteps that cause people to lose trust in its services.

Scott Huffman, an executive on Google's Assistant team, said the response to Duplex was mixed. Some people were blown away by the technical demos, while others were concerned about the implications. Huffman said he understands the concerns, but he doesn't endorse one proposed solution to the creepy factor: giving the assistant an obviously robotic voice when it calls. "People will probably hang up," he said.

[...] Another Google employee working on the assistant seemed to disagree. "We don't want to pretend to be a human," designer Ryan Germick said when discussing the digital assistant at a developer session earlier on Wednesday. Germick did agree, however, that Google's aim was to make the assistant human enough to keep users engaged. The unspoken goal: Keep users asking questions and sharing information with the company -- which can use that to collect more data to improve its answers and services.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Ian (Score:5, Insightful)

    by Sejus ( 2109992 ) on Thursday May 10, 2018 @12:42PM (#56588930)
    Your programmers were so preoccupied with whether or not they could, they didn’t stop to think if they should.
    • Re: (Score:3, Interesting)

      by cayenne8 ( 626475 )
      I'm not much one for onerous regulation, but I do believe in openness even if somewhat mandated/forced....

      I mean, packaged foods have to tell you about their ingredients, fish have to be labeled with country of origin.... I think that a computer call should be forced to ANNOUNCE that it is a non-human call at the beginning of the "conversation", and that your answers will be recorded and analyzed.

  • Phone CAPTCHAs (Score:4, Interesting)

    by DickBreath ( 207180 ) on Thursday May 10, 2018 @12:45PM (#56588960) Homepage
    Or a reverse CAPTCHA.

    I'm looking up to see if I have anything free on that date. While I'm looking, can you please confirm the prime factorization of 28573782909827352?
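
    A minimal sketch of that "reverse CAPTCHA" idea in Python (purely illustrative; the prime list, thresholds, and function names are made up, and this is not part of any real call-screening product): pose a question that software answers instantly but a human caller would naturally refuse, and treat a fast, correct answer as a sign of automation.

      # Sketch of a "reverse CAPTCHA" phone challenge (illustrative only).
      import random
      import time

      PRIMES = [100003, 100019, 100043, 100057, 100069, 100103]

      def make_challenge():
          # A product of two ~6-digit primes: trivial for software, absurd to ask of a human.
          p, q = random.sample(PRIMES, 2)
          return p * q, {p, q}

      def looks_automated(answer, factors, seconds_to_answer):
          try:
              given = {int(tok) for tok in answer.replace(",", " ").split()}
          except ValueError:
              return False  # a non-numeric reply ("excuse me?") is almost certainly human
          # A correct factorization within a few seconds strongly suggests a machine.
          return given == factors and seconds_to_answer < 5.0

      if __name__ == "__main__":
          n, factors = make_challenge()
          print(f"While I'm checking the calendar, what are the prime factors of {n}?")
          start = time.time()
          reply = input("> ")
          print("Automated caller?", looks_automated(reply, factors, time.time() - start))
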
  • There's a big difference between:

    Google Executive Addresses Horrifying Reaction...

    and

    Google Executive Addresses "Horrifying" Reaction...

    • There's a big difference between:

      Google Executive Addresses Horrifying Reaction...

      and

      Google Executive Addresses "Horrifying" Reaction...

      Still not quite there. The proper word is "horrified."

      Google Executive Addresses Horrified Reaction To Uncanny AI Tech

      • by tomhath ( 637240 )
        The article's headline quotes a professor who said the technology is horrifying. But yes, he was horrified.
        • by jetkust ( 596906 )
          But what was horrifying again? The technology or the reaction?
          • by tomhath ( 637240 )

            "Horrifying," Zeynep Tufekci, a professor and frequent tech company critic, wrote on Twitter about Duplex. "Silicon Valley is ethically lost, rudderless and has not learned a thing."

            Both.

        • The article's headline quotes a professor who said the technology is horrifying. But yes, he was horrified.

          Yeah, I know. What I meant is putting "horrifying" in quotes is still misleading. The reaction was not horrifying. The person reacting was horrified.

    • I read that the same way you did, then thought: what if what was horrifying to the writer of the summary was the fact that people did not really like the new assistant? They could very well find the reaction horrifying... :-)

      Of course that is not what the link was about so you are probably right, but I thought it could be an interesting twist.

      • by tomhath ( 637240 )
        FTFA:

        Scott Huffman, an executive on Google's Assistant team, said the response to Duplex was mixed. Some people were blown away by the technical demos,

        I suppose it would be pretty horrifying to see people blown to smithereens by the demo.

  • I don't know what they think they saw, but just because you say "um" a couple times doesn't mean you're thinking like a human. This is basically a case of good speech synthesis and voice recognition.
    • by ( 4475953 )

      Not necessarily, it depends on how they implement semantic processing and discourse modelling. If the "um" is interjected in a natural way based on some abstract representation of the conversation and the goals of the assistant, then that could be pretty interesting technology.

      My guess is that the system will fail in very weird ways in more complex conversations, because it's mostly based on machine learning of certain dialogue types. But perhaps I'm wrong and it does some real processing.
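
      A toy sketch of the distinction being drawn here (entirely hypothetical; not based on how Duplex is actually implemented, and all names are made up): the fillers could be canned noise sprinkled in at random, or they could be driven by the dialogue state, emitted only when the agent is genuinely waiting on a lookup or is unsure of its next utterance.

        # Hypothetical illustration only; not how Google Duplex is implemented.
        import random
        from dataclasses import dataclass

        @dataclass
        class DialogueState:
            pending_lookup: bool              # e.g. still checking a calendar
            next_utterance_confidence: float  # 0.0-1.0 from the dialogue model

        def random_filler():
            # "Clever fake" version: sprinkle fillers regardless of state.
            return random.choice(["um", "hmm", ""])

        def state_driven_filler(state):
            # "Interesting" version: fillers reflect what the agent is actually doing.
            if state.pending_lookup:
                return "hmm, one moment"   # buying time for a real lookup
            if state.next_utterance_confidence < 0.6:
                return "um"                # hedging a low-confidence reply
            return ""                      # confident and ready: no filler

        if __name__ == "__main__":
            print(state_driven_filler(DialogueState(True, 0.9)))    # "hmm, one moment"
            print(state_driven_filler(DialogueState(False, 0.95)))  # ""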

  • by JaredOfEuropa ( 526365 ) on Thursday May 10, 2018 @12:54PM (#56589038) Journal

    Silicon Valley is ethically lost, rudderless and has not learned a thing

    Wow, exaggerate much? All that may be true, mind. But using such hyperbole when voicing concerns does nothing for the guy’s credibility. He comes across as someone who has already made up his mind about SV companies a long time ago, and sees every new issue only as something that confirms his fears, as something that’s part of a bigger plot to rape the planet and enslave humanity.

  • Instant hang-up (Score:5, Insightful)

    by Brett Buck ( 811747 ) on Thursday May 10, 2018 @12:55PM (#56589050)

    Their only option is to make it mimic and impersonate a human voice. If it sounds very good but there is an announcement that it is robotic, the realistic voice is pointless: people will hang up immediately just like on any other robo-call. If it sounds obviously robotic, instant hang-up. The only way it works is if they can fake it long enough to get some information, and don't let anyone know about it.

        Essentially, the only value to Google is if they trick people into using it.

    • They're not faking the voice I think, instead it's a set of pre-recorded voices. If it was AI, then I'd want to see it change accent according to the location and service, maybe even switch languages. "Jimmy-Bob, there's a yankee on the phone!"

      • They're not faking the voice I think, instead it's a set of pre-recorded voices

        Neither, it's a neural net trained on real voices.

    • by burhop ( 2883223 )

      The only way it works is if they can fake it long enough to get some information, and don't let anyone know about it.

          Essentially, the only value to Google is if they trick people into using it.

      I don't know. It might work both ways. I just need to convince the robot callers I'm human.

      "Hello. This is Lenny."

  • This is impressive speech synthesis, though from the short demo it's hard to judge whether it's new and better than existing ones I've heard. The harder part is the domain-specific knowledge for understanding, it will be interesting to see how they deal with that.

    Personally, I don't find it scary. The voice sounds dumb, but who cares. I'll probably make fun of these kinds of assistants once they become mainstream. I'm worried that similar technology will be used for robocalls by someone else in the future.

  • If it doesn't answer a direct question or otherwise evades it, assume it's a robot, and hang up. It has already demonstrated no interest in actually attempting to communicate with you, so there's no point in giving it any more time than if it had the obviously robotic voice that would likely make most people hang up right away.
    • by Junta ( 36770 ) on Thursday May 10, 2018 @01:15PM (#56589216)

      In this context, it's calling a business as a potential customer. Hanging up on what from all appearances is a potential customer is a good way not to have a job anymore. In fact, even if it is not a human but is calling to arrange the business of a real human customer, it's probably still a bad move to hang up.

      • by mark-t ( 151149 )

        If that's the case, then why are they so worried that people would just hang up if the voice was more obviously robotic sounding?

        If they are reasonably expecting people to hang up if they know that the call is from a machine, then unless they have programmed the machine to deliberately lie (in which case you don't want to do business with them anyways, since they will deliberately engage in deceptive practices in order to acquire business), then why is it somehow different if you hang up on what seems to

        • by Junta ( 36770 )

          Because presumably places of business get obviously recorded spam calls like everyone else, and for the moment no one is going to assume what sounds like a recording is going to be interactive and/or on behalf of an individual customer, and will try to keep the line open and save time by hanging up before they even hear enough words to recognize that it wants to make an appointment/reservation.

          All in all, impressive as it sounded, this is one area I can't understand why I'd need Google to take care of it for me.

          • by mark-t ( 151149 )

            Because presumably places of business get obviously recorded spam calls like everyone else...

            Exactly.... so there's no difference. Even if it *was* a person on the other end of the phone, they are spamming you and not actually interested in any honest communication.

            • by Junta ( 36770 )

              In this case, the business does not care about 'communication', they care that the call is a legitimate request to buy goods or services. Today a robocall is 0% chance, but this would represent a probable transaction.

              Sure, it's creepy. Sure, I'd rather it be blatantly obvious what it is. However, they do have a legitimate concern that an obviously artificial behavior will get terminated by someone mistaking it for spam (no, even if it is by machine, a request for a reservation is not spam).

              Now it can mor

    • Really give the AI a workout. As in "I have an opening 15 minutes after kickoff, is that frosty for you?"

  • Comment removed based on user account deletion
  • by trb ( 8509 )

    Most robocalls are garbage, "would you like new aluminum siding?" (I live in an apartment.)
    Some robocalls are useful, "this is the town, we have declared a snow emergency, you have to take your car off the street."
    Or, "this is Doctor Smith's office, you have an appointment Wednesday morning at 9."
    It's obvious that these are all robocalls, and some of them are welcome or at least tolerable.
    Having informative and valid caller-ID information will be helpful.
    If the calls are for information that people want, the

    • by czmax ( 939486 )

      This is an important point. Their use case is to call and make reservations or purchase services: basically do the stuff somebody might do online but with an analog interface.

      I'll bet the companies in question won't really mind selling to somebody who might not contact them via other means. Basically this lets them delay building a web page or buying into some reservation service they don't care about/don't want, and yet they can still interact with the digital world.

      As long as the technology is *good enough*

  • get called a 'drama queen' and 'hyperbolic' and other ad-hominem attacks by the 'real experts' in social psychology. You know, the guys who have no technical background and aren't allowed to perform these experiments because they were deemed unethical... they keep saying the tech doesn't work.

    Meanwhile the internet war is getting really insane. You guys have the tools to check (mostly) but here's some screenshots I uploaded to imgur: https://imgur.com/gallery/I3vE... [imgur.com]

  • Bring in the telemarketers

  • It is a clever fake, that is all. Basically, this is Eliza with a much larger database. The databases allowing this type of "conversation" have been built during the last 30 years in slow, tedious work. Still, the potential is endless, as somewhat interactive SPAM can now reach everybody that has a phone. It seems we will eventually have to go to a whitelist system for phones or to a micropayment scheme. (Deposit me a dollar and I will accept your call, then I will decide whether to give it back. What, only

    • by Cyberax ( 705495 )
      Humans are also Elizas with a much larger database.
      • by gweihir ( 88907 )

        And if you believe that, then you are lacking in that unique quality many (but not all) humans have called "general intelligence".

    • by q_e_t ( 5104099 )
      Eliza was handcrafted, but presumably this is not, so it's qualitatively different.
      • by gweihir ( 88907 )

        By a tiny amount, if even that. It is the same technology, no understanding, no insight, no modelling capabilities, just reading from a giant cue-sheet.

  • > The unspoken goal: Keep users asking questions and sharing information with the company -- which can use that to collect more data to improve its answers and services.

    The unspoken goal: Keep users and those who don't know they're interacting with Google asking questions and sharing information with the company -- which can be compiled and sold to governments, private companies, and other persons regardless of the desire to remain private.

    Say what you want about Siri, but Apple doesn't sell tha
    • The unspoken goal: Keep users and those who don't know they're interacting with Google asking questions and sharing information with the company -- which can be compiled and sold to governments, private companies, and other persons regardless of the desire to remain private.

      If you are someone who gives out personal information to a random telephone caller just because they sound like they are a real human, you have worse problems than Google creating a system that sounds like a real human.

      It should not matter if the "voice" on the other end of the phone says "please speak or enter your credit card number now, ending with the pound key" in a robotic voice or "hey, I need to process your order, ok. What are your digits?" in a realistic human voice.

      • > Thank you for calling Bill's Hair Salon, this is Mike.

        That's all that's needed in a lot of cases to identify people in a small business environment. Between that and the various other methods of data gathering and identification, Google could potentially get the person, the employer, and their voice. And this person had no real choice in the matter. You've got a very narrow vision to not see this.

        > It should not matter if the "voice" on the other end of the phone says "please speak or enter
        • by Obfuscant ( 592200 ) on Thursday May 10, 2018 @04:58PM (#56590932)

          > Thank you for calling Bill's Hair Salon, this is Mike. That's all that's needed in a lot of cases to identify people in a small business environment.

          Oh my. Someone calls a company and finds out Mike works there. I notice that the caller didn't need to say anything to get Mike to divulge that super-secret private information, so whether Google Assistant sounds human or not is irrelevant.

          And, as I said, if this "Mike" is stupid enough to hand over personal information to a random caller, whether or not the voice he hears sounds human or not, then the problem is much deeper than Google Assistant having a human-sounding voice. If this "Mike" is not authorized to handle ordering and payments and I own the shop, if "Mike" hands out my credit card info in response to any request from a cold-caller for ANY reason he's fired, period, get your ass out of my shop. If "Mike" is the one who makes the payments and he hands out the card info just because someone asks for it, then once again, he's fired. Notice the common element here. Hand out private information to unknown callers, you're fired.

          > This has nothing to do with the type of voice - you missed the point entirely.

          Of course it has to do with the type of voice. That's the HORRIFYING part of this whole process. Google's AI can insert "ums" and "uhhs" and stuff and make itself sound like it is a real human. THAT'S the horrifying "uncanny AI tech" being talked about.

          > It's that Google has gone one step further and is now contacting potentially non-Google users without their consent.

          Google isn't contacting anyone. It's PEOPLE who use GOOGLE ASSISTANT to make the contacts. Google isn't cold calling anyone, and even if they were, then the point I made about stupid people handing out personal information to random callers still applies.

          But it isn't GOOGLE that's doing this. It's Joe Smith telling his Google Assistant to make a call ON HIS BEHALF to someone that he does not need consent to call in the first place. I don't need "consent" from a barbershop to call them up to make an appointment. It's stupid to think you do.

          You DO know what Google Assistant is, don't you? I guess not. It's not a Google program to randomly call people and collect data. It's that stupid, annoying piece of crap that keeps popping up when you long-press the middle (or only) button on your Android device in more recent versions of Android, offering to help you do stuff that most people can do better themselves. Mine is turned off, so instead of Google Assistant popping up directly, I get a notification that says "Your Google Assistant is ready to help you get things done. TURN ON". THAT'S what the "horrifying uncanny tech" is. This Google Assistant thing is now going to be able to make phone calls that imitate a real person's mannerisms (not a specific real person, by the way). It will now be able to send emails that look like they were written by a human. That's HORRIFYING.

          Yawn. If someone asks their digital assistant to do something, then it's the person who asks for it to be done that is responsible, not GOOGLE, and it is no more horrifying to have to deal with that than with a real person.

          Telemarketers already have this technology. I get a lot of crap calls that start out with a perfect human voice asking if they are speaking to me, by name. Unless you typically answer "rutabaga" instead of "yes" or "no", you don't know it's automated right away. Or maybe answer "who is calling?". Once they get the initial answer they continue in a perfect human voice. It's not new. It's not even particularly horrifying. It's annoying, but it's really no worse than a recognizable robotic voice. They don't get anything more than they would otherwise.

        • Thank you for calling Bill's Hair Salon, this is Mike.

          That's all that's needed in a lot of cases to identify people in a small business environment

          Why does Mike have an expectation of privacy when answering the business phone line and taking appointments?

          And how does the fact that Duplex is calling materially differ from me calling with a tape recorder held up to the phone?

  • Sincerely, I would love to use this. It'd be perfect for calling my representatives when I'm unable to and constantly berating them for not having my best interests. Might be taken a little more seriously than emails too.
    • I would think that if anything, this technology would make phone calls taken just as (non-)seriously as emails, because of the lower effort required to make them. If more people are robo-calling their representatives because it's easy to do so, then calls to representatives begin to carry less weight each.

      The only real solution is to have a small enough population represented by each representative that they can realistically afford to care about each and every constituent.

      • Write a letter and mail it. Seriously, your representatives take snail mail VERY seriously. If a constituent took the time to write and mail a letter they will take action.

        • Even better when you write the letter in all caps.

        • My point was precisely that it's difficulty (having to, as you say, take the time) acting as a filter that makes some forms of contact weigh as more important than others (letters more than phone calls more than email), so if something like this Duplex makes calling your representative easier, that will make calls from constituents less important to your representatives.

          Likewise if something made mailing letters to the representatives much easier, mailed letters would start to weigh less too.

          Because represe

          • by djinn6 ( 1868030 )

            Because representatives realistically cannot be expected to attend to the huge numbers of people they are supposed to represent, so they have to let difficulty filter it down to the ones who care the most.

            Might as well set up a pay-to-contact system. Contribute $5 to the re-election fund and your voice gets heard by your rep. It already exists for the wealthy folks, so why not the regular folks too?

  • by Headw1nd ( 829599 ) on Thursday May 10, 2018 @01:43PM (#56589496)
    The issue of deception is only a part of the actually worrisome part about this technology. The real issue is that it creates an imbalance of effort, which is exactly what we have been fighting with spam emails. With either method, minimal effort on my part can cause another person to use up a significant chunk of their time. Google is displaying this technology in a situation where that would be considered acceptable, because the outcome is profitable for the person taking the appointment, but what if it wasn't? This technology could easily be used for both parasitic purposes such as sales calls, and outright hostile purposes such as tying up phone lines with seemingly benign callers. The fundamental issue is that when a human knows they are speaking to another human, they can assume that each has a similar opportunity cost for the time spent in the conversation. A machine has no such costs, and the transaction is inherently lopsided. I think this is what the push for identification stems from, the basic need for the two parties to be on (relatively) equal footing.
    • by q_e_t ( 5104099 )
      You raise a good point, as you could effectively do a DoS on a firm's human customer service staff, giving real customers a poor impression of a company.
    • This technology could easily be used for both parasitic purposes such as sales calls, and outright hostile purposes such as tying up phone lines with seemingly benign callers.

      What an amazing planet you live on that doesn't already have telemarketers and other malefactors tying up phone lines with seemingly benign calls. This technology isn't new. Google isn't creating something new. No telemarketer is going to start using his Android phone with Google Assistant to start making his telemarketing calls, he's going to use his existing systems with his existing fat data pipe into the telephone system to keep making them.

  • I notice they don't say what happens if the call is to a wrong number. Does it still try to book a hair appointment/order a pizza/whatever?
  • I'm horrified by the lack of attention to grammar in Slashdot headlines. Some may find my reaction horrifying, but I'm fairly certain they are overreacting.

    • Well, given that /. really no longer has user-submitted stories per se, one doesn't have to look far for the source of the horror that is the mix of unique grammar, bias, and messaging of facts and truth that comprises the headlines and summaries these days.

      Oops, won't be able to mod for another lifetime for posting this remark, either, I suppose.

  • How is this different than a dozen different automated processes we already use, other than it does things via voice rather than text?
    • by jythie ( 914043 )
      Each automated process has its own limits in where and how effectively it can be used. The concern here isn't necessarily that it is a fundamentally new capability, but that the improvements (when combined with other emerging technologies) will open up new and troubling uses that were impractical using older technologies.
  • Another data point hidden in the TFA:

    "Google is taking advantage of its primary asset: data. It trained Duplex on a massive body of “anonymized phone conversations,” according to a release. Every scheduling task will have its own problems to solve when arranging a specific type of appointment, but all will be underpinned by Google’s massive volume of data from searches and recordings that will help the AI hold a conversation."

    Yeah, that's your data and your phone calls they're talking abou

  • If I was running a restaurant or business and this called on the phone, I'd ask to speak to a live human being, and if that's not possible I'd hang up. I'd instruct my employees to do the same: it's either a live human being making the appointment or reservation or you hang up on it. Could be a prank, could be a malfunction, could be a mistake, could be someone hacked someone's digital assistant, could be any number of things. Therefore you need verification from the actual person who wants the appointment
    • it's either a live human being making the appointment or reservation or you hang up on it. Could be a prank

      If I wanted to make a prank call to set up an appointment, I could just call your business myself.

      • If I wanted to make a prank call to set up an appointment, I could just call your business myself.

        Yeah, and that's childish and takes up your resources. Much the same way that you could handwrite a prank letter/ad and send it. However, once you could mass send "prank" emails (chain mail and ads) things got very different very fast.

        I see this as making mass phone calls (more of a) thing. Yay, a need for a better spam filter on your phone.

    • Because someone with the excess spending money to blow on a Duplex making appointments for them is definitely not what a restaurant or other business wants as a customer!!

  • If you're in panic mode because of Duplex instead of Alexa and all the other "A.I. assistants", your priorities are not in the right order.

  • I only hope these technologies treat us benignly when they combine and become self-aware, and don't destroy us like we've destroyed many species, and continue to, knowing that we're doing it.

  • I remember when answering machines were first coming out (yes, I'm THAT old). People wigged the hell out about having to "talk to a machine". Now is there anyone anywhere who cares? No.
  • I'm surprised people haven't thought of a simple solution here. Just have a computer pick up the phone. Hit 1 if you are human.
    • I'm surprised that you haven't made a phone call recently. You may have forgotten a tiny bit of info, so your prompt should come much later, as in:

      For English, press 1. Para Espanol, oprima dos. If you are human,

      YOU: Hasta la vista, Baby!

  • They record your voice, over and over and over, your mannerisms, the lot, and they can replicate you speaking perfectly?

    Given their skills, their resources and if they had such recordings, I imagine they could emulate a person fairly easily, at least to a stranger.
