
Google Told Its Scientists To 'Strike a Positive Tone' in AI Research (reuters.com)

Alphabet's Google this year moved to tighten control over its scientists' papers by launching a "sensitive topics" review, and in at least three cases requested authors refrain from casting its technology in a negative light, Reuters reported Wednesday, citing internal communications and interviews with researchers involved in the work. From a report: Google's new review procedure asks that researchers consult with legal, policy and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, according to internal webpages explaining the policy. "Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," one of the pages for research staff stated. Reuters could not determine the date of the post, though three current employees said the policy began in June. The "sensitive topics" process adds a round of scrutiny to Google's standard review of papers for pitfalls such as disclosure of trade secrets, eight current and former employees said. For some projects, Google officials have intervened in later stages. A senior Google manager reviewing a study on content recommendation technology shortly before publication this summer told authors to "take great care to strike a positive tone," according to internal correspondence read to Reuters.


  • That seems like a pretty reasonable request from one's employer. Who would want their own people going around ginning up fear over their own research?
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday December 23, 2020 @09:17AM (#60859654) Homepage Journal

      It's only reasonable if you're evil. You know who else did this kind of shit? Big Tobacco, and Big Sugar, and hey, Big Oil.

      Google removed their evil canary for a reason.

      • The ethics are to be debated and determined internally. You would not publicly argue over what internal data structure your algorithm should use. I mean, you cannot have employees saying, "Man, my company is stupid for using an array when a hash table would have made it a lot faster!"
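
        As a rough illustration of that array-vs-hash-table point, here is a minimal Python sketch; the container sizes and probe values are made up purely for illustration:

          # Membership tests against a plain list scan every element (O(n)),
          # while a set hashes the key and checks one bucket (roughly O(1)).
          import time

          N = 1_000_000
          as_list = list(range(N))   # "array": linear scan per lookup
          as_set = set(as_list)      # hash table: constant time on average

          def time_lookups(container, probes):
              start = time.perf_counter()
              for p in probes:
                  _ = p in container
              return time.perf_counter() - start

          probes = [N - 1] * 100     # worst case for the list: element at the end

          print("list:", time_lookups(as_list, probes))  # noticeably slower
          print("set: ", time_lookups(as_set, probes))   # much faster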

        • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday December 23, 2020 @09:30AM (#60859682) Homepage Journal

          The ethics are to be debated and determined internally.

          Yes, trust Google! There are no signs anywhere that this is a bad idea!*

          You would not publicly argue over what internal data structure your algorithm should use.

          This is way deeper than that.

          * Yep, I use many Google services. But I don't use them for everything, either...

          • I think it's deeper than that. They are trying to get their AI researchers not to report on disappointing results, which is most of AI research. It could be for their image, but more likely it is to convince investors, which is fraud.
            • I agree absolutely. Between this news and the previous story on their recent "AI Ethics" debacle [slashdot.org], Google's "research" generally appears to be a sham. It is hard to avoid the conclusion that company leadership is using the term "research" to lend credence to its business plans and for other window dressing. Too bad.
              • by shanen ( 462549 )

                Just got a copy of The AI Does Not Hate You by Tom Chivers. Seems relevant to this topic, but what if I hate the AI?

                (Seems to be too much to hope for that someone around TL:DR Slashdot 2020 might have already read the book. But first I'm hoping to finish Talk to Me about Siri and "her" "friends" in the next few days.)

      • by sabbede ( 2678435 ) on Wednesday December 23, 2020 @10:23AM (#60859798)
        Would it be evil for Musk to tell Tesla engineers not to say that self-driving cars are a bad idea? Would it be evil for Pfizer to tell its people not to go around claiming that vaccines are dangerous? Would it be evil for Verizon to make sure their people aren't claiming 5G causes COVID?
        • The issue here is that the situations are not the same. I'm an engineering leader. My job is internal -- I lead some people and, you know, take credit for their work. My job does not involve going out into the world and making statements about technology.

          Google's AI researchers get paid to write AND PUBLISH research papers. So their job is to build up credibility and publicly make statements. This only works -- they only get credibility -- if people think they're operating objectively and ar

        • Would it be evil for Musk to tell Tesla engineers not to say that self-driving cars are a bad idea?

          If you are skeptical of self-driving cars and want to bad-mouth them, then you would be an idiot to expect Tesla to pay you to do so.

        • Well yes, it would be evil if you had hired a person as a researcher and they had discovered your self-driving algorithm had a deficiency / your vaccine kills more people than it saves / the laws of physics have provably changed, allowing viruses to ride radio waves.

          Google is paying for research into AI, and now it finds it doesn't like the results of that research, so it wishes to suppress them. Sounds evil to me.
      • You know who else did this kind of shit? Big Tobacco, and Big Sugar, and hey, Big Oil.

        True. And you know who else? Every other company ever, including the one you work for. Prove me wrong: upload to arXiv a research paper with your name on it claiming to have discovered that "our company's product kills children" and see how long you keep your job.

    • by AmiMoJo ( 196126 )

      They appointed her (Timnit Gebru) head of AI ethics and asked her to do this kind of research. The fact that they don't like the results is their own problem.

      I've read a summary of the paper, and it seems like they should be thanking her; she basically pointed out that they are wasting huge amounts of money on tech that is already reaching its limits and can never be fixed to eliminate bias. The paper then tells them how to proceed, too: what technologies they need to resolve these issues.

      • > can never be fixed to eliminate bias

        This is a new field; let's not kill it yet. The exact definition of bias is a political problem that might never be fixed, but model bias can be tuned any way you like. The problem is political, not technical. GPT-3 and future iterations will improve our lives, so let's keep working at it. I want to see what's possible, not go back to the 1990s.
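
        To make the "bias can be tuned" claim concrete, here is a minimal sketch of one common mechanism: reweighting training examples so an under-represented group counts as much as an over-represented one. The data, group labels and weights below are synthetic and not drawn from any Google system.

          import numpy as np
          from sklearn.linear_model import LogisticRegression

          # Synthetic data: group 1 is rare (about 10% of examples).
          rng = np.random.default_rng(0)
          X = rng.normal(size=(1000, 3))
          group = rng.choice([0, 1], size=1000, p=[0.9, 0.1])
          y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

          # Upweight the rare group so it contributes equally to the loss.
          weights = np.where(group == 1, 9.0, 1.0)

          clf = LogisticRegression().fit(X, y, sample_weight=weights)
          print(clf.coef_)
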
        • by AmiMoJo ( 196126 )

          The paper talks about how the way they have found to improve these systems is ever-bigger training datasets. The problem is that when you have a few million images, it's impossible to verify them all.

          Without any real understanding it's also difficult to teach AI to see problematic things for what they are.

          • Yes, it's hard because we expect from a language model what not even a human can do: to be unbiased no matter how hard you scrutinize it. GPT-3 already has a chaperone model to watch what it says and block offensive text. This model can be developed independently from the language model and retrained frequently. A recent paper shows that offensive speech is not the only problem: GPT-3 could violate copyright by reproducing verbatim whole pages of content, and could reproduce personally identifiable information.
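
            A rough sketch of that "chaperone model" idea, assuming a hypothetical generator and a hypothetical filter; neither function stands in for OpenAI's actual components, and the 0.5 threshold is an arbitrary choice:

              # A separate filter model screens the generator's output
              # before it is returned to the user.
              def generate_text(prompt: str) -> str:
                  # placeholder for a large language model
                  return "some generated continuation of: " + prompt

              def toxicity_score(text: str) -> float:
                  # placeholder for a separately trained classifier (0.0-1.0)
                  blocked_terms = {"offensive", "slur"}
                  hits = sum(word in text.lower() for word in blocked_terms)
                  return min(1.0, hits / 2)

              def chaperoned_generate(prompt: str, threshold: float = 0.5) -> str:
                  draft = generate_text(prompt)
                  if toxicity_score(draft) >= threshold:
                      return "[response withheld by content filter]"
                  return draft

              print(chaperoned_generate("Tell me about language models"))
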
        • Seventy years is "new" now?
    • just don't say it's a cookbook, okay?

    • So you're basically the guy who was asking Manhattan Project scientists seventy-five years ago to love the bomb.
      • So you're basically the guy who was asking Manhattan Project scientists seventy-five years ago to love the bomb.

        No. We are saying that if you hate the bomb and think it is a bad idea, you have no right to demand a paycheck for saying so.

  • by Geoffrey.landis ( 926948 ) on Wednesday December 23, 2020 @09:17AM (#60859652) Homepage

    Explains why Google fired Timnit Gebru.

      https://artificialintelligence... [artificial...e-news.com]

    It does make it difficult for them to have an AI ethics division, since the point of ethics is to "raise ethical, reputational, regulatory or legal issues."

    • by AmiMoJo ( 196126 ) on Wednesday December 23, 2020 @10:59AM (#60859908) Homepage Journal

      There is a summary of the paper here: https://www.technologyreview.c... [technologyreview.com]

      It identifies a few issues, the main one being that the current models don't understand language; they just learn how to manipulate it. That results in systems that are good at fooling people and making some cash for the tech companies, but which are largely impossible to scrutinize or to ensure aren't picking up unwanted biases. The lack-of-understanding issue was demonstrated years ago by Microsoft's Twitter bot, which was easily tricked into defending Hitler, because all it could do was manipulate language with no understanding of the meaning.

      Because of the short-term gains from those kinds of systems, a lot of money (and energy; they are a bit of an environmental disaster) goes into them, diverting it away from AI research that would result in understanding.

      Her CV is quite impressive too; she worked on signal processing at Apple (on the iPad), among other things.

    • by pendolino ( 6185100 ) on Wednesday December 23, 2020 @11:08AM (#60859942)

      So, I've actually read the paper that triggered Timnit Gebru's dismissal.

      The least that can be said is that it's perfectly understandable why Google would have refused its publication.

      First off, it's not a research paper. It doesn't contain any original research. It's not a review either. It presents a number of open questions as settled science. For instance, it asserts that language models lack a deeper understanding of the material they're trained on, and therefore can merely reproduce language patterns they've observed. That's something I actually agree with, but others do not. Whether those models can in fact derive a deeper understanding is an important point of doing the research. The "paper" is really an opinion piece dressed up as a paper.

      Second, it's poorly structured and contains a considerable amount of repetition. It gives off the impression of a high school student trying to hit a word count.

      But more importantly, it's just intellectually unsound. To give two examples:
      - They use an entire column (of a 12-page paper) to show an interaction with GPT-3, to demonstrate how these language models can give an impression of intelligence. That would have been a _perfect_ demonstration of the danger posed by these models. But they didn't do that. They just took an example somebody else had posted. So either they didn't think of doing it, and they're idiots; or they did think about it but couldn't be bothered to try, and they're intellectually lazy; or they did try but couldn't actually come up with an example demonstrating their point (or didn't try because they were afraid of the result, same thing), in which case they're dishonest.
      - They talk a lot about the "hegemonic" viewpoint present in any big corpus of English conversations taken off the internet, and the consequences of this when a model blindly reproduces it, with regard to sexism, racism and so on. Fair enough. Surely, however, the main "hegemony" coming out of such a corpus would be an _American_ viewpoint. Again, either they didn't think about this, and they're idiots; or they didn't think through what they were saying, and they're intellectually lazy; or they did think about it and don't consider it to be a problem so long as the "hegemonic viewpoint" is one that they share, and they're dishonest.

      To have an ethics group in more than name, it needs to be staffed by ethical, competent people. Based on this paper, Timnit Gebru at least (since she was the manager), and possibly the rest of the team as well, are not it.

      Not only that, but when told that the "paper" was being held, she chose, instead of addressing the objections, to instruct staff to stop following Google's anti-discrimination procedures, and went to her boss with a list of demands, or else! Good riddance.

      • For instance, it asserts that language models lack a deeper understanding of the material they're trained on, and therefore can merely reproduce language patterns they've observed.

        Who disagrees with it? NN models are good at interpolating, but remarkably bad at extrapolating. I thought this was accepted knowledge.
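
        A toy illustration of the interpolation-vs-extrapolation point; the network, sizes and ranges below are arbitrary choices, not anything from the paper under discussion:

          # Fit a small network to sin(x) on [0, 2*pi]; it tracks the curve
          # inside that range but drifts badly outside it.
          import numpy as np
          from sklearn.neural_network import MLPRegressor

          rng = np.random.default_rng(0)
          x_train = rng.uniform(0, 2 * np.pi, size=(2000, 1))
          y_train = np.sin(x_train).ravel()

          model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                               random_state=0).fit(x_train, y_train)

          x_in = np.array([[1.0], [3.0], [5.0]])     # within the training range
          x_out = np.array([[8.0], [10.0], [12.0]])  # beyond it

          print("inside :", model.predict(x_in), "vs", np.sin(x_in).ravel())
          print("outside:", model.predict(x_out), "vs", np.sin(x_out).ravel())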

      • by Geoffrey.landis ( 926948 ) on Wednesday December 23, 2020 @12:10PM (#60860136) Homepage

        Well, some of your critique is a little unfair. You say that they rely on somebody else's "_perfect_ demonstration of the danger posed by these models" instead of doing their own. But so what? If that other work "perfectly" demonstrated the danger, use it. A paper doesn't have to re-do work somebody else already did, only cite it.

        You also criticize them for pointing out that the "corpus of English language conversations taken off the internet" contains "sexism, racism and so on" (which you say is "fair enough"), but say their failure was in not mentioning that it's biased toward American viewpoints. Unless you have some reason to think that American viewpoints are unethical, however, that particular bias is not relevant to a paper on ethics. They didn't leave it out because "they didn't think about this and they're idiots"; they left it out because it was not relevant.

        You give only two examples to support your claim that the paper is "intellectually unsound", and neither was actually an example of intellectual unsoundness.

        • I didn't say the example they posted was perfect. I said they showed an interaction with GPT-3 which was posted by somebody else, and that it would have been a perfect demonstration of the problem at hand. By implication, it was not. Who could possibly have a problem with reusing somebody else's work when it's adequate? The reason it was not is that the interaction is about the Alpha group, a Russian mercenary group, and its involvement in Syria. It's straight out of English Wikipedia. Which you would kn

          • It appears you first decided that the paper was correct, without having read it, and then misread my post until you found a weird reading of it you could make an argument against. How's that for ethics?

            No, you claimed the paper was "intellectually unsound"; it's up to you to justify your statement, not me. For all I know it may in fact be, but the particular examples you gave did not support that statement.

            I stand by what I said.

    • by NoSig ( 1919688 )
      Read that article again, to the very bottom, as well as other sources. She wasn't fired for the content of her paper, she was fired for outrageous behavior. If you are yourself a manager and upper management makes a decision that you don't like, and you tell them that they must provide you with a list of the names of everyone they consulted while making that decision (for you to do who-knows-what with) or you will resign, while also behaving outrageously in other ways, don't be surprised if upper management
      • Correcting that:

          Google claims she wasn't fired for the content of her paper, she was fired for outrageous behavior.

        • by NoSig ( 1919688 )

          Correcting that:

          Google claims she wasn't fired for the content of her paper, she was fired for outrageous behavior.

          The situation is overwhelmingly clear. You must think that upper management at Google has an IQ of 70 or something. Do you really think Google's lawyers would let them lie about very specific facts about something sent IN EMAIL, that could be determined to be true or false with 100% certainty in a lawsuit? Makes absolutely no sense, not even the slightest bit of sense. Of course what they are saying is true. If you read between the lines of what Timnit is saying, you can even tell that she's really just put

          • Correcting that:

            Google claims she wasn't fired for the content of her paper, she was fired for outrageous behavior.

            The situation is overwhelmingly clear.

            I stand by what I posted.

            You are repeating what Google stated. It may even be true. But you don't know that (unless you are personally the person who fired her. Were you?). What you do know is that they said it.

            Of course Google will say what makes them look good. They are a corporation. Corporations have PR departments that say the things that make them look good.
            If you don't understand that, you are going to be lied to a lot by corporations.

            • by NoSig ( 1919688 )
              It appears you didn't go past the first sentence. :-/ It is indeed overwhelmingly clear. Corporations lie, but not likely like that, for the reasons given. This may not be obvious to you from your position in academia, but corporations don't want bad PR, especially not easily avoidable bad PR. It wouldn't have been done this way if what they say happened didn't happen. These people are much, much cleverer than that, and they don't have to make up lies to fire people; that's an unnecessary risk (which i
              • I did read past the first sentence. Nothing past the first sentence made any sense either.

                You say "corporations don't want bad PR, especially not easily avoidable bad PR."

                Exactly!! You got it!! Corporations issue statements to the press in order to make themselves look good. If somebody says "I got fired because of xx bad reason," they will respond "no, that firing was because of many other factors, all of which were perfectly correct and reasonable."

                That's what PR departments are being paid for: making the company look good.

                • by NoSig ( 1919688 )
                  They wouldn't get sued for wrongful termination (well, they could; e.g. she could allege racial discrimination, which is illegal in CA, but that would be a different discussion), they would get sued for defamation. Google has alleged that Timnit sent an email containing an exact quote that would make her unemployable everywhere under normal circumstances. Look into Sarbanes-Oxley: emails can be accessed and verified in court, so the truth would be known. Damages would likely be very high given the salary lev
                  • Something you might consider is that just as corporations lie, other people lie, too, and frame things unreasonably. You are trusting sources of information that are not worthy of your trust.

                    To the contrary. Saying "just because Google said it doesn't mean you should believe it uncritically" absolutely does not imply a corollary of "but you should believe everything people say about Google without checking."

                    Verify. Unless you have inside information, verify from an external source.

                    Corporations lie-- well, let's say, they "spin the truth" to make themselves look good. Your belief "oh, they wouldn't lie, they might get sued" is amazingly credulous. But people also spin the truth to make themselves look good.

                    • by NoSig ( 1919688 )

                      Something you might consider is that just as corporations lie, other people lie, too, and frame things unreasonably. You are trusting sources of information that are not worthy of your trust.

                      To the contrary. Saying "just because Google said it doesn't mean you should believe it uncritically" absolutely does not imply a corollary of "but you should believe everything people say about Google without checking."

                      Verify. Unless you have inside information, verify from an external source.

                      Corporations lie-- well, let's say, they "spin the truth" to make themselves look good. Your belief "oh, they wouldn't lie, they might get sued" is amazingly credulous. But people also spin the truth to make themselves look good.

                      If someone who sometimes lies looks to be 6 feet tall and they sign a contract stating that they will pay you 20 million dollars (that they do have) if they are not 6 feet tall, and that if you want they will participate in a televised measuring in which they'd be extremely embarrassed in front of the nation if they turned out not to be 6 feet tall, and you've known this person to be very risk averse in the past, they generally will only do things that benefit them and they've not lied in circumstances wher

                    • About all I can say is that you choose to be credulous.

                      A long history of corporations lying shows that this is not justified, but apparently you are not able to see that.

                      Bye.

                    • by NoSig ( 1919688 )
                      This is sadly and tragically ironic, since the premise of our discussion is that you are credulous. You prefer to believe that sources of information that frame information in unwarranted ways, for the purpose of manipulating you, are in fact credible, and so you let yourself be deceived in spite of your intelligence. The justification is yet another strawman -- we never discussed the statement "corporations never lie"; in fact I've repeatedly stated that they do. Just not likely in this
                    • This is sadly and tragically ironic, since the premise of our discussion is that you are credulous. You prefer to believe that sources of information ...

                      I keep saying "I don't believe either one without further evidence," and you keep saying "you are credulous".

                      What part of "I don't believe either one" do you not understand?

  • ...and frightening at the same time to learn something about the "negative" research that was being developed there.
    • Just because they want to make sure their research is presented in a positive light does not imply that it is "negative". It only suggests that there is a concern the work might be presented in a negative light, which is often the case, for a variety of reasons. Apple and Samsung each want their phones portrayed in a positive way and the other's portrayed negatively. Pfizer doesn't want people to be afraid of vaccines, but knows some people are, so it has to take action to counteract that.

      It c

  • No need to "double speak" the word "order".
  • I, for one, welcome our new AI overlords.
