Google Told Its Scientists To 'Strike a Positive Tone' in AI Research (reuters.com) 51
Alphabet's Google this year moved to tighten control over its scientists' papers by launching a "sensitive topics" review, and in at least three cases requested authors refrain from casting its technology in a negative light, Reuters reported Wednesday, citing internal communications and interviews with researchers involved in the work. From a report: Google's new review procedure asks that researchers consult with legal, policy and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, according to internal webpages explaining the policy. "Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," one of the pages for research staff stated. Reuters could not determine the date of the post, though three current employees said the policy began in June. The "sensitive topics" process adds a round of scrutiny to Google's standard review of papers for pitfalls such as disclosure of trade secrets, eight current and former employees said. For some projects, Google officials have intervened in later stages. A senior Google manager reviewing a study on content recommendation technology shortly before publication this summer told authors to "take great care to strike a positive tone," according to internal correspondence read to Reuters.
So, "don't go out and trash our product"? (Score:2, Insightful)
Re:So, "don't go out and trash our product"? (Score:5, Interesting)
It's only reasonable if you're evil. You know who else did this kind of shit? Big Tobacco, and Big Sugar, and hey, Big Oil.
Google removed their evil canary for a reason.
Re: So, "don't go out and trash our product"? (Score:2)
Ethics is to be debated and determined internally. You would not publicly argue about what internal data structure your algorithm should use. I mean, you cannot have employees saying: man, my company is stupid for using an array when a hash table would have made it a lot faster!
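(For what it's worth, the array-vs-hash-table point is a real one. Here's a minimal Python sketch -- the one-million-element size is arbitrary, picked just for illustration -- showing why a membership test on a list is slow while a hash-based set is fast:

import time

n = 1_000_000                  # arbitrary size, for illustration only
as_list = list(range(n))
as_set = set(as_list)          # hash-based container

start = time.perf_counter()
_ = (n - 1) in as_list         # O(n): scans the list front to back
list_time = time.perf_counter() - start

start = time.perf_counter()
_ = (n - 1) in as_set          # O(1) on average: hash lookup
set_time = time.perf_counter() - start

print(f"list: {list_time:.6f}s  set: {set_time:.6f}s")

On a typical machine the list lookup comes out orders of magnitude slower -- exactly the kind of internal detail you'd debate privately, not in public.)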
Re: So, "don't go out and trash our product"? (Score:4, Insightful)
Ethics is to be debated and determined internally.
Yes, trust Google! There are no signs anywhere that this is a bad idea!*
You would not publicly argue about what internal data structure your algorithm should use.
This is way deeper than that.
* Yep, I use many Google services. But I don't use them for everything, either...
Re: (Score:2)
Just got a copy of The AI Does Not Hate You by Tom Chivers. Seems relevant to this topic, but what if I hate the AI?
(Seems to be too much to hope for that someone around TL;DR Slashdot 2020 might have already read the book. But first I'm hoping to finish Talk to Me about Siri and "her" "friends" in the next few days.)
Re:So, "don't go out and trash our product"? (Score:4, Interesting)
Re: (Score:3)
Google's AI researchers get paid to write AND PUBLISH research papers. So their job is to build up credibility and publicly make statements. This only works -- they only get credibility -- if people think they're operating objectively and are...
Re: (Score:2)
Would it be evil for Musk to tell Tesla engineers not to say that self-driving cars are a bad idea?
If you are skeptical of self-driving cars and want to bad-mouth them, then you would be an idiot to expect Tesla to pay you to do so.
Re: So, "don't go out and trash our product"? (Score:2)
Google is paying for research into AI, and now it's finding it doesn't like the results of the research, so it wishes to suppress them. Sounds evil to me.
Re: (Score:3)
You know who else did this kind of shit? Big Tobacco, and Big Sugar, and hey, Big Oil.
True. And you know who else? Every other company ever, including the one you work for. Prove me wrong and upload to arXiv a research paper under your name claiming to have discovered that "our company's product kills children" and see how long you keep your job.
Re: (Score:2)
They appointed her head of AI ethics and asked her to do this kind of research. The fact that they don't like the results is their own problem.
I've read a summary of the paper, and it seems like they should be thanking her: she basically pointed out that they are wasting huge amounts of money on tech that is already reaching its limits and can never be fixed to eliminate bias. The paper then tells them how to proceed too: what technologies they need to resolve these issues.
Re: (Score:2)
This is a new field; let's not kill it yet. The exact definition of bias is a political problem that might never be fixed, but model bias can be tuned any way you like. The problem is political, not technical. GPT-3 and future iterations will improve our lives, so let's keep working at it. I want to see what's possible, not go back to the 1990s.
Re: (Score:2)
The paper talks about how the way they have found to improve these systems is ever-bigger training datasets. The problem is that when you have a few million images, it's impossible to verify them all.
Without any real understanding, it's also difficult to teach AI to see problematic things for what they are.
To serve mankind (Score:3)
just don't say it's a cookbook, okay?
Re: (Score:2)
So you're basically the guy who was asking Manhattan Project scientists seventy-five years ago to love the bomb.
No. We are saying that if you hate the bomb and think it is a bad idea, you have no right to demand a paycheck for saying so.
Explains why Google fired Timnit Gebru (Score:4, Insightful)
Explains why Google fired Timnit Gebru.
https://artificialintelligence... [artificial...e-news.com]
It does make it difficult for them to have an AI ethics division, since the point of ethics is to "raise ethical, reputational, regulatory or legal issues."
Re:Explains why Google fired Timnit Gebru (Score:4, Interesting)
There is a summary of the paper here: https://www.technologyreview.c... [technologyreview.com]
It identifies a few issues, the main one being that the current models don't understand language; they just learn how to manipulate it. That results in systems that are good at fooling people and making some cash for the tech companies, and which are largely impossible to scrutinize or to ensure aren't picking up unwanted biases. The lack-of-understanding issue was demonstrated years ago by Microsoft's Twitter bot that was easily tricked into defending Hitler, because all it could do was manipulate language; it had no understanding of the meaning.
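(That "manipulating language without understanding" failure is easy to illustrate. The following is a minimal Python sketch -- a toy word-level Markov chain over a made-up three-sentence corpus, vastly simpler than the models the paper discusses -- that produces fluent-looking text purely by replaying observed word patterns, with no model of meaning anywhere:

import random
from collections import defaultdict

# Made-up toy corpus, purely for illustration.
corpus = ("the model learns patterns from text . the model has no idea "
          "what the text means . the text looks fluent anyway .").split()

# Record which words were observed to follow each word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate by repeatedly sampling a statistically plausible next word:
# pure pattern reproduction, no comprehension involved.
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))

The output reads like grammatical English fragments, yet the program demonstrably "understands" nothing -- the same gap, in kind, that tripped up Microsoft's bot.)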
Because of the short-term gains from those kinds of systems, a lot of money (and energy; they are a bit of an environmental disaster) goes into them, diverting it away from AI research that would result in understanding.
Her CV is quite impressive too; she worked on signal processing for Apple (on the iPad), among other things.
Re:Explains why Google fired Timnit Gebru (Score:4, Informative)
So, I've actually read the paper that triggered Timnit Gebru's dismissal.
The least that can be said is that it's perfectly understandable why Google would have refused its publication.
First off, it's not a research paper. It doesn't contain any original research. It's not a review either. It presents a number of open questions as settled science. For instance, it asserts that language models lack a deeper understanding of the material they're trained on, and therefore can merely reproduce language patterns they've observed. That's something I actually agree with, but others do not. Whether those models can in fact derive a deeper understanding is an important part of why the research is being done. The "paper" is really an opinion piece dressed up as a paper.
Second, it's poorly structured and contains a considerable amount of repetition. It gives off the impression of a high school student trying to hit a word count.
But more importantly, it's just intellectually unsound. To give two examples:
- they use an entire column (of a 12-page paper) to show an interaction with GPT-3 to demonstrate how those language models can give an impression of intelligence. That would have been a _perfect_ demonstration of the danger posed by these models. But they didn't do that. They just took an example somebody else had posted. So either they didn't think about doing it, and they're idiots; or they did think about it, but couldn't be bothered to try, and they're intellectually lazy; or they did try but couldn't actually come up with an example demonstrating their point (or didn't try because they were afraid of the result, same thing), in which case they're dishonest.
- they talk a lot about the "hegemonic" viewpoint present in any big corpus of English conversations taken off the internet, and the consequences when a model blindly reproduces it, with regard to sexism, racism and so on. Fair enough. Surely, however, the main "hegemony" coming out of such a corpus would be that it is an _American_ viewpoint. Again, either they didn't think about this, and they're idiots; or they didn't think through what they were saying, and they're intellectually lazy; or they did think about it and don't consider it to be a problem so long as the "hegemonic viewpoint" is one that they share, and they're dishonest.
To have an ethics group in more than name, it needs to be staffed by ethical, competent people. Based on this paper, Timnit Gebru at least (she was the manager), and possibly the rest of the team as well, are not it.
Not only that, but when told that the "paper" was being held, she chose, instead of addressing the objections, to instruct staff to stop following Google's anti-discrimination procedures, and went to her boss with a list of demands, or else! Good riddance.
Re: (Score:2)
For instance, it asserts that language models lack a deeper understanding of the material they're trained on, and therefore can merely reproduce language patterns they've observed.
Who disagrees with it? NN models are good at interpolating, but remarkably bad at extrapolating. I thought this was accepted knowledge.
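(Easy to check for yourself. A minimal sketch using scikit-learn's MLPRegressor -- the hyperparameters are arbitrary -- trains a small network on sin(x) over [0, 2*pi] and then queries points far outside that range:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, size=(500, 1))  # inputs within [0, 2*pi]
y_train = np.sin(x_train).ravel()

# Arbitrary small network; the exact settings don't matter for the point.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(x_train, y_train)

x_inside = np.array([[1.0], [3.0], [5.0]])     # within the training range
x_outside = np.array([[8.0], [12.0], [20.0]])  # beyond anything the net saw

print("interpolation:", net.predict(x_inside))   # tracks sin(x) closely
print("extrapolation:", net.predict(x_outside))  # typically nowhere near sin(x)

Inside the training range the predictions track sin(x); outside it they drift arbitrarily, because the network has only fit patterns over the region it actually saw.)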
Re:Explains why Google fired Timnit Gebru (Score:4, Interesting)
Well, some of your critique is a little unfair. You say that they rely on somebody else's "_perfect_ demonstration of the danger posed by these models" instead of doing their own. But so what? If that other work "perfectly" demonstrated the danger, use it. A paper doesn't have to re-do work somebody else already did, only cite it.
You also criticize them for pointing out that the "corpus of English language conversations taken off the internet" contains "sexism, racism and so on" (which you say is "fair enough"), but fault them for not mentioning that it's biased toward American viewpoints. Unless you have some reason to think that American viewpoints are unethical, however, that particular bias is not relevant to a paper on ethics. They didn't leave it out because "they didn't think about this and they're idiots"; they left it out because it was not relevant.
You give only two examples in support of your claim that the paper is "intellectually unsound", and neither was an example of intellectual unsoundness.
Re: (Score:1)
I didn't say the example they posted was perfect. I said they showed an interaction with GPT-3 which was posted by somebody else, and that it would have been a perfect demonstration of the problem at hand. By implication, it was not. Who could possibly have a problem with reusing somebody else's work when it's adequate? The reason it was not is that the interaction is about the Alpha group, a Russian mercenary group, and its involvement in Syria. It's straight out of the English Wikipedia. Which you would know...
Re: (Score:2)
It appears you first decided that the paper was correct, without having read it, and then misread my post until you found a weird reading of it you could make an argument against. How's that for ethics?
No, you claimed the paper was "intellectually unsound"; it's up to you to justify your statement, not me. For all I know it may in fact be, but the particular examples you gave did not support that statement.
I stand by what I said.
FTFY [Re:Explains why Google fired Timnit Gebru] (Score:2)
Correcting that:
Google claims she wasn't fired for the content of her paper; she was fired for outrageous behavior.
Re: (Score:2)
Correcting that:
Google claims she wasn't fired for the content of her paper; she was fired for outrageous behavior.
The situation is overwhelmingly clear. You must think that upper management at Google has an IQ of 70 or something. Do you really think Google's lawyers would let them lie about very specific facts about something sent IN EMAIL that could be determined to be true or false with 100% certainty in a lawsuit? Makes absolutely no sense, not even the slightest bit of sense. Of course what they are saying is true. If you read between the lines of what Timnit is saying, you can even tell that she's really just put...
Re: (Score:2)
Correcting that:
Google claims she wasn't fired for the content of her paper; she was fired for outrageous behavior.
The situation is overwhelmingly clear.
I stand by what I posted.
You are repeating what Google stated. It may even be true. But you don't know that (unless you are personally the person who fired her. Were you?). What you do know is that they said it.
Of course Google will say what makes them look good. They are a corporation. Corporations have PR departments that say the things that make them look good.
If you don't understand that, you are going to be lied to a lot by corporations.
Re: (Score:2)
I did read past the first sentence. Nothing past the first sentence made any sense either.
You say "corporations don't want bad PR, especially not easily avoidable bad PR."
Exactly!! You got it!! Corporations issue statements to the press in order to make themselves look good. If somebody says "I got fired because of xx bad reason," they will respond "no, that firing was because of many other factors, all of which were perfectly correct and reasonable."
That's what PR departments are being paid for: making the company look good.
Re: (Score:2)
Something you might consider is that just as corporations lie, other people lie, too, and frame things unreasonably. You are trusting sources of information that are not worthy of your trust.
To the contrary. Saying "just because Google said it doesn't mean you should believe it uncritically" absolutely does not imply a corollary of "but you should believe everything people say about Google without checking."
Verify. Unless you have inside information, verify from an external source.
Corporations lie-- well, let's say, they "spin the truth" to make themselves look good. Your belief "oh, they wouldn't lie, they might get sued" is amazingly credulous. But people also spin the truth to make themselves look good.
Re: (Score:2)
Something you might consider is that just as corporations lie, other people lie, too, and frame things unreasonably. You are trusting sources of information that are not worthy of your trust.
To the contrary. Saying "just because Google said it doesn't mean you should believe it uncritically" absolutely does not imply a corollary of "but you should believe everything people say about Google without checking."
Verify. Unless you have inside information, verify from an external source.
Corporations lie-- well, let's say, they "spin the truth" to make themselves look good. Your belief "oh, they wouldn't lie, they might get sued" is amazingly credulous. But people also spin the truth to make themselves look good.
If someone who sometimes lies looks to be 6 feet tall, and they sign a contract stating that they will pay you 20 million dollars (that they do have) if they are not 6 feet tall, and that if you want they will participate in a televised measuring in which they'd be extremely embarrassed in front of the nation if they turned out not to be 6 feet tall, and you've known this person to be very risk averse in the past, they generally will only do things that benefit them and they've not lied in circumstances where...
Re: (Score:2)
About all I can say is that you choose to be credulous.
A long history of corporations lying shows that this is not justified, but apparently you are not able to see that.
Bye.
Get a dictionary [Re:FTFY] (Score:2)
This is sadly and tragically ironic, since the premise of our discussion is that you are credulous. You prefer to believe that sources of information ...
I keep saying "I don't believe either one without further evidence," and you keep saying "you are credulous".
What part of "I don't believe either one" do you not understand?
Re:So what? (Score:5, Insightful)
If you don't like what your government is doing, you should move to another country!11!
Just in case you were wondering what the inevitable result of that kind of thinking is... it's that.
My country right or wrong... and if wrong, to be set right. People often forget that last part.
Corporations are legal fictions. You owe them no loyalty, and they owe you none.
Re: (Score:2)
Criticize internally, not externally. First off, I am not even a government official. If I were a government official, I wouldn't go to China and tell them the USA sucks. Would it be fitting for the Secretary of State or the president to give a speech in China talking about how terrible the US is?
Re: (Score:3)
And I don't know why you're hung up on the structural...
Re: (Score:2)
Google is not merely "a corporation", which I mentioned because it's useful to have a reminder that we're not talking about a person, or anything which behaves like a person. Google is a fundamental part of the modern social fabric in most of the world. Because they are useful this has positive effects; because they are generally amoral (as a corporation) it also has negative ones. The repercussions of letting Google do whatever it wants are significant, for good and/or ill.
It would be quite interesting... (Score:2)
The keyword here is "told" (Score:2)
Don't be evil. (Score:2)
I, for one, welcome our new AI overlords.