The Algorithms That Detect Hate Speech Online Are Biased Against Black People (vox.com) 328
An anonymous reader shares a report: Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online. But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study [PDF], researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study [PDF] found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech that totaled around 155,800 Twitter posts.
This is in large part because what is considered offensive depends on social context. Terms that are slurs when used in some settings -- like the "n-word" or "queer" -- may not be in others. But algorithms -- and the content moderators who grade the test data that teaches these algorithms how to do their job -- don't usually know the context of the comments they're reviewing. Both papers, presented at a recent prestigious annual conference for computational linguistics, show how natural language processing AI -- which is often proposed as a tool to objectively identify offensive language -- can amplify the same biases that human beings have. They also show how the test data that feeds these algorithms has baked-in bias from the start.
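To make the flag-rate comparison concrete, here is a minimal Python sketch of the kind of per-dialect false-positive measurement the studies describe; the classifier clf and the labeled tweet triples are hypothetical stand-ins, not the papers' actual code or data.

from collections import defaultdict

def false_flag_rates(tweets, clf):
    """tweets: iterable of (text, dialect, is_offensive) triples."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for text, dialect, is_offensive in tweets:
        if is_offensive:
            continue  # only benign tweets can be false positives
        total[dialect] += 1
        if clf(text):  # clf returns True when it flags the text as hateful
            flagged[dialect] += 1
    return {d: flagged[d] / total[d] for d in total}

With rates like {"AAE": 0.46, "SAE": 0.21}, the ratio 0.46 / 0.21 ~= 2.2 is the kind of disparity reported for tweets written in African American English.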
Corrected headline (Score:5, Insightful)
Hate Speech Algorithms do not recognize double standards.
Fixed that for you.
Re:Corrected headline (Score:5, Insightful)
The programmer trains the AI to see the N-word as hate speech. Fair enough. Then the AI reads the so-called "black twitter" where everyone refers to everyone else as the N-word, and the AI tracks it all as hate speech. Also fair, it's a goddam computer, not a socio-linguist. Ditto the Q-word for the so-called "LGBTQ twitter." Fair, and fair.
Hysterical. But fair.
Re:Corrected headline (Score:5, Insightful)
And that everyone has a right to say what they please, as long as it isn't against the law (slander, libel), or is directly inciting violence.....and everything shy of those special cases is allowed.
And if you don't like what you're hearing, you can quit listening or go elsewhere.....and grow a bit thicker skin and quit being a snowflake that has to be protected from words.
Sticks and stones....remember the old saying?
Re: (Score:2)
"Sticks and stones....remember the old saying?"
If the speech is against the law, then the words are hurting?
Re:Corrected headline (Score:5, Insightful)
Computers just aren't built to handle the complex mental gymnastics that humans have to do to negotiate the often quite bizarre and nonsensical world of social interaction in a modern western world where the wrong phrase or subtle sentiment, no matter how innocently expressed, can ruin your life. It's very hard for most humans to even understand the "rules."
Is It Fair? (Score:4, Insightful)
The programmer trains the AI to see the N-word as hate speech. Fair enough. Then the AI reads the so-called "black twitter" where everyone refers to everyone else as the N-word, and the AI tracks it all as hate speech. Also fair, it's a goddam computer, not a socio-linguist. Ditto the Q-word for the so-called "LGBTQ twitter." Fair, and fair. Hysterical. But fair.
Unless terms like "cracker" or "redneck" are flagged, it would seem to be (deliberately) biased in favor of black people.
Re: (Score:2)
It's more subtle than that. The first page of TFS gives the example of "I saw him yesterday" vs. the African American English equivalent "I saw his ass yesterday". Apparently the word "ass" triggers the AI.
Note that African American English (AAE) is recognized as a dialect and actually has its own complex rules etc, much like other dialects such as Southern American English or Jamaican English or Scottish English.
I wonder what it would make of Scottish English. It's pretty much the ultimate test for any spe
Re: Corrected headline (Score:5, Insightful)
It's all fair until someone actually puts the tech into use and begins to automatically censor black people. Then it is decidedly not fair.
Having a single standard for all is fair. Double standards are inherently not fair. This should be obvious. Maybe certain groups shouldn't use double standards as a crutch for their behavior.
Re: (Score:2)
"Having a single standard for all is fair. Double standards are inherently not fair. This should be obvious. Maybe certain groups shouldn't use double standards as a crutch for their behavior."
The standard is: don't do shit that society has basically collectively decided is super offensive, or at least don't do it where those who will be offended will hear or see you. Unfortunately, computer algorithms and some conservatives have problems with such standards.
Re: (Score:2)
The standard is: don't do shit that society has basically collectively decided is super offensive, or at least don't do it where those who will be offended will hear or see you. Unfortunately, computer algorithms and some conservatives have problems with such standards.
You mean liberals in this case. It seems to be liberals arguing that censoring the N-word can be unfair.
Re: (Score:2)
In that case, just have it biased against formal grammar and 1950's academic English.
Context *IS* important, and any simple rule is going to be a Procrustean bed.
Re: (Score:3, Insightful)
You're right. In this case, it just showed the *racism* of black people against white: if you're black, you can use the "n-word", if you're white, you can't. That's racism.
Double standards are discriminatory by design.
Re: Corrected headline (Score:4, Insightful)
That people have different skin color is not racist but obvious. What's racist is attaching attributes to skin color as if the color of the skin had anything to do with the person therein.
Re: Corrected headline (Score:5, Insightful)
When you don't impose the same standard on yourself that you do on others, it's "not" fair.
Oppressed communities often use trigger words privately amongst themselves that they do not accept when others use them in public. That's fine. What's not fine is when they use them in public in front of strangers. You cannot get upset when others do exactly what you do. Hypocrisy is a real problem here.
None of this is at all surprising or unfair, although inherent biases in training sets probably have other unfair aspects.
Re: (Score:2)
"When you don't impose the same standard on yourself that you do on others, it's "not" fair."
Sure. What I said was different; it was meant to point out that having a single standard for all is not always fair.
"inherent biases in training sets probably have other unfair aspects."
Yes, and what group the raters in the training set are part of is a big bias.
Re: (Score:2)
Ah, so when you impose standards of racial equality on those who don't believe in it, it is not fair to do so, as they have a different standard, right?
Re: (Score:2)
"Ah, so when you impose standards of racial equality on those who don't believe un it, it is not fair to do so, as they have a different standard, right?"
That makes it obvious how different cases can be.
It's not always fair to have one rule for all, and it's not always fair to have specific rules for specific groups; the fairness of having one rule or more than one depends on the specifics of the different cases.
Re: (Score:3)
If the two groups want to be treated equally, I fail to see how applying different standards would serve that purpose.
Re: (Score:2)
I'm not suggesting a scheme of multiple or different standards, though we can and do do that well in many cases.
I just don't think it's always fair to impose one group's standard on all groups. In this case the group membership of the hate speech raters in the training sets can be a big bias if not taken into account somehow.
Re: (Score:2)
Not having the same standards for all is RACIST.
race A owns something ... (Score:4, Insightful)
It depends on the context. If race A owns land, and race B doesn't, it's racist if group A makes a law which limits participation in elections to those who own land.
Your own analogy proves you incorrect.
"own(s) land" => "own(s) the N-word"
"limits participation in elections" => "limits certain speech"
"It depends on the context. If race A owns the N-word, and race B doesn't, it's racist if group A makes a "law" which limits certain speech to those who own the N-word."
Re: (Score:3, Insightful)
If you sort people out into 'race A' and 'race B' you are racist.
And it's self-perpetuating.
Re: (Score:2)
Rightly, we have different standards by age groups to prevent those kinds of crimes.
Re: Corrected headline (Score:5, Insightful)
On the internet, nothing says if you're white or black... The N-word should either be accepted or banned. If black people find that it's OK to call themselves by that word, then the word is not racist and can be used by everyone... Otherwise, it's a double standard and it's discrimination.
To say it another way: if black people consider that a black person using the N-word is OK and a non-black person using the same word is not, that means that THEY are racist (they are making a distinction based on skin color).
Teachable moment, not civil rights violation (Score:4, Insightful)
It's all fair until someone actually puts the tech into use and begins to automatically censor black people. Then it is decidedly not fair.
It's not censoring black people, it's censoring people using the N-word. That is fair; the N-word is not acceptable in any conversational context, and there are no "racial" passes for unacceptable behavior. That unacceptable behavior has been tolerated is not a legitimate complaint.
The idea that AI is going to be applied to these things in any way that doesn't open the operator up to liability for civil rights abuses is ridiculous.
There is no civil right to use the N-word in a private entities forum. The user of the N-word is experiencing a "teachable moment", not a civil rights violation.
Re: (Score:3)
Actual AI can only understand context if it's given context.
You could certainly train a model to recognize that black people are allowed to say certain words that white people aren't. But you'd have to tell it the race of the poster.
Training the model requires formalizing certain rules, but that process reveals contradictions.
Re: (Score:3)
An actual AI would be able to understand context.
A large part of the problem is the "context" includes the race of the author and/or intended recipient.
Also, let's not forget that simply changing the race from "White" to "Black" can change something from "hate speech" to "perfectly acceptable and totally not bad in any way" [foxnews.com].
Re: (Score:2)
Hate Speech Algorithms do not recognize double standards.
Fixed that for you.
So using "bastard" as a term of endearment (as allowed for use in Australian english) should actually be considered a double standard?
Re: Corrected headline (Score:2)
Well it will have to guess their language then. For example, TFS makes reference to a language I've never heard of, called African American English, which may include, based on the topic, frequent use of words that are considered vulgar in regular English. So if a white guy makes a lot of racial slurs on Twitter, then the AI will have to conclude that he speaks the "African American English" language.
Though this ridiculousness went too far a decade ago. How did that go... "I've never seen a t
Re: (Score:2)
For example, TFS makes reference to a language I've never heard of, called African American English, which may include, based on the topic, frequent use of words that are considered vulgar in regular English.
Sometimes called Ebonics [wikipedia.org]
Re: Corrected headline (Score:4, Funny)
For me, when reading that, the first thing that came to mind was "Airplane". "Oh stewardess, I speak Jive..."
Re: (Score:2)
So they are both kinds of double standard, but the N-word one is more exclusionary, disallowing specific groups of people from using it, unlike bastard.
Bastard is just as exclusionary, it's just that you have to cross borders before you lose the cultural understanding that makes it acceptable. Likewise with the N-word, except that you lose the context by crossing social rather than political boundaries.
Re: (Score:2)
I was kinda surprised that qu33r is now on that level too.
When did that one hit such a high level of censorship?
Re:Not the same (Score:4, Funny)
Yep. Context is important, too.
If you say "I love n!ggers.", that's not hate speech.
But, if you say "I love n!ggers, I think everyone should own one.", that's hate speech.
If you say "I'm going to beat that qu33r.", that's hate speech.
But, if you say "I'm going to beat that qu33r with my c0ck.", that's not hate speech.
Re:Not the same (Score:5, Insightful)
"A black rapper using the word n!gger is perfectly acceptable."
It is not, it's merely tolerated. In a dispassionate, non-racist world this would not be acceptable, it only passes because of racism.
Re: (Score:2)
In a dispassionate, non-racist world it would be perfectly acceptable. A word only becomes objectionable because of passions attached to it. There was a time not too long ago when Irish was a term of abuse. (There may have been different pronunciations, but "No dogs or Irish allowed" is pretty direct, and that was a not-uncommon sign.) The passions have disappeared, and it's no longer a term of abuse.
Re: (Score:2)
"No dogs or Irish allowed" is pretty direct, and that was a not-uncommon sign.) The passions have disappeared, and it's no longer a term of abuse.
I bet if you put such a sign up in the window of your pub in an Irish neighborhood you might find that the passions haven't disappeared quite so much.
Re: (Score:2)
In a dispassionate non-racist world, 'black' and 'white' would be meaningless.
How so? I don't care if a cat is calico or tiger striped, but the terms still have meaning.
Re: (Score:2)
Yep, absolutely this. If your gay you can say queer. If your black you can say nigga.
I don't possess a gay, nor do I possess a black. Assuming that either one can be owned is offensive.
So, if you have to know the color or gender identity of the person speaking to know if a word is offensive, how do you ever flag any word as offensive? And yet, clearly, many uses of "queer" and "n***" are patently offensive. Why should your skin color get you the privilege of using offensive words?
Last night a woman was booted from Big Brother (USA). She repeatedly referred to the others in the house as
Re: (Score:2)
>I don't possess a gay, nor do I possess a black. Assuming that either one can be owned is offensive.
Sorry, I forgot 're.
> how do you ever flag any word as offensive?
Offensive is subjective. The point of the article is that we want to flag offensive speech and hate speech, but the filter flags minorities because of the language they use, and that gets called racist because disparate impact is racism.
I think there are a couple of problems: 1) trying to apply objective standards to subjective things; 2) when that o
Re: (Score:2)
Offensive is subjective.
Of course offensive is subjective. All words are subjective. They all depend on a shared belief in what they mean. This is a Wonderlandian rabbit hole that isn't worth going down.
The point of the article is that we want to flag offensive speech and hate speech, but the filter flags minorities because of the language they use, and that gets called racist because disparate impact is racism.
It flags ALL messages with the trigger words, not just minorities. The AI has no way of knowing what race any message author is. How is it racist if all messages are treated identically?
1) trying to apply objective standards to subjective things.
The owners of the medium wish to control what is said via their medium. To this end, they have tried to apply computer algorithms. Computers are
Re: (Score:2)
how do you ever flag any word as offensive?
By looking at the context. A recurrent NN should be able to do that.
Why should your skin color get you the privilege of using offensive words?
Free speech is not a privilege. The burden of proof should be on the censor, not the speaker.
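To illustrate the recurrent-net suggestion above, here is a minimal PyTorch sketch in which the classifier reads the whole sentence before scoring it, so the verdict can depend on the surrounding words rather than on a keyword list. The class name, dimensions, and two-label scheme are illustrative assumptions, not anyone's production filter.

import torch
import torch.nn as nn

class ContextClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)  # scores: [not hateful, hateful]

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        embedded = self.embed(token_ids)
        _, (final_hidden, _) = self.rnn(embedded)
        # classify from the final state, which has seen the full sentence
        return self.out(final_hidden[-1])

# toy usage: scores = ContextClassifier()(torch.randint(0, 10000, (1, 12)))

Because the LSTM state carries the surrounding tokens into the decision, the same word can score differently in different sentences; whether that is enough context for the slur-reclamation cases is exactly what the thread is arguing about.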
Re: (Score:2)
Why should your skin color get you the privilege of using offensive words?
Free speech is not a privilege.
Twitter or facebook banning you for saying a bad word is not a free speech issue. Your use of their systems is a privilege that they can retract at any time.
Re: (Score:2)
"Yep, absolutly this. If your gay you can say queer. If your black you can say nigga."
Anyone can use either of these words in the right context and company. No one can use them outside the right context and company.
If your standard is that who can say what depends on who they are, then you have a bigotry problem.
It should also be understood that (closeted) gays are often guilty of the most virulent anti-gay hate speech in public, so gays don't simply get a pass on the word "queer". The same rules apply to
Re: (Score:2)
No one can use them outside the right context and company.
Really? I think you are wrong. I know you are wrong.
If your standard is that who can say what depends on who they are, then you have a bigotry problem.
That's correct. But you might notice that the people who object to the use of the N word by ANYONE are not the bigots then, it is the people who think that they are in a class that can use it while nobody else can. I think I turned the problem 180 degrees from where you were headed. Maybe not.
It should also be understood that (closeted) gays are often guilty of the most virulent anti-gay hate speech in public, so gays don't simply get a pass on the word "queer".
Huh? Someone who isn't identified with the protected class cannot use the word without reproach, which proves that people who are in the protected class cannot use t
Re: (Score:2)
Dude was literally fired for using the word instead of "N-Word" in a meeting discussing "sensitive words". He wasn't using it, he was saying the word descriptively like "'N!gger' is offensive and shouldn't be used".
Papa Johns CEO forced to resign after pointing out another's use of ' [cnbc.com]
They'll Fix It (Score:5, Insightful)
Re: They'll Fix It (Score:2)
The problem is there's no such thing as "racist speech". Speech can be used as a tool for racism, certainly. But there is no word or series of words that is intrinsically racist.
The word "boy" is one of the most commonly known and widely used examples of "racist speech".
Human beings can't even distinguish racism from carelessness online. No algorithm can do this.
Re: (Score:2)
Not skin colour. As the study points out, it's the dialect of English being used that is the issue.
The very first page uses the example of "I saw his ass yesterday". There are white people who talk like that.
The study isn't claiming racism, it's claiming bias against African American English speakers, who may not actually be African American themselves.
Re: (Score:2)
It's not a double standard (Score:2)
The latter is a parody of Asians taking on the mannerisms without understanding them, while the other is just someone looking down on people to make themselves feel better.
Context Matters.
Re: (Score:3)
It also gives those words more power and taboo. If everyone uses it, no one cares. If only a few may use it, then when someone outside that privileged few uses it, zomg, end of ze world.
It's the same reason 'fuck' used to be really, really bad (just reminded myself of the scene in A Christmas Story when Ralphie helps his father change a tire but drops the lug nuts and says... ffffffuuuuuuuuu dge. Ahhh, good show.). Now it's everywhere and for the most part no one cares besides soccer moms and the FCC.
Re: Corrected headline (Score:2)
No, that is just something an overbearing father does.
Re: (Score:2)
Well said. If the N-word was not used among African Americans, or the Q-word in the LGBT community, I would probably discriminate against them less.
why are you discriminating against them in the first place? It sounds like you don't need to rationalize your discrimination through the usage of the N-word.
Re: Corrected headline (Score:2)
Well for starters, the topic isn't him, it's the AI. I doubt he is in any kind of position to be able to actually censor anybody.
Re:Corrected headline (Score:5, Insightful)
why are you discriminating against them in the first place?
Well, for starters, he's being told that he should accept some words from different people differently.
Re: (Score:2)
why are you discriminating against them in the first place?
Well, for starters, he's being told that he should accept some words from different people differently.
You might want to re-read what I was replying to. He/She/It stated that he would discriminate *less*. Words have nothing to do with his base discrimination.
Re:Corrected headline (Score:5, Insightful)
No, the bias is that a certain group of people is allowed to say a word and another group isn't. You expect the AI to understand that context matters and that the context is race.
Okay, but how the hell is the AI supposed to KNOW persons A, L, and D are in the protected group while persons E and M aren't? Do we all have to register by race?
It gets worse, though. In meat space, a professor is in hot water for referring to a quote by James Baldwin ("I am not your n--"). The professor pointed out that the author used that word, rather than the sanitized "Negro", for a reason. This was in a graduate-level course, and she, as a white woman, got in trouble for it. This was a place where context should have mattered: exploring the meaning of a black writer's words in an academic setting. Why did he choose one word and not the other? That's a worthy discussion to be had. But no, the only context that matters is the race of the person that says the word, not how or why it is used.
The problem isn't the algorithm, it's that only special people can use certain words regardless of context. Until we fix that in meat space, the AI is doomed.
Re: (Score:3)
Even more to the point, why does anyone think that not being in a "protected class" is a legitimate reason to suppress speech? Or conversely, why should someone in a protected class be allowed complete unadulterated racism and everyone else not?
That is not the concept behind the "protected class" (no matter what yo
Re: (Score:2)
By and large, though, "protected classes" don't flaunt their "privileged" use of hate speech in public EXCEPT for one specific protected class.
The LGBT community, though hard to size for obvious reasons, is similar in size to the black community in the US, as blacks are 10-15% of the population and gays alone are roughly 10% (unless you are religious and in denial). There is an enormous difference, however, between the use of "queer", "f*gg*t" and "n*gg*r" in terms of outrage in culture, and furthermore th
"Protected class" is too simple. Context matters (Score:3)
I think most people will agree that speech can be harmful. Even if they disagree when asked how they feel about the broad statement "speech can be harmful", they will probably agree that certain classes of things which are nothing but speech are at least often harmful. Some examples of the category include blackmail (and other forms of extortion involving threats but no other actions... yet), defamation, inciting panic (the classic "yelling 'Fire!' in a crowded theater" scenario), revealing things you learn
Re:"Protected class" is too simple. Context matter (Score:4, Insightful)
Speech is not harmful, it's merely words.
It's down to whoever listens to that speech to determine how they're going to react to it. It's perfectly possible to completely ignore speech, where it won't harm you at all. Someone could be shouting all manner of insults to your face in a language you don't understand, and you'd have no idea whether to feel offended or not.
The speech doesn't harm you; how you react to it might... That's basically self-harm.
Re: Corrected headline (Score:3)
The fact that you can't think of a single example tells me that you're definitely a racist POS. The media sucks at reporting the shootings of unarmed white people, but they don't suck THAT bad. I guarantee you've heard of at least one - for instance, Justine Damond, the Australian woman shot by American cops after she spooked them by making a loud noise. That one got lots of play in the media and a few others have as well. You just don't give a fuck because it doesn't suit your racist narrative.
I'll giv
Probably played it too much rap music (Score:5, Informative)
A far larger proportion of it than of other musical genres is misogynistic, aggressive, preening bullshit that advocates a cartoon-violence lifestyle that a lot of kids are copying. Rap music no longer reflects the streets, it dictates what happens on them. Terrorism aside, you never hear of shootings or stabbings at rock, EDM, jazz, classical, or any other type of gigs/concerts, but they're ten a penny at rap gigs.
Re: (Score:2)
Oh please, you're being ridiculous.
You speak like someone who doesn't interact with any kids and hasn't gone to any rap concerts. Maybe you yell at them to get off your lawn?
Yea, just like all the extremely violent TV shows, movies, and cartoons are causing people to become violent. Don't forget books and comics.
I'd rather blame bad parenting than whatever entertainment is popular. Of course, parents often have a hard time believing they're responsible for how their little angels act.
Re: (Score:2)
Couldn't immediately find statistics to back that up, but I did find a very recent study that points out that pop music actually has as much violence in it: https://phys.org/news/2019-03-... [phys.org]
Then again rap is associated with poverty so it's quite likely that it does correlate with violence too. Of course correlation is not causation. Just like playing Mortal Kombat doesn't make people rip each other's spines out, listening to rap music probably doesn't make them stab each other.
From VOX? Really? (Score:5, Interesting)
Re: (Score:3)
The writers have an unashamed political bias, sure, but they don't skimp on the fact-checking. I like WaPo for the same reason.
Re:From VOX? Really? (Score:5, Insightful)
The writers have an unashamed political bias, sure, but they don't skimp on the fact-checking. I like WaPo for the same reason.
Perhaps the most devious lie is to just use facts that have been cherry picked while omitting any other viewpoint or facts that might go against your narrative. Lying by omission or telling incomplete truths is extremely common for hot button topics. Much like conflating statistics for legal immigrants and illegal immigrants.
Re: (Score:2)
Much like conflating statistics for legal immigrants and illegal immigrants.
Or conflating legal asylum seekers with illegal immigrants.
Re: (Score:2)
I searched for "vox lie by omission." This is the only relevant thing I found:
https://www.dailywire.com/news... [dailywire.com]
I'm not sure I would call that a lie. Trump described himself as, in any practical sense, a first responder, but then said he doesn't consider himself one.
So, you made the claim, you present the evidence, or else I will have to ask you to prove that you do not beat your wife.
Re: (Score:2)
He absolutely said he was not a first responder and yet Vox said that is exactly what he said. It is absolutely easy to look up the two statements.
Ignoring that is just willful ignorance.
Re: (Score:2)
If I say I've jumped out of planes for fun but don't consider myself to be a skydiver, did I claim to be a skydiver or not?
Re: (Score:2)
I'm not sure I would call that a lie. Trump described himself as, in any practical sense, a first responder,
Yikes dude.
From your own link that you obviously did not read:
"Many of those affected were firefighters, police officers, and other first responders. And I was down there also, but I'm not considering myself a first responder. But I was down there. I spent a lot of time there with you."
So if you can't even get that right, how are we ever going to convince you Vox is a bunch of lying dirtbags?
Re: (Score:2)
Answer this:
https://slashdot.org/comments.... [slashdot.org]
Comment removed (Score:5, Insightful)
Re: of course it is (Score:2)
Re: (Score:3)
No, as the study you didn't read points out, it's not about race. It's about the dialect of English you speak, and it just happens that the one many black people use comes off worse. That dialect is not exclusive to black people though.
I can say that but you can't (Score:2)
You don't say? (Score:2)
You don't say? If you classify the N-word as "hate speech", some black people use it in every other sentence just for dramatic effect.
Education Level and AAE (Score:2)
Many adults with a 3rd grade reading and writing level probably look racist to an AI. Because they are.
Bias in, bias out (Score:2)
The training data came from humans, and it's so far been impossible to scrub the biases out. Even if you try to hide the user's race from the AI, it'll pick it up from proxy factors.
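A minimal sketch of the proxy-factor point, assuming scikit-learn and a tiny invented corpus: even with no race column anywhere in the data, a simple probe can recover the dialect group from word counts alone, so a downstream filter trained on the same features can still act on it.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["i saw his ass yesterday", "he was acting a fool",
         "i saw him yesterday", "he was behaving foolishly"] * 50
groups = ["AAE", "AAE", "SAE", "SAE"] * 50  # dialect, never given to a filter

X = CountVectorizer().fit_transform(texts)
probe = LogisticRegression().fit(X, groups)
print(probe.score(X, groups))  # ~1.0: dialect is trivially recoverable here,
# which is what "it'll pick it up from proxy factors" means in practice.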
Re: (Score:2)
This is in large part because what is considered offensive depends on social context. Terms that are slurs when used in some settings -- like the "n-word" or "queer" -- may not be in others.
Thing is, the "algorithm" needs to take into account BOTH the speaker (and the social context they communicate in) AND the listener (and the social context they operate in). In other words the speaker may not feel what they're saying is hateful, but the listener might. That's a harder problem than screening just one or the other.
Re:Bias in, bias out (Score:5, Insightful)
In other words the speaker may not feel what they're saying is hateful, but the listener might.
Or, more likely, some random bystander gets offended on behalf of a party that's not even aware of the conversation.
The final solution (Score:3)
This particular problem is quite easy to solve.
Fire the thought police, disable the filters, and stop censoring people. The problem with nonsense on social media has nothing to do with lack of censorship.
It has everything to do with poor governance that actively rewards, amplifies and encourages the proliferation of nonsense in order to maximize profit.
Re: (Score:2)
That wouldn't actually be a solution, but it would sure turn down the amplification.
When can we treat people as individuals? (Score:5, Insightful)
The reason an algorithm might detect more biased speech from a certain race could be that the race in question has a tendency toward more biased speech.
Just because we find some trend along racial lines does not mean that there is automatically some kind of racism inherent in the system. There will be trends among races, but we should still treat people as individuals. We should not excuse bigotry because someone is a member of a given race. "Correcting" the algorithm to account for race isn't "reverse racism"; it's racism.
If there is in fact something in the algorithm that flags someone's post as "hate speech" because of one's race then that needs to be corrected. This can be done by removing any racial identification from the data, which I can only assume was done in the first place. If this still flags more hate from a given racial group then perhaps it would be logical to conclude that some races are more prone to hate speech than other races.
Again, we need to treat individuals as individuals. If we keep lumping people together by race then we lock people into paths that were chosen by their skin color instead of their own talents, attitudes, etc.
Treating people as individuals does require that we recognize trends among different groups but that an individual can fall outside of this trend.
Re: (Score:2)
Well said
Re: (Score:2)
Everything you said is reasonable, but it said in the summary that the training data was biased.
duh (Score:3)
Humans can't properly codify "hate speech", so why would a computer?
WHAT A SURPRISE! (Score:5, Insightful)
What a racist thing to say! (Score:2)
Identity politics is destined to fail (Score:2)
Goodness, trying to identify "hateful" people by what they say and not what they mean is going to backfire? Couldn't see that coming
Identity politics is trying to put people into boxes, and it just doesn't work. That's why the extreme left is such a hateful group... are you a black person that lives in Chicago that doesn't like democratic stuff? Be prepared to be called a white supremacist by other white guys. Unless they figure out that you are black, in which case it is Uncle Tom.
Judging people as individ
Oh baloney (Score:2)
Even censoring Agatha Christie (Score:2)
The British writer Agatha Christie wrote a story called
"Ten Little Ni66ers"
It was based on a song (the deaths follow the song) and has the same title as the song.
The US version changed the name to "And Then There Were None"... Other versions kept the original name, translated (for example, in French, the story is called "Dix petits nègres").
There are many references to that word in the story... 10 small statues, the song, the name of the island... And no racism behind that story!!!
slurs when used in some settings -- like the "n-wo (Score:3)
Wrong.
When Afro-Americans call each other "niggahs", that IS hate speech.
Not against Afro-Americans, but against whites.
Think about it. Why did they start to use it for each other, and why do they still continue doing so?
Re: (Score:2)
Sorry, you're wrong.
Simple counter-example: "my car". Two different people saying that will mean quite different objects.
Re: (Score:2)
True, but there's reasonable evidence that it's true. Certainly not proof, or anything close to proof, but decent evidence with not unreasonable assumptions behind it.
My suspicion is that "hate speech" acts as an amplifier for pre-existing "hate intentions", and causes one to feel both less embarrassment and less endangered in performing "hate actions", presuming that those actions are parallel to the meaning of the speech. Thus the amplifier doesn't precisely "cause" any of the actions, but only makes th