Facebook Apologizes After Its AI Labeled Black Men 'Primates' (bbc.com) 217

Facebook users who watched a newspaper video featuring black men were asked if they wanted to "keep seeing videos about primates" by an artificial-intelligence recommendation system. From a report: Facebook told BBC News it "was clearly an unacceptable error", disabled the system and launched an investigation. "We apologise to anyone who may have seen these offensive recommendations." It is the latest in a long-running series of errors that have raised concerns over racial bias in AI.
This discussion has been archived. No new comments can be posted.

  • checkmate, woketardz!

    • Re: (Score:2, Interesting)

      by Ostracus ( 1354233 )

      Then the story would more closely resemble "Facebook users who watched a newspaper video featuring men [in general] were asked if they wanted to "keep seeing videos about primates" by an artificial-intelligence recommendation system. ".

      • by sjames ( 1099 )

        To be fair, the video showed black men and white men. The report chose to highlight that there were black men in the video. I saw behavior typical of primates by all parties in the videos.

        Neither video showed enough context to decide who was right or wrong (if anyone).

        • Where did you see the video? Do you have a link?

          • by sjames ( 1099 )

            An article from NPR [npr.org] had the link [facebook.com].

            • I just watched it. On a percentage-of-time basis, it looked to me like the white people in the video were featured just as much as, if not more than, the black people. I think the misclassification may not be race-based. (I know, I know, technically humans are primates, but I assume it doesn't just say "want to watch more primate videos" for all videos depicting humans, so it is still a misclassification.)

            • An article from NPR [npr.org] had the link [facebook.com].

              I see people with dark skin, I see people with light skin. Are we certain that the word "primate" was referring only to the people with dark skin?

              The video itself is about people with light-colored skin harassing people with dark-colored skin. And it is supportive of the people with dark-colored skin. So it seems like people who consider themselves against racism and harassment of other people would be watching. And, sad to say, a fair number of those folks might take offense.

              Gotta be careful with those fol

          • by jmccue ( 834797 )
            You actually want to go to Facebook on purpose? You are a braver person than me.
      • by Kisai ( 213879 ) on Tuesday September 07, 2021 @02:50AM (#61770851)

        This is really a problem with how machine learning actually works.

        This is similar to "click all the bicycles" when all you see are bricks. The ML believes there is a bike in there, and won't let you continue until you click it.

        Somewhere along the line, the training set contained black men and whatever was labeled "primates", possibly a Wikipedia entry, and the ML correctly identified that yes, humans are primates, but the training set didn't have anything in there to discern why "primate" was the wrong word to use.

        It's similar to other ML problems that require babysitting the training: ANY interaction with humans has to be reviewed before being fed to an ML program, because you don't want it to learn bad habits (e.g. Microsoft Tay), and you don't want it to misinterpret slang as proper language. You can easily trick RNNs into believing that the concurrent use of certain symbols means they belong together, even when those symbols are not analogous. An RNN would quite literally learn to speak like a gangster or a nazi just by constantly being fed slang that it understands but doesn't contextually use in an inoffensive way.
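
        To make that failure mode concrete, here is a minimal sketch in pure Python. Everything in it is invented for illustration: real classifiers work on learned embeddings, not a single number, and the feature values and class names are hypothetical. The point it shows is that a model can only draw its class boundary from the labels it was given; if "human" examples cover only part of feature space, everything outside falls to the other label by default.

            # Hedged sketch: nearest-centroid classifier on a made-up 1-D feature.
            def nearest_centroid(train, x):
                """Return the label whose mean training feature is closest to x."""
                centroids = {label: sum(vals) / len(vals) for label, vals in train.items()}
                return min(centroids, key=lambda lab: abs(centroids[lab] - x))

            # Hypothetical training data: "human" is sampled from a narrow region,
            # so the model never learns how wide the human class really is.
            train = {
                "human":   [0.10, 0.15, 0.20, 0.25],
                "primate": [0.60, 0.70, 0.80, 0.90],
            }

            for x in (0.20, 0.55):
                print(x, "->", nearest_centroid(train, x))
            # 0.2  -> human
            # 0.55 -> primate  (an unseen kind of human lands on the wrong side)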

    • by rsilvergun ( 571051 ) on Monday September 06, 2021 @11:19PM (#61770627)
      And black folk were called monkeys for ages in an effort to dehumanize them and excuse the terrible things done to them
      • Using a whole people as a political prop is also dehumanizing.
        Black culture has always been more than being oppressed.
      • ALL people look like monkeys, or apes, etc. This is a notoriously difficult problem for a piece of computer software to solve. I look so much like a capuchin that one in a zoo in Panama offered me a piece of his banana. (Or maybe he was just trying to make nice with the much larger primate, who knows. But we did share a significant resemblance in facial hair at the time.) And the kids in the zoo, who have presumably seen all of these animals before, found me far more interesting than any of them; I'm two me

        • by HiThere ( 15173 )

          Totally natural, possibly. Also something that they can properly be called on. When Facebook does it, then a defamation lawsuit seems reasonable to me, and possibly to them, so they quickly apologized. Perhaps semi-sincerely, to the extent that a corporation can be sincere.

      • by RightwingNutjob ( 1302813 ) on Tuesday September 07, 2021 @08:37AM (#61771463)

        Small children of all races are affectionately called monkeys in many languages.

        Older children of all races are not-so-affectionately called monkeys, in many cultures and many languages, when they fail to act like civilized human beings.

        Dehumanization requires intent. Something that superficially resembles dehumanization but is not intentional is not dehumanization.

        Confusing superficial resemblance of a thing with the thing itself is a cargo cult mentality. It occurs in all sorts of places, like church attendance being confused with piety or spirituality and university attendance being confused with learning.

        You're doing it too. If you were to take offense at the Latin alphabet on the grounds that those same 26 letters were used to write racist polemics and laws and whatnot, you'd be committing the same kind of error, only to a greater degree.

        Part of the scientific revolution was the ability to distinguish superficial appearances, first impressions, and anthropomorphizing ascriptions of morality concocted entirely in your own mind from the underlying reality deduced from repeatable, objective measurements. Let's not lose that.

      • And they are not alone in having been the target of it in the past. It was not unique to one group, and thus should not be treated as specifically targeting one group.

        Also, yes, context is important. Here, the context is a flaw in ML training, not archaic slurs or who their targets may have been.

    • Heck, we are even "great apes" of the hominid sort. But the scientific context is NOT relevant. What is relevant is the social context: essentially, the algorithm was comparing people to apes. Remember, the algorithm is looking at shape, not scientific classification. To it, "primate" should mean "ape", "gorilla", and so forth, and not "human".
  • Facebook reacted promptly to the disturbing racist incident and permanently banned Facebook from Facebook for violating community standards.

  • They would test AI before they unleash it on the world

    • Re: (Score:3, Interesting)

      And pay someone to test it, when they can just release it and get a free, endless rain of bug reports?
    • I think they did, which shows that their testing was biased too...

    • They would test AI before they unleash it on the world

      What's the racial diversity breakdown of Facebook's employee pool?

      • by MysteriousPreacher ( 702266 ) on Tuesday September 07, 2021 @01:19AM (#61770745) Journal

        While a black person might have noticed this, that's not the key issue. They could just as easily have missed the issue.

        Their methodology for training the AI was probably flawed. It's not racism or muh representation. It's a badly designed AI that, along similar lines, could have misidentified women as male if they were particularly tall. It could have confused bald people with eggs.

        Hiring an impossibly diverse staff would be a far less effective approach than having well designed experiments.

    • Maybe they did test it... after all, it wasn't wrong.

  • Flip it (Score:4, Interesting)

    by Baron_Yam ( 643147 ) on Monday September 06, 2021 @09:57PM (#61770487)

    Don't look at it as AI confusing black men with non-human primates (because we are primates too), look at it as AI unable to tell the difference between our species.

    Have you ever gone to a zoo and looked at an orangutan or gorilla? Maybe a chimpanzee? They're a lot closer to us than most people probably realize. If you're doing facial recognition using skin tone and distances between key features, it doesn't surprise me in the least that a man and a gorilla can be a 'close match' to an AI.
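
    For a sense of how a 'close match' happens, here is a minimal sketch with every number invented: face recognizers reduce a face to a feature vector and compare distances, and two vectors that are close on the measured dimensions look like the same class regardless of species. Real systems use hundreds of learned dimensions, not the four hypothetical ones here.

        # Hedged sketch: cosine similarity between two made-up feature vectors.
        import math

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)

        # Hypothetical features: (skin tone, eye spacing, nose-mouth distance, brow depth)
        human_face   = [0.30, 0.52, 0.41, 0.25]
        gorilla_face = [0.28, 0.55, 0.45, 0.30]

        print(f"similarity = {cosine(human_face, gorilla_face):.3f}")
        # similarity = 0.998 -- far above a naive "same class" cutoff like 0.9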

    • Re:Flip it (Score:5, Funny)

      by raymorris ( 2726007 ) on Monday September 06, 2021 @10:07PM (#61770501) Journal

      > Have you ever gone to a zoo and looked at an orangutan or gorilla? They're a lot closer to us than most people probably realize.

      The differences between orangutans and humans should be obvious. Orangutans are - well, orange. And unable to override their emotions with logic. Have you ever seen a human who is orange and unable to... Oh shit!

      • 2017-2021 will forever be burned into our memories.

      • Re:Flip it (Score:5, Informative)

        by dryeo ( 100693 ) on Tuesday September 07, 2021 @01:18AM (#61770743)

        Orangutans are quite capable of overriding their emotions with logic. Give one a screwdriver and it will pretend to be uninterested, hide the screwdriver, and later disassemble its cage. The emotion is wanting out; the logic is to wait until the coast is clear and use the tool for what it was designed for.
        There are all kinds of stories of orangutans being escape artists, doing things like scoring a piece of wire, hiding it in their mouth, and picking the lock later. Definitely thinking ahead and delaying gratification.
        Do a search; there are some good stories.

    • An AI can mistake a ferret for a guinea pig, and if its job is to perform image classification for humans, that's an abject failure, because any human would know we don't eat ferrets.

      There is no point in rationalizing it, I know they're both fuzzy little creatures, and the machine is not rational. Don't make excuses for a failed algorithm, it's not a child.

  • by h33t l4x0r ( 4107715 ) on Monday September 06, 2021 @09:59PM (#61770491)
    At least it didn't label them "Nigerians".
    • Watch your mouth!
    • Boy do I have a country to show you!

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      It's pronounced like "knee-jair", as you'd say it in French.
      And both countries are named after a river.
      That river is where people were kidnapped for use as slaves in America, which is how the word that later became a slur came about.

      Same for "moor" by the way. The Moors were a powerful nation in west Africa.

      I wonder... is any of this part of US education? (Because it should be. It's super-interesting too.)

    • by quenda ( 644621 )

      At least it didn't label them "Nigerians".

      I may have missed a joke there, but how about Archbishop Ndukuba, Primate of All Nigeria?
      I'm sure he has no problem with the Word.

      https://livingchurch.org/2019/... [livingchurch.org]

    • Maybe in Spanish versions it did.

  • Sizes and colors. There are several species of absolutely golden-blonde primates, in addition to the mostly-black gorillas. So, yeah, despite the snarky gloating comments from the right-wing gaping assholes, it is actually a bit odd that the Facebook algorithm would label black people specifically as primates. It raises legit questions of internal bias or racism, and merits looking into. Not a librul, cancel-culture witch hunt.
    • by lsllll ( 830002 )
      If I understand AI correctly, what you're referring to is really an algorithm as opposed to AI. An AI is not programmed according to the distance between eyes and nose and mouth. An AI is fed images with known labels, and it determines, based on the images it has seen and the labels associated with them, that the next image it sees should be given such-and-such a label. What you're insinuating is that people labeling images for the AI's consumption purposefully (internal bias and racism?) labeled primates' pi
      • There are definitely people out there posting photos of black people and describing them as primates. They probably aren't doing so with the intention that they be fed into AI systems. They are doing so with the intention of making racist comments on social media.
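
        If that is the contamination path, the standard mitigation is a data-hygiene pass before training. Here is a minimal sketch under that assumption; the blocklist, field names, and records are all invented for illustration:

            # Hedged sketch: drop scraped (image, label) pairs where a label known
            # to be misapplied to people is attached to an image containing a person.
            BLOCKED_FOR_PEOPLE = {"primate", "ape", "gorilla", "monkey"}

            scraped = [
                {"image": "img_001.jpg", "label": "human",   "contains_person": True},
                {"image": "img_002.jpg", "label": "primate", "contains_person": True},   # hostile label
                {"image": "img_003.jpg", "label": "gorilla", "contains_person": False},  # an actual gorilla
            ]

            clean = [r for r in scraped
                     if not (r["contains_person"] and r["label"] in BLOCKED_FOR_PEOPLE)]
            print([r["image"] for r in clean])  # img_002.jpg is dropped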

    • by MysteriousPreacher ( 702266 ) on Tuesday September 07, 2021 @01:31AM (#61770771) Journal

      No, it's not odd and it's a leap to begin suspecting racism. A poorly trained system is a far more likely explanation.

      Bias is more likely than racism. Their sample may have been inadequate. Blacks are around 13% of the general population. That figure may be lower in some populations (e.g. FaceBook users). If they were assigned x number of training images, even just based on population representation, then 87% of the sample would not be black. Probably far lower if including data of international users.

      It is far more likely somebody ballsed up the sampling. A form of bias, sure, but not quite burning crosses on the lawn.
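
      The arithmetic of that sampling argument is easy to sketch. The 13% share comes from the figure above; the error model (per-group error shrinking roughly as one over the square root of the per-group sample size) is a deliberately crude assumption for illustration only:

          # Hedged sketch: proportional (not stratified) sampling and a toy error model.
          total_images = 100_000
          share = {"group_a": 0.87, "group_b": 0.13}

          for group, frac in share.items():
              n = int(total_images * frac)
              err = 100 / (n ** 0.5)   # invented 1/sqrt(n) error heuristic
              print(f"{group}: {n} training images, ~{err:.2f}% error")
          # group_a: 87000 training images, ~0.34% error
          # group_b: 13000 training images, ~0.88% error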

      • It is far more likely somebody ballsed up the sampling. A form of bias, sure, but not quite burning crosses on the lawn.

        No, it's more than someone ballsing up the sampling: someone ballsed up the entire process of making the product, from the sampling all the way through to release. I do work with deep learning. We can't release something without it going through a review by a team whose job it is to check for things like this.

        I don't work for facebook: the company I work for could be swallowed by faceb

      • You are confusing the algorithm and the outcome.

        Let's say I go to a shop; the shopkeeper is a racist but keeps that very much to himself, because he knows that treating customers in a racist way is bad for business and hurts his own wallet. Shopkeeper = racist, shopping experience = not racist.

        This is the exact opposite. The algorithm isn't racist, because "racist" is not something that algorithms can be. However, the outcome is racist: A derogatory term was used for black people.

        For all those w
      • by dasunt ( 249686 )

        Bias is more likely than racism. Their sample may have been inadequate. Blacks are around 13% of the general population. That figure may be lower in some populations (e.g. FaceBook users). If they were assigned x number of training images, even just based on population representation, then 87% of the sample would not be black. Probably far lower if including data of international users.

        Why not both?

        Humans are primates and it's reasonable that an AI would classify humans as such. It's not that AI is wron

      • It might not even be a sample size issue.
        This is pure speculation, as no one here knows the exact algorithm.

        Yet, let's assume that Facebook has a generalized AI to detect what a video is about then recommend ads.

        Video of airplane --- suggest trip
        Video of cat --- suggest cat food
        ...
        Video of primates --- suggest more primate videos

        Let's also keep in mind the context. These videos were mainly black-people-versus-police encounter videos. This is not just black people walking around. I'd find it surprising if Fa

    • "absolutely golden-blonde primates" - Hmm, Swedish primates sure are the best.
  • by kmoser ( 1469707 ) on Monday September 06, 2021 @10:14PM (#61770517)
    Never attribute to racism that which is adequately explained by an AI trained on an insufficiently robust data set.
    • by Bert64 ( 520050 )

      This.
      The AI operates based on the data set it's been trained with.
      If you feed it a bunch of photos of white men labelled "human" and a bunch of dark gorillas labelled "gorilla" it's going to bias towards labelling darker images as gorillas.

      If you carefully control the training data, you could get the AI to recognise a pile of coal as a gorilla because it's dark.
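
      A minimal sketch of that shortcut, with invented pixel data: if every "human" training image is bright and every "gorilla" image is dark, the cheapest feature separating the classes is brightness itself, and anything dark, including a pile of coal, gets the "gorilla" label.

          # Hedged sketch: a "classifier" that has learned only a brightness threshold.
          def mean_brightness(pixels):
              return sum(pixels) / len(pixels)

          def fit_threshold(bright_class, dark_class):
              # Midpoint between the two classes' average brightness.
              hi = sum(map(mean_brightness, bright_class)) / len(bright_class)
              lo = sum(map(mean_brightness, dark_class)) / len(dark_class)
              return (hi + lo) / 2

          humans   = [[200, 210, 190], [220, 215, 205]]   # all bright images
          gorillas = [[40, 35, 50], [30, 45, 25]]         # all dark images
          cutoff = fit_threshold(humans, gorillas)

          pile_of_coal = [20, 15, 25]
          label = "human" if mean_brightness(pile_of_coal) > cutoff else "gorilla"
          print(label)  # gorilla -- the model learned brightness, not anatomy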

    • Well, this is the USA. In the USA everything is racism.
    • Never attribute to racism that which is adequately explained by an AI trained on an insufficiently robust data set.

      Nobody says the word "primate" was created through racism. What is said is that using the word _is_ racism, no matter _why_ it was used.

    • In this case, an overly robust data set, that correctly identified primates.

  • Reminds me of a comedy bit by Patton Oswalt about when his baby daughter accidentally acted racist [youtube.com].

    We wouldn't let children do the recognition and decision tasks that some corporations use higher-order pattern-matching (so-called "AI") for...
    Why do they think that the "AI" would be more appropriate? We could train children with the same data, but it wouldn't make them more mature.

  • Facebook is a joke (Score:4, Insightful)

    by ArhcAngel ( 247594 ) on Monday September 06, 2021 @10:33PM (#61770565)
    Film at 11!
  • by CaptainDork ( 3678879 ) on Monday September 06, 2021 @10:55PM (#61770597)

    ... of scam. AI will not be here until a computer responds with, "Fuck it! I ain't doin' it"

  • by DrLudicrous ( 607375 ) on Monday September 06, 2021 @10:58PM (#61770601) Homepage
    I'm not defending FB, which once again has inappropriately used insufficiently tested technology to guide users to content that will maximize time spent on FB, and therefore ad revenue. To me this seems like ImageNet, or some similarly non-diverse training set, being used to develop a recommendation algorithm. Obviously, it has failed miserably.

    For a decent synopsis of the issue, see:
    https://venturebeat.com/2020/11/03/researchers-show-that-computer-vision-algorithms-pretrained-on-imagenet-exhibit-multiple-distressing-biases/
  • Just wait for the AI hiring race lawsuits to hit!

  • who think that their AI is working at SkyNet/Matrix levels.

  • by systemd-anonymousd ( 6652324 ) on Tuesday September 07, 2021 @12:15AM (#61770677)

    The footage contained cops too, but you're assuming the AI decided that it was the black people that made it decide the video contained primates, not the cops. Therefore making you racist.

  • I may have missed it, but is the video under discussion posted somewhere? It seems relevant to the discussion.
  • Are we sure the AI was calling the black guys "primates"? Or could it have been referring to the white guys as primates? Because from the video I saw, it was the white guy who was acting out of line. So maybe the AI correctly referred to the white guy as the primate?
    • It sounds more and more to me that the people calling the whole thing "racist" simply assumed that the AI labeled the black people primates.

      So yes, I do think the accusation of racism is apt. I only wonder if it's made in the right direction.

  • I sure would hate to be called a "primate", even though I am one. An Ape, too! People need to chill and learn fucking science.
  • by Budenny ( 888916 ) on Tuesday September 07, 2021 @02:50AM (#61770849)

    I find this impossible to assess.

    First, not having found or seen the video, it's impossible to tell what it's depicting. From the postings, it appears to show black and white men in dispute. Without knowing how the algorithm works, we do not know what in the video led to the suggestion of more clips of primates. Was it behavior? Was it the black guy? Or was it the white guy? Who knows?

    Second, we do not know what you got if you clicked on the 'more primates' link. Did you get just ape clips? Or did you get a mixture of clips of humans and apes? Or did it give you just humans? Or maybe clips of all kinds of unrelated stuff or creatures?

    If FB's algorithm were routinely to suggest clips of apes to anyone looking at clips in which blacks are found, but not otherwise, then you'd have to agree there would be a problem.

    But from the reported facts and available evidence, it's impossible to know if that is happening.

  • I checked by searching YouTube for 'primates'.

    As far as I can tell it is not coming up with pictures of black people, or not preferentially so. The few people in the little icons seem mostly (not all) to be white or quite light skinned. The subject matter seems to be mostly clips of apes, and quite a few are of people talking pedagogically about the differences and similarities and evolutions of apes and humans in general.

    I didn't check out what is in them, having spent enough time on this already. I don

  • Zoology says (Score:5, Informative)

    by vbdasc ( 146051 ) on Tuesday September 07, 2021 @03:08AM (#61770873)

    Humans:

    Domain: Eukaryota
    Kingdom: Animalia (Metazoa)
    Phylum: Chordata
    Subphylum: Vertebrata
    Class: Mammalia
    Order: Primates
    Family: Hominidae
    Genus: Homo
    Species: Homo sapiens

    Now, calling someone a Prokaryote would really be an insult.

  • by pele ( 151312 )

    They, as always, apologise for nothing yet fail to apologise for fake news and rigged elections and referendums.

  • People assume that because it's a computer, its results are unbiased and correct, when in reality AI is only as good as the assumptions that went into the programming. Humans still create the rules and algorithms; the computer just does the calculations a lot faster. As a result, we blithely accept the results as valid until an obviously flawed one comes along, and then wonder what happened, because the computer is never wrong.
  • The story is almost too perfect.

    While I can easily believe a poorly trained AI is at work, it's just sooo perfect for a Two Minutes Hate that one can't help but be suspicious.

  • There is no need to have AI analyze a video and guess what the content may be. AI is insufficiently capable of doing so and the entire effort is better served by simply suggesting videos with similar keywords.
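
    A minimal sketch of that keyword approach, with invented titles and tags: rank candidate videos by tag overlap (Jaccard similarity) with what was just watched, so no image model ever has to guess what is on screen.

        # Hedged sketch: keyword-overlap recommendation instead of video analysis.
        def jaccard(a, b):
            return len(a & b) / len(a | b)

        catalog = {
            "city council meeting": {"news", "local", "politics"},
            "gorilla at the zoo":   {"animals", "zoo", "gorilla"},
            "police bodycam clip":  {"news", "local", "police"},
        }

        watched_tags = {"news", "local", "police"}
        ranked = sorted(catalog, key=lambda v: jaccard(watched_tags, catalog[v]), reverse=True)
        print(ranked[0])  # "police bodycam clip" -- no image model in the loop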
