
Apparent Racial Bias Found in Twitter Photo Algorithm (venturebeat.com)

An algorithm Twitter uses to decide how photos are cropped in people's timelines appears to be automatically electing to display the faces of white people over people with darker skin pigmentation. From a report: The apparent bias was discovered in recent days by Twitter users posting photos on the social media platform. A Twitter spokesperson said the company plans to reevaluate the algorithm and make the results available for others to review or replicate. Twitter scrapped its face detection algorithm in 2017 in favor of a saliency detection algorithm, which is designed to predict the most important part of an image. A Twitter spokesperson said today that no race or gender bias was found in evaluation of the algorithm before it was deployed, "but it's clear we have more analysis to do." Twitter engineer Zehan Wang tweeted that bias was detected in 2017 before the algorithm was deployed, but not at "significant" levels.
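To make the mechanism concrete: a saliency-based crop of the general kind described above can be sketched in a few lines. This is purely illustrative and is not Twitter's model; it uses OpenCV's spectral-residual saliency detector (from opencv-contrib-python), and the filename and crop size are made up.

```python
import cv2
import numpy as np

def saliency_crop(image, crop_w, crop_h):
    """Crop a window centered on the most salient point of the image."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image)
    if not ok:
        cy, cx = image.shape[0] // 2, image.shape[1] // 2  # fallback: center
    else:
        cy, cx = np.unravel_index(np.argmax(sal_map), sal_map.shape)
    # Clamp the window so it stays inside the image bounds.
    x0 = int(np.clip(cx - crop_w // 2, 0, image.shape[1] - crop_w))
    y0 = int(np.clip(cy - crop_h // 2, 0, image.shape[0] - crop_h))
    return image[y0:y0 + crop_h, x0:x0 + crop_w]

img = cv2.imread("photo.jpg")         # hypothetical input
thumb = saliency_crop(img, 600, 335)  # Twitter-like wide crop
```

Whatever model computes the saliency map, the crop inherits its preferences: if faces with darker skin score lower, they fall outside the window. That is the failure mode the story describes.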
This discussion has been archived. No new comments can be posted.

Apparent Racial Bias Found in Twitter Photo Algorithm

  • Seems a bit strange that it's only racist when it's in favour of someone white. I'm sure it would not be considered an equally racist algorithm if it prioritized a black person.
    • by Anonymous Coward

      Seems a bit strange that it's only racist when it's in favour of someone white. I'm sure it would not be considered an equally racist algorithm if it prioritized a black person.

      Who said it was racist?

      • Re: It's a bit strange. (Score:4, Informative)

        by Geoffrey.landis ( 926948 ) on Monday September 21, 2020 @02:30PM (#60528668) Homepage

        Seems a bit strange that it's only racist when it's in favour of someone white. I'm sure it would not be considered an equally racist algorithm if it prioritized a black person.

        Who said it was racist?

        The headline only said it was racially biased. Since the algorithm seems to have a bias based on skin color (which is to say, race), that headline seems accurate to me.

        • Which is it, biased based on skin color, or on race? The two aren't exactly the same thing... (Pretty clear it's skin color, not race.)

          • Which is it, biased based on skin color, or on race? The two aren't exactly the same thing... (Pretty clear it's skin color, not race.)

            You are indeed right in your pedantry, but if we are going to be pedants, then the fact that "race" doesn't exist from a biological point of view should figure into it. I guess they meant skin colour, and most people would understand, since we conflate the two given that one of them doesn't really exist.

            • Yep, exactly. Race doesn't exist from a biological point of view.

              What's more, I'm pretty sure the pattern-matching system in question doesn't have a concept of race that matches the one held by those wielding identity politics and making everything (including things like this, which simply aren't...) about race.

              Which is why people with different eye shapes (as opposed to skin color) likely don't factor into this problem, despite that being just as much about "race", as popularly conceived, as skin color is. So then who gains...

            • Hmmm... So why do doctors use race as one of the primary methods for determining predisposition to certain genetic disorders? Hmmmmmmm... Someone should tell them they're doing it wrong...
              • Why don't you talk to a doctor instead of taking on the mantle of spreading idiocy until someone corrects you? Could it be that what doctors consider race and what you consider race are two different terms describing two different things?

                Nah! That couldn't possibly be it!

                • Typical sjw bs - "you're wrong and if you can't be bothered to educate yourself neither can I." How about this: you tell me exactly how they are different.
                  • If you think I am an SJW, you are an idiot. That is very exact, isn't it? You are an idiot, and I gave you the reason why.

                    Now, can you give a real example when you went to a doctor and they asked you for your race? Don't give me "it happened" bs. Tell me a real incident that happened to you. What was the illness? Did you ask the doctor why he needed that? Do you know or have some idea why the doctor asked you that?

                    Because I have NEVER been asked for it in all my fucking life.

    • Re: (Score:2, Interesting)

      by Xenographic ( 557057 )

      The article is content-free: it says that a bias was detected, but not what inputs the algorithm looked at.

      The real answer might be that it showed more popular photos, and those happened to be of white actors or something like that. In other words, it's probably the case that the bias here came from user input that was fed into the algorithm. Given that everything they do is based on popularity, it will be interesting to see how much they can overrule the decisions of the users before the users...

      • by AmiMoJo ( 196126 ) on Monday September 21, 2020 @01:13PM (#60528294) Homepage Journal

        "Apparent" because this is something people have noticed and needs investigation.

        When photos need to be cropped for Twitter, especially on mobile, it does it automatically. Face recognition is used to try to make sure that the subject is in the frame, but when a white person and a black person are in the same image, it usually picks the white person.

        It's a well-known problem. Face recognition often fails with dark skin, and devs often don't test for it. I saw it happen in a thread about Zoom removing black people's heads because the face recognition failed and considered them part of the background.
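For anyone who hasn't seen face-driven cropping up close, here is a rough sketch of the general approach. This is illustrative only, using OpenCV's stock Haar cascade rather than whatever Twitter actually runs; the point is that any face the detector misses simply vanishes from the crop.

```python
import cv2

def face_crop(image):
    """Crop to the bounding box around every detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return image  # nothing detected: keep the whole frame
    # Anyone the detector failed on is silently cropped out,
    # which is exactly the reported failure mode.
    x0 = min(x for x, y, w, h in faces)
    y0 = min(y for x, y, w, h in faces)
    x1 = max(x + w for x, y, w, h in faces)
    y1 = max(y + h for x, y, w, h in faces)
    return image[y0:y1, x0:x1]
```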

        • On each search engine, enter "hot bikini girls" or something similar.

          99% white chicks.

            On each search engine, enter "hot bikini girls" or something similar. 99% white chicks.

            On an image search engine, enter 'jamila', which is Swahili for 'beautiful woman'; Swahili is spoken by people who are mostly of black African descent.

            99% dark skinned or black women.

            What a strange coincidence.

            • by sycodon ( 149926 )

              So beautiful women don't wear bikinis?

              Nothing in "Hot Bikini girls" implies race.

              • Nothing in "Hot Bikini girls" implies race.

                Other than being an English phrase, spoken mostly by whites. You've restricted your data set to mostly one type; don't be surprised when your sample reflects it.

        • So a contrast issue? Seems a little over the top to call it inherent racial bias. Conjuring up images of cross burnings and lynchings does little to solve a technical problem of using shadows to detect bone structure and the geometric spacing of eyes, ears, and nose.

          Headlines should really say: black people get a break from Big Brother using facial recognition to catalog and track them.

          • by AmiMoJo ( 196126 )

            Unfortunately, what usually happens is that the police buy a "99.999% accurate" facial recognition system that has only been tested on white people, and it incorrectly identifies a load of black people, who then get a knee to the neck from a cop who is convinced they are the person he is looking for.
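The arithmetic behind that worry is the base-rate fallacy; the numbers below are made up for illustration, not taken from any vendor's spec sheet.

```python
# Scan a million faces with an advertised 0.001% false-positive rate.
population = 1_000_000
false_positive_rate = 1e-5

false_matches = population * false_positive_rate
print(false_matches)  # 10.0 innocent people flagged

# If the false-positive rate is several times higher for dark-skinned
# faces (the untested group), those flags concentrate on one group.
```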

            • by e3m4n ( 947977 )

              I've been against facial recognition since before the movie Minority Report. I have never had a Facebook account, and I make sure my wife crops me out of anything she uploads; at no time is my image tagged or given any other sort of identity reference should I appear in the background of a picture. What I do not understand is all these anti-maskers. Masks are really screwing up the facial recognition systems, except for the dumbasses that are literally taking selfies of themselves wearing a mask.

        • > "Apparent" because this is something people have noticed and needs investigation.

          Maybe they should investigate and find out what's going on *before* writing the story? Would it really take that long to figure out what input is going into the algorithm? A simple code dive like that should take less than a day, so why didn't they?

        • Why is it always the devs' fault? Mathematically, dark images carry less information than light images: less contrast (easy to quantify; see the sketch below). That actually makes the problem itself harder to solve; it's not just a case of racist devs not testing it on any black people.
          • by AmiMoJo ( 196126 )

            Where did you get "racist devs" from?

            The problem is incompetence and lack of proper testing.
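The "less contrast" claim above is easy to quantify: RMS contrast is just the standard deviation of pixel intensity, and underexposed regions give a detector fewer gradients to work with. A minimal sketch; the filenames are hypothetical.

```python
import cv2
import numpy as np

def rms_contrast(path):
    """RMS contrast: standard deviation of normalized gray levels."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0
    return gray.std()

print(rms_contrast("well_lit_face.png"))
print(rms_contrast("underexposed_face.png"))  # typically lower
```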

    • Re: It's a bit strange. (Score:5, Interesting)

      by ITRambo ( 1467509 ) on Monday September 21, 2020 @01:07PM (#60528264)
      The primary issue with facial recognition programs is that lighter skin is easier to identify than less reflective skin. Algorithms need to use many more data points to correctly identify the features of faces with darker skin. Not doing this might indicate a bias on the programmers' part, or simply a current technical challenge that will be corrected.
      • Re: It's a bit strange. (Score:5, Interesting)

        by BetterSense ( 1398915 ) on Monday September 21, 2020 @02:11PM (#60528604)
        White skin isn't necessarily "easier to identify", unless your techniques are all based on white people, in which case of course white skin is easier to identify.

        As a photographer and programmer...

        White faces are typically defined by their shadows. Therefore you light white people so that the shadows on their face fall appropriately, because it's going to be the shadows that your brain uses to define their facial structure.

        Black faces are typically defined by their highlights. Therefore you light black faces so that the highlights on their face fall appropriately, because it's going to be the highlights that your brain will use to define their facial structure.

        You can blast a black person with light and light their face the same as a white person's. It will work, but it will look more like a flash-blasted sports/documentary look, and it won't necessarily look natural; that's not the way we see people out and about in normal life.

        You can also light a white person like a black person, but the result will be a deliberately "dark" portrayal of the person, and again it won't look natural or the way we see them in normal life. This is actually a mature technique for shooting something in broad daylight while making it look like it was shot at night; many nighttime scenes from classic Hollywood were actually shot during the day, like many of the scenes in Psycho or Jaws. That doesn't look normal, and neither does over-lighting a black person.

        It's all about contrast either way, but whether the algorithm is hand-programmed, trained, or a combination of both, we should expect a facial recognition algorithm to need really different techniques, and/or to be trained on completely different datasets, in order to detect black faces as well as white ones. In normal lighting situations the physical light patterns will be more or less totally different. If you train with a dataset of mostly white people, I would expect the resulting algorithm to be bad at detecting black people, which is really common sense, isn't it?
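One concrete way to act on that is to measure detection rates per group on a balanced, labelled test set, so any gap shows up as a number rather than as anecdotes. A sketch under stated assumptions: detect_face and the group labels are hypothetical placeholders, not any real library's API.

```python
from collections import defaultdict

def detection_rate_by_group(samples, detect_face):
    """samples: iterable of (image, group_label) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image, group in samples:
        totals[group] += 1
        if detect_face(image):  # hypothetical detector under test
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# A result like {"light": 0.98, "dark": 0.81} is the measurable bias,
# whatever its cause turns out to be.
```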
        • Thank you for giving a much more comprehensive answer to this than I did. All I'd add is that an algorithm doesn't have to actually light the people in a photo properly, something you clearly know how to do.

        • Thanks, I'd mod you up if I had points. So it sounds like to make it work you would first have to identify which algorithm to run on a face (for dark vs. light skin), so you would need at least 3 algorithms (1 to choose which of the other 2 to use).

          • OCR software often deals with a similar problem. I used to run OCR for a global manufacturing company, and the software needed to be able to read even under contrast tone inversion from lithographic processes, or even when the target was disappearing at certain wavelengths of light. In that case it was impossible to devise a single algorithm that would work for competing lighting conditions, so instead you optimized for each scenario separately; then, in real-time operation, the software simply threw all the algorithms...
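That run-everything-and-keep-the-winner pattern sketches out roughly as below; the detector functions are hypothetical placeholders that each return a (result, confidence) pair, since the parent comment doesn't name the actual software.

```python
def best_of_all(image, detectors):
    """Run every per-scenario detector and keep the most confident result."""
    results = [detect(image) for detect in detectors]
    return max(results, key=lambda pair: pair[1])  # highest confidence wins

# Usage sketch, with hypothetical per-scenario detectors:
# detectors = [read_normal, read_inverted, read_low_contrast]
# text, confidence = best_of_all(scanned_image, detectors)
```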
    • No, it is racist no matter what. Take, for example, facial recognition not recognizing black people as well: racist, right? What if someone made a facial recognition system that recognized black people better than white people? That would be racist too: police are targeting black people. I think it is a fundamental problem with the way people think; we don't use the facts to come to a conclusion, we have a view of the world and arrange the facts to match our view of the world. So if you think the world is racist, you...

  • by MrLogic17 ( 233498 ) on Monday September 21, 2020 @01:07PM (#60528260) Journal

    Twitter trying to find what parts of a photo I'm most interested in seeing as a thumbnail seems to be a problem too hard to solve.
    Their attempt at focusing at faces may have seemed like a good idea, but some faces are easier to detect than others, depending on how you've trained the algorithm.

    Twitter would be better off scaling the whole image down to an actual thumbnail, and stop trying to guess what I want to see.
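The parent's alternative is trivial to implement. A minimal sketch using Pillow, with a hypothetical filename and a Twitter-like target size:

```python
from PIL import Image

img = Image.open("photo.jpg")
img.thumbnail((600, 335))  # shrinks in place, preserving aspect ratio
img.save("thumb.jpg")
```

No guessing, no bias: everyone in the photo stays in the thumbnail.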

  • Interesting how algorithms developed by all the Whities in SV end up being racially biased. They should have tested it on Token first; short flight to Colorado.
  • When your algorithms look at points of contrast, any picture with less contrast will be harder to classify in detail.

    People with darker complexions naturally have less contrast, so they're harder to classify. This is not racist - it is, as someone else pointed out, a property of light reflecting off surfaces.

    What might be racist is asking a computer to classify things for you, since it isn't going to be affected by prejudice that says you must get a certain outcome, regardless of facts.

    • "What might be racist is asking a computer to classify things for you, since it isn't going to be affected by prejudice that says you must get a certain outcome, regardless of facts."

      You were doing well until that part. If the goal is to detect faces and sometimes it doesn't because a face has dark skin, then it has failed, period. And if the way it is written causes it to demonstrate bias, then it is biased. It doesn't mean it's racist, even if it's racially biased, because it doesn't know anything about race.

  • How much of this has to do with the charge-coupled device sensors [youtu.be] in the first place? Wouldn't it at least partially have to do with biasing light gathering towards a section of the total dynamic range? After that, you'd probably be better able to tweak the algorithms that do their own raw-luminance to detection-biased-luminance mapping prior to detection.

  • by Nkwe ( 604125 ) on Monday September 21, 2020 @01:56PM (#60528516)
    Which is it? This is a site for nerds, so let's talk about the actual technical problem, not society's problem: things such as algorithms having a hard time with low-contrast images, poor training of AI models, etc. If there is actual racial bias, let's talk about whether it comes from intentional bias, unconscious bias, or just technical momentum (biased or not) because whoever trained the model happened to have access to more Caucasian images than images showing people of color. I would be interested to know if the algorithm itself is biased, as the headline suggests (meaning some code specifically makes a racial judgement), or if the process leading to the creation of the algorithm was influenced by bias (meaning the headline is not technically accurate), or if there isn't actually any bias.
    • "Biased" simply means that the algorithm preferentially selects white faces as interesting over black faces.

      The existence of bias has nothing to do with whether that was built into the code, or the algorithm learned it from scratch.

      (It is almost certainly due to the fact that the algorithm was trained on more white faces than black faces. Is that bias? There are more white faces than black faces in America, that's just demographics. But if the result is preferentially selecting white over black, it is still bias, even if the bias is caused by demographics. By definition.)

      • Yes, but reading through the Twitter thread linked in the article, it seems there is an assumption of preference without actually demonstrating it.

        Let's say you have 100 pictures, 20 of black people and 80 of white people. If the first step of your algorithm is to randomly select 10 pictures, the uniformly sampled random result (i.e., unbiased) would have 2 black people and 8 white people on average. In reality, a single sample may not have these exact proportions; for example, with 1 sample, you may get 1 black person and 9 white people...
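The parent's sampling point can be checked with a quick simulation using the same 20/80 numbers; this is just illustrative arithmetic, not Twitter's pipeline.

```python
import random

pool = ["black"] * 20 + ["white"] * 80  # the parent's 100-picture example
counts = {}
for _ in range(10_000):
    k = random.sample(pool, 10).count("black")  # unbiased uniform draw
    counts[k] = counts.get(k, 0) + 1

print(sorted(counts.items()))
# Roughly a third of perfectly unbiased samples contain 0 or 1 black
# photos instead of the "expected" 2, so a handful of anecdotal crops
# can't distinguish bias from sampling noise.
```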

        • I think you're missing the subject here. The twitter algorithm is taking images that have a white person and a black person in them, and cropping them to show just the white person.

          You say it takes something more to determine if the algorithm has bias, but this is the very definition of bias.

          • It is cropping them because it is trying to display a larger image on devices with smaller screens (the problem seems most apparent on mobile devices). It could just grab a random part of the image to crop, which is what a lot of these viewers do, but Twitter was trying to be clever by selecting the "most important" part of the image to crop. Notwithstanding the obvious subjectivity of "importance", clearly this is a difficult problem.

            One way to approach it is to think about it as a semantic segmentation task...

      • (It is almost certainly due to the fact that the algorithm was trained on more white faces than black faces. Is that bias? There are more white faces than black faces in America, that's just demographics. But if the result is preferentially selecting white over black, it is still bias, even if the bias is caused by demographics. By definition.)

        You make a reasonable point, but I wonder if you're too kind about the demographics. When I'm testing a function, I focus my effort on what's likely to cause my system to fail, not on what's most common. I think a lot of programmers do. Saying that you're going to train on a dataset with 90% white faces because that's the demographics of the target population is like saying you're going to test your division function with 1-9 because 0 only happens 10% of the time.
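The division analogy in test form, as a sketch (pytest-style, with a hypothetical function under test): the rare case gets a test because it is failure-prone, not because it is frequent.

```python
import pytest

def safe_divide(a, b):
    """Hypothetical function under test."""
    return a / b if b != 0 else None

# 0 is rare in production inputs but carries most of the failure risk,
# so it gets a test case regardless of how common it is in the data.
@pytest.mark.parametrize("b", [1, 9, -3, 0])
def test_safe_divide(b):
    result = safe_divide(10, b)
    if b == 0:
        assert result is None
    else:
        assert result == 10 / b
```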

      Which is it? This is a site for nerds, so let's talk about the actual technical problem, not society's problem: things such as algorithms having a hard time with low-contrast images, poor training of AI models, etc. If there is actual racial bias, let's talk about whether it comes from intentional bias, unconscious bias, or just technical momentum (biased or not) because whoever trained the model happened to have access to more Caucasian images than images showing people of color. I would be interested to know if the algorithm itself is biased, as the headline suggests (meaning some code specifically makes a racial judgement), or if the process leading to the creation of the algorithm was influenced by bias (meaning the headline is not technically accurate), or if there isn't actually any bias.

      Bro, chill the fuck out. It is a legitimately technical discussion to discuss "bias" in an AI algorithm, which could be "racial bias", which is distinct from "racist bias."

      If there's someone here who is technically incapable of distinguishing between these terms, that's you, so just stop telling the rest of us what is technical and what is not.

      Or just keep ranting at the wind, going off on a tangent, if it makes you feel better; I am not judging.

    • Unfortunately, as with many articles here, this one is so low on useful information as to make such discussions little more than speculative. If the algorithm is selecting 'white' faces as 'interesting', as seems to be suggested, where did it get its definition of interesting? I mean, if the majority of the people looking are white, and most of them are more interested in seeing family or people who look like themselves than not, it may simply be performing as designed and giving 'the majority' of users...

    • So much easier to just cry "RACISM" and let some angry people burn shit down.
      Nobody reads to the 2nd tier comments anyway.

  • by gurps_npc ( 621217 ) on Monday September 21, 2020 @02:00PM (#60528540) Homepage

    Look, if the software was developed by black scientists, it would not have been released because it failed to identify black faces.

    The racism is not in the failure to identify black faces as easily as white ones because of light reflectivity. Instead, it is in a) not testing the software enough, and b) using software that clearly fails when used on black faces.

    Fix the software, then implement it, not the other way around.

    • Look, if the software was developed by black scientists, it would not have been released because it failed to identify black faces.

      The racism is not in the failure to identify black faces as easily as white ones because of light reflectivity. Instead, it is in a) not testing the software enough, and b) using software that clearly fails when used on black faces.

      Fix the software, then implement it, not the other way around.

      Why would this be racist? A "racial bias" can occur in a system without requiring racism among the implementers. You are absolutely right that if this had been developed by black scientists, this problem would not have occurred, but we can be almost sure that it would have been biased in some other way. This industry is in its infancy.

      Why there were no black scientists or testers involved, that's a good question. And for all we know, there might have been; just because a test cycle doesn't detect a bug (in particular...

    • The racism is not in the failure to identify black faces as easily as white because of light reflectivity. Instead it is a) not testing the software enough and b) using a software that clearly fails when used on black faces.

      That isn't racism. Racism is hating people based on their race. I'm not sure where the notion comes from that anything having to do with race is racist, but that's not how it works. Only intentional malice is actually bigotry. Everything else is usually accident, lack of awareness, ignorance, or at worst negligence.

      Just because some programmers failed to account for this in training their models doesn't make them racist.

      • Devs not testing their work thoroughly? I don't believe it! When I read this article I immediately thought of the show 'Silicon Valley', specifically the 'hotdog' app. So yeah, I'm not sure that racism is present so much as the naivety, tight deadlines, and shortened test phases that beset every project :)
    • So if it works for 90% of the use cases, it isn't good enough because it makes other people feel bad? I wish someone would send Microsoft the memo; I think most of the time they figure 80% success on all use cases is really good testing.

    • It does not fail to identify black faces. The claim is that it chooses white faces over black faces when asked to identify the "important" part of the picture. I say "claim" because it is stated, not proven. It's easy to say they "just didn't test it enough", but assigning a category such as importance is much harder than simple identification. There is likely to be a high error rate and a significant number of exception cases. The algorithm may in fact be biased, but we can't just assume that, not if you want...

    • So anything that is biased toward white people cannot be used, even if physics causes the problem?
  • There is horrible racism in America and other countries. Relative reflectivity is not part of that problem.

    Post this stuff on WhineDot: News for social scientists. Stuff that we can make matter.

  • ...is the contrast setting.

  • by Miles_O'Toole ( 5152533 ) on Monday September 21, 2020 @02:45PM (#60528728)

    I love the way Slashdot has suddenly become the home of dozens of expert photographers, many of whom are throwing around terms like "relative reflectivity" and "contrast settings" and "luminance", to prove this whole kerfuffle is nothing more than SJWs getting their knickers in a knot for no reason.

    Well, I've worked as a professional photographer, and I can tell you one thing: if this algorithm is actually making poor cropping choices based on skin colour, it's because whoever wrote it was too lazy or too stupid to ask a pro how we somehow magically manage to create group photos that, with minimal correction or none at all, make people with a wide variety of skin tones look pretty much the same. And by "the same", I mean we don't produce results where black people look fine while white people look washed out and flat, or white people look fine while black people look like featureless shadows with eyes.
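One automated stand-in for that photographic discipline is local contrast normalization before detection. The sketch below uses CLAHE via OpenCV as one common example of the idea; it is not a claim about what Twitter's pipeline, or any particular pro, actually uses.

```python
import cv2

def normalize_contrast(image_bgr):
    """Lift local contrast (CLAHE on the L channel) so shadowed faces
    aren't just featureless blobs to a downstream detector."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    merged = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)
```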

    • Interesting. So, as a professional photographer, do you have an algorithm that just magically does your job for you no matter what the starting image looks like? Or do you have to manually touch up each image, applying different filters as necessary to achieve the result you are looking for? Can you work your magic on a low resolution out of focus compressed JPEG, or do you typically start from a high quality RAW image?

      Before criticizing the programmer, maybe take some time to appreciate the difficulty of...

    • by K10W ( 1705114 )

      I love the way Slashdot has suddenly become the home of dozens of expert photographers, many of whom are throwing around terms like "relative reflectivity" and "contrast settings" and "luminance", to prove this whole kerfuffle is nothing more than SJWs getting their knickers in a knot for no reason.

      Well, I've worked as a professional photographer, and I can tell you one thing: if this algorithm is actually making poor cropping choices based on skin colour, it's because whoever wrote it was too lazy or too stupid to ask a pro how we somehow magically manage to create group photos that, with minimal correction or none at all, make people with a wide variety of skin tones look pretty much the same. And by "the same", I mean we don't produce results where black people look fine while white people look washed out and flat, or white people look fine while black people look like featureless shadows with eyes.

      That isn't apples to apples, and I feel you're missing the point; or at least, if you do get it, you don't indicate that you do. FWIW, I totally get what you are saying, and I've taken plenty of photos of white people in dark clothing with dark hair against dark backgrounds. Same for very dark-skinned people in pale clothing against dark backgrounds, yadda yadda, all without blown highlights or crushed blacks. I am not a pro, as I do it as a hobby and not for a living for various reasons, but I'm a bit more able than most...

  • by Anonymous Coward
    If you look here [twitter.com] and then compare to this [twitter.com], it'd seem like perhaps some people just find racism anywhere they look.
  • Dark skin just doesn't reflect as much light and is therefore trickier to photograph well in the first place. If you've had to take group photos with a mix of subjects, then hopefully you had some good fill flash and did some subtle arranging of positions!
