
Twitter Algorithm Prefers Slimmer, Younger, Light-Skinned Faces (bbc.com)

An anonymous reader quotes a report from the BBC: A Twitter image-cropping algorithm prefers to show faces that are slimmer, younger and with lighter skin, a researcher has found. Bogdan Kulynyc won $3,500 in a Twitter-organized contest to find biases in its cropping algorithm. Earlier this year, Twitter's own research found the algorithm had a bias towards cropping out black faces. The "saliency algorithm" decided how images would be cropped in Twitter previews, before users clicked through to see them at full size. But when two faces were in the same image, users discovered, the preview crop appeared to favor white faces, hiding the black faces until users clicked through. As a result, the company revised how images were handled, saying cropping was best done by people.
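The saliency-based cropping described above amounts to scoring every pixel and keeping a window around the highest-scoring point. A minimal toy sketch (the function name and the fixed-size crop window are illustrative assumptions, not Twitter's actual pipeline):

```python
import numpy as np

def crop_around_saliency(image, saliency, crop_h, crop_w):
    """Crop an (H, W, C) image to (crop_h, crop_w), centered on the
    most salient pixel, clamped to the image bounds."""
    h, w = saliency.shape
    # Most salient point (argmax over the saliency map).
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop window on it, then clamp to the image edges.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 100x100 image whose "salient" spot is near the top.
img = np.zeros((100, 100, 3))
sal = np.zeros((100, 100))
sal[10, 50] = 1.0   # pretend a face was detected here
crop = crop_around_saliency(img, sal, 40, 100)
# The crop keeps the top 40 rows; the bottom 60 rows are discarded.
```

Everything outside the crop window is simply never shown in the preview, which is why a biased saliency score translates directly into hidden faces.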

The "algorithmic-bias bounty competition" was launched in July -- a reference to the widespread practice of companies offering "bug bounties" for researchers who find flaws in code -- with the aim of uncovering other harmful biases. And Mr Kulynyc, a graduate student at the Swiss Federal Institute of Technology in Lausanne's Security and Privacy Engineering Laboratory, discovered the "saliency" of a face in an image could be increased -- making it less likely to be hidden by the cropping algorithm -- by "making the person's skin lighter or warmer and smoother; and quite often changing the appearance to that of a younger, more slim, and more stereotypically feminine person".
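The probing approach described above treats the saliency model as a black box: apply an edit (lightening, smoothing) to a face and check whether its saliency score rises. A toy sketch of that idea, where `toy_saliency` is a deliberately simplistic stand-in for the real neural model:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_saliency(image):
    # Stand-in for the model's saliency score: in this toy,
    # brighter images simply score higher. The real model is a
    # neural network whose biases are only observable by probing.
    return float(image.mean())

def lighten(image, amount=0.05):
    # One kind of edit from the contest entry: lighten the skin tone.
    return np.clip(image + amount, 0.0, 1.0)

face = rng.random((64, 64, 3)) * 0.5   # a dark-ish toy "face"
before = toy_saliency(face)
after = toy_saliency(lighten(face))
# If after > before, the edited face is more likely to win the crop.
```

The finding was that, for the real model, edits toward lighter, smoother, younger-looking faces consistently raised the score in this way.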

Awarding him first prize, Twitter said his discovery showed beauty filters could be used to game the algorithm and "how algorithmic models amplify real-world biases and societal expectations of beauty." Second prize went to Halt AI, a female-founded University of Toronto start-up that, Twitter said, showed the algorithm could perpetuate marginalization in the way images were cropped. For example, "images of the elderly and disabled were further marginalized", the company said. Taraaz Research founder Roya Pakzad won third prize for an entry that showed the algorithm was more likely to crop out Arabic text than English in memes.


Comments Filter:
  • Now to fix it. (Score:4, Insightful)

    by Gravis Zero ( 934156 ) on Tuesday August 10, 2021 @07:34PM (#61678335)

    a Twitter-organized contest to find biases in its cropping algorithm

    It sucks that the algorithm was poorly trained but it's good that they are trying to fix it. This was a bad foul up on their part but it's not a Microsoft makes a Neo-Nazi Sex Robot [slashdot.org] level mistake.

    • This was a bad foul up on their part but it's not a Microsoft makes a Neo-Nazi Sex Robot level mistake.

      To be fair, few companies have the raw talent required to fuck up quite that hard.

  • And so what? (Score:2, Interesting)

    by wbcr ( 6342592 )
    The model is doing exactly what it was trained to do: suggesting the most likely face to be cropped from the image based on the demographics of Twitter. This maximizes its utility, as fewer users need to manually correct the choice made by the algorithm. Why is no one outraged that old, fat, and ugly users don't tend to make their selfies public? By "correcting" it to someone's subjective perception of "fairness", it will just perform worse.
    • Re:And so what? (Score:4, Insightful)

      by phantomfive ( 622387 ) on Tuesday August 10, 2021 @08:41PM (#61678495) Journal

      No, that doesn't make sense. The goal isn't to make it work well sometimes; the goal is to make it recognize faces. A human doing the task wouldn't have trouble, so neither should an AI.

      • There's a huge chunk of human brain that does nothing but recognize faces, and it's been training your whole life. Give AI a break.
      • Why should anyone believe that a human would not subconsciously select what to crop based on the same (or some other) bias?

        • Have you ever accidentally cropped a black person out of a picture?

          • by GuB-42 ( 2483988 )

            Imagine a picture of a celebrity: an attractive white woman escorted by a black bodyguard, the kind you can find in tabloids.
            There is a high chance of the bodyguard being cropped out of the picture, at least a much higher chance than of the celebrity being cropped.

            • Yes, but a human would never mix up the white bodyguard and the black celebrity. The human would look at the clothes, the pose, the expression. If the AI model hasn't included those factors that's a failing of the model designers.

              • by GuB-42 ( 2483988 )

                I'm sure many humans would mix them up if you had a black celebrity and a white bodyguard, especially if the bodyguard is a woman, because that is uncommon.

                In fact, people mistaking the "VIP" for the "servant" when the VIP is black and the servant is white are often pointed to as an example of racial bias. I am sure most people have this bias, but smart ones double-check before saying anything. AI is not smart, but it can match an insensitive, poorly educated or very young human. And the AI may not have a way of double-checking.

      • No that doesn't make sense.

        Well quite. Here, we generally accept that computers are buggy pieces of shit programmed largely by incompetents with AI and especially deep learning being the bullshit cherry on top of the bullshit pie.

        Unless the AI does something aligned with a well known human prejudice in which case the AI and everyone involved in its creation are unimpeachably perfect and it's a flawless reflection of natural justice.

      • by GuB-42 ( 2483988 )

        The Twitter algorithm can recognize faces well enough. But sometimes it has to make a choice. The usual test is to make a tall image, with one face at the top, one face at the bottom, and nothing in the middle. The cropping algorithm understands that there are faces and that faces are important, but it has to make a choice. And that is a difficult choice, even for humans, so much so that I expect many humans would just dodge the problem: not crop at all, edit the image, refuse to do anything, etc...

        But the algorithm is
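The tall-image test described in the comment above is easy to construct, assuming only black-box access to a cropping function. Here is a toy version, with `toy_crop` as a crude stand-in that keeps the fixed-height window of highest total brightness:

```python
import numpy as np

def toy_crop(image, crop_h):
    """Toy cropper: keep the crop_h-row window with the highest
    total brightness (a crude stand-in for a saliency crop)."""
    rows = image.sum(axis=(1, 2))
    scores = np.convolve(rows, np.ones(crop_h), mode="valid")
    top = int(np.argmax(scores))
    return top, image[top:top + crop_h]

# Tall test image: one bright "face" at the top, a slightly dimmer
# one at the bottom, nothing in between.
img = np.zeros((300, 100, 3))
img[10:40] = 0.9    # top face
img[260:290] = 0.8  # bottom face
top, crop = toy_crop(img, 100)
# The preview window cannot hold both faces, so the cropper is
# forced to choose; here it keeps the (brighter) top face.
```

The bias studies simply ran this forced choice many times with faces from different demographic groups and counted who got kept.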

      • by epine ( 68316 )

        A human doing the task wouldn't have trouble, so neither should an AI.

        I took a long Slashdot hiatus recently. Eventually I ran out of groans. It seems I will again soon.

        Man, way to undervalue your own brain. You have 100 trillion synapses that form an extremely sophisticated Bayesian prior on everything ordinary under the sun (for phenomena illuminated by precision instruments, the Bayesian prior from 90% of the daylight human population is complete batshit, but we'll ignore this for the moment).

        The human b

  • There are certain faces it recognizes, not that it prefers. This doesn't seem to be unusual for AI facial recognition, similar issues are showing up in other organizations that use it.

    Part of it could be the data set they're training it on, but also the data set they're applying it to. Social media selfies don't always have perfect lighting. If your face is already darker, maybe that lowers the quality enough so that the AI doesn't recognize it. They cite only a "4% difference from demographic parity" so we
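The "difference from demographic parity" mentioned above is a standard fairness metric: the gap between the rates at which each group's faces survive the crop. A minimal illustration (the data and function name here are made up for the example):

```python
def demographic_parity_gap(kept, group):
    """kept[i]: 1 if face i survived the crop, else 0.
    group[i]: the demographic group of face i.
    Returns the absolute gap in keep-rates between the two groups."""
    rate = {}
    for g in set(group):
        members = [k for k, gg in zip(kept, group) if gg == g]
        rate[g] = sum(members) / len(members)
    a, b = rate.values()
    return abs(a - b)

kept  = [1, 1, 1, 0, 1, 0, 1, 0]
group = ["light", "light", "light", "light",
         "dark", "dark", "dark", "dark"]
gap = demographic_parity_gap(kept, group)  # 0.75 - 0.50 = 0.25
```

A gap of zero means both groups are kept at the same rate; the contest entries quantified how far the real cropper deviated from that.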

  • So it took about a zillion people screaming about racism and sexism for what felt like a zillion years to convince Twitter to just let people do their own cropping?

    Why was this not the design from the beginning? Can someone, anyone explain the fetish of the past 15 years or so for removing choice from the user? This seems especially prevalent in social media, and it doesn't seem like all of the railroading is just for ad views. Just because you made a whiz-bang neat-o feature that can be pretty impressive w

  • Almost all of the US will fail the 'slim' test, and the rest will fail the other two.

  • Awarding him first prize, Twitter said his discovery showed beauty filters could be used to game the algorithm and "how algorithmic models amplify real-world biases and societal expectations of beauty."

    You mean, real world biases like the average preference that black men have for lighter skinned women?

    (Totally real world; just ask some black women, they are not shy about talking/complaining about it.)

    • by PPH ( 736903 )

      the average preference that black men have for lighter skinned women?

      That's an error in analysis of the data. Representing about 10% of the population, I would expect a black man (or woman) to select a non-black partner nine times out of ten if their choice was made randomly or based only on other factors. That this is not the case indicates that they tend to select partners of their own race.

      • the average preference that black men have for lighter skinned women?

        That's an error in analysis of the data. Representing about 10% of the population, I would expect a black man (or woman) to select a non-black partner nine times out of ten if their choice was made randomly or based only on other factors. That this is not the case indicates that they tend to select partners of their own race.

        Again, talk to some black women about it. Even among black women, black men on average prefer lighter skinned women. It's a thing, not an "error of data analysis".

  • But when the machines come for you, and they can't detect you, you will be glad

    • My thoughts exactly. As a fat and ugly ogre, when they check the facial recognition cameras I will be invisible. Free to wander and wreak havoc upon society.
      • by Z80a ( 971949 )

        Better than this is having a "master face", as in the other article posted today, where the recognition will think you're many different people.

  • ...seems to be that it holds up a mirror to society. So far, we don't like what we see. It's not the AI mirror's fault, it's ours. If we want AI to be 'better', we have to change so that our datasets are 'better.' So what is 'better' according to us?
  • The fundamental, unspeakable confound in this space is that light skin is more reflective than dark skin, and the camera fundamentally receives more photons on average (in the same photo) from faces with light skin rather than dark skin.

    Look up the definition of light and dark, as it was on the Lord's seventh day, long before the invention of grievance studies, and the modern interrogation into whether photons are the primary physical conduit of misinformation (yes, they are—for every distinguished sl
