Twitter Algorithm Prefers Slimmer, Younger, Light-Skinned Faces (bbc.com) 45
An anonymous reader quotes a report from the BBC: A Twitter image-cropping algorithm prefers to show faces that are slimmer, younger and with lighter skin, a researcher has found. Bogdan Kulynych won $3,500 in a Twitter-organized contest to find biases in its cropping algorithm. Earlier this year, Twitter's own research found the algorithm had a bias towards cropping out black faces. The "saliency algorithm" decided how images would be cropped in Twitter previews, before being clicked on to open at full size. But when two faces were in the same image, users discovered, the preview crop appeared to favor white faces, hiding the black faces until users clicked through. As a result, the company revised how images were handled, saying cropping was best done by people.
The "algorithmic-bias bounty competition" was launched in July -- a reference to the widespread practice of companies offering "bug bounties" for researchers who find flaws in code -- with the aim of uncovering other harmful biases. And Mr Kulynyc, a graduate student at the Swiss Federal Institute of Technology in Lausanne's Security and Privacy Engineering Laboratory, discovered the "saliency" of a face in an image could be increased -- making it less likely to be hidden by the cropping algorithm -- by "making the person's skin lighter or warmer and smoother; and quite often changing the appearance to that of a younger, more slim, and more stereotypically feminine person".
Awarding him first prize, Twitter said his discovery showed beauty filters could be used to game the algorithm and "how algorithmic models amplify real-world biases and societal expectations of beauty." Second prize went to Halt AI, a female-founded University of Toronto start-up that, Twitter said, showed the algorithm could perpetuate marginalization in the way images were cropped. For example, "images of the elderly and disabled were further marginalized", the company said. Taraaz Research founder Roya Pakzad won third prize for an entry that showed the algorithm was more likely to crop out Arabic text than English in memes.
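A minimal sketch of the probing approach described above, assuming a stand-in saliency_score function in place of Twitter's released saliency model (the interface here is invented):

```python
# Sketch of the bias probe: apply a graded edit (here, brightening,
# as a crude proxy for skin lightening) to one face photo and check
# whether the saliency model scores the edited versions higher.
from PIL import Image, ImageEnhance

def saliency_score(image: Image.Image) -> float:
    """Placeholder for the model under test (not implemented here)."""
    raise NotImplementedError("plug in the actual saliency model")

def probe_brightness_bias(path: str, factors=(1.0, 1.1, 1.2, 1.3)):
    """Score progressively brightened copies of one face photo."""
    base = Image.open(path).convert("RGB")
    return [(f, saliency_score(ImageEnhance.Brightness(base).enhance(f)))
            for f in factors]

# Rising scores with rising brightness would suggest the bias found:
# for factor, score in probe_brightness_bias("face.jpg"):
#     print(f"brightness x{factor}: saliency {score:.4f}")
```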
The "algorithmic-bias bounty competition" was launched in July -- a reference to the widespread practice of companies offering "bug bounties" for researchers who find flaws in code -- with the aim of uncovering other harmful biases. And Mr Kulynyc, a graduate student at the Swiss Federal Institute of Technology in Lausanne's Security and Privacy Engineering Laboratory, discovered the "saliency" of a face in an image could be increased -- making it less likely to be hidden by the cropping algorithm -- by "making the person's skin lighter or warmer and smoother; and quite often changing the appearance to that of a younger, more slim, and more stereotypically feminine person".
Awarding him first prize, Twitter said his discovery showed beauty filters could be used to game the algorithm and "how algorithmic models amplify real-world biases and societal expectations of beauty." Second prize went to Halt AI, a female-founded University of Toronto start-up Twitter said showed the algorithm could perpetuate marginalization in the way images were cropped. For example, "images of the elderly and disabled were further marginalized", the company said. Taraaz Research founder Roya Pakzad won third prize for an entry that showed the algorithm was more likely to crop out Arabic text than English in memes.
Re:Twitter is racist (Score:4, Insightful)
Machine learning systems will be biased if the training data is biased.
If most photos are of young slim white people, then that will be the bias.
The obvious solution is a bigger training set with more variety of ages, body fat, and race.
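A minimal sketch of that rebalancing idea, assuming each training example carries a demographic group label (the labels below are invented); inverse-frequency weights make scarce groups count more during training:

```python
# Weight each example by the inverse frequency of its group, so the
# majority group cannot drown out everyone else during training.
from collections import Counter

def inverse_frequency_weights(groups: list) -> list:
    """One weight per example, proportional to 1 / group frequency."""
    counts = Counter(groups)
    return [len(groups) / (len(counts) * counts[g]) for g in groups]

# Hypothetical skewed set: 8 of one group, 2 of another.
groups = ["young_light"] * 8 + ["older_dark"] * 2
print(inverse_frequency_weights(groups))
# majority examples weigh 0.625 each, minority examples weigh 2.5
```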
Re: (Score:2)
Racism in Africa tends to be about things other than skin colour: shape of the skull, shape of the eyes and so on.
Lighter skin colour evolved only after humans left high-UV-intensity areas, not before. You'd still get sunburn and early skin cancer with lighter skin in Africa, just as you get severe vitamin D deficiency and depression with black skin in Northern Europe.
Re: (Score:2)
Second this.
Fairer skin is one parameter of beauty in many (perhaps most) Asian cultures, perhaps since before the start of civilization. Chinese literally has a phrase meaning "white skin masks three defects" (一白遮三丑), and Japanese has less strong phrases along the same lines.
Japanese friends once told me the whiteness also makes a person's facial features stand out better at night, which was critical for millennia, until a century ago, when the only lighting was candle, fire or moonlight. Especially since nighttime is when most
Re: (Score:2)
You were modded troll for some reason, but you are correct. In this case, though, it's not just the AI at fault; some of the hand-coded algorithms Twitter uses (e.g. for face detection and framing) have problems too.
Re: (Score:2)
How about "fat white incels like you"? Is that redundant?
Now to fix it. (Score:4, Insightful)
a Twitter-organized contest to find biases in its cropping algorithm
It sucks that the algorithm was poorly trained, but it's good that they are trying to fix it. This was a bad foul-up on their part, but it's not a "Microsoft makes a Neo-Nazi Sex Robot" [slashdot.org] level mistake.
Re: (Score:2)
This was a bad foul-up on their part, but it's not a "Microsoft makes a Neo-Nazi Sex Robot" level mistake.
To be fair, few companies have the raw talent required to fuck up quite that hard.
And so what? (Score:2, Interesting)
Re:And so what? (Score:4, Insightful)
No, that doesn't make sense. The goal isn't to make it work well sometimes, the goal is to make it recognize faces. A human doing the task wouldn't have trouble, so neither should an AI.
Re: (Score:2)
Re: (Score:2)
I'm not giving AI a break. If AI wants to run with the men, then it needs to train with the boys. It's the Eye of the Matrix, with sweat and hard binary [youtube.com].
Re: (Score:1)
Re: (Score:2)
Why? How about someone explains it to you? Clearly, from your comment, all you know is Faux "News"-level propaganda about it.
Re: (Score:2)
Why should anyone believe that a human would not subconsciously select what to crop based on the same (or some other) bias?
Re: (Score:2)
Have you ever accidentally cropped a black person out of a picture?
Re: (Score:2)
Imagine a picture of a celebrity, an attractive white woman escorted by a black bodyguard; the kind you can find in tabloids.
There is a high chance of the bodyguard being cropped out of the picture, at least a much higher chance than of the celebrity being cropped.
Re: (Score:2)
Yes, but a human would never mix up the white bodyguard and the black celebrity. The human would look at the clothes, the pose, the expression. If the AI model hasn't included those factors, that's a failing of the model designers.
Re: (Score:2)
I'm sure many humans will mix them up if you have a black celebrity and a white bodyguard, especially if the bodyguard is a woman, because that is uncommon.
In fact, people mistaking the "VIP" for the "servant" when the VIP is black and the servant is white is often pointed to as an example of racial bias. I am sure most people have this bias, but smart ones double-check before saying anything. AI is not smart, but it can match an insensitive, poorly educated or very young human. And the AI may not have a way of doubl
Re: (Score:2)
No, that doesn't make sense.
Well quite. Here, we generally accept that computers are buggy pieces of shit programmed largely by incompetents with AI and especially deep learning being the bullshit cherry on top of the bullshit pie.
Unless the AI does something aligned with a well-known human prejudice, in which case the AI and everyone involved in its creation are unimpeachably perfect and it's a flawless reflection of natural justice.
Re: (Score:2)
The Twitter algorithm can recognize faces well enough. But sometimes it has to make a choice. The usual test is to make a tall image, one face at the top, one face at the bottom and nothing in the middle. The cropping algorithm understands that there are faces and that faces are important, but it has to make a choice. And that is a difficult choice, even for humans, so much so that I expect many humans to just dodge the problem: do not crop at all, edit the image, refuse to do anything, etc...
But the algorithm is
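A minimal sketch of that two-faces test, assuming Pillow; the crop box itself would come from whichever cropper is under test, which is not implemented here:

```python
# Build the standard test image: one face at the top, one at the
# bottom, a large blank gap in between, then see which face the
# cropper's chosen box (left, top, right, bottom) actually keeps.
from PIL import Image

def build_tall_test_image(face_a, face_b, gap=1200):
    """Stack two face images with an empty white gap between them."""
    width = max(face_a.width, face_b.width)
    canvas = Image.new(
        "RGB", (width, face_a.height + gap + face_b.height), "white")
    canvas.paste(face_a, (0, 0))                    # top face
    canvas.paste(face_b, (0, face_a.height + gap))  # bottom face
    return canvas

def surviving_face(crop_box, top_face_height, gap):
    """Report which of the two faces the crop box kept."""
    _, top, _, bottom = crop_box
    if bottom <= top_face_height:
        return "top face"
    if top >= top_face_height + gap:
        return "bottom face"
    return "both, or neither clearly"
```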
Re: (Score:2)
I took a long Slashdot hiatus recently. Eventually I ran out of groans. It seems I will again soon.
Man, way to undervalue your own brain. You have 100 trillion synapses that form an extremely sophisticated Bayesian prior on everything ordinary under the sun (for phenomena illuminated by precision instruments, the Bayesian prior from 90% of the daylight human population is complete batshit, but we'll ignore this for the moment).
The human b
Re: (Score:2)
lol I was describing the goal, not the reality. You can calm down.
"Recognizes" not "Prefers" (Score:2)
There are certain faces it recognizes better, not faces it prefers. This doesn't seem to be unusual for AI facial recognition; similar issues are showing up in other organizations that use it.
Part of it could be the data set they're training it on, but also the data set they're applying it to. Social media selfies don't always have perfect lighting. If your face is already darker, maybe that lowers the quality enough that the AI doesn't recognize it. They cite only a "4% difference from demographic parity", so we
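For context on the "demographic parity" figure quoted above: it is the gap between groups in the rate of a favourable outcome (here, a face being kept in the crop). A minimal sketch with invented counts:

```python
# Demographic parity difference: the absolute gap between two
# groups' favourable-outcome rates. All counts below are invented.
def parity_difference(kept_a, total_a, kept_b, total_b):
    """Absolute gap between the groups' kept-in-crop rates."""
    return abs(kept_a / total_a - kept_b / total_b)

# Hypothetical: group A kept 520/1000 times, group B 480/1000 times.
print(f"{parity_difference(520, 1000, 480, 1000):.0%}")  # -> 4%
```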
So basically, let people decide for themselves? (Score:2)
So it took about a zillion people screaming about racism and sexism for what felt like a zillion years to convince Twitter to just let people do their own cropping?
Why was this not the design from the beginning? Can someone, anyone, explain the fetish of the past 15 years or so for removing choice from the user? This seems especially prevalent in social media, and it doesn't seem like all of the railroading is just for ad views. Just because you made a whiz-bang neat-o feature that can be pretty impressive w
Who are they gunning for? (Score:2)
Almost all of the US will fail the 'slim' test, and the rest will fail the other two.
Um (Score:1)
Awarding him first prize, Twitter said his discovery showed beauty filters could be used to game the algorithm and "how algorithmic models amplify real-world biases and societal expectations of beauty."
You mean, real world biases like the average preference that black men have for lighter skinned women?
(Totally real world; just ask some black women, they are not shy about talking/complaining about it.)
Re: Um (Score:2)
the average preference that black men have for lighter skinned women?
That's an error in the analysis of the data. Black people represent about 10% of the population, so I would expect a black man (or woman) to select a non-black partner nine times out of ten if the choice were made randomly or based only on other factors. That this is not the case indicates that they tend to select partners of their own race.
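A quick sketch of the baseline arithmetic in that argument, using the comment's assumed 10% population share:

```python
# If partners were picked at random from the whole population, a
# group that is 10% of it would pick outside the group 90% of the
# time; large deviations from that baseline suggest same-group
# preference (or other non-random factors).
import random

GROUP_SHARE = 0.10  # the comment's assumed population share

def simulate_random_picks(n=100_000):
    """Fraction of uniformly random picks landing outside the group."""
    return sum(random.random() >= GROUP_SHARE for _ in range(n)) / n

print(f"expected out-group picks: {1 - GROUP_SHARE:.0%}")   # 90%
print(f"simulated out-group picks: {simulate_random_picks():.1%}")
```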
Re: (Score:1)
the average preference that black men have for lighter skinned women?
That's an error in the analysis of the data. Black people represent about 10% of the population, so I would expect a black man (or woman) to select a non-black partner nine times out of ten if the choice were made randomly or based only on other factors. That this is not the case indicates that they tend to select partners of their own race.
Again, talk to some black women about it. Even among black women, black men on average prefer lighter-skinned women. It's a thing, not an "error in the analysis of the data".
Complain now... (Score:2)
But when the machines come for you, and they can't detect you, you will be glad.
Re: Complain now... (Score:2)
Re: (Score:2)
Better than this is having a "master face", as in the other article posted today, where the recognition system will think you're many different people.
Best use of AI so far... (Score:2)
the fundamental, unspeakable confound (Score:2)
The fundamental, unspeakable confound in this space is that light skin is more reflective than dark skin, and the camera fundamentally receives more photons on average (in the same photo) from light-skinned faces than from dark-skinned ones.
Look up the definition of light and dark, as it was on the Lord's seventh day, long before the invention of grievance studies, and the modern interrogation into whether photons are the primary physical conduit of misinformation (yes, they are—for every distinguished sl
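The measurable core of that claim can be checked directly; a minimal sketch, assuming Pillow, comparing the mean luminance of two face photos (the file names are placeholders):

```python
# A camera records fewer photons from darker skin, which shows up as
# lower mean luminance in the face region of the resulting image.
from PIL import Image

def mean_luminance(path, box=None):
    """Average grayscale value (0-255) of an image or a crop of it."""
    img = Image.open(path).convert("L")  # single luminance channel
    if box:
        img = img.crop(box)  # optional (left, top, right, bottom) box
    pixels = list(img.getdata())
    return sum(pixels) / len(pixels)

# print(mean_luminance("light_face.jpg"), mean_luminance("dark_face.jpg"))
```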