Amazon Is Pushing Facial Recognition Tech That a Study Says Could Be Biased (nytimes.com)
An anonymous reader quotes a report from The New York Times: Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. Now a new study from researchers at the M.I.T. Media Lab has found that Amazon's system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon's drive to popularize the technology.
In the study, published Thursday, Rekognition made no errors in recognizing the gender of lighter-skinned men. But it misclassified women as men 19 percent of the time, the researchers said, and mistook darker-skinned women for men 31 percent of the time. Microsoft's technology mistook darker-skinned women for men just 1.5 percent of the time. For the latest study, [study co-author Joy Buolamwini] said she sent a letter with some preliminary results to Amazon seven months ago. But she said that she hadn't heard back from Amazon, and that when she and a co-author retested the company's product a couple of months later, it had not improved. "It's not possible to draw a conclusion on the accuracy of facial recognition for any use case -- including law enforcement -- based on results obtained using facial analysis," Matt Wood, general manager of AI at Amazon Web Services, said. He added that the researchers had not tested the latest version of Rekognition, which was updated in November.
"Amazon said that in recent internal tests using an updated version of its service, the company found no difference in accuracy in classifying gender across all ethnicities," the NYT reports. The new study is scheduled to be presented Monday at an artificial intelligence and ethics conference in Honolulu.
Get off slashdot and go read the tabloids (Score:2)
Inaccurate (Score:1, Troll)
Software can be inaccurate. It can't be biased.
Re: (Score:2)
Machine learning doesn't work like that. You feed data into it and it works out the algorithms itself.
Re: (Score:2)
Algorithms can most certainly exhibit bias.
https://www.technologyreview.c... [technologyreview.com]
https://en.wikipedia.org/wiki/... [wikipedia.org]
https://towardsdatascience.com... [towardsdatascience.com]
Re: (Score:2)
This must be a new meaning of the word "biased" - adj, giving a result I don't agree with.
Re: (Score:2)
You read those citations and THAT'S your takeaway? I'm not sure I believe you're that stupid, but I guess I could be wrong.
Re: (Score:3)
Machine learning doesn't work like that. You feed data into it and it works out the algorithms itself.
Data can be biased. If the training set is 90% photos of white people, then the NN is going to be better at identifying white people.
But it isn't clear why bias is a problem here. If it correctly identifies a white thief 90% of the time, and a black thief 80% of the time, is it really better to "fix it" so that the white identification rate is lowered to 80%, so that it is "fair"?
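A minimal sketch of the parent's point about skewed training data, using synthetic "embeddings" and a nearest-centroid classifier; the group names, feature layout, and 90/10 split are assumptions for illustration and say nothing about how Rekognition was actually trained:

```python
# Sketch: an imbalanced training set can show up as unequal per-group error.
# All numbers and group names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

def make_group(signal_axis, n):
    """One demographic group: the class label is encoded on one feature axis."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(0.0, 0.7, size=(n, DIM))
    X[:, signal_axis] += 2.0 * y          # class signal lives on this axis
    return X, y

# Training pool: 90% group A (signal on axis 0), 10% group B (signal on axis 1).
Xa, ya = make_group(signal_axis=0, n=900)
Xb, yb = make_group(signal_axis=1, n=100)
X_tr, y_tr = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

# Fit a pooled nearest-centroid classifier.
centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Evaluate on balanced, held-out data per group.
for name, axis in [("group A", 0), ("group B", 1)]:
    X_te, y_te = make_group(signal_axis=axis, n=2000)
    err = (predict(X_te) != y_te).mean()
    print(f"{name}: error rate ~ {err:.1%}")
```

Because the class signal is learned almost entirely from the over-represented group, the same model does markedly worse on held-out data from the minority group, which is exactly the mechanism described above.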
Re: (Score:2)
But it isn't clear why bias is a problem here. If it correctly identifies a white thief 90% of the time, and a black thief 80% of the time, is it really better to "fix it" so that the white identification rate is lowered to 80%, so that it is "fair"?
Depending on the application, it could be, yes. The difference in false positive rate between 90% and 80% is double.
If the recognition frequently leads to police action that can be harmful or disturbing for innocents, having a system that falsely identifies one group twice as much as another might cause tension. In that case, lowering the accuracy until it's equal across the board might be prudent, so black innocents aren't twice as likely[*] to be falsely targeted as white innocents are.
Catching more th
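Rough arithmetic behind "the difference between 90% and 80% is double", treating the quoted accuracy as one minus the rate at which innocents get falsely flagged (a simplification); the scan count is just an illustrative assumption:

```python
# Sketch: how a 90% vs. 80% correct-identification rate plays out for
# innocent people passing a camera. All numbers are illustrative assumptions.
innocents_scanned = 10_000

for group, accuracy in [("group A", 0.90), ("group B", 0.80)]:
    falsely_flagged = innocents_scanned * (1 - accuracy)
    print(f"{group}: roughly {falsely_flagged:.0f} innocent people flagged")

# group A: roughly 1000 innocent people flagged
# group B: roughly 2000 innocent people flagged -- twice the burden
```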
Re: (Score:2)
It doesn't end, and shouldn't. Every circumstance is different, and new problems can and will present themselves. This is why we have a legislature and government, instead of relying on black-and-white totalitarian laws and regulation, set in stone and not allowing adjustments to reality.
One of the functions of a modern government is to protect the minorities from a tyranny of the majority, and make sure that justice is kept blind, even if it means we sometimes have to deliberately blindfold her.
Re: Inaccurate (Score:2)
I'd be curious to see how it classifies "Kaitlyn" Jenner, or "Chelsea" Manning.
Re: (Score:1)
The problem is the errors. Say you had facial recognition checking people entering a venue against their recorded details, and it decided that you were the wrong gender and barred you. At best it would be annoying as you had to get someone to manually intervene; at worst you could be badly disadvantaged.
There was a story last year about a woman who had endless trouble with telephone banking because the system was convinced her voice sounded male. The bank said they couldn't do anything about it.
Re: (Score:2)
There was a story last year about a woman who had endless trouble with telephone banking because the system was convinced her voice sounded male.
Do you have a citation? I am curious why a bank would treat one gender differently than another, and give "endless trouble" only to males.
I have a Vanguard account, and they use voice recognition as an optional extra security feature, but they treat males and females exactly the same. The VR identifies each customer as an individual. Categorizing voices by gender would be pointless and unnecessary.
Re: (Score:2)
https://www.bbc.co.uk/news/uk-... [bbc.co.uk]
Their fraud system sees that the account belongs to a woman and flags it up when it thinks a man is calling.
Re: (Score:2)
Do you know that for certain?
A lot of this stuff is more smoke and mirrors than you might think. They may well feel that a rough classification is better than nothing.
Female voice and they know the account holder is male? Reject!
British accent, and they know (from voice analysis) the account holder has a Valley Girl accent? Reject, fershure!
A very reasonable approach, really. Don't assume that this stuff is doing sophisticated voice prints.
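A guess at the kind of coarse mismatch heuristic being described; the function and field names here are hypothetical and don't come from any real bank's system:

```python
# Sketch of a crude fraud check: flag the call when the voice classifier's
# gender guess disagrees with the gender recorded on the account.
# All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    holder_gender: str  # "F" or "M", as recorded by the bank

def classify_voice_gender(confidence_female: float) -> str:
    """Stand-in for an opaque voice model; returns its best guess."""
    return "F" if confidence_female >= 0.5 else "M"

def flag_for_fraud_review(account: Account, confidence_female: float) -> bool:
    return classify_voice_gender(confidence_female) != account.holder_gender

# A caller whose voice the model scores as "probably male" gets flagged on
# every call to her own account -- the "endless trouble" from the BBC story.
print(flag_for_fraud_review(Account(holder_gender="F"), confidence_female=0.3))  # True
```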
Re: (Score:2)
It's not a math, design, or computer problem. It's a global data set problem.
Keep working on the design until it works as expected on all average passengers and drivers around the USA.
The demographics of a city should be easy to understand. Find nations with the same average demographics and see what their best CCTV detection rate is?
Other advanced nations have the same count of people to track with CCTV and passports/natio
Re: (Score:1, Informative)
The word biased has a scientific connotation to it. Fake journalists use it to sound smarter. Plus it's been adopted by the offended community, which is the main audience of the Fake News Times. It has special meanings.
Re: (Score:1)
Is there one single Tolerant Liberal out there who isn't a raging violent profane lunatic?
Re: Bias: no no,no no no! (Score:2, Insightful)
Why the hell would you WANT some fucked up bezos computer to identify you as you walk down the street?
I would say it is biased against bald beardless white men. Black women get misidentified at such a high rate the tech is worthless to identify them. That is a GOOD THING FOR BLACK WOMEN!!!
I'm not bald and now I am definitely keeping my beard. For once, a benefit to being a black woman: you don't get tracked and stored by bezos and his evil minions.
And any amazon engineers reading this who worked
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Wow being modded down for wrongthink.
No my post was not a troll because I'm not trying to get a rise. Bias in statistics is a thing. Any algorithm that implements statistics risks bias. Algorithms are not perfect just because they're run on a computer.
So, biased against white people? (Score:2)
So the message is that the software is much more likely to be successful at apprehending guilty white people? Sorry for using a racist tag ("white") in my comment.
It does sound like it's strongly biased against white people and should be scrutinized carefully.
Re: (Score:2)
Re: (Score:2)
If it's misidentifying *this* black woman as *that* black woman then surely there's no overall change?
Now if it's misidentifying white men, traffic cones or parakeets as black women then maybe there are some bugs that need fixing.
Sign of just how far gone we are.... (Score:1)
...when the concern about this tech is not that it exists but that it might be "unfair" to some artificial identity politics minority.
Re: (Score:1)
Re: Sign of just how far gone we are.... (Score:1)
You are assuming false positive identifications as the problem in the system. The article is talking about incorrect gender identification. There is no evidence presented here about false positive rates with regard to sub-groups, so your assumption isn't substantiated.
Idiocracy. (Score:4, Funny)
Lol idiots (journalists)
Re: (Score:2)
Whoosh. That's the sound of the point completely passing you by.
Your face gets matched, they haul you in, or send a SWAT team to your house, or make you miss your flight. These systems encourage lazy policing. We have seen it before, and they assured us that it wouldn't be rolled out until the problems were fixed. They lied.
If you were being pulled over and detained regularly because your face kept triggering the facial recognition software you would get pissed off pretty quickly. For some people it's more th
Re: (Score:2)
To drive out and stop every wrongly "matched" face means that another urgent call to the police, at the same time, has to be held back.
How to fix that?
Buy better equipment that works and gets better results all over the USA.
Criminals and illegal migrants in inner city areas get caught/tracked when driving or riding as a passenger.
Along with any real time smart phone in use, any criminals in cont
Re: (Score:2)
Catching criminals who have done their decades of "crime" in inner city areas would stop crime and bring back investment and jobs.
Re "because having less police do the work and moving it all to machines is the answer huh?"
A city can only afford so many police every decade, along with covering their pensions.
That's a set number of police in a city to cover all requests for help and all results of CCTV.
Detecting inner city crime using CCTV allows the same
Re: (Score:2)
Ethnicity? (Score:3)
the company found no difference in accuracy in classifying gender across all ethnicities
Maybe the spokesperson is clueless, but ethnicity is not race. Look at people in Cuba: some appear Black, some European, some Native American, many are mixed. But all are of Hispanic ethnicity.
Re: (Score:1)
That's because "hispanic" is a catch-all made-up bullshit ethnicity to begin with. Those people don't "appear" black, white, native Indian/Asian, etc. They ARE black, white, native Indian/Asian, etc. There are "hispanics" who have more lily-white European DNA than a country music fest in Salt Lake City. There are also "hispanics" who are as African as a poor slave who just got dragged off the boat. And there are "hispanics" who are as pure Asian as the first ambitious bear hunter who crossed the land-bridge
Well, visible light camera sensors (Score:4, Insightful)
Darker-toned faces reflect less visible-wavelength light.
That's just physics, not racism.
So the amount of light, and the ability to resolve contrast, edges, etc., would be less.
So the image classification task might be subject to more error.
Perhaps a different spectral range would work better?
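A quick numerical illustration of that point, using a purely synthetic image patch; the reflectance values, noise level, and blob pattern are assumptions for illustration, not camera measurements:

```python
# Sketch: a surface that reflects less light yields lower pixel values and,
# relative to the same sensor noise, weaker gradients (edges) for downstream
# feature extraction. Purely synthetic numbers.
import numpy as np

rng = np.random.default_rng(1)

def edge_strength(reflectance, sensor_noise=2.0, size=64):
    """Mean gradient magnitude of a synthetic patch at a given reflectance."""
    x = np.linspace(-1, 1, size)
    xx, yy = np.meshgrid(x, x)
    pattern = np.exp(-(xx**2 + yy**2) * 4)            # stand-in facial structure
    image = 255 * reflectance * pattern
    image = image + rng.normal(0, sensor_noise, image.shape)  # sensor noise
    gy, gx = np.gradient(image)
    return np.hypot(gx, gy).mean()

for label, refl in [("higher reflectance", 0.6), ("lower reflectance", 0.2)]:
    print(f"{label}: mean edge strength ~ {edge_strength(refl):.2f}")

# Lower reflectance -> weaker edges against the same sensor noise, i.e. less
# signal for the classifier under identical lighting.
```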
Re: (Score:1)
I'll just latch on here.
In the entire article, there is no mention of the word racism. There is mention of bias... and it is biased. It performs better against some groups and worse against others. Pretty much the definition of bias.
Now I am not saying your connotation is wrong here.
There's a high chance some people will read the article and read 'bias' as 'racism' in these times we live in, when it is most likely just a case of Amazon's software not being as good as it could be. But there is no claim of
Is this face recognition? (Score:2)
When a person recognises another person's face we usually mean to say that they've seen the person before and/or can possibly identify the person.
This slashdot article suggests that something else is meant here: gender and race recognition. Is that indeed the case? Are we asking law-enforcement systems to identify gender and race?
If so, to what end? To find people that match often-vague descriptions?
I'm probably being a moron for not realising this until now. All this time I thought they were just loo
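For what it's worth, the distinction Amazon's Matt Wood leans on in the summary can be sketched generically; this is not Rekognition's actual API, just a toy illustration of face identification (matching an embedding against a gallery of known people) versus facial analysis (predicting attributes such as gender from a single image). The study measured the latter.

```python
# Toy sketch of identification vs. attribute analysis over the same embedding.
# Names, dimensions, and the threshold are illustrative assumptions.
import numpy as np

def identify(query, gallery, threshold=0.6):
    """Return the gallery name with the most similar embedding, or None."""
    best_name, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = float(query @ emb / (np.linalg.norm(query) * np.linalg.norm(emb)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

def analyze_gender(query, weights, bias=0.0):
    """Toy attribute classifier: a linear score over the same embedding."""
    return "female" if float(query @ weights) + bias > 0 else "male"

# Tiny usage example with made-up 4-dimensional embeddings.
gallery = {"suspect_042": np.array([1.0, 0.0, 0.2, 0.1]),
           "suspect_107": np.array([0.0, 1.0, 0.1, 0.3])}
probe = np.array([0.9, 0.1, 0.2, 0.1])
print(identify(probe, gallery))                                 # "suspect_042"
print(analyze_gender(probe, np.array([0.0, 0.0, 1.0, -1.0])))   # attribute guess
```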
Re: (Score:2)
Re: (Score:2)
Move into any larger US city using any established method of transport and that new face is detected.
That detects all criminals and illegal migrants expecting their fake ID to work.
Inner city crime can then be detected and tracked over time.
Prediction then sets in, as average criminals move into the parts of the inner city they have always expected to be safe. As
I don't believe it. (Score:3)
Facebook would *never* promote a technology without thoroughly thinking through the implications. They are the pinnacle of corporate social responsibility...
Come to think of it, that last part may actually be true.
Obviously this technology should be banned (Score:2)
Obviously this technology needs to be banned from use until it misidentifies men as women as often as it misidentifies women as men. We can't allow anything that yields unequal results to ever be used.
Probably trained on full-time development staff (Score:2)
They probably trained this on their full-time development staff.
They should have included warehouse staff, and then a double measure of cleaning/maintenance staff.
There. Fixed that.
Re: (Score:2)