Google Executive Warns of Face ID Bias (bbc.com)
Facial recognition technology does not yet have "the diversity it needs" and has "inherent biases," a top Google executive has warned. From a report: The remarks, from the firm's director of cloud computing, Diane Greene, came after rival Amazon's software wrongly identified 28 members of Congress, disproportionately people of colour, as police suspects. Google, which has not opened its facial recognition technology to public use, was working on gathering vast amounts of data to improve reliability, Ms Greene said. However, she refused to discuss the company's controversial work with the military. "Bad things happen when I talk about Maven," Ms Greene said, referring to a soon-to-be-abandoned project with the US military to develop artificial intelligence technology for drones. After considerable employee pressure, including resignations, Google said it would not renew its contract with the Pentagon when it lapses some time in 2019. The firm has not commented on the deal since, except to release a set of "AI principles" stating that it would not use artificial intelligence or machine learning to create weapons.
Wrong? (Score:2, Insightful)
"There is no native criminal class except Congress." -- Mark Twain
Re:Wrong? (Score:5, Funny)
How noble of them (Score:1, Insightful)
If those employees were so concerned about rights and liberties they'd have blocked the tech altogether.
Face ID has no bias, training sets may (Score:5, Insightful)
The technology behind FaceID has no bias. It works really well - if given the right training data. Now, it could easily be that the training data you are feeding it is biased in some way, but that is why extensive testing of the resulting recognition engine you have built is key, so you can go back and correct the training data...
Because a trained neural network is kind of a black box, it's sometimes hard to say what kind of bias you may have built in. The Amazon system recognizing a set of politicians as criminals might be down to the lighting used in the pictures being a lot like mug-shot lighting!
Or who knows, maybe it's latched onto specific micro-expressions of criminals and the politicians it identified really are criminals, we just don't know it yet... :-)
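As a sketch of the kind of testing the parent is talking about: compare the matcher's error rates across groups on a held-out test set. Everything below is hypothetical (made-up group names, toy data); it shows the audit idea, not any real system.

```python
# Hedged sketch: audit a trained face matcher for per-group error rates.
# All names and data here are hypothetical, just to show the testing idea.
from collections import defaultdict

def false_positive_rate(results):
    """results: list of (group, predicted_match, actual_match) tuples."""
    fp = defaultdict(int)        # non-matches wrongly flagged as matches
    negatives = defaultdict(int) # total non-matches seen per group
    for group, predicted, actual in results:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

# Toy results: (demographic group, model said "match", ground truth)
results = [
    ("group_a", True,  False),  # false positive
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, False),
]
print(false_positive_rate(results))  # {'group_a': 0.5, 'group_b': 0.0}
```

If the rates differ wildly between groups, that's your cue to go back and look at the training data, exactly as the parent suggests.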
Re: (Score:2)
I looked, I couldn't find any actual data corroborating the claim.
The actual FBI crime stats are easy to get to.
No, training sets may (Score:4, Interesting)
The technology reflects the biases of its inventors
The "technology" behind this is neural networks. How do they reflect bias at al?
They are nothing more than the ultimately transparent black box, reflecting whatever you choose to put into it...
They are so non-biased, in fact, that the same technology is used to detect whether something is a cat or a slice of pizza, or even whether tumors are cancerous or not. Yet you could claim racial bias (you did not state that, but it was implied).
Like I said, care needs to be taken both in training and in testing; that is where bias may be introduced. But the technology itself is inherently unbiased and a great tool, if used correctly.
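To make the task-agnostic point concrete, here's a minimal sketch (synthetic data, scikit-learn for brevity, every name made up): the identical architecture fits two unrelated toy tasks, so whatever bias comes out went in through the data, not the network code.

```python
# Sketch: the network itself is task-agnostic. The exact same
# architecture learns whatever the (hypothetical) labels say.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def train(task_name, X, y):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X, y)
    print(task_name, "train accuracy:", clf.score(X, y))

# Same architecture, two unrelated made-up tasks:
X_pets = rng.normal(size=(200, 10)); y_pets = (X_pets[:, 0] > 0).astype(int)  # "cat vs pizza"
X_med  = rng.normal(size=(200, 10)); y_med  = (X_med[:, 3] > 0).astype(int)   # "benign vs malignant"
train("pets", X_pets, y_pets)
train("tumors", X_med, y_med)
```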
Bugs are still a thing (Score:3)
The "technology" behind this is neural networks. How do they reflect bias at al?
Several ways, but the two most straightforward are bad training data and bugs. Bugs are an issue in ANY software, neural networks are no different, and bugs can result in biases. And of course, train it with bad data and you'll get biased results. We've seen examples of both cases.
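A toy illustration of the bug case (an entirely hypothetical preprocessing step, not from any real system): an ordinary coding mistake that only bites on one kind of input turns into a systematic bias against that input.

```python
# Sketch of how an ordinary bug becomes a bias: a (hypothetical)
# preprocessing step that floors dark pixels destroys detail only
# for low-brightness images, so the error falls on one group of inputs.
import numpy as np

def preprocess_buggy(img):
    # Bug: anything below 0.3 is floored, erasing low-light detail.
    return np.where(img < 0.3, 0.0, img)

bright = np.linspace(0.4, 0.9, 6)    # well-lit face, detail survives
dark   = np.linspace(0.05, 0.25, 6)  # poorly-lit face, detail erased

print("bright after bug:", preprocess_buggy(bright))  # unchanged
print("dark after bug:  ", preprocess_buggy(dark))    # all zeros
```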
That is not right (Score:4, Interesting)
the two most straightforward are Bad training data
Which is not inherent in the technology as I've pointed out in every post.
and Bugs. Bugs are an issue in ANY software and neural networks are no different and bugs can result in biases.
That reflects a lack of understanding of how neural networks work. They actually are not buggy themselves as they are very simple; all bugs are entirely represented in training data and testing, not in the software itself. You put something in the black box and it classifies it in some desired way, the way the box works is all about the training data.
Furthermore it's a bit odd to claim bugs are a kind of "bias" as usually they are more about failure than bias...
Re: (Score:1)
I would suggest that it is harder for cameras to pick up clean details on dark-skinned faces; without great lighting, many details are lost in the noise, leading to much poorer results.
Bugs (Score:2)
Which is not inherent in the technology as I've pointed out in every post.
It most certainly is. Go ahead and try to make a useful neural network without any data to train it on. What you'll have without the training data is a good approximation of a boat anchor. And making sure you have an unbiased data set is FAR from trivial.
They actually are not buggy themselves as they are very simple;
Being simple does not equal being bug free. Even simple devices and simple programs can and do have bugs.
You put something in the black box and it classifies it in some desired way, the way the box works is all about the training data.
That is only true if there are no flaws in the design and implementation of the box.
Furthermore it's a bit odd to claim bugs are a kind of "bias" as usually they are more about failure than bias...
Bias is favoritism for or against one thing, person, or group compared with another.
Re: (Score:2)
To enlarge on that, even trying to create an unbiased data set can accidentally bias the data set. For example, should the racial mix of the training set be similar to the distribution in the population? You could accidentally undertrain it for some races resulting in the technological version of "they all look alike to me" BECAUSE you tried to make the training set unbiased.
It is thought that the same thing happens to the neural net between our ears for people who live in a more homogeneous population. No p
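The trade-off the parent describes is easy to see with made-up numbers: sample the training set proportionally from a 90/10 population and the smaller group gets a tenth of the examples; balance the set and it no longer matches the deployment distribution. Purely illustrative figures below.

```python
# Sketch of the sampling trade-off, with invented numbers.
population = {"majority": 0.9, "minority": 0.1}
train_size = 10_000

proportional = {g: int(p * train_size) for g, p in population.items()}
balanced     = {g: train_size // len(population) for g in population}

print("proportional:", proportional)  # {'majority': 9000, 'minority': 1000}
print("balanced:    ", balanced)      # {'majority': 5000, 'minority': 5000}
# Proportional undertrains the minority group; balanced no longer
# reflects the deployment distribution. Either choice is a bias decision.
```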
Re: (Score:3)
Citation:
https://www.independent.co.uk/... [independent.co.uk]
Re: (Score:2)
It's okay. Not everyone has to understand the basics of machine learning.
Re: (Score:2)
Re: (Score:1)
Using unbiased training data doesn't necessarily lead to an unbiased system. If your task is to classify thumbnail pictures of cats, dogs, coins, and fake coins, then your system will have problems distinguishing the last two even though it was trained on 1,000 pictures of each type.
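Here's a quick synthetic sketch of that point (made-up one-dimensional "features", scikit-learn for brevity): the classes are perfectly balanced, yet the two that genuinely look alike still get confused with each other.

```python
# Sketch: balanced training data does not guarantee balanced errors
# when two classes genuinely look alike. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 1000
# cats/dogs well separated; coins/fake coins nearly overlapping
means = {"cat": 0.0, "dog": 5.0, "coin": 10.0, "fake": 10.3}
X = np.concatenate([rng.normal(m, 1.0, size=(n, 1)) for m in means.values()])
y = np.repeat(list(means), n)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(confusion_matrix(y, clf.predict(X), labels=list(means)))
# Cats and dogs classify cleanly; coins and fakes get confused with
# each other despite 1000 balanced examples of each class.
```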
Re: (Score:3)
Guess where that comes from, and how it was tuned? Decisions about how to data-reduce the input create bias.
What helps tell, say, white faces apart, and white faces from black (to vastly oversimplify, not trying to exclude any race et
Who watches the watchmen? (Score:3)
The technology behind FaceID has no bias. It works really well - if given the right training data.
Given that there clearly isn't a set of "right training data" available, that sounds like a hasty conclusion drawn without evidence. Your argument is circular: you say the technology has no bias, but proving that it has no bias requires feeding it an unbiased data set, which hasn't happened. So neither of us knows whether there is an inherent bias built into the system or not. Maybe there is and maybe there isn't, but you don't have the data to say either way.
Furthermore it's more complicated than just the training da
That's a pretty bad argument (Score:3)
Given that clearly there isn't a set of "right training data" available
Apple got this right, but you are right that generally there is not a "right training set" for facial recognition.
There cannot be, though, because it all depends on what you are trying to do. What is right for one purpose would be wrong for another.
Your argument is circular. You say the technology has no bias but proving that it has no bias requires feeding it an unbiased data set
That reflects a total lack of understanding of neura
Re: (Score:2)
Are you sure it doesn't have problems caused by a different contrast between skin tone and background, for example? Perhaps the camera's automatic exposure and white balance adjustments are losing detail?
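One way to sanity-check that hypothesis (a hypothetical diagnostic, with arbitrary thresholds, not from any real pipeline): measure how much of a face crop's dynamic range actually survives the camera's exposure.

```python
# Sketch: how much usable dynamic range is left in a face crop
# after auto-exposure? Thresholds are arbitrary illustrations.
import numpy as np

def usable_range(face_crop, floor=10, ceiling=245):
    """Fraction of pixels neither crushed to black nor blown out."""
    ok = (face_crop > floor) & (face_crop < ceiling)
    return ok.mean()

underexposed = np.random.default_rng(2).integers(0, 12, size=(64, 64))
well_exposed = np.random.default_rng(2).integers(40, 220, size=(64, 64))
print(usable_range(underexposed))  # ~0.08: almost all detail in the noise floor
print(usable_range(well_exposed))  # 1.0: nearly every pixel carries information
```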
Easy solution (Score:4, Funny)
This presents an obvious solution. To further the goal of eliminating racial bias, we need to turn off all the lights. That means all light bulbs need to be banned, and existing ones destroyed. NASA should launch a huge unfurling disk to block out the sun and leave the planet in perpetual darkness. Newborns should have their eyes surgically removed upon birth (they won't suffer because they won't know what they're missing). Only then can we be free of the evil racial bias being promulgated by light.
Re: (Score:2)
At least, get rid of outdoor lighting. So when you encounter someone at night, everyone is equal.
Re: (Score:2)
Re: (Score:2)
Oh sure, then we'll be doomed when the Triffids invade.
Re: (Score:2)
Just use IR cameras, or properly light areas where you are using the tech.
Or just don't use facial recognition, that's better for everyone.
Re: (Score:1)
Non-thinking AI imitates life (in this case) (Score:2)
Re: (Score:2)
'Our species' == homo sapiens. Prove me wrong; I see clear evidence of it every single day in the news.
...but Rick, not all humans are racist!
Sure. But it's far from being stamped out now isn't it?
Why this? (Score:2)
Physics is racist (Score:5, Insightful)
It's harder to see the contours of a dark-colored shape (i.e., a face) than a white one.
Seriously, people, how are we going to get around that?
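For what it's worth, the underlying signal-to-noise argument fits in two lines of arithmetic (the reflectance and noise numbers below are invented for illustration):

```python
# Sketch of the physics point, with made-up reflectance numbers:
# under identical lighting, a darker surface returns fewer photons,
# so the same sensor noise eats a larger fraction of the signal.
light_level = 1000.0   # arbitrary illumination units
sensor_noise = 20.0    # arbitrary fixed noise floor

for name, reflectance in [("light skin", 0.6), ("dark skin", 0.2)]:
    signal = light_level * reflectance
    print(f"{name}: SNR = {signal / sensor_noise:.1f}")
# light skin: SNR = 30.0
# dark skin:  SNR = 10.0
```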
Re: (Score:2)
Seriously, people, how are we going to get around that?
New makeup requirements for everyone:
https://en.wikipedia.org/wiki/... [wikipedia.org]
Match Confidence Levels (Score:3)