Mozilla Highlights AI Bias and 'Addiction by Design' Tech in Internet Health Report (venturebeat.com)

Mozilla this week released the 2019 Internet Health Report, an analysis that brings together insights from 200 experts to examine issues central to the future of the internet. From a report: This year's report focuses primarily on injustice perpetuated by artificial intelligence; on what NYU's Natasha Dow Schull calls "addiction by design" tech, like social media apps and smartphones; and on the power of city governments and civil society "to make the internet healthier worldwide." The Internet Health Report is not designed to issue the web a bill of health; rather, it is intended as a call to action that urges people to "embrace the notion that we as humans can change how we make money, govern societies, and interact with one another online."

[...] The modern AI agenda, the report's authors assert, is shaped in part by large tech companies and by China and the United States. The report calls particular attention to Microsoft's and Amazon's sale of facial recognition software to immigration and law enforcement agencies. The authors point to the work of Joy Buolamwini, whom Fortune recently named "the conscience of the AI revolution." Through audits published by Buolamwini and others in the past year, facial recognition technology from Microsoft, Amazon's AWS, and other tech companies was found to be less capable of recognizing people with dark skin, particularly women of color.
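The audits in question compared error rates across demographic subgroups rather than reporting a single aggregate accuracy. Here is a minimal sketch of that kind of disaggregated evaluation (all predictions and group labels below are hypothetical; real audits such as Buolamwini's Gender Shades use labeled benchmark datasets):

    from collections import defaultdict

    def accuracy_by_group(predictions, labels, groups):
        """Classification accuracy broken down by demographic group."""
        correct, total = defaultdict(int), defaultdict(int)
        for pred, label, group in zip(predictions, labels, groups):
            total[group] += 1
            correct[group] += (pred == label)
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical outputs from a gender-classification model:
    preds  = ["F", "M", "M", "F", "M", "F", "F", "M"]
    truth  = ["F", "M", "F", "F", "M", "M", "F", "M"]
    groups = ["darker-skinned female", "lighter-skinned male", "darker-skinned female",
              "lighter-skinned female", "darker-skinned male", "darker-skinned female",
              "lighter-skinned female", "lighter-skinned male"]

    for group, acc in sorted(accuracy_by_group(preds, truth, groups).items()):
        print("%s: %.0f%%" % (group, 100 * acc))

A model can score well on average while performing far worse on one subgroup; the breakdown is what surfaces the gap.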

  • by mark-t ( 151149 ) <markt.nerdflat@com> on Wednesday April 24, 2019 @07:26PM (#58486532) Journal

    And regardless of the decision algorithm, there will be observed correlations and identified patterns that appear to indicate bias, even if the variables relevant to that bias were never directly under consideration by the algorithm.

    Even the Big Bang was biased to produce more matter than antimatter, despite there being no reason why the amounts should not have been exactly equal. (Of course, if they were, we wouldn't be here to debate the point; but the fact that we are here is evidence of the bias, regardless of what the theory would otherwise predict.)

    • by Anonymous Coward

      The bias that people are concerned about is self-emergent, as you say. The issue is the unfair treatment of individual people as a result of the bias. Say Mike has all of the characteristics that indicate he is 20x more likely to steal from stores compared to the average person. Every store Mike goes into, he is tailed and asked to show what he has inside his backpack before he leaves. Yet Mike is a decent young man who has never stolen from a store in his life. Should Mike be treated as a criminal because he happens to have the features that a significant number of criminals also have, or should he be treated with respect?

      • by Anonymous Coward

        Should Mike be treated as a criminal because he happens to have the features that a significant number of criminals also have, or should he be treated with respect?

        Wrong answers. The correct answer is to treat everyone as suspicious and unworthy of respect.

      • by mark-t ( 151149 )
        Except that such "unfair treatment" is just as self-emergent as the bias itself, since we only measure such unfairness by the results that the decisions produce. When bias in the decisions is detected, then there will also be a detectable amount of unfair treatment that corresponds precisely with it.
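        A toy simulation makes the point concrete (a hypothetical sketch in Python; the proxy feature, thresholds, and group shift are all made up for illustration). The decision rule never sees group membership, yet a correlated proxy reproduces a measurable disparity:

            import random
            random.seed(0)

            # Hypothetical population: group is never shown to the decision rule,
            # but a "neutral" proxy feature correlates with it by construction.
            population = []
            for _ in range(10000):
                group = random.choice(["A", "B"])
                proxy_score = random.gauss(60 if group == "A" else 50, 10)
                population.append((group, proxy_score))

            # The decision rule uses ONLY the proxy feature.
            def approve(score):
                return score >= 55

            for g in ["A", "B"]:
                scores = [s for grp, s in population if grp == g]
                rate = sum(approve(s) for s in scores) / len(scores)
                print("group %s: approval rate %.0f%%" % (g, 100 * rate))
            # Disparate approval rates appear even though group was never an input.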
    • Light is. Less light bounces off darker skin than off lighter skin, so the AI simply has less data to work with in photos of darker-skinned individuals. If you used a set of photos of black people under regular lighting, and a set of photos of white people under dim lighting so their skin reflected about the same amount of light as the black people's skin in the first set, I suspect the AI would perform about the same for both sets.
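      That hypothesis is at least testable. A rough sketch of the matched-luminance setup (file paths are hypothetical; assumes the Pillow imaging library for Python):

          from PIL import Image, ImageStat  # assumes Pillow is installed

          def mean_luminance(path):
              """Average pixel brightness (0-255) after grayscale conversion."""
              img = Image.open(path).convert("L")
              return ImageStat.Stat(img).mean[0]

          def set_luminance(paths):
              """Mean luminance across a whole photo set."""
              return sum(mean_luminance(p) for p in paths) / len(paths)

          # Hypothetical photo sets: dim the second set's lighting until the two
          # averages roughly agree, then compare recognizer accuracy on each.
          set_a = ["photos/darker_skin/normal_light_%d.jpg" % i for i in range(100)]
          set_b = ["photos/lighter_skin/dim_light_%d.jpg" % i for i in range(100)]

          print("set A luminance:", set_luminance(set_a))
          print("set B luminance:", set_luminance(set_b))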
    • by AmiMoJo ( 196126 )

      The problem is trying to apply judgement to these decisions. If you apply for a mortgage the decision shouldn't be based on the judgement of the person processing the application or an AI. It should be based on rules using verifiable, hard data about your situation. And if you are rejected you should be able to know exactly which of those rules excluded you.

      Then the only bias will be in the design of the rules themselves, and it's much easier to scrutinize and fix those than it is to try to identify problems in an opaque judgement process.
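      A minimal sketch of what that could look like (the rules and thresholds are made up for illustration): every rule is explicit, and a rejection names exactly which rules failed.

          # Hypothetical rule-based mortgage check: every rule is explicit and a
          # rejection reports exactly which rules failed. Thresholds are made up.
          RULES = [
              ("income covers 3x repayments", lambda a: a["monthly_income"] >= 3 * a["monthly_repayment"]),
              ("deposit at least 10%",        lambda a: a["deposit"] >= 0.10 * a["price"]),
              ("no defaults in 6 years",      lambda a: a["recent_defaults"] == 0),
          ]

          def decide(application):
              failed = [name for name, check in RULES if not check(application)]
              return ("approved", []) if not failed else ("rejected", failed)

          applicant = {"monthly_income": 5000, "monthly_repayment": 1600,
                       "deposit": 25000, "price": 300000, "recent_defaults": 0}
          print(decide(applicant))  # ('rejected', ['deposit at least 10%'])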

  • by WoodstockJeff ( 568111 ) on Wednesday April 24, 2019 @07:34PM (#58486594) Homepage

    Why do we have research going on in AI? Because everywhere you look, you'll find people who wish to avoid making DECISIONS. Anything you DECIDE has consequences, and being the one making a DECISION makes you the target of whoever is harmed by that DECISION. They see the "fix" in Artificial Intelligence, if only because now they can blame AI for bad decisions.

    And a lot of money is paid to system designers, analysts, programmers, and others to create AI systems. People at Google and Facebook spend countless hours developing better ways to tell what is in a picture (be it a face or other objects) to make their searches and friend-matching work better, and to read vast collections of postings to assemble a more complete picture of the kind of person you are and who your friends are.

    But these same people will then make showy and very public protests about EVIL when they discover that the SAME tech is useful to POLICE who want to track a pickpocket (or suspected terrorist) across London, or to a GOVERNMENT AGENCY tracking insurrection against the (insert government leader here).

    It's not that the tech is good or evil. It's just whether someone agrees or disagrees with the agenda of the user.

    • by AmiMoJo ( 196126 )

      It's because decision making is labour intensive and error prone. You can train someone to make a certain decision, but it takes time and money, and, being human, they will sometimes make mistakes or exhibit bias.

      Computers have been used to automate decision making for over half a century. Enter known parameters and the computer will apply a pre-determined algorithm perfectly every time.

      And truth be told, most of what is described as "AI" is really just complex algorithms, e.g. image recognition.

  • by Anonymous Coward

    It's the training data.
    Add more diversity to the training data and the "AI" won't be as biased.
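    One common version of that fix is rebalancing: oversample the underrepresented groups until the training set is even. A minimal sketch in Python (the group labels are hypothetical):

        import random
        from collections import Counter

        def rebalance(samples, key):
            """Oversample each group up to the size of the largest group."""
            by_group = {}
            for s in samples:
                by_group.setdefault(key(s), []).append(s)
            target = max(len(v) for v in by_group.values())
            balanced = []
            for group_samples in by_group.values():
                balanced.extend(group_samples)
                balanced.extend(random.choices(group_samples, k=target - len(group_samples)))
            return balanced

        data = [{"group": "A"}] * 900 + [{"group": "B"}] * 100
        print(Counter(s["group"] for s in rebalance(data, key=lambda s: s["group"])))
        # Counter({'A': 900, 'B': 900})

    Duplicating samples is the crudest approach; collecting genuinely new data for the underrepresented groups works better, but the balancing idea is the same.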

  • by Anonymous Coward

    1. Calls for privacy are becoming mainstream
    2. There’s a movement to build more responsible AI
    3. Questions about the impact of ‘big tech’ are growing
    4. Internet censorship is flourishing
    5. Biometrics are being abused
    6. AI is amplifying injustice

  • 1. Government will make Internet better.
    Selling AI to government bad.
    Whoosh!

    2. Government will make Internet better.
    Many governments regulating speech.

    3. Facial recognition is a dancing bear. The wonder isn't how well the bear dances. The wonder is the bear dances at all.
