Google Quietly Disbanded Another AI Review Board Following Disagreements (wsj.com)

Google is disbanding a panel in London that reviewed its artificial-intelligence work in health care, WSJ reported Monday, as disagreements about its effectiveness dogged one of the tech industry's highest-profile efforts to govern itself. From a report: The Alphabet unit is struggling with how best to set guidelines for its sometimes-sensitive work in AI -- the ability of computers to replicate tasks that only humans could do in the past. The move also highlights the challenges Silicon Valley faces in setting up self-governance systems as governments around the world scrutinize issues ranging from privacy and consent to the growing influence of social media and screen addiction among children. AI has recently become a target in that stepped-up push for oversight as some sensitive decision-making -- including employee recruitment, health-care diagnoses and law-enforcement profiling -- is increasingly being outsourced to algorithms. The European Commission is proposing a set of AI ethical guidelines, and researchers have urged companies to adopt similar rules. But industry efforts to conduct such oversight in-house have been mixed. Further reading: Google Cancels AI Ethics Board In Response To Outcry.
  • by Anonymous Coward

    But industry efforts to conduct such oversight in-house have been mixed.

    Sorry, but why would we trust multi-billion-dollar companies to self-regulate, when their clear goal is maximizing profits and getting as much of your data as possible?

    I wouldn't trust any company to self-regulate, let alone the likes of Google or Facebook, which have demonstrated time and time again that they don't care about your privacy.

    We need to be regulating them, not just trusting that they'll do the right thing ... because we know they won't.

    • Way too simplistic (Score:4, Insightful)

      by SuperKendall ( 25149 ) on Monday April 15, 2019 @10:29AM (#58440086)

      Sorry, but why would we trust multi-billion dollar companies to self regulate

      Because if they do not, they die or are punished rather badly.

      their clear goal is maximizing profits

      Here's the problem with being afraid of that: you have no idea what it actually means. In fact, even GOOGLE does not know what it really means.

      No one knows what actions would truly "maximize profits". Certainly not people outside the company's top execs, who have no inkling of the company's roadmap and very little ability to foresee what will even be possible in five years or longer. And even for those inside the company, any such action is just an educated guess.

      So companies may be trying to "maximize profits", but since there is no one sure way to do so, what they are really doing is following a mission statement that moves the company toward one or more end goals. Often those goals have some altruistic purpose to help people, alongside the goal of helping the company.

      getting as much of your data as possible.

      True of some companies but not all; of Google, it is certainly true.

      We need to be regulating them

      Oh, so you'd like the situation to be much worse? You'd like all other companies to end up like pharmaceutical companies, the most heavily regulated industry there is?

      The problem with using regulation as the only tool to shape company behavior is that a sufficiently large company can easily capture the regulations that supposedly control it. Then not only can it do what it likes without worrying about the government, it can also use those regulations to ensure competitors cannot function, removing the only real force that actually changes company behavior: market pressure. If no small company can rise up and compete against you, you can do what you like forever; from that position, the more regulation the better.

      • by GameboyRMH ( 1153867 ) <gameboyrmh@@@gmail...com> on Monday April 15, 2019 @10:51AM (#58440240) Journal

        The 737 Max 8 disaster should be the final nail in the coffin of the idiotic idea of self-regulation. Boeing didn't stop themselves from making relatively basic mistakes even though they knew it could cost them dearly, which it did. How could anyone continue to defend self-regulation after this?

        • by reanjr ( 588767 )

          The airline industry is heavily regulated. We have an entire agency dedicated to it.

          • by ceoyoyo ( 59147 ) on Monday April 15, 2019 @12:03PM (#58440592)

            That agency, the FAA, delegated some of its regulatory oversight tasks to Boeing. The GP's comment is insightful: it was considered critical to have independent oversight, so a government agency was set up. That agency decided to compromise on its oversight responsibility in favour of a small degree of self-regulation, and disasters occurred.

            Companies can (and do) set up advisory boards, but those are advisory only. Real regulation must be imposed by an independent body with legal power to do so.

            • by jrumney ( 197329 )
              How about this: if the FAA wants to delegate some of its safety-related responsibilities to aircraft manufacturers, we should at least have a regulation requiring that the type of aircraft to be used on a route be fully disclosed before tickets are purchased, and that any later change of aircraft entitle the passenger to a full refund and compensation at their sole discretion. If we can't trust an independent body to regulate these things, we need to be able to vote with our wallets.
              • by ceoyoyo ( 59147 )

                Could do. That's probably the solution a libertarian would propose, and it would put the responsibility for validating new aircraft safety on each individual. Or you could just wait for enough of them to crash. Since airplanes crash so infrequently, the result probably wouldn't be much better than superstition.

                Especially for complex, high-stakes things like new aircraft safety, regulation works very well. The FAA has lost its international reputation learning that cutting corners isn't worth it. Boeing has a

        • Part of the reason is that "regulators" of industries typically have *no idea whatsoever* how any of the stuff they regulate actually works, and if you took the time to explain it to them, you would never, ever, get past the first review board.

          The 737 Max issue is a great example - you almost certainly have no idea how it works, why it is needed, or how it is implemented outside news reports and enthusiast publications. Neither do the regulators or people working at the FAA, nor will you ever get anyone to

          • A slight variant of the "those who can't do, teach" argument - which is obviously fallacious or nobody would ever be qualified to do anything. And we'd be having these disasters all the time and not just as a result of lapses in regulation.

            As a bit of an aircraft enthusiast, you lost the gamble in assuming I didn't understand the system. This issue isn't even that complicated: Boeing failed to design the system to use its redundant sensors correctly, even though they were present, and then skimped on error-checking.
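
            To make the point concrete, here's a minimal, purely hypothetical Python sketch (invented names and thresholds; emphatically not Boeing's actual code) of the cross-check being described: act only when both angle-of-attack sensors agree, warn the crew when they don't, and never silently fight the pilot.

            # Hypothetical sketch of redundant-sensor cross-checking.
            AOA_DISAGREE_LIMIT_DEG = 5.5  # assumed max allowed sensor disagreement
            AOA_TRIGGER_DEG = 15.0        # assumed angle indicating stall risk

            def trim_command(aoa_left_deg: float, aoa_right_deg: float,
                             pilot_override: bool) -> str:
                """Decide a trim action from two redundant AoA sensor readings."""
                if pilot_override:
                    return "DISENGAGE"            # pilot input always wins
                if abs(aoa_left_deg - aoa_right_deg) > AOA_DISAGREE_LIMIT_DEG:
                    return "DISENGAGE_AND_WARN"   # sensors disagree: alert, don't act
                if min(aoa_left_deg, aoa_right_deg) > AOA_TRIGGER_DEG:
                    return "NOSE_DOWN_TRIM"       # both sensors indicate stall risk
                return "NO_ACTION"

            # One wildly faulty sensor now produces a warning, not nose-down trim.
            assert trim_command(74.5, 14.0, pilot_override=False) == "DISENGAGE_AND_WARN"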

        • Careful not to oversimplify this. The bigger problem is incompetence in bureaucracies. This issue is going to crop up regardless of whether there's an outside regulator. The downside to having an external regulating body is that in the case of incompetence-induced disaster, the company can shrug and say "well, we were just following best practices as defined by the National FOO Organization," and that's an argument that sometimes lets a company dodge charges of incompetence. It gets particularly bad when you ha
        • by Cederic ( 9623 )

          What would an external regulator have done differently that would've prevented those mistakes?

          • Maybe, upon review, said something like "Hey, why does this system have redundant sensors but flip out if only one of them malfunctions? Is it safe that it will completely and silently override the pilot's controls even at maximum opposite input? Shouldn't pilots have extensive training on how to deactivate this system in case it malfunctions? And why the fuck is it that on an aircraft with a glass cockpit, the indicator to warn that this system is malfunctioning is an optional physical gauge cluster that c

            • by Cederic ( 9623 )

              So this regulator needs to have an army of professional aeronautical engineers, software experts, metallurgy professors and user interface specialists.

              Ok, I'm sure the global airline industry and its customers will be happy to pay for all of that.

                • Or just one guy with some aeronautical engineering experience. Maybe an amateur pilot who can put 2+2 together. This isn't a complicated problem that takes an Apollo 11 team to identify and fix; this is Babby's First Lesson in Avionics, and Boeing failed it the moment they were left unsupervised. The airline industry and its customers have paid for the required level of regulation in the past. And how the hell did metallurgy get involved in a software and training problem?

                • by Cederic ( 9623 )

                  Because this isn't a software and training problem. This is a complex aircraft safety regulation challenge and if you don't think metallurgy is relevant to that then I suggest you read up on basic aircraft construction.

                  What, you want one fucking amateur pilot to safety assess every new airframe worldwide? Good luck getting anything safe launched in the next century.

                  • One fucking amateur pilot could've caught this problem, but again, a suitable level of regulation has been in place in the past and could be again. This problem can be fixed without any metallurgy knowledge.

                    • by Cederic ( 9623 )

                      Oh, I see. You want to prevent this one specific issue and entirely fucking ignore safety aspects of the rest of the fucking aircraft?

                      You fucking idiot.

                    • No. The FAA will need to have a metallurgist on staff, and all aircraft should have their metallurgy reviewed. I thought we were talking about the known flaws in the 737 Max 8. So in the big picture, yes, the FAA will need to have a wide range of professionals on staff, as they have in the past. That's the price of safe air travel and it has proven to be affordable.

      • I'm with you that government regulation needs to be an avenue of last resort because of how generally terrible it ends up being in practice. My go-to example is all the Parkland kids making noise about wanting gun control to feel safe, but the government regulation they got was a mandate of clear backpacks in their school. More directly related, my local municipality makes it very difficult to get permits to run cables for ISPs...little startups like Verizon would have had to pay about half a million dollar

      • getting as much of your data as possible.

        True of some companies but not all; of Google, it is certainly true.

        It's actually not. I work for Google and I'll tell you that "We should avoid collecting that" is a common phrase in design reviews. I don't know how much I can share about the relevant privacy directives or processes, so I won't go into the rationale behind that statement, but it's common and basically always agreed to instantly. Usually with "Oh, yeah, I didn't mean to imply that we'd collect any of that."

        As an example, I'm working on Android infrastructure for storing identity documents like driver's licenses.
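
        The design goal there is selective disclosure: a verifier gets only the fields it actually needs. I won't share anything real, but here's a trivial, purely hypothetical Python sketch of the idea (invented names and data, nothing from the actual implementation):

        # Hypothetical sketch of field-level data minimization.
        DOCUMENT = {
            "name": "Jane Doe",
            "date_of_birth": "1990-01-01",
            "address": "123 Main St",
            "over_21": True,
        }

        def disclose(document: dict, requested_fields: list[str]) -> dict:
            """Release only the requested fields of an identity document."""
            return {f: document[f] for f in requested_fields if f in document}

        # A bar verifying age learns only that you're over 21, not your name or address.
        print(disclose(DOCUMENT, ["over_21"]))  # {'over_21': True}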

    • by jrumney ( 197329 )

      As this sequence of events appears to show:

      1. Google sets up a review board to ensure ethical standards are met for their medical AI
      2. Review board finds that Google is being unethical
      3. Google disbands review board because of disagreement
      ...
      5. Be Evil.
      6. Profit!!!

  • When I read the first three words of that headline, I was shocked, until I read the rest.
  • # Make sure each board member has the same (Google-compatible) views on AI.
    for MEMBER in ${CLOWN_PARADE}
    do
        MEMBER_OK="$(check_for_compliance "${MEMBER}")"

        if [ "${MEMBER_OK}" == "true" ]
        then
            continue
        else
            generate_bogus_excuse
        fi
    done

  • "We didn't ask you if it was ok to do AI."

    "We asked you to tell us how it's ok for us to use AI to enslave humanity!"

  • Expected behavior (Score:5, Insightful)

    by rossz ( 67331 ) <ogreNO@SPAMgeekbiker.net> on Monday April 15, 2019 @01:49PM (#58441134) Journal

    This is what happens when a significant majority of your workforce does not wish to hear any opposing viewpoints and actively punishes anyone who does not toe the party line. They create a self-imposed echo chamber so that "all is well" in their tiny little world.
