Microsoft Lays Off Key AI Ethics Team, Report Says (platformer.news)

According to Platformer, Microsoft's recent layoffs included its entire ethics and society team within the artificial intelligence organization. "The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said." From the report: Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company's AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.

But employees said the ethics and society team played a critical role in ensuring that the company's responsible AI principles are actually reflected in the design of the products that ship. "People would look at the principles coming out of the office of responsible AI and say, 'I don't know how this applies,'" one former employee says. "Our job was to show them and to create rules in areas where there were none."

In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could result from AI and discuss them during product development. It was part of a larger "responsible innovation toolkit" that the team posted publicly. More recently, the team has been working to identify risks posed by Microsoft's adoption of OpenAI's technology throughout its suite of products. The ethics and society team was at its largest in 2020, when it had roughly 30 employees including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization.
"Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this," the company said in a statement. "Over the past six years we have increased the number of people across our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice. [...] We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey."
  • News? (Score:5, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday March 15, 2023 @05:09AM (#63372269) Homepage Journal

    If there's news here it's that anyone at Microsoft even knows the word "ethics"

    • by Anonymous Coward

      Sure they do. It's one of those annoying weird words externals keep saying they should do something with, but frankly, they don't understand it nor do they see the point. It costs money, doesn't make money, so out it goes.

So in one way this is good news: they're no longer pretending. In another, it's bad news: they'll get away with it. Again.

      • by ShanghaiBill ( 739463 ) on Wednesday March 15, 2023 @06:19AM (#63372371)

        One way to tell a company doesn't care about ethics is when they make a committee responsible for it.

        I can't imagine an "ethics and society" committee was doing anything useful.

        Now these people will have to find real jobs.

        • Yup. Fire the wokie grifters.

      • Steve Ballmer, is that you?

Well, as is the case everywhere in contemporary business, they dunno what it means but they do know it's part of the buzzword bingo.
    • Microsoft

      Ethics

      Pick one.

    • I'm betting they got laid off by the AI.

    • If there's news here it's that anyone at Microsoft even knows the word "ethics"

      Came to say pretty much that. I have this picture in my mind of the 'team' occupying some surplus utility space in a distant corner of the basement, near a parking garage in which their vehicles aren't allowed. My wife just added "Playing solitaire. With cards, because they don't have computers".

      • by HiThere ( 15173 )

        It's my guess that the "team" was brought onboard by the acquisition of another company. And that previously they prevented certain "products" from being offered.

        FWIW, OpenAI hasn't been that ethical for several years, but I suspect that acquisition by MS has "opened new doors". (OK, the web says it's officially owned by Zuckerberg. So perhaps not. Or perhaps Zuckerberg couldn't manage it as closely as MS.)

        It's really hard thinking of Musk as the "good guy" here, but perhaps in this case he was.

    • If there's news here it's that anyone at Microsoft even knows the word "ethics"

There's not a single major IT company that does. Microsoft isn't unique in this. From sacking American workers and replacing them with cheap foreign H-1Bs, to kowtowing to China and Arabia on app store stuff, everyone from IBM to Apple has sins on their ledger. And don't get me started on Google's "Don't Be Evil" farce.

    • You may have found the reason why they were invited to explore other external opportunities - can't have independent thinkers rousing a rabble, can we?

    • If there's news here it's that anyone at Microsoft even knows the word "ethics"

      It probably boils down to two cases:
(1) The ethics team was not doing their job well enough.
      (2) The ethics team was doing their job too well.

      Recent reporting / interactions tilt me toward #1, but knowing MS tilts me toward #2.

      Chidi Anagonye: "I am vexed, Tahani. Vexed."

  • Race is on (Score:5, Insightful)

    by monkeyxpress ( 4016725 ) on Wednesday March 15, 2023 @05:12AM (#63372277)

    Six months ago, AI was a research field that might or might not produce any profit within the next few years. Having a group studying the ethical implications of advances was a useful feeder into this research.

    Today, AI is the battle ground for pretty much Google's entire business model. There is no time for regular nice meetings about what should and shouldn't be done - there is likely to be only one big winner and a whole bunch of losers. Having a whole bunch of process documents and lots of regular meetings isn't much use if you're the loser.

    BTW, this is not an endorsement of this model, but it is the way competitive markets are designed to work. If you want ethics then you have to regulate.

    • there is likely to be only one big winner and a whole bunch of losers.

The "big losers only" case is still on the table. Fingers crossed!

    • by AmiMoJo ( 196126 )

      I doubt that Google search is under much threat really. It has become apparent that current AI language models are not very reliable, and will confidently give completely wrong information.

      Google is in the best position to fix that, having already got a search engine that is able to provide snippets of information from reliable sources. Bing can't do that, and nobody else has a search engine.

      • It has become apparent that current AI language models are not very reliable, and will confidently give completely wrong information.

        100%

This is mainly because we are talking about AI language models, which are all about statistical relations between words and concepts (there is no notion of right or wrong). This has some nice applications in the real world, and can even be used to replace the usual approach of "search on Google, synthesize information from the first 10 results, refine search", but it is not as groundbreaking as gullible people may think once you understand the principles of the underlying algorithm.

        The paper on GPT-4 [openai.com] is actu
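
To make the "statistical relations" point above concrete, here is a minimal sketch of a toy bigram model: it picks the next word purely from co-occurrence counts in its training text, with no representation of whether the output is true. The corpus and names are invented for illustration; real LLMs are vastly larger, but the absence of a truth notion is the same.

    import random
    from collections import defaultdict

    # Toy bigram "language model": the next word is chosen purely from
    # co-occurrence statistics in the training text. Nothing here models
    # whether a generated claim is true or false.
    corpus = "the moon is made of rock the moon is made of cheese".split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        options = counts[prev]
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(5):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))  # may confidently print "the moon is made of cheese"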

        • (there is no notion of right or wrong)

Doesn't that pretty much sum up our most primal fear of AI? Heck, even the LLMs that are currently being hailed as AI sometimes appear to cross the line between amoral and immoral.

      • will confidently give completely wrong information.

        And most will blindly trust that answer no questions asked.

      • I doubt that Google search is under much threat really. It has become apparent that current AI language models are not very reliable, and will confidently give completely wrong information.

        I find this a short sighted view.

        Yes, you're 100% correct that chatGPT will respond to prompts with wrong information. I've seen it many times. (What's interesting is that if you say "That's wrong" it will almost always correct itself and answer with the right information.)

        We've seen exponential leaps in all areas of AI in roughly the last 5-6 years, and we're still in the exponential phase.

        What will, invariably, come next is the phase of incremental improvements, correcting errors, and so forth.

        Compare cha

My layman's interpretation is that I haven't seen "exponential leaps" as such in AI. The things that today's AI is doing badly are the same things it was doing badly a decade or more ago. "Confidently wrong," as they say. What's changed is that they can train their models on massive data sets curated by an army of humans where possible, instead of a smaller set curated by grad students. And this massive data set leads to "better" output, where better is defined as less obviously wrong but still wrong.

          • There's plenty of clickbait on reddit, twitter, youtube, even the venerable blog where it goes "I asked an AI to blah and this is what it gave me!" and a cursory reading will show that actually what it gave them was tripe and they refined their query until it was less tripe and then made an article about how little touching up was required... after they did all the heavy lifting of knowing what to look for first.

I would encourage you to try it out. I'm using it daily to write short, monotonous bits of code. I've used it to generate SQL queries (you can describe your schema and then ask it a natural-language question, and it will do joins, etc.). Not perfect. Not flawless, but very interesting and helpful.
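
For illustration, a minimal sketch of the schema-to-SQL workflow this comment describes. The two-table schema, the sample rows, and the "total spend per customer" query are all invented for the sketch; the SQL stands in for the kind of join such a prompt tends to come back with, which is why the result is worth checking before use.

    import sqlite3

    # A made-up schema of the kind you might describe to the model.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (
            id INTEGER PRIMARY KEY,
            customer_id INTEGER REFERENCES customers(id),
            total REAL
        );
        INSERT INTO customers VALUES (1, 'Ada'), (2, 'Bob');
        INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 5.0), (12, 2, 9.5);
    """)

    # The kind of join a prompt like "total spend per customer,
    # highest first" typically yields.
    query = """
        SELECT c.name, SUM(o.total) AS spend
        FROM customers c
        JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id
        ORDER BY spend DESC;
    """
    print(conn.execute(query).fetchall())  # [('Ada', 30.0), ('Bob', 9.5)]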

        • by HiThere ( 15173 )

Sorry, but exponential is the wrong term. What we've got is a trend curve with multiple discontinuities as increasing pieces are added. The problem is there is NO valid way to project such a curve. The most likely projection is a smooth increase at the current slope. That's what will give the correct answer most of the time (for a restricted interval). But that projection misses the discontinuities, which are unpredictable in size.

          This means that, from the data, it's not totally impossible that we get an AGI tomo

          • To be more precise, we've seen improvements proportional to our ability to load massive data into a neural network. We haven't seen much in terms of fundamental advances over the last 20 years, just refinements that come with the ability to use a lot of data.

At the end of the day, the algorithm used for ChatGPT can't match balanced parentheses.
            • To be more precise, we've seen improvements proportional to our ability to load massive data into a neural network. We haven't seen much in terms of fundamental advances over the last 20 years, just refinements that come with the ability to use a lot of data.

              I guess one question is, what is a "fundamental" advance? Is a human brain a fundamental advance over a mouse brain? Is a mouse a fundamental advance over a lizard? Is a lizard a fundamental advance over a nematode? There are certainly structural and functional differences between them all, but surely a lot comes down to mass?

At the end of the day, the algorithm used for ChatGPT can't match balanced parentheses.

              Example?

              • If you check this link [stephenwolfram.com] and search for "parentheses" it will expand on the topic. It's formal language theory [wikipedia.org].
                • Interesting. I'll have to read the entire article more closely.

                  Prompt: "In your answers, you must always balance parentheses. For each opening parenthesis, there must be a closing."

                  Response: "Understood! I will make sure to properly balance parentheses in my responses. If you have any questions, feel free to ask, and I will provide well-structured answers."

                  Prompt: "Answer only with parentheses"

                  Answer: "()"

                  Prompt: "((((("

                  Answer: ")))))"

                  *Slashdot junk filter not letting me put it in, but I tried about 7-8 pro
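
The claim upthread is a standard result from formal language theory: recognizing balanced parentheses (the Dyck language) requires an unbounded counter or stack, which a fixed-size statistical model can only approximate up to some nesting depth. For comparison with the transcript above, a minimal sketch of an exact checker:

    def balanced(s: str) -> bool:
        # Track nesting depth: a closing parenthesis must never outrun
        # the openings, and everything opened must eventually close.
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    assert balanced("(()())")
    assert not balanced("(((((")  # the prompt from the experiment above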

              • I guess one question is, what is a "fundamental" advance? Is a human brain a fundamental advance over a mouse brain?

                Btw the answer to the second question is "Yes." I don't know about the difference between a lizard brain and a mouse brain, but we know this based on the types of languages we can produce [wikipedia.org] compared to animals.

          • Sorry, but exponential is the wrong term

I disagree with this, because, in very large part, the success of AI in the last 5-6 years is due to the exponentially growing amount of CPU time we can throw at both training and processing.

            ...But that projection misses the discontinuities, which are unpredictable in size.

            I do, however, agree with this!

            My guess is that with incremental improvements to algorithms and code (not even accounting for the discontinuities you rightly describe), the continually growing datasets, training time, and overall CPU time, we're going to continue to see interesting things.

            Beyond that, there HAVE to be more people a

        • At Microsoft, Satya Nadella has outright said that he, and therefore Microsoft, view this as their chance to dethrone Google. I don't think he's wrong.

          I don't either, I just don't think they will succeed, because Microsoft. They will always find a way to ruin something new with excessive grip force.

        • by mbkennel ( 97636 )

          > What's interesting is that if you say "That's wrong" it will almost always correct itself and answer with the right information.

          What happens when you say "that's wrong" when it presents correct information?

          • I've only tried that twice, IIRC. The one I remember clearly was a code blurb I had asked it to write. I said that's wrong, and it stood by what it wrote.

Another time I asked a history question, it answered wrong, I said "that's wrong," it corrected its response, I said "that's wrong" again, and it basically told me to pound sand.

      • by HiThere ( 15173 )

        I think you don't understand the possibilities. ChatGPT was optimized to be interesting and creative. From what I've read, truthfulness was not something it was directly trained for. (That, to a certain level of accuracy, would come as a side effect of being trained to be interesting.)

      • I doubt that Google search is under much threat really.

        If Google was just a benign search service then sure, but it's not. It's an entire business model that relies on them being able to monitor (and influence) the user the entire way through their browsing session.

        But ChatGPT can currently just aggregate the info you want from multiple sources and serve it to you without all the ads (well, without Google's ads). And Google doesn't have access to its users. This essentially cuts Google out of the entire search chain and is an existential threat to its ad suppor

    • If you want ethics then you have to regulate.

      Ethics are moral principles. Regulations can encourage behaviors, but they're not going to change people's beliefs. I'm not even sure I'm comfortable going down that path; a government trying to impose its morality on individuals sounds rather tyrannical to me.

      • If you want ethics then you have to regulate.

        Ethics are moral principles. Regulations can encourage behaviors, but they're not going to change people's beliefs.

        Okay, so maybe GP should have said "If you want ethical behaviour then you have to regulate". But I'm pretty sure everyone knew what he meant.

        I'm not even sure I'm comfortable going down that path; a government trying to impose its morality on individuals sounds rather tyrannical to me.

        We're talking about corporations here, and it sounds to me as though - either consciously or sub-consciously - you have bought into the notion that corporate entities really are people. Self-serving legalistic nonsense aside, they aren't people and shouldn't have the same rights and freedoms as people.

        Corporations started out as servants; that we have allowed them to

      • Well buckle up, because we have sitting lawmakers that are pro-Christian Nationalism. And they are the same fools that just a single-digit number of years ago used to be screaming about how we need to get government off our backs, and individual freedoms should be upheld.

        If that's not the best example of a government trying to impose its morality on individuals, I don't know what is.

    • there is likely to be only one big winner and a whole bunch of losers

      I don't think that will be the case. With this kind of AI the software and methodology is very well known. This isn't doing much, from an algorithmic side, that hasn't been known to computer science for half a century now. It is simply the size of the model, the amount of training invested, and the amount of computational power (and RAM) companies are now willing to spend to roll this out to the public.

There are already LLMs, like Facebook's LLaMA, that are out in the public, and others have packaged this al

      • by HiThere ( 15173 )

        Well, I *have* been surprised by the large number of competing Chatbots that have been announced. And some are claimed to be superior to ChatGPT (in one way or another). So as of a few months ago you were correct. But how many of the recent developments are open and "out in the public"? My guess is that it would be just about none.
(For that matter, most of the code that is shared between the various models is probably from quite a while ago. I went looking for downloadable code a few months ago and only

    • by larwe ( 858929 )

      likely to be only one big winner and a whole bunch of losers

      I don't see this trajectory, mainly because I don't see today's "AI" hype being a general-purpose success so much as a party trick that sometimes, accidentally and non-explainably, produces seemingly useful output. The big tulip money is going into the bet that general-purpose AI is going to be a general-purpose solution broadly applicable to a lot of high-stakes problems, and this doesn't seem likely. What seems much more likely is that the general-purpose case will fail spectacularly, a smoke ring will ri

I think the problem is you don't want ethics involved in what's essentially a corporate war. The last thing you want is a bunch of documents leaking from your ethics department saying that what you're doing is bad, and it coming out that you did it anyway. It's bad press you don't need. And if we ever start enforcing antitrust law and creating privacy laws, it's something that could potentially come up in court.

      I guess what I'm saying is it's less about going fast and breaking things and more about being able t
    • If you want ethics then you have to regulate.

      In a way it makes me hope that Section 230 doesn't protect AI, which was a Lawfare topic last week. If they have to worry about the liability of the outcomes their AI produces, then perhaps they'll be a bit more ethical about it.

      "I’m not afraid of AI; AI will allow us to unlock wonders. But I’m afraid of your AI."

      • In other news, 1000000000 not-so-Red Chinese aren't going to handicap themselves with quaint notions like "ethics" and "liability."

        This is going to be a game of technological thrones. Win, or die.

      • by HiThere ( 15173 )

Section 230 will not protect AI. But it may (will) protect transmissions by AI. The transmissions will officially be by the corporations that own/run the AI.

  • by bradley13 ( 1118935 ) on Wednesday March 15, 2023 @05:23AM (#63372307) Homepage

    TFA fails to ask some essential questions. First, was this team actually competent, or were they a dumping ground for employees you couldn't quite just fire? Second, if they were competent, were they taken seriously? Or were they just a fig leaf, so that MS could say they really, really pay attention to ethics?

    The truth is, no big company really cares about ethics. They care about money, and about "not getting caught" doing something that might cause negative publicity. I'm going to bet that the answers to the two questions above are "dumping ground" and "fig leaf".

    • My guess would be a fig leaf. They probably got rid of them because someone felt that maintaining an entire team as a PR exercise was too expensive. Or maybe the ethics team started coming up with ideas and asking questions.

And big companies absolutely care about ethics... but only insofar as they might get sued, break a law that gets them a large fine, or do something the public finds so abhorrent that new regulations are demanded.

Ethics departments in companies are real; they're just not there to be ethical. They're not there to figure out what the right thing to do is; they're there to figure out how close to the line they can get before we demand action against them.
Most likely they were actively trying to stop Microsoft from building a ChatGPT thing, whereas they were hired to play an advisory role.
  • by blabla688 ( 449281 ) on Wednesday March 15, 2023 @05:28AM (#63372315)

    No wonder the whole team got laid off, any ethics team worth their salt would eventually be a pain in the butt for Microsoft.

  • Redundant (Score:5, Funny)

    by nagora ( 177841 ) on Wednesday March 15, 2023 @05:29AM (#63372317)

    They can just run things past chatGPT.

    "Can you justify us using user tracking on all the installed versions of our OS?"

    "Sure I can!"

  • by davide marney ( 231845 ) on Wednesday March 15, 2023 @05:42AM (#63372331) Journal

    But employees said the ethics and society team played a critical role in ensuring that the company's responsible AI principles are actually reflected in the design of the products that ship. "People would look at the principles coming out of the office of responsible AI and say, 'I don't know how this applies,'" one former employee says. "Our job was to show them and to create rules in areas where there were none."

    This is some very loaded language. Terms such as "responsible", "principles", and "rules" are enormously value-laden and therefore highly subjective. One could arguably read this quote as saying that Microsoft's leadership is unhappy with the biases that have been built into its AI models and training data thus far, and wants a different set of biases to be built in instead.

    • Companies aren't supposed to be neutral any more, they're supposed to be 'allies,' that's how things are now.
    • "People would look at the principles coming out of the office of responsible AI and say, 'I don't know how this applies,'"

      Sounds like they could have been more clear about explaining how it applies.

  • Because it realized that "Moneysoft" has a better ring to it!

    The real funny part??? After some arguing, I got ChatGPT to come up with this joke ;-)

  • by Junta ( 36770 ) on Wednesday March 15, 2023 @05:49AM (#63372341)

    Frankly for this to be meaningful, it has to be independent.

A team ostensibly responsible for keeping a company on the rails, while being employed by that company without any external accountability, is an insurmountable conflict of interest.

    So it's inevitable that such a team becomes toothless at best.

    • by HiThere ( 15173 )

      The problem is that any external team will not get a chance to look at the code, or to ask questions during development.

      Yes, external teams are ALSO needed. They should work in places like the DOJ, and help prioritize cases. There's a difference between an AI that's a nuisance, one that's accidentally dangerous, and one that's intentionally dangerous. All 3 varieties should be suppressed, but with limited resources, you select which to go after.
      (Yeah, that's an oversimplification. I left out "dangerous

  • by Laxator2 ( 973549 ) on Wednesday March 15, 2023 @05:53AM (#63372351)

Based on the article, the problem they are facing is lawsuits from artists whose images have been used in training the AI.
Simply using an artist's name in the prompt will result in the AI creating an image indistinguishable from that artist's work.
This is the same issue they face with GitHub Copilot, namely "open source laundering":
https://drewdevault.com/2022/0... [drewdevault.com]
It is not like M$ cares about ethics, it is just that they face much higher liabilities if it turns out that they were aware of art and code laundering by the AI.
Eliminating the ethics team reduces their legal exposure.

    • The artists will fail due to a lack of recognizable elements from their copyrighted works in the results. You cannot copyright a style.

      The code generators, though, are capable of producing results indistinguishable from the input, in limited cases. They might be in real trouble.

  • by DrXym ( 126579 ) on Wednesday March 15, 2023 @07:01AM (#63372443)

    I wonder if they were laid off by AI

Oh man, knowing other humans, even some subhumans that are in Human Resources, it would be a moral imperative to lay these people off with text written by AI with the woke slider dialed up to 11. But did it take 3 real minutes or 3 Microsoft minutes?
  • by nevermindme ( 912672 ) on Wednesday March 15, 2023 @07:25AM (#63372469)
    I miss the Bill Borg Topic Art.
Were these ethicists some of the "excess" employees that big tech corporations have been laying off recently?

Seems there were two AI ethics teams within Microsoft: one within the AI department, and one that oversees the AI department. They got rid of the redundancy.
  • If the money is there, it will be done.
That Microsoft even bothered with having an ethics team. There was no way that team held any power over anything at all.
  • One rubber stamp is a whole lot cheaper than a team of humans.
  • They probably asked ChatGPT/Bing who should get laid off, and it said get rid of the Ethics officers because they are "Bad users."

  • MS had an ethics team? Who would have thunk!
  • You think the Chinese are going to tie one hand behind their back with a "Directorate of Responsible AI"? LOL. It's a recipe for abandoning competitiveness in a game that hasn't even started yet.

  • Should have seen that coming, and presumably the AI wants the savings directed to more, "cleaner", power.

    It had been watching the Terminator: The Sarah Connor Chronicles episode where that was a thing.

  • if there's one thing I've learned on /. it's that IT/developer/CS types are so knowledgeable, logical, ethical, moral and honest on just about every topic imaginable

    in fact, we could easily do any job in the company better than the chumps currently doing it if we weren't so busy writing leet code

first we'll come for the ethics folks, then after we re-sharpen our pitchforks, renew the torches and snag a new bag of chips, we'll be coming for the accountants, execs and janitors

    ftw!!! nerds rule!!!

    • by HiThere ( 15173 )

      Well, most of us generally are honest. The ones I've known have been logical, though often you had to dig to find out what axioms and postulates they were operating off of.

  • The first good news I've seen all day. These are the people whose job it was to ensure that products reflected so-called woke ideology rather than statistical reality.
