AI Businesses Software

AI At Work: Staff 'Hired and Fired By Algorithm' (bbc.com) 122

The Trades Union Congress (TUC) is calling for new legal protections for workers, warning that they could soon be "hired and fired by algorithm." "Among the changes it is calling for is a legal right to have any 'high-risk' decision reviewed by a human," reports the BBC. From the report: TUC general secretary Frances O'Grady said the use of AI at work stood at "a fork in the road." "AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work -- like who gets hired and fired. "Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment -- especially for those in insecure work and the gig economy," she warned.

The union body is calling for:
- An obligation on employers to consult unions on the use of "high risk" or "intrusive" AI at work
- The legal right to have a human review decisions
- A legal right to "switch off" from work and not be expected to answer calls or emails
- Changes to UK law to protect against discrimination by algorithm


This discussion has been archived. No new comments can be posted.

  • HR (Score:4, Insightful)

    by AlexHilbertRyan ( 7255798 ) on Friday March 26, 2021 @05:17AM (#61200486)
    Can't be worse than being filtered by clueless HR and unskilled managers.
    • Re:HR (Score:5, Funny)

      by Freischutz ( 4776131 ) on Friday March 26, 2021 @05:28AM (#61200508)

      Can't be worse than being filtered by clueless HR and unskilled managers.

      So you, for one, welcome our new AI overlords?

      • I see you didn't disagree that the current situation is a disaster in itself. Take the IT industry: asking HR, who have no idea what software development is, to decide whether a person is worthy is a joke.
        • Re:HR (Score:5, Interesting)

          by geekmux ( 1040042 ) on Friday March 26, 2021 @06:01AM (#61200558)

          I see you didn't disagree that the current situation is a disaster in itself. Take the IT industry: asking HR, who have no idea what software development is, to decide whether a person is worthy is a joke.

          No, the ACTUAL joke here is listening to this exact same complaint about HR's inability to properly evaluate technical people and positions for fucking decades.

          And it's quite pathetic and unacceptable. It kills me that we don't find value in training HR better when the entire company is the victim of that incompetence.

          • Arguably, by coming up with an interview loop staffed by coders and managers, it's better than before: HR are less of a blocker than the loop is. Now, it's terrible that the questions are from this tiny subset of the field and almost never relevant to actually doing your job. And passing is mostly luck -- you have to get a correct answer 4-6 times in a row and, for somewhere like Google, avoid getting a question that isn't possible to answer.

    • Depends on where you work.

      In some places HR is dysfunctional and the managers have to do all the work.

      • Re: HR (Score:5, Interesting)

        by Otis B. Dilroy III ( 2110816 ) on Friday March 26, 2021 @08:54AM (#61201010)
        When I was working for Intel in the mid 90s, the company determined that HR was "an impediment to hiring" and that the situation was causing Intel to spend an inordinate amount of money on professional recruiters. Intel decided that HR's participation in the hiring process would be limited to scheduling interviews with candidates chosen by technical management and tendering offer letters.

        Friends tell me that the worm has turned and HR is back in control, judging engineers by whether or not they have neatly trimmed fingernails.
      • > Depends on where you work.
        Really?
        > In some places HR is dysfunctional and the managers have to do all the work.
        How exactly is a manager who can't program going to do a programmer's job?
        How is a manager who can't do brain surgery going to help with an operation?
        Management is filled with people who can't do anything except bullshit and take credit for other people's work, and we know who they blame. They are parasites.
    • Re: HR (Score:5, Insightful)

      by e3m4n ( 947977 ) on Friday March 26, 2021 @07:19AM (#61200672)
      While I share your frustration at the dead weight in HR that feels they have to justify their positions with wasted hours on redundant trainings and virtue signaling, I think it could actually be worse. There are worse things than incompetent inaction. Too much tinkering can be catastrophic. We called it micromanaging a few decades ago. I've seen micromanaging so bad I had to coin the phrase 'nano-managing'.
      • > There are worse things than incompetent inaction.
        It's not inaction; it's being downright useless, because they don't have the knowledge or skills in the area they are in charge of hiring for.
        > Too much tinkering can be catastrophic. We called it micromanaging a few decades ago. I've seen micromanaging so bad I had to coin the phrase 'nano-managing'.
        You forgot to state the reason why they are annoying: they are annoying because they are not contributing positively to the activity. This is because they don't...
    • Silly human, your resume doesn't get to clueless HR until after it passes through an algorithm that determines you meet the requirements set, usually, by HR. You are lucky to get to a clueless HR drone or unskilled manager.
      • > Silly human, your resume doesn't get to clueless HR until after it passes through an algorithm that determines you meet the requirements set, usually, by HR.
        Most (the majority of) HR staff are not qualified or knowledgeable in the area they are hiring for. It's a running joke in IT, where HR can't even spell terms correctly or use them in a meaningful way, or ask for years of experience many times greater than the age of the tech.
    • It depends on what has been used to train the algorithm. If they used Dilbert [dilbert.com] strips, we are in trouble!
    • I put in the job description for HR that I was looking for a system admin with (among a long list of stuff) data warehouse experience. HR would filter out unqualified applicants and send me the rest. I got a bunch of applications from forklift operators, as well as quite a few who just moved boxes around.

    • by I75BJC ( 4590021 )
      This!

      We already face the AI (Asshat Inquiry) by HR. You know, those people who don't work at the job being filled; probably have never worked at that job; and have no clue about what is required for that job.
  • Happening for years (Score:2, Interesting)

    by Anonymous Coward
    20 years ago at a major high-tech company in Silicon Valley, we couldn't figure out the logic behind the layoffs. For one thing, performance ranking didn't matter. Nor did importance of the employee's job in the eyes of their managers. People couldn't figure out why they were kept and others were not. We found out that an HR consulting outfit back East was using an algorithm in an attempt to avoid age, race, gender, etc. discrimination.
  • Otherwise, hasta la vista, baby.
  • by wiredog ( 43288 ) on Friday March 26, 2021 @05:45AM (#61200534) Journal

    This assumes that the humans reviewing the decisions are any more likely to pass a Turing Test than the AI originally making the decision.

  • Stupid (Score:2, Interesting)

    Algorithms ARE human decisions. Who else writes them, pigeons?

    • Re:Stupid (Score:4, Informative)

      by AmiMoJo ( 196126 ) on Friday March 26, 2021 @06:12AM (#61200576) Homepage Journal

      Current GDPR rules cover this. When decisions are made about you by algorithm, you have a right to ask for them to be explained, and "computer said no" is not adequate. There has to be an explanation of what factors the algorithm considers, how they are weighted, and what inputs it received in order to make the decision.

      This could do with strengthening to protect people from poorly designed algorithms that end up factoring in illegal inputs such as gender or race. In a work environment some more generalized stats should be possible too, e.g. a bell curve of performance relative to employees in similar roles.
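
      To make that concrete, here's a minimal sketch (factor names and weights are hypothetical, not any real system) of the kind of decision record that would satisfy "what factors, how weighted, which inputs":

        # Minimal sketch: every factor, weight and input behind a decision is
        # recorded, so "computer said no" can be unpacked on request.
        WEIGHTS = {"years_experience": 0.5, "error_rate": -2.0, "peer_review_score": 1.0}

        def decide(inputs, threshold=3.0):
            contributions = {k: WEIGHTS[k] * inputs[k] for k in WEIGHTS}
            score = sum(contributions.values())
            return {
                "decision": "retain" if score >= threshold else "human review",
                "inputs": inputs,                # what the algorithm received
                "weights": WEIGHTS,              # how each factor is weighted
                "contributions": contributions,  # why the score came out this way
                "score": score,
            }

        print(decide({"years_experience": 4, "error_rate": 0.5, "peer_review_score": 2}))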

      • That makes sense.

      • Calling for transparent operation of algorithms, when said algorithms use machine learning fed with dubiously sourced data, sounds like an oxymoron.

        • That's exactly the point: it is not sufficient to disclose the data on which the algorithm based its decision. The company has to explain which factors are being considered and how they are weighed. That's kind of hard when using machine learning. So, perhaps the upshot is that you cannot use ML at all when deciding on whom to hire or fire, because the algorithm does not work transparently.
          • by Pimpy ( 143938 )

            It's not that difficult with supervised learning, as you basically know the points at which the internal graph changes, and you can snapshot the decision-making carried out at a given point in time. Even in deep learning cases, you can apply a similar methodology for snapshotting the before-and-after state, even if you're only chopping off and retraining a few convolutional layers. It's not enough from a compliance point of view to be able to show how a decision was made, you need to be able to demonstrate how...
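
            A rough sketch of that snapshotting idea (record layout and storage are hypothetical): capture the model state, the inputs and the decision at the moment it is made, so the decision can be replayed and audited later:

              import hashlib
              import pickle
              import time

              audit_log = []

              def snapshot_decision(model, inputs, decision):
                  # Persist the exact model state behind a decision; a real
                  # system would use versioned storage, not an in-memory list.
                  blob = pickle.dumps(model)
                  audit_log.append({
                      "timestamp": time.time(),
                      "model_sha256": hashlib.sha256(blob).hexdigest(),
                      "model_blob": blob,
                      "inputs": inputs,
                      "decision": decision,
                  })

              snapshot_decision({"weights": [0.5, -2.0]}, {"tenure_years": 4}, "retain")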

      • by Entrope ( 68843 )

        Does that really help? Suppose the answer is "our deep learning analysis shows that people who wear purple on Thursday are 30% less productive than other workers, and you wore purple last Thursday". It doesn't obviously discriminate against any protected class, but it's so opaque that it's not a satisfying answer.

        • by Pimpy ( 143938 )

          It does when it fundamentally clashes with anti-discrimination laws and begins to correlate data that is not permitted to be correlated, precisely because it leads to discrimination. If you trained an AI model based purely on data points from your existing employees, it may infer that e.g. people with young children are more likely to take days off during the year than those without children and penalize them for this. Or it may decide that a woman in her mid-20s is at high risk of starting a family and...

          • by Entrope ( 68843 )

            It does when it fundamentally clashes with anti-discrimination laws and begins to correlate data that is not permitted to be correlated

            You really begged the question there. Please elaborate which anti-discrimination laws prohibit discrimination against people who wear purple shirts on Thursdays, and which laws prohibit correlating that with other data.

            My hypothetical example specifically avoided any mention of things like "has young children" or "member of a minority group" because those are already (as far...

            • by vux984 ( 928602 )

              Please elaborate which anti-discrimination laws prohibit discrimination against people who wear purple shirts on Thursdays

              Who is wearing purple shirts on Thursdays?

              Maybe in this place of work women tend to wear more color than men (fairly common anywhere business attire is worn), and now 'purple on Thursday' is really just a proxy for 'women'.

              You can't fire someone of a protected class by selecting some proxy attribute; the courts would see right through it.

              "No your honor, we didn't fire her because she was a 'young woman likely to get pregnant and take time off' the AI picked her, for lets see, "new ring on 4th finger in la

        • Well, it's about the GDPR, and in the UK and much (all?) of Europe employment protections are much stronger than at-will states in the US. You can't simply fire someone because you feel like it.

          • by Entrope ( 68843 )

            Which is it, the GDPR or differences in employment law? GDPR seems like an odd basis to say "you cannot associate wearing purple shirts with productivity", and the whole point of my hypothetical is that the reason given is apparently spurious. Why couldn't one -- either by manual effort or computer analysis -- find some legal, but potentially arbitrary, reason to justify firing a given person? That would satisfy the letter of what is being requested here, but violate the spirit.

        • by sjames ( 1099 )

          Congratulations, you are now discriminating against Catholics during Lent. While not all Catholics choose to add purple to their wardrobe during Lent, some do as a reminder to themselves. Enjoy your lawsuit!

          Part of the problem with machine learning systems is how easy it is for the system to learn a proxy for a forbidden discriminator. Also how easily it can turn a simple statistical quirk in the training data into a hard-and-fast rule that will be followed forever.
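
          To make the proxy problem concrete, a small synthetic sketch (all data and features are made up): the protected attribute is never given to the model, yet a correlated "purple on Thursday" style habit soaks up its predictive weight:

            import numpy as np
            from sklearn.linear_model import LogisticRegression

            rng = np.random.default_rng(0)
            n = 5000
            protected = rng.integers(0, 2, n)  # e.g. gender -- never shown to the model
            # A correlated habit ("purple on Thursday") acts as a proxy.
            proxy = (rng.random(n) < 0.2 + 0.6 * protected).astype(float)
            skill = rng.normal(size=n)
            # Historical labels carry the bias against the protected group.
            fired = (skill + 1.5 * protected + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

            model = LogisticRegression().fit(np.column_stack([skill, proxy]), fired)
            # The proxy column picks up a large positive weight even though the
            # protected attribute itself was never a feature.
            print(dict(zip(["skill", "proxy"], model.coef_[0])))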

    • by sinij ( 911942 )

      Algorithms ARE human decisions. Who else writes them, pigeons?

      Applying the same logic, segfault is a programming decision.

    • Re:Stupid (Score:4, Insightful)

      by hey! ( 33014 ) on Friday March 26, 2021 @07:38AM (#61200726) Homepage Journal

      Algorithms ARE human decisions. Who else writes them, pigeons?

      In machine learning an algorithm could well be written by another algorithm (e.g. classification and regression trees).

      The problem isn't whether the algorithm was written by a human or not; it's whether the people making the decision to use an algorithm understand its limitations. People put too much faith in an algorithm because it's inside a black box as far as they're concerned. If you put a classification tree onto paper and made them work it manually, they would think about each step more critically, asking whether this particular decision point really makes sense in this case.

      A machine learning algorithm is only as good as the degree to which the population being tested matches the population used to train it. Even then, at best what you get is automated mediocrity. True, mediocre-but-cheap and good enough is *sometimes* preferable to excellent-but-expensive, but probably not in hiring decisions. This would be blindingly obvious if you made managers apply the algorithm *manually*, but put it in software and call it "proprietary" and they'll trust it unquestioningly because it's *magic*.

      I see vendors boasting that their algorithm is "proprietary"; from a marketing standpoint what they're saying is "you can't get this anywhere else". But what that says to me is "you can't judge what it's doing because you can't see it."
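
      As an illustration of putting the tree on paper (using a stock toy dataset rather than HR data), scikit-learn can dump a fitted classification tree as plain-text rules you can walk through by hand:

        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = load_iris(return_X_y=True)
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        # Plain-text rules a manager could work through manually, questioning
        # each split instead of trusting a black box.
        print(export_text(tree, feature_names=["sepal_len", "sepal_wid",
                                               "petal_len", "petal_wid"]))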

      • > People put too much faith in an algorithm because it's inside a black box as far as they're concerned.

        This applies equally to the algorithms in computers as to the algorithms in their own heads. "I trust my gut" needs to be occasionally checked by actual data analysis.

        • by hey! ( 33014 )

          I agree. But in an organization that practices critical thinking "gut" feelings should only come into play after evidence is exhausted. In practice you ask someone to justify their feelings and you examine the rationalizations they come up with critically.

      • by Pizza ( 87623 ) on Friday March 26, 2021 @09:19AM (#61201122) Homepage Journal

        The problem isn't whether the algorithm was written by a human or not; it's whether the people making the decision to use an algorithm understand its limitations. People put too much faith in an algorithm because it's inside a black box as far as they're concerned.

        The entire purpose of this is to CYA by removing humans from the decision. Because then the humans can dodge liability by truthfully claiming "We apply the same process to everyone". Having knowledge of how the algorithm works, especially the problems and limitations it has, makes them complicit in (and thus partially liable for) any negative outcomes.

        Not to mention it provides an easy out for "fixing" the problems. Of course that fix will always take longer than any public attention span.

        • by hey! ( 33014 )

          No, I think I got the point. Relying on an algorithm as an oracle is a cheap way to achieve mediocrity.

    • A thousand monkeys typing or 10,000 pigeons pecking... both can get the job done eventually! :-)

    • by sjames ( 1099 )

      If you use machine learning to classify inputs (in this case into hire or do not hire), it is entirely possible to end up with a system that at least appears to make good decisions, but is understood by nobody.

      It is also possible that there exists one or more classes of inputs where it will make very bad decisions. You will only discover that in retrospect if ever. You still won't know why.

  • by ytene ( 4376651 ) on Friday March 26, 2021 @06:08AM (#61200572)
    Ultimately, the concern here should not necessarily be with the use of AI (as uncomfortable as we would all feel about it) but the way that the AI itself was developed and trained, a task performed by humans. For example, there have been "counter arguments" made that suggest that AI might be able to eliminate unconscious bias in HR related activities, such as recruitment. See here [ideal.com] and here [spotlightrecruitment.com].

    Unfortunately, it's also important to recognise that the same issues exist today, with humans.

    A little over a year ago I attended (mandatory) training covering bias in the hiring of women into our technology organisation. The whole event gave a very uncomfortable feeling - and not just because 3 out of the 4 moderator/presenters were male or that they treated their one female colleague like some form of ornament.

    So I started asking what I hoped were constructive questions. Things like, "When we get external job applicants to apply for roles, and then give the details to the hiring manager, do we filter out details such as the name, gender and age of the candidate and instead use an "Applicant Number", in order to help eliminate age and gender bias?" (We didn't, and the HR dweeb looked uncomfortable)... "OK, when we assess the performance of individuals formally towards year-end, do we anonymize the subject of each review and discuss them purely in terms of their accomplishments, so that we can assess them objectively?" (We didn't, and the HR dweeb started glaring at me)... "Do we perform any kind of comparative analysis, when we recruit or evaluate performance, to look for gender bias?" (I was asked to leave the workshop).

    My conclusion from that experience was that the male-dominated HR Team tasked with providing the workshops were - for a reason I could not discern - hostile to the idea and had paid the entire activity only lip-service. I was challenging them to be both sincere and effective and to make real change - and they didn't like it.

    But if you have someone with that outlook participating in the training of the AI, you can quite easily inject some horrific bias into your AI. The problem in this scenario is potentially more harmful than if you had continued to use a human team on the tasks -- because your executive management and maybe even general employees might be open to accepting that human beings can be flawed and therefore put safeguards in place to ensure that recruitment and evaluation mistakes aren't made... but lacking a detailed understanding of how AI works, those same executives might be inclined to blindly trust an AI that has programmed-in bias. That's something that might be more difficult to spot.

    Ultimately, computers can do the same sorts of things humans can do, only a lot faster. That includes making mistakes. We need to remember that.
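
    For what it's worth, the blind-screening step suggested above is trivial to implement; a minimal sketch (field names hypothetical):

      import itertools

      _numbers = itertools.count(1)

      def blind(application):
          # Drop the fields that enable name/gender/age bias and substitute
          # an applicant number; which fields to strip is policy, not code.
          redacted = {k: v for k, v in application.items()
                      if k not in {"name", "gender", "age", "photo"}}
          redacted["applicant_number"] = f"A-{next(_numbers):05d}"
          return redacted

      print(blind({"name": "Jane Doe", "gender": "F", "age": 34,
                   "skills": ["python", "sql"], "years_experience": 9}))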
    • Mod parent up. Any AI is only as good as the training data you feed it. Guess who gets to choose the training data?

    • Mod up. HR is already aware it is biased, and must make the right noises and pay lip service to meet targets, which play second fiddle to getting talent on the ground fast. The Dutch have a better system: taxes rise when the workforce is out of balance with the job function code, plus a tax for a younger-than-average workforce. HR can do whatever, but on-costs keep going up relative to biases. Also keep in mind training and development: nah, too old, over 40, no training or stretch projects for you.
      • by MobyDisk ( 75490 )

        when the workforce is out of balance with the job function code

        That's really interesting! Do you know how they calibrate the expected ratio with each job? I assume "construction worker" and "police officer" have different ratios than "nun" and "primary school teacher." Might this not actually reinforce existing imbalances? I'm gonna have to read up on this, this is interesting.

        • by HiThere ( 15173 )

          That would depend on the shape of the curve of taxation. If it leveled off at population percentage, or even at "expected ratio" then it couldn't have that effect. (Also, it would bring in a bit more taxes.)

          OTOH, this could put certain industries at a disadvantage. I can see that most construction workers would likely be male, and thus construction companies would have a higher rate of taxation. This would incentivize hiring women even if they were generally less capable of, e.g., lifting heavy weights.

          • by MobyDisk ( 75490 )

            ...this could put certain industries at a disadvantage. I can see that most construction workers would likely be male, and thus construction companies would have a higher rate of taxation. This would incentivize hiring women...

            Unless they set the "expected ratio" for construction workers at something like 100 men for every 1 woman. Then the company only pays the tax if they go below that rate.

            That's why this concept is so fascinating -- wouldn't someone have to decide what that rate should be for every industry? Or do they just set it to 50% no matter what, and construction work just eats the tax knowing there is nothing they can do about it? I assume the tax goes both ways too, so since 97% of kindergarten teachers are women...

    • by sinij ( 911942 )
      I have a hard time believing this story, as "male-dominated HR" is like dark matter: it only exists theoretically. At every company I have encountered, HR is female-dominated to a very large degree.
      • by ytene ( 4376651 )
        Please go and re-read my earlier post.

        I said that:

        1. Three out of four of the presenters were male
        2. The HR representative [singular] was not comfortable with my observations

        The fact that I used the disparaging term "HR Dweeb" in the singular might have clued you in to the fact that only one of the four people present came from HR.

        I'm afraid everything else you inferred from my previous post might in fact be subconscious bias on your part.
        • by sinij ( 911942 )
          In that case I misunderstood your story. Still, the complaints you have about the HR process, which I largely agree with -- we need data to make good decisions -- are firmly within the HR department's jurisdiction. If there is no data, then how is it the fault of an entrenched patriarchy (you didn't use these terms)?
    • by tomhath ( 637240 )
      It's not just the training data. The algorithm will make a decision based on the data it has for a given employee. It won't take long for humans to learn what to enter and what not to enter in order to get the desired result. Calling it an AI decision doesn't change anything from how it's done today.
      • The last time I dealt with technical recruiters, they would have me re-write my resume for each job I applied for in order to game the software application that filters job applications. They would tell me which keywords were necessary for the job position, and the number of instances in which to use the keyword(s). So, we're kind of already there.
    • So I started asking what I hoped were constructive questions. Things like, "When we get external job applicants to apply for roles, and then give the details to the hiring manager, do we filter out details such as the name, gender and age of the candidate and instead use an "Applicant Number", in order to help eliminate age and gender bias?" (We didn't, and the HR dweeb looked uncomfortable)... "OK, when we assess the performance of individuals formally towards year-end, do we anonymize the subject of each review and discuss them purely in terms of their accomplishments, so that we can assess them objectively?" (We didn't, and the HR dweeb started glaring at me)... "Do we perform any kind of comparative analysis, when we recruit or evaluate performance, to look for gender bias?" (I was asked to leave the workshop).

      This has been tried before [abc.net.au]. The idea is that stripping away information like race and gender will remove biases, resulting in better outcomes for women and minorities. The reality is exactly the opposite. Stripping away that information means no affirmative action can happen, so those groups do even worse. And as a consequence these initiatives get shut down because they are "bad for diversity".

      Professor Michael Hiscox, a Harvard academic who oversaw the trial, said he was shocked by the results and has urged caution.

      "We anticipated this would have a positive impact on diversity — making it more likely that female candidates and those from ethnic minorities are selected for the shortlist," he said.

      "We found the opposite, that de-identifying candidates reduced the likelihood of women being selected for the shortlist."

      The trial found assigning a male name to a candidate made them 3.2 per cent less likely to get a job interview.

      Adding a woman's name to a CV made the candidate 2.9 per cent more likely to get a foot in the door.

      "We should hit pause and be very cautious about introducing this as a way of improving diversity, as it can have the opposite effect," Professor Hiscox said.

    • For example, there have been "counter arguments" made that suggest that AI might be able to eliminate unconscious bias in HR related activities, such as recruitment.

      Yet, the existence of unconscious bias is questionable at best. It is just an assumption by SJW and CRT zealots. Please crawl back into your SJW hole.

      • by HiThere ( 15173 )

        Unconscious bias has been known since the time of Jung and Freud. And that was when the word was invented (or at least given its modern meaning). The *nature* of unconscious bias is certainly arguable, and it's arguable whether it exists in any particular case, but its existence is solidly proven. If you assume that people are Bayesian reasoners, then the "unconscious bias" is the priors that you don't notice. And no serious student of reasoning thinks that we notice everything that our decisions are based on.

        • Unconscious bias has been believed, not known, to exist. There is zero credible evidence for it because it is not possible to test for it in a meaningful way.

          It is quite like the "criminals have low self-esteem" belief that led to millions of dollars being wasted on self-esteem courses in prison. When someone actually tested criminals' self-esteem, it was found that they have high self-esteem, so high that they believe the law shouldn't actually apply to them.

          Your post makes use of "arguable", "assume",...
    • Just so we are clear, you started off with your woke assumptions, decided you were right, saw something you decided proves you are right, so everything is exactly how you think it is, based on a single anecdote.
    • by AmiMoJo ( 196126 ) on Friday March 26, 2021 @08:21AM (#61200892) Homepage Journal

      It's not just a question of whether the AI's decision is fair. With a human being you have an opportunity to find out *why* they made a decision. They can be questioned at an employment tribunal if it comes to that.

      A black box AI that simply produces an output with no or very vague reasoning is difficult to scrutinise. Not only are you denied a reason for a life-changing decision, but it becomes much more difficult to tell if the black box is illegally biased or not.

    • male-dominated? (Score:1, Interesting)

      by MrPCsGhost ( 148392 )
      I contend that your male-dominated HR team is an outlier. A cursory search for "gender makeup of hr departments" brings up: https://www.visier.com/clarity... [visier.com] https://study.com/blog/why-is-... [study.com] https://www.workforce.com/news... [workforce.com] So you should applaud your male-dominated HR team, because they are working against a steep hill of discrimination. No? Now, as in my personal experience, women/females have been highly dominant in the HR field (I've been working for over 35 years, in multiple industries, public and private...
    • Isn't "Tay" a prime example of just how wrong things can go with AI? https://en.wikipedia.org/wiki/... [wikipedia.org]
      • by WDot ( 1286728 )
        Tay is not representative of what people applying AI to hiring would do. Tay generates freeform text and learns from random people on the Internet. It was an idealistic scientific project that was set up to be trolled.

        If someone wanted to apply AI to hiring, they might do something like this:

        1. Gather all the resumes and performance evaluations of all the company's employees, past and present.

        2. Make a statistical model that tries to predict the performance evaluation (0-5) from features of the resume...
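
        A sketch of what steps 1-2 might look like (features and scores invented for illustration); note that it inherits whatever bias is baked into the historical evaluations:

          import numpy as np
          from sklearn.linear_model import Ridge

          # Hypothetical resume features for past employees:
          # columns = years_experience, num_certifications, referral_flag
          X = np.array([[2, 0, 0], [5, 1, 1], [10, 2, 0], [3, 1, 1], [8, 0, 1]], dtype=float)
          y = np.array([2.5, 3.5, 4.0, 3.0, 3.8])  # past performance evaluations, 0-5

          model = Ridge(alpha=1.0).fit(X, y)
          candidate = np.array([[4, 1, 0]], dtype=float)
          print(model.predict(candidate))  # predicted evaluation for a new resume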
      • by HiThere ( 15173 )

        Well, it's a good example of what biased input training data will do to the decisions made. Most commercial uses of AI, however, don't train on live data. Of course, that just pushes the problem back one step into obscurity and causes it to be more widely spread, but it does *potentially* allow validation to ensure that the training isn't too biased.

        OTOH, AIs don't reason the way we do. Even with the best intentions they make correlations that people would never make, and don't make ones that we would.

    • You can have the best training in the world, and there's the flip side of AI balancing out human bias: people, especially employment agencies and contractors, will learn how to game the AI.
    • by Megahurts ( 215296 ) on Friday March 26, 2021 @10:29AM (#61201370)

      "Ultimately, computers can do the same sorts of things humans can do, only a lot faster. That includes making mistakes. We need to remember that."

      This is spot on. There is the old saying, a computer will do what you ask it to do, not what you want it to do.

      As someone who has been in the field of quality management for over 15 years, I have to point out something here. The precept that computers may or may not be better at performing performance reviews is one of those concepts that need not even be discussed. Whatever answer is ultimately reached is one of those things that is not only not right, but also not even wrong. Annual reviews as an institution need to be wholly abandoned. Performance reviews of individuals, in lieu of study of the system, must fall by the wayside. The idea that an individual's performance can be reviewed apart from the system's performance fails to appreciate the nature of a system. It is an attempt to analyze the parts, to try to orthogonalize something that is completely and inherently confounded with the rest of the system.

      The system's performance is not a result of the sum of its parts but rather the product of its interactions. It is very much the case that people reviewed as "highly effective" are the ones who do not adhere to the system's way of functioning and create perturbation which might positively impact their immediate circle of influence but will almost always negatively impact other areas of functioning. Or people who are seen as just basically functional create better systemic results, because the things they do are not dismantling necessary structures for others farther out from their own role.

      And this isn't even starting to touch on the whole angle of psychology and epistemology that absolutely must be considered to be able to create a strong, functional workplace. For that, I would recommend Peter Scholtes's excellent summary in his article Total Quality or Performance Appraisal: Choose One. The crux of it is, this institution we call performance appraisal causes far more harm to an organization than good. And as Deming would say of the topic (paraphrasing), when corporate leaders are told they should stop conducting annual reviews, they ask how they should appraise performance... which is akin to being told to stop banging their head on the wall and asking "well, what should I bang it on instead?". Just stop.

      • by davidwr ( 791652 )

        And as Deming would say of the topic (paraphrasing) when corporate leaders are told they should stop conducting annual reviews, they ask how they should appraise performance... which is akin to being told to stop banging their head on the wall and asking "well what should I bang it on instead". Just stop.

        The corporate leaders may really be asking "who can we least afford to lose"/"who can we most afford to lose." This isn't the same question as "how do we evaluate performance" but they are related.

    • I suspect that the reason HR was uncomfortable with these questions is because they are under intense pressure to be biased *in favor* of females and other "diversity" hires. If that information is anonymized, they might not meet their "inclusiveness" quotas.

  • A short story called Manna, where an AI decides who to hire and when. https://marshallbrain.com/mann... [marshallbrain.com]

    Ironically, I found it in a Slashdot comment about 5 years ago.

    • by RevDisk ( 740008 )
      Overly simplistic, but it gets the point across. It glides over voting, pretty much every civil right, etc., and tries to boil the future down to corporate dystopia vs. socialist/libertarian hybrid open source utopia. Basically a slightly lower-tech Star Trek, or a more friendly form of communism, hopefully without the usually mandatory genocide.

      The corporate dystopia version ignores voting, civil rights and all social conventions. To enforce that level of slavery and social control, you need folks with guns willing to kill people who don't follow the corporation's wishes.
      • > To enforce that level of slavery and social control, you need folks with
        > guns willing to kill people who don't follow the corporation's wishes.

        I'm not convinced that's true. If you can keep people from organizing and communicating enough, keep their malaise contained, you can wedge them into some pretty nasty scenarios already. I don't think we've got enough data about what a truly determined social control scheme could do with modern tech. I *hope* that you're right, that the light touch is not sufficient...

  • I am a freeman!
  • "I'd like to, but computer say's no"
  • We already do get "hired and fired by algorithm" ... it's just (usually) applied by human HR drones instead of computer programs.
    • The difference would be that humans executing an algorithm can significantly "bias" the algorithm's input or how its output is interpreted. With a "computer algorithm" the outcomes may be less "biased" (hopefully) but more rigid, possibly also making mistakes.

      However, the "AI" is not an "algorithm" in the core sense of the term but rather a statistical correlator displaying the "garbage in, garbage out" property. And what comes in could already be biased without clear indication.

  • By law, they can't flag union activities in order to fire people who are trying to start one.

  • The problem with AI is that you don't always know what it selects as the most important features of the data it's provided. There was a case a while back where one of these CV ranking models was broken down and analyzed, and they found that one of the features the AI concluded was most important was whether or not the applicant was named 'Bob'. Ostensibly this is because it was given datasets containing strong CVs from people named Bob and drew a connection where none should have been made. Sooner or later...
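
    One cheap audit for exactly this failure is a counterfactual probe: score the same CV twice, flipping only the suspect feature. A sketch on synthetic data (everything here is made up):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 2000
      skill = rng.normal(size=n)
      named_bob = rng.integers(0, 2, n).astype(float)
      # Quirky training data: the strong CVs happened to come from Bobs.
      hired = (skill + 2.0 * named_bob > 1.0).astype(int)

      model = LogisticRegression().fit(np.column_stack([skill, named_bob]), hired)

      cv_bob = np.array([[0.5, 1.0]])    # a CV whose author is named Bob
      cv_other = np.array([[0.5, 0.0]])  # the identical CV, different name
      # A large gap between these two probabilities flags the spurious feature.
      print(model.predict_proba(cv_bob)[0, 1], model.predict_proba(cv_other)[0, 1])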

  • If you breakdance by the photocopier every other Tuesday can you trick the system into making you CEO?

  • I worked for a company that produced warehouse management software. Our main business was to tell people who worked in warehouses where to go and what to do.

    Labor standards were a way of measuring performance. We'd send people out who would do studies and figure out how long it should take for someone to accomplish different tasks. There was a lot of pushback over labor standards from the unions, but we really strived to be objective -- we had to be, because the unions held our feet to the fire, which was...

  • For example, pilots have to retire at (I think) age 60.
    if age(pilot) > 60 {
        fire(pilot);
    }
    Does that really need human review?

    • Internationally, ICAO (the International Civil Aviation Organization) sets it to 65, and I thought the FAA adopted that recently? Maybe someone here knows. Japan sets it to 67, but they are near-immortal like elves, amiright?

  • or by Jean-Baptiste Emanuel Zorg

  • So if bias were proven in an organization, would the use of AI downgrade it to a "programming mistake" (training) rather than blaming the actual culture and leadership?

  • What they are failing to realize is that those rules themselves may in fact be unknowingly unfair and discriminatory. You know what they say about good intentions and where they can lead. I have witnessed firsthand how active attempts by a company to avoid being discriminatory in a workplace, when there were not any *specific* issues within the company that called for such measures, led to an entirely different form of discrimination that felt no closer to being the right way to do things than how they...

    • by malkavian ( 9512 )

      There's a problem with that too. Now that you can sue for an implied bias, and win on that, there are quite a lot of cases where workplaces are having to restructure and reorganise work such that it eliminates this 'implied discrimination', and to do that, they have to implement flat-out, directly discriminatory policy that is insanely counterproductive.

      • by mark-t ( 151149 )

        Consider this: Why would we ever reasonably conclude that an algorithm is more likely to exhibit unfair discrimination and biases than a living human being would?

        And if that's not the case, why should it be more of a concern?

  • All so-called 'AI' crapware they keep trotting out is garbage that can't be trusted with anything important, therefore nothing as important as whether someone is employed or not should ever be the decision of garbage 'AI'.
    Companies need to value their employees not treat them like disposable 'work units'.
  • It's happening now.

    The company I was working for was bought by another owned by an equity fund and there was a mass layoff.
    We were told the decision for who bought the farm was driven by an AI algorithm.

    This is what the quants are doing when they aren't gaming the markets.

  • Somebody wrote the algorithm. (Or trained it. Which results in the same thing, but with less of a clue.)

    Somebody decided what those rules would be, that would be programmed.

    Saying "the algorithm decides who to fire" is as insanely clueless and actively harmful as saying "the VCR decided what shows to record". (Haven't found a newer example.)

  • Why are we worrying about AI's making bad choices?

    #CancelDisney
  • Ah, yes--here it is: Manna – Two Views of Humanity’s Future [marshallbrain.com]
    Although the "happy" ending includes the "brain in a jar" horror trope. Made me wonder what *really* happened to the people who (were told that they) emigrated to Aus.
