AI Technology

It's Getting Hard To Know What is Automated and What Isn't (axios.com) 62

It's increasingly a challenge to know when -- and whether -- AI is at play in the things we come across in our daily lives. From a report: Applicants usually don't know when a startup has used artificial intelligence to triage their resumes. When Big Tech deploys AI to tweak a social feed and maximize scrolling time, users often can't tell, either. The same goes when the government relies on AI to dole out benefits -- citizens have little say in the matter. What's happening: As companies and the government take up AI at a delirious pace, it's increasingly difficult to know what they're automating -- or to hold them accountable when they make mistakes. If something goes wrong, those harmed have had no chance to vet their own fate. Why it matters: AI tasked with critical choices can be deployed rapidly, with little supervision -- and it can fall dangerously short. The big picture: Researchers and companies are subject to no fixed rules, or even specific professional guidelines, regarding AI. Hence, companies have tripped up but suffered little more than a short-lived PR fuss.
This discussion has been archived. No new comments can be posted.

It's Getting Hard To Know What is Automated and What Isn't

Comments Filter:
  • by Anonymous Coward
    AI, without the I.
  • What is Automated and What Isn't?
  • by religionofpeas ( 4511805 ) on Friday January 11, 2019 @02:40PM (#57945516)

    So when you're being disadvantaged by another human in a similar situation, is there a way to hold them accountable?

    • It's at least possible. And some humans have higher level thinking skills, and they can step back and look at a system and see if it seems to be working the way it should, and producing the results that are desired. AI really doesn't have that at this time, and probably won't for a long time.

      A second, far bigger problem is that we've always had humans who have held the process above all else. No exceptions to the process. And when the process is obviously flawed, they refuse to address this. If you can get

  • I suspect there will be two phases of AI growth. The first phase will be giving "bots" the ability to do relatively complex but practical tasks, and the second phase will be creating systems that partition and track each intelligence step, applying divide-and-conquer across both the AI-building staff and the processes (modules). This will make it easier to understand why a bot acts the way it does, and to tune it.

    AI will grow regimented and standardized, along the lines of MVC and similar development partition techn
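The "partition and track each intelligence step" idea in the comment above can be made concrete. Below is a minimal, hypothetical sketch (the stage names and scoring rule are invented for illustration, not from any real product): each module in a decision pipeline records its input and output, so the path to a final decision can be audited afterwards.

```python
# Hypothetical sketch of a partitioned, auditable decision pipeline.
# Every stage appends an (name, input, output) record to a trace,
# so you can see how the bot reached its decision, step by step.
trace = []

def stage(name):
    """Decorator that records each stage's input and output."""
    def wrap(fn):
        def run(x):
            out = fn(x)
            trace.append((name, x, out))   # auditable record per step
            return out
        return run
    return wrap

@stage("parse")
def parse(resume_text):
    # Toy tokenizer: lowercase, split on whitespace.
    return set(resume_text.lower().split())

@stage("score")
def score(tokens):
    # Invented rule: count how many wanted skills appear.
    return sum(1 for t in tokens if t in {"python", "sql"})

@stage("decide")
def decide(score_value):
    return "interview" if score_value >= 2 else "reject"

result = decide(score(parse("Python SQL Excel")))
print(result)                  # prints "interview"
for name, _inp, out in trace:  # replay the decision path
    print(name, "->", out)
```

Because every module's contribution is logged, a bad outcome can be traced to the specific stage that produced it, which is the tuning and accountability benefit the comment describes.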

  • Age discrimination (Score:4, Insightful)

    by rsilvergun ( 571051 ) on Friday January 11, 2019 @02:53PM (#57945590)
    I know folks over 40 who hide their age because they won't get interviews if the company realizes they're over 40.

    AI and big data have the potential to break that. There are still markers left over from the places you worked, how long you worked there, the types of apps you've worked on, etc.

    You used to see this with black neighborhoods unable to get mortgages because of their zip code. When you put numbers into a database without regard to what comes out you can end up with crap like this.
    • > AI and big data have the potential to break that. There are still markers left over from the places you worked, how long you worked there, the types of apps you've worked on, etc.
      Couldn't anyone competent enough to do an interview figure that out, too? I'd think actual people going through resumes would be more prejudiced than an AI, and programming in a filter for age would be blatantly against the law.

      Amazon actually had a similar problem [reuters.com]. In their case, the "women's" keyword counted against candidates. I don't see this as an insurmountable issue; as AI improves, it should actually be able to filter for the better candidates, regardless of gender, age, race, etc., and it won't need to take shortcuts, like assuming everyone in a zip code isn't a good fit.

      • Amazon actually had a similar problem [reuters.com]. In their case, the "women's" keyword counted against candidates. I don't see this as an insurmountable issue; as AI improves, it should actually be able to filter for the better candidates, regardless of gender, age, race, etc., and it won't need to take shortcuts, like assuming everyone in a zip code isn't a good fit.

        That would be a fundamentally different form of AI from what's popular these days, which is algorithms generated through training on human-generated datasets - this is where they pick up human biases.
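The point in the reply above -- that models trained on human-generated decisions pick up human biases -- can be shown with a tiny, fully synthetic sketch. Everything here is invented for illustration: a hand-rolled logistic regression is trained on a made-up "hiring history" in which resumes carrying a proxy token were always rejected, and the model dutifully learns a negative weight for that token even though it says nothing about ability.

```python
# Synthetic sketch: a classifier trained on biased historical decisions
# learns to penalize a proxy feature. All data below is made up.
import math

# Features per "resume": (has_python, has_degree, proxy_token)
# Labels reproduce a biased history: every proxy_token resume was rejected.
data = [
    ((1, 1, 0), 1), ((1, 0, 0), 1), ((0, 1, 0), 1),
    ((1, 1, 1), 0), ((1, 0, 1), 0), ((0, 1, 1), 0),
]

w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid

# Plain logistic regression trained by stochastic gradient descent.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        for i in range(3):
            w[i] -= lr * err * x[i]
        b -= lr * err

print(w)  # the third weight (proxy_token) comes out strongly negative
```

Nothing in the training code mentions gender or any protected attribute; the bias enters entirely through the labels, which is why "just remove the sensitive column" does not fix models trained on biased histories.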

      • You can catch that in an interview, but only if you get the interview in the first place. As for them catching it before the interview: if you're just glancing at a resume, it's easy enough to miss.

        The thing about AI and data automation is that it makes it practical to catch things that time-pressed humans miss. These little efficiency boosts add up at megacorporations, resulting in tens of millions of dollars in savings. On the downside, those savings usually come at the cost of longer hours and harder work for anyone wh
      • In their case, the "women's" keyword counted against candidates.

        That's simple to fix -- just put in a feminist AI. I've got a good feeling about this.

    • by tlhIngan ( 30335 )

      You used to see this with black neighborhoods unable to get mortgages because of their zip code. When you put numbers into a database without regard to what comes out you can end up with crap like this.

      Actually that's because the law was set up that way - if you were white, congratulations you could get a mortgage, but if you were black, too bad, loan denied.

      It was a technique known as "redlining" - be white and live in a nice white suburb, great!

      The unfortunate thing is its effects are still felt today.

      If

    • by Kjella ( 173770 )

      You used to see this with black neighborhoods unable to get mortgages because of their zip code. When you put numbers into a database without regard to what comes out you can end up with crap like this.

      Yeah, because problems like driving while black never existed before we had AI robocops. Oh, wait... Truth is, sharing some kind of common characteristic with an ill-perceived group is always going to be problematic, because they don't know you personally. And there'll never be time to know everyone personally. If I walk home drunk late at night and happen to walk the same way as a woman, she's a lot more scared that I'll jump and rape her than that she'd jump and rape me. Simply because I have a penis and sh

  • by Anonymous Coward

    Just ask the thousands of foreclosure victims from 2009/2010 who were foreclosed and evicted, despite not even being behind on their mortgages.

    Not only were the foreclosure processes at the banks automated, but so were the eviction and auction proceedings with the state courts. There were never any human eyes checking things to make sure a foreclosure was legit.

    In 2010 I spent 4 MONTHS and $10K on attorney fees stopping the foreclosure on my house, which was PAID FOR FREE AND CLEAR. The bank's automated sy

  • What's more probable: that the fools who programmed these HR bots made them regard skill and experience as highly valuable, or that they're simply going to discard everyone who doesn't meet the "desired" qualifications? HR was shitty to start with, but this is absolute trash.

  • >> Researchers and companies are subject to no fixed rules or even specific professional guidelines regarding AI. Hence, companies have tripped up but suffered little more than a short-lived PR fuss.

    This looks like it's teeing up to make the case for government regulation, which is really stupid. AI is a leading edge technology, so all the experts who would even understand the parameters involved in implementing "fixed rules" regarding AI are the ones inventing the thing to begin with. All the governm

    • AI will never be "available to the general public." The general public will be subjected to proprietary AI forced upon them by banks, retailers, social media sites, the police, etc. - These systems are not going to be open or transparent in any way, and you will not be able to opt out. To think otherwise demonstrates a lack of understanding of the technology, and of the people implementing it.

      Of course, you weren't thinking about that - you needed to find some way to talk about how "government regulations
  • If it worked, it's not automated. If it didn't work, maybe it's automated. Automation still only works well on fixed-function tasks, and humans are fallible.
  • It's not "AI" that needs to be regulated or blamed for these issues.
    "AI" is becoming the whipping boy, the fall guy for shady actions enacted by corps and employees who are (last I checked) still accountable to regulations and laws enforcing fairness and transparency.
    If they're hiding behind AI for bizarre outcomes that are obviously against regulations then that's still - ILLEGAL. Don't blame the AI - take the corp to court for implementing the AI in that way in the same way Wells Fargo was h
  • The 'real danger' is actually the marketing departments that shill this garbage, making people believe it's 'magic' and can actually THINK, i.e. that it's orders of magnitude better than it actually is. Meanwhile nobody, not even the programmers, really understands why it's spitting out the 'results' it does -- so how can you trust it at all? I'll be glad when this fad comes to an end (again).
    • by gweihir ( 88907 )

      Indeed. This is dumb automation and statistics, nothing else.

  • There is no AI at this time (at least none that deserves the name), so "never" is the correct answer. Now, if you are talking about dumb, non-intelligent automation and statistical classification, that is something else...
