
Researchers Built an 'Online Lie Detector.' Honestly, That Could Be a Problem (wired.com)

A group of researchers claims to have built a prototype for an "online polygraph" that uses machine learning to detect deception from text alone. But as a few machine learning academics point out, what these researchers have actually demonstrated is the inherent danger of overblown machine learning claims. From a report: When Wired showed the study to a few academics and machine learning experts, they responded with deep skepticism. Not only does the study not necessarily serve as the basis of any kind of reliable truth-telling algorithm, it makes potentially dangerous claims: A text-based "online polygraph" that's faulty, they warn, could have far worse social and ethical implications if adopted than leaving those determinations up to human judgment.

"It's an eye-catching result. But when we're dealing with humans, we have to be extra careful, especially when the implications of whether someone's lying could lead to conviction, censorship, the loss of a job," says Jevin West, a professor at the Information School at the University of Washington and a noted critic of machine learning hype. "When people think the technology has these abilities, the implications are bigger than a study."

Comments:
  • by mykepredko ( 40154 ) on Friday March 22, 2019 @01:49PM (#58316900) Homepage

    If this app were put online labeled as "Fred's AMAZING online truth teller," with the usual ads for bikinis, penis enlargement, crockery, and the latest Chevy, I don't think you'd have anything to worry about in terms of it causing problems.

    If it's part of the Google home page, or comes up automatically when submitting documents to the IRS, then there is a great deal of concern about whether people believe the results are accurate.

  • But it can't even stop autocorrect from sucking!

    • AI doesn't need to do rocket science.

      I would settle for an online Troll detector.

      And maybe also an AI that tells us what we are supposed to think.
      • Since humans can't tell the difference between honest text and sarcastic text, nor reliably determine what's trolling and what's not, I highly doubt that AI will be able to.

        And even if it's able to, a good percentage of humans won't agree! What's the use in that, then?

        You've helpfully volunteered a perfect example of why both humans and AI fail at this, DickBreath. I know that AI won't get this right, and there's a good chance half the humans won't either.

  • I suspect it will say "false"

    • by ark1 ( 873448 )

      Better yet, "Fake News!"

      • If they need a reliable source of untruths as a dataset, follow @realDonaldTrump.

        Huh. I was trying to think of a reliable source of truthful writings. Give me a minute...

  • by bugnuts ( 94678 ) on Friday March 22, 2019 @01:53PM (#58316926) Journal

    When it can determine its own press release is a lie, then I'll believe it.

  • by sfcat ( 872532 ) on Friday March 22, 2019 @01:57PM (#58316938)
    All ML algorithms have an error rate. It's baked into the design; ML researchers talk about error rates all the time. There is even a term, 'irreducible error,' which refers to data points that can never be classified correctly by a specific algorithm. It's a mistake to completely trust what a computer database tells you, because the wrong data could have been input or bugs could have changed that data in a weird way. An ML system carries all of those risks plus the error that comes from the algorithm itself. The way to get around this is to have multiple algorithms "vote," but even then there is still an error rate. The error rate can be double-digit percentages or lower than 1%, but it's always there. And all of this is on top of the risk of bad data, just like a DB gives you. Garbage in, garbage out is a real principle. Trusting this stuff is tricky, but the bar isn't perfection; it's being better than a skilled human. And since I don't really trust a "skilled human" at lie detection, why on earth would I trust an ML algorithm that at best is only marginally better than that and could be far worse?
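
    A minimal sketch of that "vote" idea, assuming scikit-learn and a synthetic dataset (the three models are arbitrary stand-ins, not anything from the study):

        # Hypothetical sketch: majority vote across several different algorithms.
        # The ensemble usually lowers the error rate but never drives it to zero.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=1000, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        ensemble = VotingClassifier(estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", GaussianNB()),
            ("dt", DecisionTreeClassifier(random_state=0)),
        ], voting="hard")  # "hard" = simple majority vote

        ensemble.fit(X_train, y_train)
        print("error rate:", 1 - ensemble.score(X_test, y_test))  # nonzero, always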
    • Also, most ML algorithms of this type produce a continuous score and not a binary Yes/No classification. Human users/customers provide a score cutoff that gives an acceptable confusion matrix for their use case.

      Alternatively stated, the machine does not say "Bob is lying", rather, "I think Bob is lying with X confidence".
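
      To make that concrete, here is a toy sketch (made-up scores and labels, nothing from the study) of applying a cutoff to a continuous score and reading off the resulting confusion matrix:

          # Toy example: threshold a continuous "deception" score into a yes/no
          # verdict, then inspect the resulting confusion matrix.
          import numpy as np
          from sklearn.metrics import confusion_matrix

          scores = np.array([0.91, 0.15, 0.62, 0.48, 0.05, 0.77])  # model confidence each text is a lie
          truth = np.array([1, 0, 1, 1, 0, 1])                     # 1 = actually a lie

          cutoff = 0.5  # each user picks the trade-off they can live with
          verdict = (scores >= cutoff).astype(int)

          # Rows are actual classes, columns are predicted classes:
          # [[true negatives, false positives],
          #  [false negatives, true positives]]
          print(confusion_matrix(truth, verdict))

          # Raising the cutoff trades false positives for false negatives.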

      • by sfcat ( 872532 )

        Also, most ML algorithms of this type produce a continuous score and not a binary Yes/No classification. Human users/customers provide a score cutoff that gives an acceptable confusion matrix for their use case.

        Alternatively stated, the machine does not say "Bob is lying", rather, "I think Bob is lying with X confidence".

        You are confusing Regression with Classification. Classification produces a yes/no, choice A/B/C type answer. Regression produces a number or set of numbers, often in a given range. You can turn a Regression system into a classifier with a simple cutoff, but not vice versa. It's common that ML algorithm implementations come in Classification and Regression flavors. Also, getting a confidence interval out of an ML algorithm is possible but really difficult and of potentially questionable use. It's more hones…
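
        As a rough illustration of the flavors point (synthetic data; scikit-learn's paired tree implementations chosen as the example):

            # Classification flavor answers A/B directly; regression flavor answers
            # with a number, which a simple cutoff can turn into a classification.
            import numpy as np
            from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

            rng = np.random.default_rng(0)
            X = rng.normal(size=(200, 4))
            y_real = X @ np.array([1.0, -2.0, 0.5, 0.0])  # continuous target
            y_class = (y_real > 0).astype(int)            # binary target

            clf = DecisionTreeClassifier(random_state=0).fit(X, y_class)
            print(clf.predict(X[:3]))  # yes/no answers, e.g. [1 0 1]

            reg = DecisionTreeRegressor(random_state=0).fit(X, y_real)
            print(reg.predict(X[:3]))  # numbers

            # A cutoff turns the regressor into a classifier; the reverse isn't possible.
            print((reg.predict(X[:3]) > 0).astype(int))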

        • by Creepy ( 93888 )

          Pretty sure I can tell some stories that are 100% true but only about 5% believe them, much less machines. Weird things happen when you work in the music industry (which I did when I was in my 20s, not anymore). Really. Effing. Weird. Things. Like finding a 2 foot long, 2 inch thick rubber cock under a couch with a rubber chicken, which we HAD to use in our show... because. Bad things followed, but it was ridiculously funny until then.

        • You are confusing Regression with Classification. Classification produces a yes/no, choice A/B/C type answer.

          Stop modding this guy up, as he really doesn't know what he is talking about. Grossly so.

          All the top image classifiers produce probabilities, and further produce many probabilities, not just a single one. They aren't answering the question "Is this an image of a cat?" They are answering the questions "What's the chance that there is a cat in this picture? What's the chance that a horse is in this picture? What's the chance that there is a burned-up Tesla Roadster in this picture? What's the chance that someone is s…
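
          For what it's worth, a tiny sketch of that "many probabilities" behavior (the labels and scores here are invented, not from any real model):

              # A softmax over per-class scores: the classifier reports a chance
              # for every class at once, not a single yes/no answer.
              import numpy as np

              labels = ["cat", "horse", "burned-up Tesla Roadster"]
              logits = np.array([3.2, 0.4, -1.1])  # raw scores from a hypothetical model

              probs = np.exp(logits) / np.exp(logits).sum()
              for label, p in zip(labels, probs):
                  print(f"P({label} in picture) = {p:.2f}")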

  • You can't tell if someone is lying online from just text. Humans tell if people are lying by their facial expressions, tone of voice, and body language. It's actually very hard to lie without slipping in one of these areas unless you're a psychopath or a very skilled actor. It's much harder to tell from, say, a chat.

  • Much like the "real" polygraph, which is prone to false positives and easy enough to fool.
  • Captain Kirk: Everything Harry tells you is a lie. Remember that. Everything Harry tells you is a lie.
    Harry Mudd: Now listen to this carefully, Norman. I am... lying.

  • they're supposed to provide probable cause for a search warrant.
  • by wbr1 ( 2538558 ) on Friday March 22, 2019 @02:22PM (#58317054)
    This 'AI' and others of its ilk will be jumped on by law enforcement and government. They do not care if it is wrong. Just like existing polygraphs, it will be used to psychologically bully people and fool jurors and the populace while having no basis in real science.

    With the current trend toward anti-intellectualism, this will only get worse, not better.

    Read some of the info here about 'lie detection'. https://antipolygraph.org/ [antipolygraph.org]

    I have some intimate experience with polygraphs. As a convicted sex offender, I have had to submit to them as part of a treatment regimen. I have passed polygraphs I lied on, and failed them while telling the truth. The judgment lies in the examiner's subjective whims, not anything objective.

    • This 'AI' and others of its ilk will be jumped on by law enforcement and government.

      No. It will first be used extensively in private industry. Watch and see.

  • Erk (Score:2, Insightful)

    But when we're dealing with humans, we have to be extra careful, especially when the implications of whether someone's lying could lead to conviction, censorship, the loss of a job,

    We already hit people with all that stuff just for typing stuff online that we don't like, to say nothing of whether it's true or not!

  • by meerling ( 1487879 ) on Friday March 22, 2019 @02:30PM (#58317078)
    As nobody has actually built a "lie detector" that is significantly more effective than random chance (and yes, that includes the polygraph), those "researchers" are fooling themselves.
    Even the inventor of the polygraph agrees that it's just b.s.

    Are we sure these people aren't actually from North Korea?
    You know, the place where "researchers" have claimed to have found a Magical Unicorn Cave, and have perfected human cloning, and so many other absurd claims.
  • I'm dying to see what it says. It will be fooled like 85% of the population
  • If I typed into it, "This thing is a fraud."

  • Wait, why is that computer laughing at me?

  • I actually lost the original article I was going to post, but found a completely different AI for lie detection. Both were from 2018.

    https://futurism.com/new-ai-detects-deception-bring-end-lying-know-it/ [futurism.com]
    This was developed for determining if someone is lying in court.

    https://www.fastcompany.com/40575672/goodbye-polygraphs-new-tech-uses-ai-to-tell-if-youre-lying/ [fastcompany.com]

    This one appears to be in development for homeland security.
  • I have to agree with the skeptics. Certainly, the right software could analyse text and make some kind of assessment of the potential for deception. I have no doubt that that can be done. Some people have a good intuition about other people's honesty. I do transcription professionally and have to listen to spoken word very carefully, and I can tell that people who are lying sound a bit different from people who are not.

    But it's far from a reliable measure, and it's certainly not anything I could convin…

  • Honestly! Gosh darn it!
  • It isn't deception if the deceiver believes the statement. 9X% of garbage on the internet is people parroting false statements.
  • What happens when you put the claims about the lie detector in the lie detector?

  • I can, and if it fails, it should be made widely known.

  • by ledow ( 319597 )

    "Could be a problem?"

    Not in the vast, vast, vast majority of the world, where people know - and have always known - that the polygraph is a load of bullshit, always has been and has basically never been admissible in a court of law in most places.

    Only the US is stupid enough to think you can actually make a lie detector with any accuracy whatsoever.

  • If (Poster's Occupation) = "Politician" or "salesman" then text = lie
  • using in-duh-vidual 1's tweets!

  • Want an excuse to ban conservatives on various online platforms? Simple: feed the AI the DNC election platform and the SPLC's pronouncements as "the truth sample."
