Researchers Built an 'Online Lie Detector.' Honestly, That Could Be a Problem (wired.com) 70
A group of researchers claims to have built a prototype for an "online polygraph" that uses machine learning to detect deception from text alone. But as a few machine learning academics point out, what these researchers have actually demonstrated is the inherent danger of overblown machine learning claims. From a report: When Wired showed the study to a few academics and machine learning experts, they responded with deep skepticism. Not only does the study not necessarily serve as the basis of any kind of reliable truth-telling algorithm, it makes potentially dangerous claims: A text-based "online polygraph" that's faulty, they warn, could have far worse social and ethical implications if adopted than leaving those determinations up to human judgment.
"It's an eye-catching result. But when we're dealing with humans, we have to be extra careful, especially when the implications of whether someone's lying could lead to conviction, censorship, the loss of a job," says Jevin West, a professor at the Information School at the University of Washington and a noted critic of machine learning hype. "When people think the technology has these abilities, the implications are bigger than a study."
"It's an eye-catching result. But when we're dealing with humans, we have to be extra careful, especially when the implications of whether someone's lying could lead to conviction, censorship, the loss of a job," says Jevin West, a professor at the Information School at the University of Washington and a noted critic of machine learning hype. "When people think the technology has these abilities, the implications are bigger than a study."
Depends on "who's" online polygraph it is (Score:4, Insightful)
If this app were put online labeled as "Fred's AMAZING online truth teller," with the usual ads for bikinis, penis enlargement, crockery, and the latest Chevy, I don't think you'd have anything to worry about in terms of it causing problems.
If it's part of the Google home page or comes up automatically when submitting documents to the IRS, I think there is a great deal of concern regarding whether or not people believe the results are accurate.
Machine Learning is going to do rocket science! (Score:2)
But it can't even stop auto correct from sucking!
Re: (Score:2)
I would settle for an online Troll detector.
And maybe also an AI that tells us what we are supposed to think.
Re: (Score:2)
Since humans can't tell the difference between honest text and sarcastic text, nor reliably determine what's trolling and what's not, I highly doubt that AI will be able to.
And even if it is able to, a good percentage of humans won't agree! What's the use in that, then?
You've helpfully volunteered a perfect example of why both humans and AI fail at this, DickBreath: I know that AI won't get this right, and there's a good chance half the humans won't either.
Should use it on its own press release (Score:2)
I suspect it will say "false"
Re: (Score:2)
Better yet "Fake News!:
Re: (Score:2)
If they need a reliable source of untruths as a dataset, follow @realDonaldTrump.
Huh. I was trying to think of a reliable source of truthful writings. Give me a minute...
Re: (Score:2)
Re: (Score:2, Offtopic)
So you don't care how bad things get, just as long as Democrats suffer.
It isn't that Democrats and Republicans have irreconcilable differences; they are both Americans and share much of the same culture and ideals.
Unfortunately, you have been influenced by propaganda, are no longer objective, and have allowed the crude instincts in you to become dominant.
Likely closer to sarcasm detector (Score:5, Funny)
When it can determine its own press release is a lie, then I'll believe it.
Re: (Score:2)
"Cake" would be another good test.
All Machine Learning systems have an error rate (Score:4, Insightful)
Re: (Score:2)
What about the human side of lie detection? Can it tell if a person is mistaken instead of lying? What about an accomplished con man who can lie convincingly? Or my mother, who genuinely believes the stories she makes up? What about white lies, where the intent is to be polite, not to deceive? What about propaganda and campaign promises, where what is said has no bearing on anything really? What if a statement is only partially a lie? What if a statement, while completely true, is misleading, like lying by omission?
That's all 'irreducible error'...aka the real world...aka gray area...aka it depends
Re: (Score:3)
Also, most ML algorithms of this type produce a continuous score and not a binary Yes/No classification. Human users/customers provide a score cutoff that gives an acceptable confusion matrix for their use case.
Alternatively stated, the machine does not say "Bob is lying", rather, "I think Bob is lying with X confidence".
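For anyone who hasn't seen this in practice, here is a minimal sketch of the idea, assuming scikit-learn and entirely synthetic "text features" and labels (nothing below comes from the actual study):

# Minimal sketch: a model emits a continuous score, and the user picks the cutoff.
# All data here is fake and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # made-up text features
y = rng.integers(0, 2, size=200)         # made-up "lying" labels

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]    # continuous score in [0, 1]

# The customer chooses the cutoff, trading false positives for false negatives.
cutoff = 0.7
predicted_liar = (scores >= cutoff).astype(int)
print(confusion_matrix(y, predicted_liar))

Raising or lowering the cutoff is exactly the knob that produces a different confusion matrix for a given use case.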
Re: (Score:3)
Also, most ML algorithms of this type produce a continuous score and not a binary Yes/No classification. Human users/customers provide a score cutoff that gives an acceptable confusion matrix for their use case.
Alternatively stated, the machine does not say "Bob is lying", rather, "I think Bob is lying with X confidence".
You are confusing Regression with Classification. Classification produces a yes/no, choice A/B/C type answer. Regression produces a number or set of numbers, often in a given range. You can turn a Regression system into a classifier with a simple cutoff, but not vice versa. It's common that ML algorithm implementations come in Classification and Regression flavors. Also, getting a confidence interval out of an ML algorithm is possible but really difficult and of potentially questionable use. It's more hones
Re: (Score:2)
Pretty sure I can tell some stories that are 100% true but only about 5% believe them, much less machines. Weird things happen when you work in the music industry (which I did when I was in my 20s, not anymore). Really. Effing. Weird. Things. Like finding a 2 foot long, 2 inch thick rubber cock under a couch with a rubber chicken, which we HAD to use in our show... because. Bad things followed, but it was ridiculously funny until then.
Re: (Score:2)
You are confusing Regression with Classification. Classification produces a yes/no, choice A/B/C type answer.
Stop modding this guy up, as he really doesn't know what he is talking about. Grossly so.
All the top image classifiers produce probabilities, and further produce many probabilities, not just a single one. They aren't answering the question "Is this an image of a cat?" They are answering the questions "What's the chance that there is a cat in this picture? What's the chance that a horse is in this picture? What's the chance that there is a burned-up Tesla Roadster in this picture? What's the chance that someone is s
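For what it's worth, here is a tiny sketch of that per-class-probability point, using a hand-rolled softmax over made-up class scores rather than any real classifier; everything in it is illustrative:

# A softmax layer emits one probability per label, not a single yes/no.
# The class names and logits below are invented for the example.
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

classes = ["cat", "horse", "burned-up Tesla Roadster"]
logits = np.array([2.1, 0.3, -1.5])   # pretend these came from a model
for name, p in zip(classes, softmax(logits)):
    print(f"P({name} in image) = {p:.2f}")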
Online Lie detector Bunk (Score:2)
You can't tell if someone is lying online from just text. Humans tell if people are lying by their facial expressions, tone of voice, and body language. It's actually very hard to lie without slipping in one of these areas unless you're a psychopath or a very skilled actor. It's much harder to tell from, say, a chat.
A text-based "online polygraph" that's faulty... (Score:2)
Re: (Score:2)
/Oblg. Clippy: I see that you are trying to lie! Would you like help embellishing the truth, or would you rather just outright lie and ignore the facts, calling everyone who disagrees with you a sexist, racist, misogynistic supporter of the patriarchy?
I'm sure Kirk and Harry Mudd can deal with this .. (Score:3)
Captain Kirk: Everything Harry tells you is a lie. Remember that. Everything Harry tells you is a lie.
Harry Mudd: Now listen to this carefully, Norman. I am... lying.
Polygraphs aren't supposed to work (Score:2)
It will be (ab)used (Score:3)
With the current trend towards anti-intellectualism, this will only get worse, not better.
Read some of the info here about 'lie detection'. https://antipolygraph.org/ [antipolygraph.org]
I have some intimate experience with polygraphs. As a convicted sex offender, I have had to submit to them as part of a treatment regimen. I have passed polygraphs I lied on, and failed them while telling the truth. The judgment lies in the examiner's subjective whims, not anything objective.
Re: (Score:2)
No. It will first be used extensively in private industry. Watch and see.
Erk (Score:2, Insightful)
But when we're dealing with humans, we have to be extra careful, especially when the implications of whether someone's lying could lead to conviction, censorship, the loss of a job,
We already hit people with all of that just for typing things online that we don't like, to say nothing of whether they're true or not!
LoL (Score:3)
Even the inventor of the polygraph agrees that it's just b.s.
Are we sure these people aren't actually from North Korea?
You know, the place where "researchers" have claimed to have found a Magical Unicorn Cave, and have perfected human cloning, and so many other absurd claims.
Let's feed this thing the bible and quran (Score:2)
What would this result be (Score:2)
If I typed in to it "This thing is a fraud."
NO COLLUSION (Score:2)
Wait, why is that computer laughing at me?
From 2018, AI has been suggested for lie detection (Score:2)
https://futurism.com/new-ai-detects-deception-bring-end-lying-know-it/ [futurism.com]
This was developed to determine whether someone is lying in court.
https://www.fastcompany.com/40575672/goodbye-polygraphs-new-tech-uses-ai-to-tell-if-youre-lying/ [fastcompany.com]
This one seems to be under development for homeland security.
Error rate (Score:2)
I have to agree with the skeptics. Certainly, the right software could analyse text and make some kind of assessment about the potential for deception. I have no doubt that that can be done. Some people have a good intuition about other people's honesty. I do transcription professionally and have to listen to spoken word very carefully, and I can tell that people who are lying sound a bit different from people not lying.
But it's far from a reliable measure, and it's certainly not anything I could convin
Honestly! (Score:2)
Deception (Score:2)
I have to ask. (Score:2)
What happens when you put the claims about the lie detector in the lie detector?
Shall we test it? (Score:2)
I can, and if it fails, it should be made widely known.
Sigh. (Score:2)
"Could be a problem?"
Not in the vast, vast, vast majority of the world, where people know - and have always known - that the polygraph is a load of bullshit, always has been, and has basically never been admissible in a court of law.
Only the US is stupid enough to think you can actually make a lie detector with any accuracy whatsoever.
Here's the Code (Score:2)
They could train it to recognize lies (Score:2)
using in-duh-vidual 1's tweets!
Basically an "Automated Snopes" (Score:2)
Want an excuse to ban conservatives on various online platforms? Simple: feed the AI the DNC election platform and the SPLC's pronouncements as "the truth sample."