Machine Learning System Detects Emotions and Suicidal Behavior
An anonymous reader writes with word, as reported by The Stack, of a new machine learning technology under development at the Technion-Israel Institute of Technology "which can identify emotion in text messages and email, such as sarcasm, irony and even antisocial or suicidal thoughts." Computer science student Eden Saig, the system's creator, explains that in text and email messages, many of the non-verbal cues (like facial expression) that we use to interpret language are missing. His software applies semantic analysis to those online communications and tries to infer their emotional import and context by looking for word patterns (not just superficial markers like emoticons or explicit labels like "[sarcasm]"), and can theoretically identify signs of threatening or self-destructive behavior.
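The coverage doesn't include any code, but the general approach described (classifying short messages by the word patterns they contain) can be sketched in a few lines. The snippet below is a hypothetical illustration using scikit-learn with a made-up toy corpus; it is not Saig's actual system, which reportedly works on Hebrew-language posts.

# Hypothetical sketch of word-pattern classification for short messages.
# This is NOT Saig's system; it only illustrates the bag-of-words idea
# described in the article. The tiny training corpus is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Oh sure, because you clearly know better than everyone else",
    "Wow, what a genius idea, how did we ever manage without you",
    "That sounds really hard, let me know if I can help",
    "I'm here for you if you ever want to talk",
]
labels = ["condescending", "condescending", "caring", "caring"]

# Word and word-pair counts ("word patterns") feed a linear classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["Nice of you to finally show up"]))  # prints the predicted label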
Interesting twist on the Turing test: can a program detect sarcasm in comments more accurately than humans?
Re: (Score:2)
Interesting twist on the Turing test: can a program detect sarcasm in comments more accurately than humans?
Yeah, sure it can.
Simpsons Did It! (Score:2)
But (Score:2)
Can it tag a switch to Beta (either all at once or feature by feature over a year) as suicidal behavior?
These stories suck; they need to fit the format... (Score:2)
Machine Learning System can detect X with such high accuracy and with so few false positives as to be actually useful.
Frankly, detecting X is basically defined as doing better than random chance. If you decide that anybody who posts "I am so sad" is suicidal, you're bound to get a few hits, so there: I've developed an algorithm that can detect suicide and depression. The problem here is that it's useless unless it's really, really accurate.
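To make that point concrete: even a detector that is far better than random chance drowns in false alarms when the thing it looks for is rare. The numbers below are invented purely for illustration; none of them come from the article.

# Back-of-the-envelope base-rate illustration. All numbers are made up.
base_rate = 0.001     # assume 1 in 1,000 messages is genuinely suicidal
sensitivity = 0.90    # the detector flags 90% of the true cases
specificity = 0.95    # and correctly ignores 95% of the harmless ones

true_pos = base_rate * sensitivity
false_pos = (1 - base_rate) * (1 - specificity)
precision = true_pos / (true_pos + false_pos)

print(f"Precision: {precision:.1%}")  # ~1.8%, i.e. roughly 55 false alarms per real hit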
I'm of two minds about this. (Score:5, Interesting)
FTA: “Now, the system can recognise patterns that are either condescending or caring sentiments and can even send a text message to the user if the system thinks the post may be arrogant”
On the one hand, maybe it's a good idea to notify users that their comments will likely be interpreted by most readers as having 'X' emotional tone. On the other hand, it may result in people habitually self-censoring to the extent that they show no warning signs before they explode (literally or figuratively) in some destructive action or activity.
I'm also thinking that this kind of ongoing **parentalistic monitoring is the wet dream of corporate overlords and wannabe dictators the world over.
--
**A word I coined, not a spelling mistake...
Re: I'm of two minds about this. (Score:1)
Re: (Score:1)
This is what we're heading toward with all these technologies: face recognition, thought recognition, etc.
Re: (Score:2)
Science fiction sometimes has a theme in which someone wants to die but is prevented from doing so, often by law, sometimes by some magical immortality gene. It's technically illegal to commit suicide in many places, with or without help.
I'm of two minds. Suppose your employer considers you essential to her business; she takes out a life insurance policy on you and surrounds you with protection to prevent any 'accident'. You aren't allowed near any sharp objects. You are a wage slave (if not a sex slave), an invest
You can trust it 'cause it works with Fffacebook (Score:3)
Yeah - I looked for the paper that won him the Amdocs prize but couldn't find it. All reports seem to be, um, based on this [timesofisrael.com] story, which is where I found that he trained the system using two Fffacebook pages:
posts on Hebrew-language Facebook pages that are almost pure opinion, called “superior and condescending people [facebook.com]” and “ordinary and sensible people [facebook.com].” The pages are basically forums for people to let off steam about things and events that get them mad, a substitute for actually confronting the offending person. Between them, the two pages have about 150,000 “likes,” and active traffic full of snarky, sarcastic, and sometimes sincere comments on politics, food, drivers, and much more.
“Now, the system can recognize patterns that are either condescending or caring sentiments and can even send a text message to the user if the system thinks the post may be arrogant,” explained Saig.
System Alert - Possible Arrogance Detected - user message issued [mailto]
So it's a startup pitch - expect optimistic projections of outcomes. It's even possible (would it detect that?) that it's based on pure supposition - you know, like maybe the opinion of the machine learning program matched a reader's take on those Fffacebook pages.
Oblig Clippy (Score:2)
This is only a test (Score:2)
Of course it can detect sarcasm; the algorithms required are really simple.
Here we go again... (Score:2, Interesting)
But the rea
72-hour psychiatric hold (Score:1)
And the language space is? (Score:2)
English... how about others, like Hebrew, for example?
Or folks writing in English on forums/social networks rather than in their native language, which may be a substantial group.
Sounding sarcastic, critical, suicidal or otherwise emotional may not be authentic.
Not sure what the actual benefit of using this would be, or how many false positives it could create if an institution like DHS were to use such a thing.
Re: (Score:2)
Sounding sarcastic, critical, suicidal or otherwise emotional may not be authentic.
Greek banks will run out of money next week. The positive side of this is that there will be no more queues in front of banks, because if there is no more money in the bank, there is no point in queuing in front of it.
How would the algorithm rate that comment . . . ?
Re: (Score:2)
Greek banks will run out of money next week. The positive side of this is that there will be no more queues in front of banks, because if there is no more money in the bank, there is no point in queuing in front of it.
How would the algorithm rate that comment . . . ?
"I'm sorry Dave, but don't give up the dayjob."
Detect this. (Score:2)
Yeah, that'll work.