AI Communications

Machine Learning System Detects Emotions and Suicidal Behavior (38 comments)

An anonymous reader writes with word, as reported by The Stack, of a new machine learning technology under development at the Technion-Israel Institute of Technology "which can identify emotion in text messages and email, such as sarcasm, irony and even antisocial or suicidal thoughts." Computer science student Eden Saig, the system's creator, explains that in text and email messages, many of the non-verbal cues (like facial expression) that we use to interpret language are missing. His software applies semantic analysis to those online communications and tries to work out their emotional import and context by looking for word patterns (not just superficial markers like emoticons or explicit labels like "[sarcasm]"), and can in theory pick up clues of threatening or self-destructive behavior.
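TFA gives no technical detail about the model, but for readers wondering what "looking for word patterns" usually means in practice, here is a minimal, purely illustrative sketch of a bag-of-words text classifier in Python using scikit-learn. The labels, example posts and pipeline choices are assumptions made for the sake of the example, not a description of Saig's actual system.

# Illustrative only: a tiny word-pattern classifier, not Saig's system.
# Assumes scikit-learn is installed; training texts and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Oh great, another meeting. Exactly what I needed today.",   # sarcastic
    "Thanks so much for helping me move, you're the best.",      # sincere
    "Sure, because that plan worked so well last time.",         # sarcastic
    "I really appreciate everyone who checked in on me.",        # sincere
]
train_labels = ["sarcastic", "sincere", "sarcastic", "sincere"]

# TF-IDF turns each post into weighted word and bigram counts; the classifier
# then learns which of those patterns are associated with each label.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)
print(model.predict(["Wow, what a totally brilliant idea."]))

With four hand-written examples this is obviously a toy; anything resembling the system in TFA would need a large labelled corpus, and detecting something as rare and high-stakes as suicidal intent raises the accuracy questions discussed in the comments below.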
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Finally, I can use this [youtube.com]
  • Can it tag a switch to Beta (either all at once or feature by feature over a year) as suicidal behavior?

  • Machine Learning System can detect X with such high accuracy and with so few false positives as to be actually useful.

    Frankly, detecting X is basically defined as getting better than random chance. If you decide that anybody who posts "I am so sad" is suicidal, you're bound to get a few hits, so there: I've developed an algorithm that can detect suicide and depression. The problem here is that it's useless unless it's really, really accurate (see the back-of-the-envelope below).
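To put rough numbers on the accuracy point above: with something as rare as suicidal intent, even a detector that sounds accurate produces mostly false alarms. Every figure in this back-of-the-envelope is invented for illustration:

# Toy base-rate calculation; all numbers are made up for illustration.
population = 1_000_000            # posts or posters screened
prevalence = 0.001                # 0.1% are genuinely at risk
sensitivity = 0.95                # detector catches 95% of true cases
false_positive_rate = 0.05        # and wrongly flags 5% of everyone else

true_cases = population * prevalence
true_positives = true_cases * sensitivity
false_positives = (population - true_cases) * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"{true_positives + false_positives:,.0f} flags, "
      f"{precision:.1%} of them genuine")
# Roughly 50,900 flags, of which only about 1.9% are genuine.

Which is the parent's point: unless the detector is far better than "better than chance", nearly every alert is a false positive.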

  • by jenningsthecat ( 1525947 ) on Saturday July 04, 2015 @06:41AM (#50043117)

    FTA: “Now, the system can recognise patterns that are either condescending or caring sentiments and can even send a text message to the user if the system thinks the post may be arrogant”

    On the one hand, maybe it's a good idea to notify users that their comments will likely be interpreted by most readers as having 'X' emotional tone. On the other hand, it may result in people habitually self-censoring to the extent that they show no warning signs before they explode (literally or figuratively) in some destructive action or activity.

    I'm also thinking that this kind of ongoing **parentalistic monitoring is the wet dream of corporate overlords and wannabe dictators the world over.

    --

    **A word I coined, not a spelling mistake...

    • Parentalistic... I like it. I'm gonna try this out and see how it works
    • by qaz123 ( 2841887 )
      >>I'm also thinking that this kind of ongoing **parentalistic monitoring is the wet dream of corporate overlords and wannabe dictators the world over.

      This is what we're heading toward with all these technologies: face recognition, thought recognition, etc.
    • by swell ( 195815 )

      Science fiction sometimes has a theme in which someone wants to die but is prevented from doing so, often by law, sometimes by some magical immortality gene. It's technically illegal to commit suicide in many places, with or without help.

      I'm of two minds. Suppose your employer considers you essential to her business, takes out a life insurance policy on you, and surrounds you with protection to prevent any 'accident'. You aren't allowed near any sharp objects. You are a wage slave (if not a sex slave), an invest

  • Yeah - I looked for the paper that won him the Amdocs prize but couldn't find it. All reports seem to be, um, based on this [timesofisrael.com] story, which is where I found that he trained the system using two Fffacebook pages (a rough sketch of that page-based labelling follows after this comment):

    posts on Hebrew-language Facebook pages that are almost pure opinion, called “superior and condescending people [facebook.com]” and “ordinary and sensible people [facebook.com].” The pages are basically forums for people to let off steam about things and events that get them mad, a substitute for actually confronting the offending person. Between them, the two pages have about 150,000 “likes,” and active traffic full of snarky, sarcastic, and sometimes sincere comments on politics, food, drivers, and much more.

    “Now, the system can recognize patterns that are either condescending or caring sentiments and can even send a text message to the user if the system thinks the post may be arrogant,” explained Saig.

    System Alert - Possible Arrogance Detected - user message issued [mailto]

    [ 328.0081004] Overtones Warning (bug): Optional FUBAR field Gpe1Block has zero address or length: 0x000000000000102C/0x0 (Sarcasm overflow)

    So it's a startup pitch - expect optimistic projections of outcomes. It's even possible (would it detect that?) that it's based on pure supposition - you know, like maybe the opinion of the machine learning program matched a reader's take on those Fffacebook pages.
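For what it's worth, the "trained on two Facebook pages" setup quoted above amounts to labelling every post by the page it came from (distant supervision). A rough sketch of that idea, with translated page names and a placeholder fetch function rather than any real Facebook API:

# Sketch of labelling posts by source page; collect_posts is a placeholder,
# not a real Facebook API, and the page names are English translations.
from typing import List, Tuple

def collect_posts(page_name: str) -> List[str]:
    # Placeholder: in reality this would come from a Graph API query or an export.
    return []

def build_dataset() -> List[Tuple[str, str]]:
    # Every post simply inherits the label of the page it was posted to,
    # which is cheap to collect but gives noisy labels.
    dataset = []
    for post in collect_posts("superior and condescending people"):
        dataset.append((post, "condescending"))
    for post in collect_posts("ordinary and sensible people"):
        dataset.append((post, "sensible"))
    return dataset

A classifier like the sketch near the top of the page could then be trained on such a dataset, which is also why the scepticism above is fair: the model learns whatever happens to separate the two pages, not "condescension" in any deeper sense.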

  • Oldie but goodie... Clippy and the suicide note [wikia.com].
  • ...a new machine learning technology [...] which can identify emotion in text messages and email, such as sarcasm...

    Of course it can detect sarcasm; the algorithms required are really simple.

  • Here we go again... (Score:2, Interesting)

    by mbeckman ( 645148 )
    On the heels of Google's "AI" that the WSJ claims got "testy" [slashdot.org] comes this claim of a "machine learning" system that can identify suicidal tendencies. Once again, BOGUS! The claim that machines learn anything is bogus to begin with, as to date no machine has ever done anything other than record information, as in so-called maze-learning programs. Learning is a cognitive process, and until we ourselves know how it works in humans (which we don't), we can never program a machine to learn anything.

    But the rea
  • Damn Auto Correct.
  • English... how about others, like Hebrew, for example?
    Or folks not writing in their native language on forums/social networks but in English, which may be a substantial group.
    Sounding sarcastic, critical, suicidal or otherwise emotional may not be authentic.
    Not sure what the actual benefit of using this would be, or how many false positives it could create if some institution like the DHS were to use such a thing.

    • Sounding sarcastic, critical, suicidal or otherwise emotional may not be authentic.

      Greek banks will run out of money next week. The positive side of this is that there will be no more queues in front of banks, because if there is no more money in the bank, there is no point in queuing in front of it.

      How would the algorithm rate that comment . . . ?

      • Greek banks will run out of money next week. The positive side of this is that there will be no more queues in front of banks, because if there is no more money in the bank, there is no point in queuing in front of it.

        How would the algorithm rate that comment . . . ?

        "I'm sorry Dave, but don't give up the dayjob."

  • Yeah, that'll work.

Real programmers don't comment their code. It was hard to write, it should be hard to understand.
