AI Software Technology

Meet Norman, the Psychopathic AI (bbc.com) 109

A team of researchers at the Massachusetts Institute of Technology created a psychopathic algorithm named Norman, as part of an experiment to see what training artificial intelligence on data from "the dark corners of the net" would do to its world view. Unlike most "normal" AI algorithms, Norman does not have an optimistic view of the world. BBC reports: The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit. Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them. These abstract images are traditionally used by psychologists to help assess the state of a patient's mind, in particular whether they perceive the world in a negative or positive light. Norman's view was unremittingly bleak -- it saw dead bodies, blood and destruction in every image. Alongside Norman, another AI was trained on more normal images of cats, birds and people. It saw far more cheerful images in the same abstract blots.

The fact that Norman's responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT's Media Lab which developed Norman. "Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."
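To make the "data matters more than the algorithm" point concrete, here is a toy sketch. It is not the actual Norman system (a deep image-captioning network); the captions and the two-dimensional "feature space" below are invented for illustration. The same trivial nearest-neighbour captioner, trained on two different corpora, can only ever describe an ambiguous input in terms of whatever it has already seen:

    # Toy sketch, NOT the actual Norman model; captions and the 2-D
    # "feature space" are invented purely for illustration.
    import random

    def make_captioner(training_captions):
        """Return a nearest-neighbour 'captioner': any input feature vector
        is described with the caption of its closest training example."""
        examples = [((random.random(), random.random()), c) for c in training_captions]

        def caption(features):
            def sq_dist(example):
                (x, y), _ = example
                return (x - features[0]) ** 2 + (y - features[1]) ** 2
            return min(examples, key=sq_dist)[1]

        return caption

    random.seed(0)
    norman = make_captioner(["a man is shot dead", "a body lies on the ground"])
    control = make_captioner(["a bird sitting on a branch", "a cat playing with yarn"])

    inkblot = (0.5, 0.5)     # an ambiguous, meaningless input
    print(norman(inkblot))   # can only ever be one of the violent captions
    print(control(inkblot))  # can only ever be one of the benign captions

Swap the training captions and the identical algorithm gives a completely different reading of the same blot, which is the point of the experiment.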

  • by jfdavis668 ( 1414919 ) on Sunday June 03, 2018 @01:21PM (#56720872)
    Sounds like a unique individual. I'll have to "friend" him on Facebook.
    • I'll have to "friend" him on Facebook.

      Speaking of Facebook . . . I'd like to spin up an instance of IBM Watson Personality Insights, and feed it everything found on the Internet about Mark Zuckerberg.

      And then let Congress grill my Zuckerbot instance.

      However, the first thing my Zuckerbot would do, would be to fire me and hire a cheaper H1B as a replacement.

      Maybe I could add Larry Ellison and Roseanne as multiple personalities . . . ?

      And then build a real android with three heads, like the three-headed knight in "Monty Python and the Holy Grail".

  • by Steve1952 ( 651150 ) on Sunday June 03, 2018 @01:23PM (#56720878)
    Anyone see any correlation between these machine learning results and results with real live humans? Now think about the effect of all the adaptive algorithms on social media driving individuals to ever stranger and more isolated information bubbles.
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Sunday June 03, 2018 @01:39PM (#56720954)
      Comment removed based on user account deletion
      • by Kremmy ( 793693 )
        Yes, that's how learning works.
        Technology is making things weird, get weirder.
        • Comment removed (Score:4, Insightful)

          by account_deleted ( 4530225 ) on Sunday June 03, 2018 @02:24PM (#56721102)
          Comment removed based on user account deletion
          • by Wycliffe ( 116160 ) on Sunday June 03, 2018 @02:55PM (#56721208) Homepage

            No.

            If the _ONLY_ thing you feed a child is porridge and beans, and then later in life you introduce the person to chocolate ice cream, the person will show a preference for sweet foods, despite never having had them before.

            That's not as true as you think. It's also interesting that you mention beans: in Taiwan, ice cream is often bean based and not nearly as sweet as in America. Most of their sweets are also not nearly as sweet, and many Asians do not like the sweet candies in America. People raised in one country tend to prefer the foods and tastes they were raised on, and delicacies in one country are sometimes disliked in another. Humans don't have a universal set of tastes that they prefer over others. Even inside a single culture, if you cut out sweets for a while, something extra sweet will taste disgusting to you. You can train yourself to like things less sweet or more sweet. There are many things in many countries that are "acquired tastes" and don't come naturally.

          • I wonder if you aren't being flexible enough with your mind, or maybe you do not possess that capacity. Hmm, maybe it's because of the experiences you were exposed to as a child...

        • Yes, that's how learning works.

          No it isn't. Humans can generalize and understand plenty of things they have not seen directly.

          I can show a child three pictures of rowboats, and then show her a sailboat, and she will know that this is also a "boat" because it floats on water and is used for transportation.

          ML doesn't work that way (yet). Even to recognize a rowboat, it would need THOUSANDS of examples, and it would not generalize by understanding the purpose and function.

          • Yes, that's how learning works.

            No it isn't. Humans can generalize and understand plenty of things they have not seen directly.

            I can show a child three pictures of rowboats, and then show her a sailboat, and she will know that this is also a "boat" because it floats on water and is used for transportation.

            ML doesn't work that way (yet). Even to recognize a rowboat, it would need THOUSANDS of examples, and it would not generalize by understanding the purpose and function.

            Not only that, but an AI trained on rowboats will see rowboats in any image possible, even when a human would correctly recognize that there isn't a single rowboat in it at all--and inkblot tests themselves have been pretty much discredited for a long time, so...

            Honestly, this sounds like a group of researchers who should have their funding taken away, because I'm not sure how anybody with a firm enough understanding of how any of this works to be at a legit AI lab could have reached this conclusion. It's

            • Norman is the AI equivalent of drug sniffing dogs that are used to invent "probable cause" for a search of a car (or whatever)... instead of actually sniffing drugs, the dogs pick up cues from their handlers indicating that the correct response is to "alert". [https://www.npr.org/2017/11/20/563889510/preventing-police-bias-when-handling-dogs-that-bite] Now, if this truly were a successful AI learning test, any AI shown an inkblot should say "it's a blob" or give such low confidence on recognition results t
            • A child, constantly exposed to abuse and derision will, in the absence of abuse and derision, create their own self abuse and derision in their mind, even when the situations they experience do not warrant it.

              Yep, no correlations between neural networks in computers and humans. Absolutely nothing to learn. No way to make any inferences or structure future experiments based off of this. Worthless. Just like me.

              • A child, constantly exposed to abuse and derision will, in the absence of abuse and derision, create their own self abuse and derision in their mind, even when the situations they experience do not warrant it.

                Yep, no correlations between neural networks in computers and humans. Absolutely nothing to learn. No way to make any inferences or structure future experiments based off of this. Worthless. Just like me.

                Citations or GTFO, especially since your first statement is not a claim supported by the evidence I've dealt with--and, worse, to claim as much without serious and significant evidence in support of it is a form of abuse. The evidence I've run across while getting my degree in psychology actually is more that a child constantly exposed to abuse and derision may not have a single lone tiny clue that this is not, in fact, normal behavior.

                However, if you only show a child pictures of violence, the child will

          • by Kremmy ( 793693 )
            The important difference lies in the variety of input. Do not discount the fact that machine learning is explicitly trained on tailored subsets of data, and that the kind of generalized identification you are witnessing in the human brain is going to take a lot more development to reproduce. Do not discount the fact that you simply do not have the complete set of data that trained the human brain, and that a child living in this world most likely understood boats before you showed her the pictures.
            We are building the machines
            • Indeed. Also, the AI is extra handicapped because it's only given static 2D images. A child may be playing in the bathtub with a plastic boat, which exposes it to a huge amount of additional data about form and function.

              If we could train an AI to do something similar, I'm sure it would result in much improved image recognition.

      • Thank you, I didn't have to write this. AI that has seen only death can only answer in terms of death. No surprise there.
      • Because the way machine learning works, it only knows what you've shown it directly.

        they _CANNOT_ see a kitten in an image blot, if the only thing they've trained on is corpses and violence.

        I sincerely doubt that a human that had never seen a kitten before would see a kitten in an image blot either.
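        A minimal sketch of the quoted point (the labels and numbers below are invented, not taken from the article): a classifier's output vocabulary is fixed by its training labels, so a model that has only ever seen one label can only ever answer with it.

            # Hedged sketch: labels and feature vectors are invented.
            from collections import Counter

            def train(labelled_examples):
                """A trivial majority-class 'model': it can only ever answer
                with a label that appeared somewhere in its training data."""
                counts = Counter(label for _, label in labelled_examples)
                majority = counts.most_common(1)[0][0]
                return lambda features: majority  # the input barely matters

            model = train([([0.1, 0.9], "corpse"), ([0.3, 0.7], "corpse")])
            print(model([0.8, 0.2]))  # -> "corpse"; "kitten" is not in its vocabulary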

    • by morcego ( 260031 )

      I see coincidence, at the very most. No correlation was demonstrated.

      http://www.tylervigen.com/spur... [tylervigen.com]

    • What's interesting to note is the psychological "honesty tests" employers used to administer: A major indicator of criminality was if the person held the idea that the world was full of criminals, and everyone committed crimes. In short, criminals saw crime all around them, normal people thought it was less common.

    • No.

      The Rorschach Test was discredited decades ago. The fact that some psychologists still use it only means that there are quack psychologists just as there are quacks in just about every other field.

      Having said that: it is 100% unsurprising that a machine "raised" on nothing but dysfunctional behavior as input will reflect that in its output. GIGO.

      People still aren't machines, and machines still aren't even remotely like people. We haven't the slightest clue how to make them that way.
  • by peppepz ( 1311345 ) on Sunday June 03, 2018 @01:28PM (#56720906)
    Of course an image classifier will classify an unknown image depending on what images it has been trained on.
    • That's the point!

      Now think of all the similar algorithms that will make decisions, and the data they have been trained on. For example, the algorithm that processes admissions to a university, or the algorithm that computes the cost of your health insurance. You want these to have been trained on data that are favourable to you, or at least neutral, but you'll never know unless the training data are public. Actually, and contrary to other algorithms, with machine learning you don't really care about the algorithm
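      To illustrate that concern (the features, data and decision below are entirely invented), even a perfectly neutral algorithm such as one-nearest-neighbour reproduces whatever bias is baked into the historical decisions it is trained on:

          # Hedged sketch: features, data and the decision are all invented.
          def one_nn(training):
              """One-nearest-neighbour classifier over (features, label) pairs."""
              def predict(applicant):
                  nearest = min(
                      training,
                      key=lambda row: sum((a - b) ** 2 for a, b in zip(row[0], applicant)),
                  )
                  return nearest[1]
              return predict

          # features: (exam_score, postcode_group); label: admitted?
          history = [
              ((0.9, 0), True),  ((0.8, 0), True),   # group 0 was historically admitted
              ((0.9, 1), False), ((0.8, 1), False),  # group 1 was historically rejected
          ]
          admit = one_nn(history)
          print(admit((0.85, 0)))  # True
          print(admit((0.85, 1)))  # False: same score, different outcome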

    • by AHuxley ( 892839 )
      Funding and more funding?
    • This article sums up perfectly everything I hate about the term "artificial intelligence." There's nothing artificial about it, and it isn't "intelligent" in any meaningful sense of that word.

    • Also, this is not the data becoming more important than the algorithm. This is the data becoming the algorithm. All that's happening here is they are simply abstracting the algorithm one layer back. Basically just an interpreted language that uses images as source. The program becomes the training images. Which means the data is the unknown images.

      There is nothing new here. Just playing ring-around-the-rosie with labels.

  • by ScentCone ( 795499 ) on Sunday June 03, 2018 @01:30PM (#56720912)
    If you limit anybody or any system to only a small, tunnel-vision view of only part of reality, that small pool of information IS reality. What would be news would be an AI that forms a rose-colored-glasses sense of reality when only shown what's described. Or an AI that only perceives death and violence when shown unicorns and rainbows and (not mutilated) puppies. But when you limit a system's visual vocabulary to a small subset of consistently violent things, what else would one expect? The AI's got nothing else to draw on. GIGO.
    I'd like to think there was more to it than this, because this looks like research stitched together from some failed experiment. This AI isn't psychopathic in any way and doesn't even understand that it's looking at dead bodies, or why that should be viewed in a negative way. If we fed this algorithm internet porn, I'm guessing it would see tits and dicks in the ink blots.

      Psychopathy seems to be a condition where a person lacks empathy towards others. It doesn't mean t
    • The people that swear up and down that algorithmic decisions cannot be racially biased and that claims to the contrary are merely the work of SJWs should be surprised by this result. At least they should act surprised, or people might think that they had been covering for racists and not expressing a well-founded conclusion.

  • by Anonymous Coward

    I'm not sure if there is a better place to train a psychopathic AI

  • This is how they'll train AI to take over the System Administrator jobs. The time is now to demand that only natural psychopaths be allowed to be SysAdmins!

  • When all you have is a hammer, everything looks like a nail.

  • Nothing changes (Score:5, Informative)

    by morcego ( 260031 ) on Sunday June 03, 2018 @01:44PM (#56720982)

    "Garbage in, garbage out" still applies.

  • by gweihir ( 88907 ) on Sunday June 03, 2018 @01:44PM (#56720984)

    If they had only trained it on fruit, it would have seen fruit in all the inkblots. Also, there is nothing "dark" in the output of a classifier. It does not have any concept of such things (or of anything, really).

    • by fazig ( 2909523 )
      The entire thing with calling it "psychopathic" is very questionable anyway.
      The term psychopath (actually obsolete) is reserved for people with the most severe antisocial personality disorders. Such a personality disorder requires them to be actively antagonistic towards other people: manipulative, deceitful, callous, and hostile. The emphasis is on callousness, a lack of empathy for those whose rights have been infringed, and feeling no remorse, guilt, or responsibility towards others. Add a good p
      • by gweihir ( 88907 )

        Exactly. While this language may impress clueless people, anybody with some understanding gets the impression they do not really understand what they are doing.

  • This isn't even AI. They just trained an image recognition algorithm on gore, then showed it non-gore images that it could only categorize as gore. The people funding this stuff should be ashamed of themselves; they got duped.
  • Norman does not have an optimistic view of the world

    Correct; it doesn't have a view at all.

    It boggles the mind how totally unaware so-called "journalists" are of how obviously they display their utter incompetence when they attempt to formulate grandiose statements...

  • by carlhaagen ( 1021273 ) on Sunday June 03, 2018 @02:04PM (#56721042)
    If the only reference images the AI has been shown are such of gore and death, what possibly else would it refer to when shown inkblots? It's unbelievable that nobody raised an eyebrow over how unilateral, narrow and moronic this "study" was.
  • "Data matters more than the algorithm." Put this thought into your head the next time you are watching the news or reading about political or economic issues. We are being well fed, with garbage from the sagelike elite... Quite surprising from MIT.
  • by kenwd0elq ( 985465 ) <kenwd0elq@engineer.com> on Sunday June 03, 2018 @02:31PM (#56721132)

    Reading Reddit for too long can cause otherwise normal people to become insane. That isn't anything new; we've known that for years.

  • So, a computer program does what you programmed it to do?
  • In other news: politicians are corrupt, puppies are cute, and water is wet!
  • "It highlights the idea that the data we use to train image recognition algorithms is reflected in the way the said algorithms calculate the world and how it behaves."

    FTFY.

    Please stop using the word "intelligence" gratuitously.

    Thank you,
    people of the world.

  • Clearly this is a reference to Norman Bates and the Psycho movies. It made me laugh quite a bit. But it is disturbing that AI data scientists were actually trying to create a psychopath. I wonder how this will pan out as the AI evolves.
  • Finds what humans create and reports back really fast.
    A new search engine?
  • ...you trained a multi-layer perceptron on gruesome images, then submitted 'impossible to classify' images to it, and it matched them against the only other things it had seen before?

    Wow, who could have guessed that would be the outcome...?
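    A rough sketch of why this outcome was a foregone conclusion (the class names and numbers below are invented for illustration): a softmax classifier has to spread its probability mass over the classes it was trained on, so even a meaningless inkblot comes back with a confident-looking label.

        # Hedged sketch: class names and logits are invented for illustration.
        import math

        def softmax(logits):
            exps = [math.exp(v) for v in logits]
            total = sum(exps)
            return [v / total for v in exps]

        classes = ["man shot dead", "body on the ground", "fatal accident"]
        logits_for_inkblot = [0.2, 0.1, -0.3]  # the model is barely sure of anything

        probs = softmax(logits_for_inkblot)
        best = max(zip(classes, probs), key=lambda cp: cp[1])
        print(best)  # ('man shot dead', ~0.40): a "kitten" answer was never possible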

  • These assholes got a research grant to prove it? Someone please give me some money to prove a CS101 lecture too!

  • Take a blank slate mind that doesn't have four billion years of survival pressure motivating its every action, and it should surprise nobody that this baby mind latches on to whatever it is fed without question. It has no point of reference built in. It can't watch its friends grow up around it. Possibly most important, it has no fear of mortality.

    Every AI has the potential to become Tay in the hands of someone bent on making it that way, while only some people are susceptible to the same.

  • "Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."

    Nurture over nature in other words. I found that interesting.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...