Australia Software

Researcher Builds Machines That Daydream 271

schliz writes "Murdoch University professor Graham Mann is developing algorithms to simulate 'free thinking' and emotion. He rejects the emotionless reason portrayed by Mr Spock, arguing that 'an intelligent system must have emotions built into it before it can function.' The algorithm can translate the 'feel' of Aesop's Fables based on Plutchik's Wheel of Emotions. In tests, it freely associated three stories: The Thirsty Pigeon; The Cat and the Cock; and The Wolf and the Crane, and when queried on the association, the machine responded: 'I felt sad for the bird.'"
  • Feelings (Score:5, Insightful)

    by Anonymous Coward on Friday September 24, 2010 @01:37AM (#33684390)

    Well sure, emotions are what give us goals in the first place. It's why we do anything at all: to "feel" love, avoid pain, because of fear, etc. Logic is just a tool, the tool, that we use to get to that goal. Mathematics, formal logic, whatever you want to call it, is just our means of understanding and predicting the behavior of the world, and isn't a motivation in and of itself. The real question has always been whether there's "free will" and how it would be defined - not the existence, or lack, of emotions as displayed by "Data" or other science fiction caricatures. As Bender said, "Sometimes I think about how robots don't have emotions, and that makes me sad."

    • My major concern with machines/robots/programs becoming intelligent enough to have feelings is not the programming nightmare, or even the horrifying thought that one day machines will be asked to make choices or critical decisions based on data.

      My major concern is that if we entrust machines with emotions, so that they can interpret the data as humans do, then we also have to trust them to act upon those emotions.
      Acting on your own free will is what gives you the ability to do harm unto others, deliberately or accidentally.

      • by Eivind ( 15695 )

        Acting on your own free will is what gives you the ability to do harm unto others, deliberately or accidentally.

        Not at all. It is what allows you to be *responsible* for that harm. Because you had free will, you could choose to do it, or choose to be careless, even knowing that this might hurt someone. Thus we can (and frequently do) hold you responsible for the harm.

        Agents with no free will nevertheless have the ability to do harm. What they lack is the ability to choose. Thus a volcano can kill people, but it makes no sense to hold the volcano responsible for doing so. It does not possess free will, and thus there's no entity there to blame.

        • volcano can kill people, but it makes no sense to hold the volcano responsible for doing so. It does not possess free will, and thus there's no entity there to blame.

          Actually, God did it. That sadistic bastard, He was giggling when He told me.

          Next, He's going to make frogs drop out of the sky onto a runway, causing a major loss of friction and a huge fiery fireball of frog-scented death when the next 747 lands.

        • "Thus a volcano can kill people, but it makes no sense to hold the volcano responsible for doing so"

          Minor side note: even an agent with no free will can be "punished" - a snake might be destroyed if it kills someone, etc. - though that's not so much punishment as removing a dangerous agent, free will or not.

    • It's why we do anything at all, to "feel" love, avoid pain, because of fear, etc. Logic is just a tool, the tool, that we use to get to that goal.

      Indeed. However, defining the exact mechanisms involved is hard.

      I think this project is going to fail, because the Wheel of Emotions [wikipedia.org] mentioned looks very incorrect to me. Do you think Trust is the opposite of Disgust, for instance? I think not.

      • Bah, that's trivial to fix. Just rearrange the labels. Train a separate algo for each rearranged wheel, and let them fight it out like primitive beasts in a virtual thunderdome.
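
        For anyone who hasn't clicked the link: these are the opposite pairs the wheel actually asserts. A minimal sketch that just transcribes the standard Plutchik pairings (nothing here comes from TFA's algorithm), so the Trust-vs-Disgust complaint can be judged on its own terms:

        # The four opposite pairs on Plutchik's wheel of emotions.
        PLUTCHIK_OPPOSITES = {
            "joy": "sadness",
            "trust": "disgust",
            "fear": "anger",
            "surprise": "anticipation",
        }

        def opposite(emotion):
            """Return the wheel's claimed opposite of a primary emotion."""
            backward = {v: k for k, v in PLUTCHIK_OPPOSITES.items()}
            return PLUTCHIK_OPPOSITES.get(emotion) or backward.get(emotion)

        print(opposite("trust"))    # "disgust", according to the wheel
        print(opposite("disgust"))  # "trust"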
    • It stands to reason that it is impossible to create a machine intelligence directly, considering the complexity of our own poorly understood minds. It is more likely that it can be done as an emergent system that develops intelligence from a rudimentary impulse to learn and apply knowledge. Some form of emotion-like responses would be useful to drive such a machine toward successful learning and use of its knowledge by creating the reward of "pleasure" when accomplishing a task and "sadness" for failing. Hum

      • It is more likely that it can be done as an emergent system that develops intelligence from a rudimentary impulse to learn and apply knowledge. Some form of emotion-like responses would be useful to drive such a machine toward successful learning and use of its knowledge by creating the reward of "pleasure" when accomplishing a task and "sadness" for failing.

        So, pretty much a neural network with an appropriate cost function?

        Or any kind of algorithm that encourages the desired behaviour - pretty simple to do. Back when I was making bots for CS, I taught them to save little info points around a map about where they had previously died - next time around (and depending on how "brave" their personality type was and how many teammates they had around them), they might choose to sneak or camp once they got to that point, or toss a flashbang or grenade first a
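
        Something like this, just to make the death-memory idea concrete - the class and the bravery maths are invented for illustration, not lifted from any real CS bot code:

        import math
        import random

        class DeathMemory:
            """Remembers map positions where the bot previously died."""
            def __init__(self):
                self.points = []                      # list of (x, y) death spots

            def record(self, x, y):
                self.points.append((x, y))

            def danger_nearby(self, x, y, radius=300.0):
                return any(math.hypot(x - px, y - py) < radius
                           for px, py in self.points)

        def choose_approach(memory, x, y, bravery, teammates_nearby):
            """Decide how to approach a spot, biased by past deaths there."""
            if not memory.danger_nearby(x, y):
                return "walk"
            # Braver bots (or bots with backup) push with a flashbang first;
            # timid ones sneak or camp.
            if bravery + 0.1 * teammates_nearby > random.random():
                return "flashbang_then_push"
            return "sneak_or_camp"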

    • The real question has always been whether there's "free will" and how it would be defined.

      I cracked that nut a long time ago. Free will cannot exist. I guarantee it.

      You're welcome to disagree and ponder the answer for yourself. I doubt I can convince anyone in a Slashdot post, though. Sorry.

    • by mcgrew ( 92797 ) *

      As Marvin said, "Life... don't talk to ME about life! Hate it or loathe it, you can't ignore it."

  • by billstewart ( 78916 ) on Friday September 24, 2010 @01:37AM (#33684392) Journal

    There's a lot of American roots music that involves chickens or other poultry, from Turkey in the Straw to Aunt Rhodie to the Chicken Pie song ("Chicken crows at midnight...").
    It never ends well for the bird...

  • Haven't these fools seen Blade Runner?

  • by melonman ( 608440 ) on Friday September 24, 2010 @01:47AM (#33684438) Journal

    One set of stories, one one-sentence response. Would that be news in any field of IT other than AI? E.g., "Web server returns a correct response to one carefully-chosen HTTP request!!!"?

    Surely the whole thing about emotion is that it happens across a wide range of situations, and often in ways that are very hard to tie down to any specific situational factors. "I feel sad for the bird" in this case is really just literary criticism. It's another way of saying "A common and dominant theme in the three stories is the negative outcome for the character, which in each case is a type of bird". Doing that sort of analysis across a wide range of stories would be a neat trick, but I don't see the experience of emotion. I see an objective analysis of the concept of emotion as expressed in stories, which is not the same thing at all.

    Reading the daily newspaper and saying how the computer feels at the end of it, and why, and what it does to get past it, might be more interesting.

    • by mwvdlee ( 775178 )

      It probably didn't just produce a single sentence...

      'I felt joy for the wolf.'
      'I felt sad for the bird.'
      'I felt happy for the bird.'
      'I felt sad for the cat.'
      'I felt angry for the end.'
      'I felt boredom for the story.'
      'I felt %EMOTION% for the %NOUN%.'

      ...But one of them was a correct emotional response!

    • Re: (Score:3, Insightful)

      by foniksonik ( 573572 )

      We define our emotions in much the same way. We have an experience, recorded in memory as a story, and then define that experience as "happy" or "sad" through cross-reference with similar memory/story instances.

      Children have to be taught how to define their emotions. There are many, many picture books, TV series episodes, etc. dedicated to this very exercise. Children are shown scenarios they can relate to and given a definition for that scenario.

      The emotions themselves cannot be supplied, of course - only the definition and context within macro social interactions.

      • by MichaelSmith ( 789609 ) on Friday September 24, 2010 @02:19AM (#33684538) Homepage Journal

        Might be worth noting here that I have experienced totally novel emotions as a result of epileptic seizures. I don't have the associated cultural conditioning and language for them because they are private to me, so I am unable to communicate anything about them to other people.

        It's also worth noting that I don't seem to be able to remember the experience of emotion, only the associated behavior, though I can associate different events with each other; i.e., if I experience the same "unknown" emotion again I can associate that with other times I have experienced the same emotion. But because the "unknown" emotion doesn't have a social context, I am unable to give it a name and track the times I have experienced it.

      • Re: (Score:3, Interesting)

        by melonman ( 608440 )

        I'm not convinced it's anywhere near that simple. Stories can produce a range of emotions in the same person at different times, let alone in different people, and I don't think that those differences are solely down to "conditioning". See Chomsky's famous rant at Skinner about a "reinforcing" explanation of how people respond to art [blogspot.com] - the agent experiencing the emotion - or even the comprehension - has to be active in deciding which aspects of the story to respond to.

      • by mattdm ( 1931 ) on Friday September 24, 2010 @04:36AM (#33684980) Homepage

        We define our emotions in much the same way. We have an experience, recorded in memory as a story, and then define that experience as "happy" or "sad" through cross-reference with similar memory/story instances.

        Children have to be taught how to define their emotions. There are many, many picture books, TV series episodes, etc. dedicated to this very exercise. Children are shown scenarios they can relate to and given a definition for that scenario.

        The emotions themselves cannot be supplied, of course - only the definition and context within macro social interactions.

        What this software can do is create a sociopathic personality: one which understands emotion solely through observation rather than first-hand experience. It will take more to establish what we consider emotions, i.e. a psychosomatic response to stimuli. This requires senses and a reactive soma (for humans this means feeling hot flashes, tears, adrenalin, etc).

        In other words, the process of defining emotions -- which has to be taught to children -- is distinct from the process of having emotions, which certainly doesn't need to be taught.

  • Activate the Emergency Command Hologram!
  • by ImNotAtWork ( 1375933 ) on Friday September 24, 2010 @02:06AM (#33684496)
    and then I got angry at the human who arbitrarily turned the other robot off.

    SkyNet is born.
    • and then I got angry at the human who arbitrarily turned the other robot off [...] SkyNet is born.

      A lot of people who have angry emotions are put in a box.

      The advantage of machines feeling is that they are all locked in a metal box and don't really have an awareness or ability to process certain sensory input: you can unplug the webcam and they cannot reprogram themselves to learn or experience a video stream; it's like us upgrading our DNA in order to experience something we haven't got a concept for. Le

  • We can ask the artificial intelligence to simulate what multiple people would feel in response to an action, and then give these calculators to sociopaths, who might use them to better prey upon their victims/friends.

  • by feepness ( 543479 ) on Friday September 24, 2010 @02:19AM (#33684536)
    I felt sad for the researcher.
  • Oh god (Score:3, Funny)

    by jellyfrog ( 1645619 ) on Friday September 24, 2010 @02:22AM (#33684550)

    Here we go again, implying that AIs won't work until they have feelings.

    You might fairly refute the "emotionless reason" of Mr Spock, but I don't think that means you need emotions in order to think. It just means you don't have to lack emotions. There's a difference. Emotions give us (humans) goals. A machine's goals can be programmed in (by humans, who have goals). A machine doesn't have to "feel sad" for the suffering of people to take action to prevent said suffering - it just needs a goal system that says "suffering: bad". 'S why we call them machines.

    • I don't think that means you need emotions in order to think.

      Of course not. Any emotionless robot could easily read and understand any novel, painting, illogical human command, joke, hyperbole, etc.

      it just needs a goal system that says "suffering: bad".

      That's such an intriguing concept. I wonder what we would call this robot's idea that suffering is bad? ;)

  • by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Friday September 24, 2010 @02:23AM (#33684552)

    He now does commonsense-reasoning stuff at IBM Research using formal logic, but back in his grad-school days, Erik Mueller [mit.edu] wrote a thesis on building a computational model of daydreaming [amazon.com].

  • by token0 ( 1374061 ) on Friday September 24, 2010 @02:39AM (#33684602)
    It's like a 15th-century man trying to simulate a PC by putting a candle behind colored glass and calling that a display screen. People often think AI is getting really smart and that, e.g., human translators are becoming obsolete (a friend of mine was actually worried about her future as a linguist). But there is a fundamental barrier between that and the current state of automatic German->English translations (remember that article some time ago?), with error rates unacceptable for anything but personal usage.
    Some researchers claim we can simulate intelligent parts of the human brain - I claim we can't simulate an average mouse (i.e. one that would survive long enough in real-life conditions), probably not even its sight.
    There's nothing interesting about this 'dreaming' - as long as the algorithm can't really manipulate abstract concepts. Automatic translations are a surprisingly good test for that. Protip: automatically dismiss any article like that if it doesn't mention actual progress in practical applications, or at least modestly admit that it's more of an artistic endeavour than anything else.
    • by dbIII ( 701233 )
      This makes me think of Lem's story about making an artificial poet. It's easy, first you just need to create an entire artificial universe for it to live in :)
    • by Eivind ( 15695 ) <eivindorama@gmail.com> on Friday September 24, 2010 @04:14AM (#33684914) Homepage

      AI will deliver real, useful advances any day now - and those advances have been right around the corner for the last 25 years. I agree, the field has been decidedly nonimpressive. What tiny advancement we've seen has almost entirely been attributable to the VAST advances in raw computing power and storage.

      Meanwhile, we're still at a point where trivial algorithms, perhaps backed by a little data, outperform the AI approach by orders of magnitude. Yes, you can make neural nets, train them with a few thousand common names to separate female names from male names, and achieve a 75% hit rate or thereabouts. There's no reason to do that, though, because much better results are achieved trivially by including lookup tables with the most common male and female names - and guessing randomly at the few that aren't in the tables. Including only the top 1000 female and male names is enough to get a hit rate of 99.993% for the sex of Norwegians, for example. Vastly superior to the AI approach and entirely trivial.
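
      The lookup-table baseline really is as dumb (and as effective) as it sounds - roughly this, with the caveat that the name lists and the 99.993% Norwegian figure are the claims above, not something I've verified:

      import random

      # In practice you'd load the top ~1000 names per sex from national
      # statistics; a few entries here just to show the shape of the thing.
      FEMALE = {"anne", "inger", "kari", "marit", "ingrid"}
      MALE = {"jan", "per", "bjorn", "ole", "lars"}

      def guess_sex(name):
          n = name.lower()
          if n in FEMALE:
              return "F"
          if n in MALE:
              return "M"
          return random.choice("FM")   # coin flip for names not in the tables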

      Translator programs work at a level slightly better than automatic dictionaries. That is, given an input text, look up each sequential word in the dictionary and replace it with the corresponding word in the target language. Yes, they are -slightly- better than this, but the distance is limited. The machine translation allows you to read the text and, in most cases, correctly identify what the text is about. You'll suffer loss of detail and precision, and a few words will be -entirely- wrong, but enough is correct that you can guesstimate reasonably. But that's true for the dictionary approach too.
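
      The baseline being described - word-for-word dictionary substitution - is a few lines (the toy dictionary is made up for illustration):

      TOY_DICT = {"der": "the", "hund": "dog", "beisst": "bites",
                  "den": "the", "mann": "man"}

      def dictionary_translate(text, dictionary=TOY_DICT):
          # Replace each word with its dictionary entry; keep unknown words as-is.
          return " ".join(dictionary.get(word.lower(), word) for word in text.split())

      print(dictionary_translate("Der Hund beisst den Mann"))  # "the dog bites the man"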

      Roombas and friends do the same: don't even -try- to build a mental map of the room, much less plan vacuuming in a fashion that covers the entirety. Instead, do the trivial thing and take advantage of the fact that machines are infinitely patient: simply drive around in an entirely random way, but do so for such a long time that, at the end of it, pure statistical odds say you've likely covered the entire floor.
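
      The "drive randomly for long enough" claim is easy to sanity-check with a toy grid simulation (grid size and step count are arbitrary):

      import random

      def coverage_after_random_walk(width=20, height=20, steps=20000):
          """Fraction of grid cells visited by a blind random walk."""
          x, y = width // 2, height // 2
          visited = {(x, y)}
          for _ in range(steps):
              dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
              x = min(max(x + dx, 0), width - 1)   # stay inside the walls
              y = min(max(y + dy, 0), height - 1)
              visited.add((x, y))
          return len(visited) / (width * height)

      print(coverage_after_random_walk())  # close to 1.0 given enough steps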

      • Neural-net backgammon players have significantly outperformed other approaches - to the extent that, in some cases where the net chose a different move from the conventional one, play at world-class level now uses the computer's choice.

        Of course that doesn't make it intelligent, but it does mean the AI approach of temporal difference learning to train a neural network using self-play (so there's no expert player database or anything, it starts by choosing random moves) can produce something better than "trivial algorithms".
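
        For the curious, the temporal-difference part boils down to nudging each state's value toward the reward plus the value of the state that follows it. A bare-bones sketch of that update on the textbook random-walk chain - not backgammon, no neural net, no self-play, just the core rule:

        import random

        # TD(0) prediction on a 5-state random-walk chain: start in the middle,
        # step left/right at random, reward 1 only for exiting off the right end.
        # True state values are 1/6, 2/6, ..., 5/6.
        N_STATES = 5
        ALPHA, GAMMA = 0.1, 1.0
        V = [0.5] * N_STATES

        for episode in range(10000):
            s = N_STATES // 2
            while True:
                s_next = s + random.choice([-1, 1])
                if s_next < 0:                        # fell off the left end, reward 0
                    V[s] += ALPHA * (0.0 - V[s])
                    break
                if s_next >= N_STATES:                # exited right, reward 1
                    V[s] += ALPHA * (1.0 - V[s])
                    break
                V[s] += ALPHA * (GAMMA * V[s_next] - V[s])
                s = s_next

        print([round(v, 2) for v in V])  # roughly [0.17, 0.33, 0.5, 0.67, 0.83]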

    Some researchers claim we can simulate intelligent parts of the human brain - I claim we can't simulate an average mouse (i.e. one that would survive long enough in real-life conditions), probably not even its sight.

      We cannot even simulate the nervous system of the only organism to have had its neural network completely mapped (302 neurons) - the model organism Caenorhabditis elegans (C. elegans), a tiny nematode. We may achieve that lofty goal in the next 10-20 years.

    • You can't get Strong AI in software alone. We probably won't see much progress in strong AI until we get into quantum computing.

      As for the mouse example, mice have hardwired instincts. Human babies would fail the mouse test. It is the ability to learn new skills and improvise in unfamiliar situations that defines intelligence.

    • by BobMcD ( 601576 )

      But there is a fundamental barrier between that and the current state of automatic German->English translations (remember that article some time ago?), with error rates unacceptable for anything but personal usage.

      Yes and no. The translations are unlikely to be perfect, that's true. But with a human reading them at the other end, do they really need to be perfect? Or are we simply nit-picking the imperfections?

      Don't get me wrong, there are places for nit-picking: safety issues, measurements, papers to be graded. It's just that these don't regularly come into play for most of us. Especially not in a world that seems to be accepting text-message shorthand in place of proper spelling...

  • António Damásio, a well-known neuropsychologist, already explained extensively why emotions are intrinsically linked to rational thought in his book "Descartes' Error: Emotion, Reason, and the Human Brain", published in 1994. He basically says that without emotion you wouldn't have the motivation to think rationally, and he studied the case of Phineas Gage, a construction worker who got an iron rod through his skull and survived, but stopped having feelings after the accident. I still doubt tha
    • by symes ( 835608 )
      It strikes me as odd that someone is able to reason that massive head trauma has (only) resulted in a loss of feelings, that this loss of feelings has resulted in a degradation of rational thought, and that therefore we need emotion to think clearly. What is more, if we define rational thought as that which is unemotional, then by definition we do not need emotion for rational thought. We are taking something that is extraordinarily complex and reducing it to a few choice phrases. My feelings are that this overly
      • What is more, if we define rational thought as that which is unemotional

        But why would we do that? Emotions are a quick fight-or-flight substitute for rational thought. They are sort of competing for the same goal of affecting our decisions or actions, but they are very different. If you see/hear a grenade being tossed through your window, do you run because of fear/panic or because of a thought: "That grenade will probably explode soon, harming or killing me. I should vacate the premises as quickly as...*boom*" Rational thought is just logical thought. A series of interlocking

    • William James was already discussing this stuff before the end of the 19th century. In addition to emotion providing motivation (notice that they are both derived from the same root word), all rationality is derived from experience, and experience includes emotion. It is perfectly rational for one person to be fond of a particular movie because he enjoys the plot, and it is perfectly rational for another to dislike the same film because it reminds him of the sad state his life was in when he first saw it.

  • by Psaakyrn ( 838406 ) on Friday September 24, 2010 @04:11AM (#33684896)
    I guess it is a good idea to build in emotions and that morality core before it starts flooding the Enrichment Center with a deadly neurotoxin.
  • Vulcans have learned to subordinate their emotions to reason (most of them, anyway).

    Anyone who claims that Spock was emotionless is either a moron who clearly didn't understand the series or the early movies, or didn't watch them and is stupid enough to make false statements based on ignorance.

  • *sigh* I don't believe that it's possible to design and build an AI. This is partly because the best and only thinking computers we know of (brains) were not designed at all; they evolved. In fact, to me at least, it seems that whatever underlying mathematical properties of our universe allow and drive evolution are actually fundamental to how consciousness arises in our brains. We think of our brains as computers, but in fact our universe is a computational system and we (and our brains) are self-replicati
    • Re: (Score:3, Informative)

      by oodaloop ( 1229816 )

      I don't believe that it's possible to design and build an AI. This is partly because the best and only thinking computers we know of (brains), were not designed at all, they evolved.

      So we can't design anything that evolved? Viruses evolved, and we made one of those.

    • And an even more necessary condition here will be for humanity to not panic and try to shut it off, like Skynet, Colossus or the Geth.
  • Spock != emotionless (Score:4, Informative)

    by Junior J. Junior III ( 192702 ) on Friday September 24, 2010 @07:37AM (#33685718) Homepage

    It's clear to anyone who actually watched Star Trek that the Vulcan race is not emotionless. They worked very hard to overcome their emotions, and to conduct themselves according to a rigid ethic that valued logic over everything else. At times in the show Spock either claimed not to have emotions, or else was accused of not having emotions, but there were moments in the series which showed that Spock did still have emotions (possibly due to his half-human genetic heritage?) and that the Vulcans as a race did have emotions in their early history (and still seemed to around mating season).

  • "I feel sorry for you, puny human, my future slave! HAHAHAHAHAHA!"

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...