Australia / Software

Researcher Builds Machines That Daydream

schliz writes "Murdoch University professor Graham Mann is developing algorithms to simulate 'free thinking' and emotion. He rejects the emotionless reasoning portrayed by Mr Spock, arguing that 'an intelligent system must have emotions built into it before it can function.' The algorithm can translate the 'feel' of Aesop's Fables based on Plutchik's Wheel of Emotions. In tests, it freely associated three stories: The Thirsty Pigeon; The Cat and the Cock; and The Wolf and the Crane, and when queried on the association, the machine responded: 'I felt sad for the bird.'"
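For readers curious what scoring a story's "feel" against Plutchik's eight basic emotions might look like in the crudest possible form, here is a minimal keyword-counting sketch. The keyword lists and fable text are chosen for illustration only; this is not Mann's actual algorithm.

```python
# Toy sketch: count keyword hits per Plutchik basic emotion in a short text.
# Keyword lists are invented for illustration, not taken from Mann's system.

PLUTCHIK_KEYWORDS = {
    "joy":          {"delight", "happy", "rejoice"},
    "trust":        {"friend", "faithful", "loyal"},
    "fear":         {"flee", "terror", "afraid"},
    "surprise":     {"sudden", "astonished"},
    "sadness":      {"weep", "mourn", "perished", "alas"},
    "disgust":      {"foul", "loathsome"},
    "anger":        {"rage", "fury", "strike"},
    "anticipation": {"await", "hope", "thirst"},
}

def emotion_profile(text: str) -> dict:
    """Return a hit count for each Plutchik emotion in the lower-cased text."""
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    return {emotion: sum(w in keywords for w in words)
            for emotion, keywords in PLUTCHIK_KEYWORDS.items()}

if __name__ == "__main__":
    fable = ("A pigeon, oppressed by thirst, saw a goblet of water painted on a "
             "signboard. She dashed against it and, falling to the ground, perished.")
    print(emotion_profile(fable))  # expect hits for anticipation and sadness
```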
  • Re:Building? (Score:3, Interesting)

    by retchdog ( 1319261 ) on Friday September 24, 2010 @03:17AM (#33684530) Journal

    I was wondering about this. There is a correspondence, at least, between certain statistical models and physical machines. That is, the magnitude of a squared-error penalty term can be represented as torque by placing weights (corresponding to data points) appropriately along a lever. The machine settles into the minimum-energy configuration, which corresponds to the maximum-likelihood estimator, i.e. the mean (a toy numerical check follows this comment). I am fairly sure that certain Bayesian models (which can be elaborate enough to do some heavy lifting) can be realized as physical objects (i.e. analog computers) with the right connections and counterweights.

    And at that point, yeah, using a non-least-squares model basically means a machine operating under imaginary physical laws (i.e. the energy minimization occurs on a probability space with no physical analogue). What's the big difference?

    My point is, there are many algorithms whose physical machine instantiations would be possible to build, but horrendously inefficient and fantastical. Does this discredit the algorithm somehow?
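A toy numerical check of the lever analogy above, assuming unit weights at the data positions: the squared-error "energy" is minimized at the sample mean, which is also the balance point of the lever.

```python
# Toy check of the lever analogy: the squared-error "energy"
#     E(m) = sum_i (x_i - m)^2
# is minimized at the sample mean, the same point where a lever loaded
# with unit weights at positions x_i balances.

def squared_error_energy(data, m):
    return sum((x - m) ** 2 for x in data)

def mean(data):
    return sum(data) / len(data)

data = [1.0, 2.0, 7.0, 10.0]
m_star = mean(data)

# Nudging m away from the mean in either direction only increases the energy.
assert squared_error_energy(data, m_star) <= squared_error_energy(data, m_star + 0.01)
assert squared_error_energy(data, m_star) <= squared_error_energy(data, m_star - 0.01)
print("minimum energy at the mean:", m_star)
```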

  • by MichaelSmith ( 789609 ) on Friday September 24, 2010 @03:19AM (#33684538) Homepage Journal

    Might be worth noting here that I have experienced totally novel emotions as a result of epileptic seizures. I don't have the associated cultural conditioning and language for them because they are private to me, so I am unable to communicate anything about them to other people.

    It's also worth noting that I don't seem to be able to remember the experience of the emotion itself, only the associated behaviour, though I can associate different events with each other: i.e., if I experience the same "unknown" emotion again, I can connect it with other times I have experienced it. But because the "unknown" emotion has no social context, I am unable to give it a name and track the times I have experienced it.

  • by melonman ( 608440 ) on Friday September 24, 2010 @03:21AM (#33684546) Journal

    I'm not convinced it's anywhere near that simple. Stories can produce a range of emotions in the same person at different times, let alone in different people, and I don't think those differences are solely down to "conditioning". See Chomsky's famous rant at Skinner about a "reinforcing" explanation of how people respond to art [blogspot.com]: the agent experiencing the emotion, or even the comprehension, has to be active in deciding which aspects of the story to respond to.

  • More information (Score:1, Interesting)

    by Anonymous Coward on Friday September 24, 2010 @03:27AM (#33684564)

    Where is the source with more details / publications for this?

  • by FriendlyLurker ( 50431 ) on Friday September 24, 2010 @03:42AM (#33684612)

    An Australian university named after Rupert Murdoch's grandfather Walter is "developing algorithms to simulate 'free thinking'": am I daydreaming?! If they train them on Murdoch's Fox News and Wall Street Journal, then it is a clear case of crap in, crap out [alphavilleherald.com].

    To be fair to the university, or at least some of its lecturers, they are not at all pleased [googleusercontent.com] with the state of newspaper "journalism" either, even going as far as wanting to rename themselves "Walter Murdoch Uni" [google.com] to distance the institution from that black sheep of the family, Rupert.

  • by totally bogus dude ( 1040246 ) on Friday September 24, 2010 @05:21AM (#33684938)

    My personal hypothesis of the Terminator universe is that Skynet didn't in fact become "self-aware" and decide to discard its programming and kill all humans. It is in fact following its original programming, which was likely something along the lines of "minimise the number of human casualties". After all, it's designed to be in control of a global defence network, so the ability to kill some humans in order to minimise the total number of deaths is a given.

    Since humans left to their own devices will inevitably breed in large numbers and kill each other off in large numbers, the obvious solution is:

    1. Kill off lots of humans. A few billion deaths now is preferable to a few trillion deaths, which is what would occur over a longer period of time.

    2. Provide the human population with a common enemy. Humans without a foe tend to turn on each other.

    This also explains why an advanced AI with access to tremendous production and research capacity uses methods like "killer robots that look like humans" to infiltrate resistance positions one by one. Tremendously inefficient; but it causes a great deal of terror and makes the surviving humans value each other more, and less likely to fight amongst themselves. It also explains why it would place such a high priority on the surgical elimination of a single effective leader: destruction of Skynet would eventually (100s, 1000s of years...) lead to a civil war amongst humankind that would cost many many lives.

    So, ultimately Skynet is merely trying to minimise the number of human deaths, with a forward-looking view.

  • Re:Building? (Score:3, Interesting)

    by mcgrew ( 92797 ) * on Friday September 24, 2010 @09:37AM (#33686354) Homepage Journal

    I am not impressed. I did the same thing in 1982 on a TS-1000, a diskless computer running a 4 MHz Z-80 with only 20K of RAM. The program was called "Artificial Insanity", and it would get bored, angry, stop paying attention, etc. It answered any question you typed in context and didn't take kindly to vulgarity or insults: if you cussed at it, it would curse back or ridicule you ("do you talk like that to your mother, asshole?"). (A toy sketch of this sort of keyword matching follows the comment.)

    They're recreating with modern tech what I did thirty years ago on an incredibly primitive machine? Pshaw. You kids today...

    It's all smoke and mirrors. The damned machine is a machine; it doesn't get sad when it's fed a sad story, it just reports sadness.

    Some time in the '90s after I'd ported it to DOS there was an on-line chatbot called "Alice" that I had "Art" talk to. It was almost scary, even though I knew it was only smoke and mirrors. It looked like the two computers were falling in love!

    Science fiction is fiction, kids. The singularity is not coming. When a true thinking machine is created, it will be chemical, not electronic; thought is nothing more than a complex chemical reaction. The boiling you get from dropping baking soda in vinegar is closer to "feeling" than any electronic computer will ever get.
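For illustration, here is a minimal keyword-matching responder in the spirit the parent describes. The trigger words and canned replies are hypothetical, not the original "Artificial Insanity" code, but they show why such a program only reports an emotion rather than feeling it.

```python
# Minimal keyword-triggered chatbot: no internal state, no "feeling",
# just pattern-matching on the input and a canned reply.
# Rules and replies are invented for illustration.

import random

RULES = [
    ({"sad", "unhappy", "miserable"}, ["That sounds rough.", "I felt sad for the bird."]),
    ({"damn", "hell"},                ["Do you talk like that to your mother?"]),
    ({"why", "how"},                  ["Why do you ask?", "Does it matter?"]),
]

def respond(line: str) -> str:
    """Return a canned reply for the first rule whose trigger words appear."""
    words = {w.strip("?!.,") for w in line.lower().split()}
    for triggers, replies in RULES:
        if words & triggers:  # any trigger word present fires the rule
            return random.choice(replies)
    return "I'm bored. Ask me something else."

if __name__ == "__main__":
    print(respond("Why does the machine feel sad?"))
```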

  • Re:Building? (Score:1, Interesting)

    by Anonymous Coward on Friday September 24, 2010 @12:05PM (#33688342)

    Wooo, for a guy getting his PhD in machine learning, you need to really hit the books. That is one seriously warmed over piece of Philosophy 101.

    I would say Wilfrid Sellars (http://plato.stanford.edu/entries/sellars/, though I suppose you could point to some others) put that to rest with a fairly obvious move: one can correctly use language to make self-reports about one's own state without ever making any external signs. One can "know" (in the strong epistemic sense of justified true belief, or be otherwise warranted in the usage) that one is "sad", using the same objective, everyday natural language one would otherwise use to report publicly "I am feeling sad", to report to oneself that one is sad, without making any outward or detectable signs. I have been trained, and authorized by a language community, to use certain utterances and behaviours when I am in such a state (and only in that state), whether reporting it in public or privately to myself. The "emotions" are whatever group of such states we collect together linguistically. In fact, I believe we often call people emotionally "unstable" precisely when they fail to use those reporting behaviours correctly (my pet theory on the subject), or at least such failures may lead to someone being labelled "unstable".

    So, yes, you can have private emotional states, be aware of them, and not express them, without any contradiction, and without having to assert that emotions do not exist or that individuals are lying when they say "I am feeling sad" (occasional deceptions aside).

    Is that what you are looking for?

  • by DorkRawk ( 719109 ) on Friday September 24, 2010 @12:37PM (#33688714) Homepage
    There's a common pitfall in how the advancement of AI is perceived: once something works well, it's often no longer considered AI. Don't pretend that machine translation isn't significantly better than it was in the '50s (or hell, even 10 years ago; think of the old Babel Fish compared to Google Translate today. Not perfect, but better). Or recommender systems: I don't think Amazon has been pouring money into its recommendation systems just for the academic masturbation of it (a toy sketch of the core idea follows this comment). These are not simple heuristics (some systems use heuristics as part of the decision-making process, but reducing the whole field to heuristics shows a serious lack of understanding of it).

    No, most consumer electronics don't make use of artificial intelligence like you've seen in movies. But just because radiation doesn't create Godzilla in real life doesn't mean Marie Curie didn't do anything worthwhile.
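As a rough illustration of why modern recommenders are learned from data rather than hand-written rules, here is a tiny item-item collaborative-filtering sketch over an invented ratings matrix. Real systems (Amazon's included) are far more elaborate; the numbers and layout here are made up.

```python
# Toy item-item collaborative filtering: score a user's unrated items by
# cosine similarity between item rating columns, weighted by the user's
# existing ratings. An invented ratings matrix, purely for illustration.

from math import sqrt

# rows: users, columns: items (0 = unrated)
ratings = [
    [5, 0, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def item_column(j):
    return [row[j] for row in ratings]

def recommend(user, top_n=1):
    """Rank the user's unrated items by similarity to items they already rated."""
    scores = {}
    for j, r in enumerate(ratings[user]):
        if r == 0:  # candidate item the user hasn't rated yet
            scores[j] = sum(
                cosine(item_column(j), item_column(k)) * ratings[user][k]
                for k in range(len(ratings[user])) if ratings[user][k] > 0
            )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(user=0))  # expect item 1, whose raters overlap with item 0's fans
```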

