Software Robotics

Artificial Life Forms Evolve Basic Memory, Strategy 206

Calopteryx notes a New Scientist piece on how digital organisms in a computer world called Avida replicate, mutate, and have evolved a rudimentary form of memory. Another example of evolution in a simulation lab is provided by reader Csiko: "An evolutionary algorithm was used to derive a control strategy for simulated robot soccer players. The results are interesting — after a few hundred generations, the robots learn to defend, pass, and score — amazing considering that there was no trainer in the system; the self-organizing differentiated behavior of the players emerged solely out of the evolutionary process."
This discussion has been archived. No new comments can be posted.

  • by blahplusplus ( 757119 ) on Sunday August 08, 2010 @05:47AM (#33179106)

    "amazing considering that there was no trainer in the system;"

    Not really. It's merely selecting patterns; it is not aware of whether its patterns are "successful" or not. If you run a pattern generator long enough, you can get all possible patterns within a finite possibility space.

  • by lalena ( 1221394 ) on Sunday August 08, 2010 @08:15AM (#33179522) Homepage
    I read the article wanting to know how the Avida organisms developed memory. Basically, the programmer included an instruction that said "Do what you did last time." It is not evolution if the programmer hands them the ability. Also, when the goal stays in the same location every time, your robots can develop "memory" through the program itself. Ex: to go 2 up & 3 left -> Forward, Forward, Turn Left, Forward, Forward, Forward. There is no intelligence in the search pattern; this is simply memorizing the location of the goal. I would not call this memory.

    I am very interested in this subject and get excited every time Slashdot posts a new story on this topic, but I never see any real advances vs. what I was doing in school 20 years ago. This doesn't mean advances aren't being made, but I think they are now at a level where they don't make simple, easy-read stories. Real robots (not simulated ones) getting from point A to B (not just wanting to go from A to B) over rough terrain without help (Mars rovers) is much more complicated, and a required advance to put this technology into a real application. MIT, NASA, and the national labs always seem to have interesting projects going on.

    We celebrate these simple outdated advances in AI when we have hundreds of programs out there now capable of playing World of Warcraft without help simply to collect virtual gold to sell for cash.

    Another reason I hate these articles is that they don't include any real specifics. You could learn more reading Wikipedia on GA, GP, ANN... It was a video of a Koza project that got me really interested in this topic. Why don't people include something like this in the article? A couple of years ago, I decided to rewrite one of my old projects so that people could easily run it online - Ant Simulator [lalena.com]. Watching the system quickly learn or solve a problem is much more satisfying than reading an article written by someone who doesn't actually understand the field.
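    The "memorized location" point above can be made concrete with a minimal sketch (hypothetical code, not from the Avida paper or the Ant Simulator): a GA evolving a fixed move sequence toward a goal that never moves. The evolved genome simply hard-codes the route, so nothing resembling memory or search is involved.

```python
# Minimal sketch: evolving a fixed move sequence toward a *fixed* goal.
# The winning genome just hard-codes the route -- no search, no real memory.
import random

MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
GOAL = (-3, 2)  # "2 up & 3 left" from the origin, as in the example above

def endpoint(genome):
    x = y = 0
    for m in genome:
        dx, dy = MOVES[m]
        x, y = x + dx, y + dy
    return (x, y)

def fitness(genome):
    x, y = endpoint(genome)
    return -(abs(x - GOAL[0]) + abs(y - GOAL[1]))  # closer is better

def evolve(pop_size=50, genome_len=5, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice("UDLR") for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 0:          # genome reaches the goal exactly
            break
        survivors = pop[: pop_size // 2]  # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] = rng.choice("UDLR")  # point mutation
            children.append(child)
        pop = survivors + children
    return pop[0]

best = evolve()
print("".join(best), endpoint(best))
```

    The evolved sequence "solves" the maze only because the goal is baked into the fitness function at a fixed position; move the goal and the genome is useless.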
  • by mdda ( 462765 ) on Sunday August 08, 2010 @09:12AM (#33179734) Homepage

    Memory for Genetic Programming was an interesting topic back in 1995 too... And the first Koza book was an inspiration.

    One way to test out 'memory' in an experimental way is to give the individuals some 'memory cells' (or internal preserved state) to work with, and then A/B test some of the good individuals vs. the same individuals with noise added to the memory cells. In that way, one can get a handle on whether/how they're really making use of the memory. Just like adding junk code into a buggy program to see what's actually getting executed.

    One of the problems for the Genetic Algorithm/Programming people is that this stuff simply *works too well*. It's difficult to test hypotheses because the evolutionary bit will simply 'work around' your own bad coding decisions; so often experimental results are 'this was slightly worse at first, but then something really interesting started to happen'. Designing a really clean experiment is difficult: these populations are devious...
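    The noise-injection probe described above can be sketched as follows (a toy illustration, not from any cited work): evaluate an individual on a delayed-recall task, then re-evaluate it with its memory cell scrambled every step. A large score drop indicates the memory is genuinely being used.

```python
# Hedged sketch of the memory-noise A/B test: score an individual on a
# delayed-recall task, with and without its memory cell randomized.
import random

def run(individual, trials=1000, noisy_memory=False, seed=1):
    rng = random.Random(seed)
    score = 0
    memory = 0                           # one "memory cell" of internal state
    prev = 0
    for _ in range(trials):
        if noisy_memory:
            memory = rng.randint(0, 1)   # scramble state before it is read
        guess = individual(memory)
        score += (guess == prev)         # task: output the *previous* input
        x = rng.randint(0, 1)
        memory = x                       # the input is stored for next step
        prev = x
    return score / trials

# A hand-written stand-in for an "evolved" individual that reads its memory.
uses_memory = lambda m: m

clean = run(uses_memory)
noised = run(uses_memory, noisy_memory=True)
print(clean, noised)
```

    With intact memory the individual recalls every previous input; with noised memory its performance collapses to chance, confirming the behavior depends on the preserved state rather than on some workaround.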

  • by Dr. Manhattan ( 29720 ) <(moc.liamg) (ta) (171rorecros)> on Sunday August 08, 2010 @09:40AM (#33179858) Homepage

    While true, this is also completely meaningless. For even trivial pattern spaces of, say, 512 bits, "long enough" would be far longer than the current age of the Universe.

    Exactly, see here for an illustration: http://en.wikipedia.org/wiki/Weasel_program [wikipedia.org]
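    Both points above can be checked with a little arithmetic and a weasel-program sketch (an illustrative reimplementation, not Dawkins' original code): blind enumeration of a 512-bit space is hopeless on cosmological timescales, while cumulative selection hits a 28-character target in a few hundred generations.

```python
# Blind enumeration vs. cumulative selection (the weasel program linked above).
import random
import string

# Exhaustive search over a 512-bit pattern space, even at a trillion
# guesses per second, takes absurdly longer than the ~1.4e10-year age
# of the universe.
SECONDS_PER_YEAR = 3.15e7
years = 2 ** 512 / 1e12 / SECONDS_PER_YEAR
print(f"exhaustive search: ~{years:.2e} years")

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(s, rng, rate=0.05):
    # Each character has a 5% chance of being replaced at random.
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def weasel(seed=0, pop=100):
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    gen = 0
    while current != TARGET:
        gen += 1
        offspring = [mutate(current, rng) for _ in range(pop)]
        # Keep the offspring with the most characters matching the target.
        current = max(offspring,
                      key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
    return gen

print("weasel generations:", weasel())
```

    The contrast is the whole point of the Wikipedia article: selection with retained partial solutions is exponentially faster than unguided pattern generation.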

  • by whatajoke ( 1625715 ) on Sunday August 08, 2010 @10:11AM (#33180006)
    Was it Karl Sims [wikipedia.org]? Specifically this work [karlsims.com]
  • Re:Oh... (Score:4, Informative)

    by Mikkeles ( 698461 ) on Sunday August 08, 2010 @10:56AM (#33180236)

    A pre-publication (not behind a paywall) version of the Avida (PDF) paper is here [msu.edu].
    A good guide for those who don't welcome our new artificial, man-made overlords and wish to resist ;^)

  • Re:God (Score:3, Informative)

    by Enigma2175 ( 179646 ) on Sunday August 08, 2010 @12:03PM (#33180662) Homepage Journal

    If you look at the top scientists, most of them believe in God :)

    Do they? This page [nytimes.com] says that 40% of scientists surveyed believed in god (way less than the populace at large) and only 10% of "elite scientists" believe in god. I wouldn't consider 10% of scientists to be "most" of them.

  • by ph43thon ( 619990 ) on Sunday August 08, 2010 @04:49PM (#33183008) Journal

    Well.. if you read their PDF (linked at the bottom of the blog post), you see that they literally turn the fitness function into a trainer that leads the teams toward the proper way to play (see p. 5, section 4.5, "Fitness evaluation").

    The first step in the learning process is that the teams should spread out on the field and be relatively evenly distributed.

    Second, players were moved closer to the ball.

    Third, kicking the ball was given value.

    Fourth, getting the ball closer to the opponents' goal was rewarded.

    Then finally, the most weight was given to scoring goals.

    So.. they have a great system here.. but it is mainly suggesting that these fitness guidelines eventually lead to behaviours that we understand as playing good defense or whatever.

    To be fair, I'm guessing that maybe Mr. Elmenreich was using the word "trainer" in some strictly literal sense.. they didn't need a separate trainer in the system because the training was built into the fitness evaluation step.

    What I want to know is: Did they put in a penalty for offsides?
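    The staged fitness evaluation summarized above might look something like this sketch (the weights and field model here are illustrative guesses, not the paper's actual numbers or code):

```python
# Hedged sketch of a weighted soccer fitness function combining the five
# staged criteria: spread, proximity to ball, kicks, ball progress, goals.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fitness(players, ball, opp_goal, kicks, goals):
    # 1. spread out: reward pairwise separation between teammates
    spread = sum(dist(p, q) for i, p in enumerate(players)
                 for q in players[i + 1:])
    # 2. be near the ball (negative distance, so closer scores higher)
    near_ball = -min(dist(p, ball) for p in players)
    # 3. kicking the ball is valued; 4. ball progress toward opponents' goal
    progress = -dist(ball, opp_goal)
    # 5. scoring goals dominates everything else (illustrative weights)
    return (1.0 * spread + 2.0 * near_ball + 5.0 * kicks
            + 10.0 * progress + 100.0 * goals)

team = [(10, 20), (30, 40), (50, 25)]
print(fitness(team, ball=(45, 30), opp_goal=(100, 30), kicks=3, goals=1))
```

    Because the goal term carries far more weight than the shaping terms, behavior that merely spreads out or chases the ball is eventually dominated by behavior that actually scores, which is exactly the "trainer built into the fitness function" effect described above.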
