Software Robotics

Artificial Life Forms Evolve Basic Memory, Strategy

Posted by kdawson
from the but-not-as-we-know-it dept.
Calopteryx notes a New Scientist piece on how digital organisms in a computer world called Avida replicate, mutate, and have evolved a rudimentary form of memory. Another example of evolution in a simulation lab is provided by reader Csiko: "An evolutionary algorithm was used to derive a control strategy for simulated robot soccer players. The results are interesting — after a few hundred generations, the robots learn to defend, pass, and score — amazing considering that there was no trainer in the system; the self-organizing differentiated behavior of the players emerged solely out of the evolutionary process."
  • by 2phar (137027) on Sunday August 08, 2010 @04:28AM (#33179050)
    Wow look at that teamwork.. maybe those guys could represent England?
  • by blahplusplus (757119) on Sunday August 08, 2010 @04:47AM (#33179106)

    "amazing considering that there was no trainer in the system;"

    Not really, it's merely selecting patterns it is not aware of if it's patterns are "successful" or not. If you run a pattern generator long enough you can get all possible patterns within a finite possibility space.

    • Re: (Score:3, Insightful)

      by bakuun (976228)
      I don't get why this has been modded "funny". It's true. Just like monkeys tapping away at keyboards in order to generate the works of Shakespeare, a computer can generate player algorithm patterns that work well in this particular setting. The speed is just boosted by selectively choosing the ones that match whatever it is you want to get at the end.
      • by OeLeWaPpErKe (412765) on Sunday August 08, 2010 @06:07AM (#33179284) Homepage

        The fun thing is that these robots truly have a one-track mind. They do not learn -at all- within one generation, even if they have a brain that is relatively similar to ours. The brain is configured -entirely- at "birth" by the natural selection algorithm.

        And yet they display a few remarkably human traits that seem to -but don't- indicate learning: memory, strategy, a strategy responding to the "enemy". Yet by most standards they don't think during the game. This makes one wonder ... is the fact that humans have memory, adapt "somewhat", and devise strategy really an indication of the level of thought we think humans have?

        Makes one wonder just how one-track the human mind is. Everyone likes to accuse everyone else of "not seeing the truth" about very nontrivial problems. Are people really "seeing the truth", or just repeating what they were programmed to?

        History of science definitely seems to agree with the "programmed" argument. Other histories ... even more. We are mindless automatons, we just like to think we aren't.

        • by hvm2hvm (1208954) on Sunday August 08, 2010 @06:37AM (#33179396) Homepage
          Depends on what level of perspective you want to look at. If you look at simple tasks and abilities, yes, a human will learn and think (some more than others) over the course of his life. It is evident if you take, for example, twins that grow up in different environments: they develop different abilities and different understandings of the world.

          OTOH if you widen your view and look at how humans interact with each other (i.e. society), how they think (technology, culture), and other things like that, they don't really learn anything during their life. That's where evolution kicks in: people born in different generations have different ways of interacting and thinking. Some are behind their times while others are ahead, which I see as a normal mutation, if you will, that can be a successful one or a failing one. But even revolutionary people become conservatives after a certain age. That's why people die; that's how society evolves.

          Yes, it's not all black and white like I made it sound: some things in the first category are innate and some in the second can still be modified by experience, but I think my point was properly made.
          • Re: (Score:2, Interesting)

            by Rainulf (1678822)

            Speaking of twins, there are actually many documented cases where twins were separated and lived and grew up in completely different environments, but ended up having the same traits, such as habits, ways of thinking, etc.

            One of the most striking findings is Bouchard's study of the twins Oskar and Jack ( http://www.psywww.com/intropsych/ch11_personality/bouchards_twin_research.html [psywww.com] ):
            1. They were raised in extremely different cultures: Oskar as a Catholic, Jack as a Jew.
            2. Both wore wire-rimmed glasses and mustaches.
            3. Both spo

        • History of science definitely seems to agree with the "programmed" argument. Other histories ... even more.

          Er...how, exactly?

        • Makes one wonder just how one-track the human mind is.

          I know how one track MY mind is... think I'll browse some pr0n now.

      • by mangu (126918)

        The speed is just boosted by selectively choosing the ones that match whatever it is you want to get at the end.

        There's a huge difference in purpose.

        You can move around at random, like a particle floating in the sea. If you follow that particle long enough you'll visit every port of the ocean.

        Now assume you want to go somewhere. Your movement will be constantly changed by random factors so you will need to make corrections but in the end you'll get to the place you wanted.

        No two ships follow exactly the sam

        • by Rockoon (1252108)
          Genetic/evolutionary algorithms are best (but not often) summed up as a directed random search.
      • by ultranova (717540)

        I don't get why this has been modded "funny". It's true. Just like monkeys tapping away at keyboards in order to generate the works of Shakespeare, a computer can generate player algorithm patterns that work well in this particular setting. The speed is just boosted by selectively choosing the ones that match whatever it is you want to get at the end.

        And this teeny little boost is the difference between getting what you want before or after the monkeys and the computer disappear from proton decay [wikipedia.org].

        Seriously

      • I don't get why this has been modded "funny".

        Let's check out the parent post:

        Not really, it's merely selecting patterns it is not aware of if it's patterns are "successful" or not.

        The author ascribes awareness to the selection process. Some people may also consider the grammatical ambiguity around the uses of "it's" funny: change the first "it's" to "its" for comical effect.

        If you run a pattern generator long enough you can get all possible patterns within a finite possibility space.

        Now that's just plain LOL. Even if we assume that the pattern space is finite, which is not clear at all given that we're dealing here with velocities, possibly even classical chaos, the dimension of the space must be humongous; hence evolutionary algorithms.

      • by bussdriver (620565) on Sunday August 08, 2010 @10:01AM (#33180268)

        The problem space is so vast when you get into the necessary details humans take for granted.
        It's so vast that it makes secure passwords look simplistic; this is far beyond brute-forcing AES encryption. Even a simplified problem space is usually quite large in terms of possible combinations. The only advantage AI work has is that there is no singular solution but a large fuzzy set of solutions that are reasonably acceptable.

        Say a monkey typed 99% of Shakespeare but was wrong for only 1% of it: the next attempt being random, the monkey would likely have 0% Shakespeare! There would be no convergence towards the answer. Even brute-forcing encryption rules out past attempts to avoid repeating itself, but a random search does not. Furthermore, say the problem space is random, so that 99% Shakespeare is light years away from 100% Shakespeare; then no matter what the process for convergence (i.e. evolution), it is not going to converge, which effectively puts you in the same situation as a random search.

        The monkey typing thing is a silly way to state the obvious and sound good while doing so. "It's POSSIBLE but impractically time consuming" doesn't sound as good. These AI problems are nothing like monkeys typing - they learn and progress towards competency, which is totally different! Again, they do this quite quickly, since anything near the monkey approach wouldn't get there in our lifetimes (winning the lotto is more likely).

        Just because it is mindbogglingly complex does not mean it is intelligent... or that it has something we'd normally think of as a "memory" either. It's possible our brains are just pattern-matching machines - and since we can only understand the most simple of such things, we'll never figure it out (but we could build a brain which could figure it out eventually, and perhaps our brains are just an extremely fuzzy non-linear pattern match for #42).
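The convergence argument here is essentially Dawkins' weasel program: pure random typing effectively never finds the target, but cumulative selection, which never discards the best candidate so far, converges quickly. A minimal sketch (the target string, mutation rate, and population size are arbitrary illustrative choices):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    # Fitness: number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=100):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while score(parent) < len(TARGET):
        # Cumulative selection: keep the best of the offspring,
        # or the parent itself, so progress is never lost.
        offspring = [mutate(parent) for _ in range(pop_size)]
        parent = max(offspring + [parent], key=score)
        generation += 1
    return generation

print(evolve())  # prints the generation count at convergence
```

Pure random generation of a 28-character string over a 27-symbol alphabet would need on the order of 27^28 attempts; the selection step is what collapses that to a few hundred generations.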

    • Re: (Score:2, Insightful)

      by TranceThrust (1391831)
      The question of course is how large this search space is in comparison to the number of samples tried from it; that determines whether it really is amazing or not.
    • Re: (Score:2, Insightful)

      by metageek (466836)

      In evolution what is important is selection: as long as there is selection (based on fitness) and variability, the system will adapt to the environment (the things that shape fitness). So there is a trainer; it is called selection.

      • "In evolution what is important is selection: as long as there is selection (based on fitness) and variability, the system will adapt to the environment (the things that shape fitness). So there is a trainer; it is called selection."

        Not exactly. Unless variability is driven, selection and variability *may* push the system to fit the environment. But there's no guarantee: the system may be destroyed as well.

    • by Trepidity (597) <delirium-slashdot@@@hackish...org> on Sunday August 08, 2010 @05:32AM (#33179202)

      You do have to be a bit careful, though--- sometimes there is a hidden trainer in the system. In evolutionary algorithms, there are often a lot of parameters and data structures to tweak at the beginning, e.g., what kinds of crossover and mutation operators do you have, and what's your bit-string encoding? There are a whole lot of ways to slip in human domain knowledge of which things are important into the up-front engineering.
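The kind of up-front engineering described here is visible even in a minimal GA skeleton: the encoding, the crossover and mutation operators, and their rates are all human choices made before any "evolution" runs. A generic sketch (the operators and rates are illustrative defaults, not anything from the paper):

```python
import random

def one_point_crossover(a, b):
    # The choice of crossover operator is itself domain knowledge:
    # one-point crossover assumes adjacent bits encode related traits.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.01):
    # Flip each bit independently with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def evolve(fitness, n_bits=32, pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = [mutate(one_point_crossover(random.choice(parents),
                                               random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children               # parents survive (elitism)
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("OneMax").
best = evolve(fitness=sum)
```

Run against the toy OneMax fitness this converges quickly; on a real problem, how the bit string maps to behavior is exactly where human domain knowledge slips in.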

      • Re: (Score:3, Interesting)

        by mdda (462765)

        Not really. While the literature makes a lot of fine distinctions between the various cross-over methods/rates etc., in reality it's pretty academic.

        Getting the genetic process going on a population is a really small amount of code, and there's a huge payoff to seeing it work for yourself (rather than using someone else's Black Box code).

        The real key is that 'mashing' two individuals together to create a 'child' (evolution) is a whole lot better than creating a child as a random variation of one of those i

        • by Trepidity (597)

          Statistical ML is one of my areas of research, so I'm fairly familiar with the basics. ;-)

          Generally mashing two individuals together is only better than hill-climbing if you have a useful bit-string encoding and crossover operator that results in the mashing operating on nice units. The vast majority of published successful GA results I've seen have quite heroically engineered encodings that include a lot of human domain knowledge. If you just take some random data structure and serialize it to bits directl

    • by Anonymous Coward on Sunday August 08, 2010 @05:42AM (#33179232)

      Saying there wasn't a trainer in the system is a bit of a misunderstanding really.

      Evolutionary algorithms always make use of a fitness function to define which generations are to survive and evolve and which are to die off; this is the case in the presented setup as well. Without knowing the project, I'd guess they let the "teams" play against each other and let the winners survive.

      If there wasn't a fitness function it wouldn't really be an evolutionary algorithm; evolution sorta implies "survival of the fittest" and all that, you know :) The interesting part is observing the emergent behavior, in other words what we were not expecting to get out of the system. When the system doesn't have any knowledge of what a "defender" is, or what "passing the ball" means, it's interesting to see these well-known patterns evolve even when they are not specified; this is what matters to the AI researcher.

      Other implementations of evolutionary algorithms may be fun (http://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa/) but are not showing emergent behavior, because you are asking for a specific output through the fitness function. That is the main difference.
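The difference between the two kinds of fitness can be sketched as two function signatures. The Mona Lisa demo scores against a known target; a setup like the soccer one presumably scores individuals against each other, so there is no fixed answer to converge to (the `play_match` callback below is a placeholder for whatever simulation decides a winner):

```python
import random

def target_fitness(candidate, target):
    # Explicit target: you already know the answer you want,
    # so fitness is just (negative) distance to it.
    return -sum(abs(c - t) for c, t in zip(candidate, target))

def coevolutionary_fitness(individual, population, play_match):
    # Relative fitness: score against sampled peers. No fixed answer
    # exists, so the selection pressure moves as the population improves.
    opponents = random.sample(population, k=min(5, len(population)))
    return sum(play_match(individual, opp) for opp in opponents)
```

With relative fitness the pressure itself evolves: once defenders improve, strategies that merely beat last generation's defenders stop being good enough.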

      • Re: (Score:3, Informative)

        by ph43thon (619990)

        Well... if you read their PDF (linked at the bottom of the blog post), you see that they literally turn the fitness function into a trainer that leads the teams to the most proper ways to play (see pg. 5, section "4.5 Fitness evaluation").

        The first step in the learning process is that the teams should spread out on the field and be relatively evenly distributed.

        Second, move players closer to the ball.

        Third, kicking the ball was given value.

        Fourth, getting the ball closer to the opponent's goal.

        Then finally, the

    • by khallow (566160)

      Not really, it's merely selecting patterns it is not aware of if it's patterns are "successful" or not. If you run a pattern generator long enough you can get all possible patterns within a finite possibility space.

      It's selecting working patterns faster than random selection would.

    • by ultranova (717540) on Sunday August 08, 2010 @07:44AM (#33179630)

      If you run a pattern generator long enough you can get all possible patterns within a finite possibility space.

      While true, this is also completely meaningless. For even trivial pattern spaces of, say, 512 bits, "long enough" would be far longer than the current age of the Universe.

      • Re: (Score:3, Informative)

        by Dr. Manhattan (29720)

        While true, this is also completely meaningless. For even trivial pattern spaces of, say, 512 bits, "long enough" would be far longer than the current age of the Universe.

        Exactly, see here for an illustration: http://en.wikipedia.org/wiki/Weasel_program [wikipedia.org]

        • by Glonoinha (587375)

          I was wondering where the offshore guys got the algorithm for the sort routine they used in our last outsourced project.
          Thanks.

    • You've missed the point, of course: not all information (or patterns) needs to be generated; only a finite subset ever needs to be found to be useful. To put it another way, you can make a million variations of a fork, but it's still useful as a fork.

    • by Glonoinha (587375)

      I bet it plays a mean game of chess.

    • by ph43thon (619990)

      Well, it's even a little less amazing than you suggest.

      If you read the pdf that the blog post links to ("Evolving neural network controllers for a team of self-organizing robots"), on Page 5 in section "4.5 Fitness Function"... they discuss how they decompose fitness down from simply "Score the most goals" to little component tasks that "ensure a smooth learning process assuming some preliminary knowledge or ideas about the solution."

      They practically lead the algorithms to better solutions along certain pat

    • Re: (Score:3, Funny)

      by hawk (1151)

      >"amazing considering that there was no trainer in the system;"

      Execution of the less-skilled makes up for a lot of training . . .

  • Hooligans (Score:2, Funny)

    by qpawn (1507885)

    The study also found that the artificial fans of the losing team started to riot on their own.

  • What's the news? (Score:3, Insightful)

    by synoniem (512936) on Sunday August 08, 2010 @05:24AM (#33179186)

    When you program some evolutionary theory into your digital world and your digital world then develops some evolutionary lifeform, is that news?

    • Someone mod parent up. This reminds me of the automated mathematician: if it's given rules that encourage discovering the Goldbach conjecture, and you spend enough time tuning it, then it's no surprise that it will eventually discover the Goldbach conjecture. Some debate whether AM actually discovered anything, or just found the stuff it was designed to discover (seeing as it stopped finding interesting conjectures after rediscovering all the known ones). But that's getting into philosophy.
      • Re: (Score:3, Interesting)

        by OeLeWaPpErKe (412765)

        You don't have to go into philosophy to get these. Theoretical mathematics will help you out here.

        Suppose you had a "perfect" learner, one that tries every theoretically possible analytical technique. And then it manages to surprise you: it discovers existing mathematics, and perhaps a bit more, but nothing truly remarkable. That would simply be the result of a mathematical property of the "mathematical space" (the set of all possible mathematical knowledge, of, say, all Gödel sentences): that would simply

        • The subset of correct mathematical theory cannot be the empty set, because if it were empty, set theory (which clearly is part of mathematics) would be flawed, and therefore there wouldn't be a well defined notion of empty set, making the statement "the subset of correct mathematical theory is the empty set" meaningless. On the other hand, logic also is part of mathematics, and therefore my argument may not hold in that case.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Life in general is not much different. The environment/nature/the universe sets rules that encourage the creation of lifeforms, encouraging them to replicate and improve their chances of survival. It's no surprise that life evolves and creatures develop memories, intelligence, etc. The whole system is set up in a way that it is bound to happen.
        Whether evolution in nature or evolution on a computer, the underlying principles at work are similar.

        The main difference between nature and your goldbach conjecture exam

      • by selven (1556643)

        This program was not designed to discover passing, defending and scoring. It was designed to win at soccer. The program on its own realized that passing, defending and scoring are good strategies for winning at soccer. The rules of the simulation do encourage this behavior, but they were not designed to - the fact that the rules of the simulation create this result is a perfectly valid discovery, even though it's a discovery that humans made thousands of years ago.

  • by carlhaagen (1021273) on Sunday August 08, 2010 @05:34AM (#33179210)

    A bit more than 15 years ago I saw a documentary on Discovery Channel featuring identical work being done by a British scientist / computer programmer. His software spawned simple "lifeforms" made up of basic 2D and 3D geometrical objects - cubes, cylinders, flat triangles etc. - that then tried to evolve the most efficient methods of moving and travelling in the simulated environment they were put in: sometimes an airy environment with ground underneath them, and gravity, and sometimes an "ocean" in which the "lifeforms" swam. Minute after minute the "lifeforms" jiggered and bounced around like broken machinery, but slowly developed a method for moving and navigating that was the most efficient for their particular shape. He spawned caterpillar-like animals made up from chains of cubes that slowly learned how to wriggle and crawl just like caterpillars and snakes do. He spawned randomized "freaks" that sometimes managed to learn how to walk despite their disfigurements, and sometimes learned that the only way was to throw some body part around to pull themselves forward. He spawned biped animals that slowly learned how to jump to move forward, and tripod animals that learned how to skip from one leg to the other, to the third. He spawned lifeforms in a watery environment that learned how to rhythmically oscillate their body parts to create propulsion in order to swim forward and turn around. To me, this was just as impressive, if not more so, than the featured story. As a curious detail to it all, the programmer developed his software in BlitzBasic, running on a heavily accelerated Amiga 1200.

  • We've been applying genetic algorithms with ANNs for quite a while now, quite often also making groups of them cooperate. Yawn?
    • by lalena (1221394) on Sunday August 08, 2010 @07:15AM (#33179522) Homepage
      I read the article wanting to know how the Avida organisms developed memory. Basically, the programmer included an instruction that said "do what you did last time". It is not evolution if the programmer hands them the ability. Also, when the goal stays in the same location every time, your robots can develop "memory" through the program itself. Ex: to go 2 up & 3 left -> Forward, Forward, Turn Left, Forward, Forward, Forward. There is no intelligence in the search pattern; this is simply memorizing the location of the goal. I would not call this memory.
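The "memory through the program itself" point can be made concrete with a toy interpreter: the route is baked into the instruction sequence, so there is no state that deserves the name memory, and moving the goal breaks the behavior entirely (the grid coordinates and turn convention below are my own illustration, not Avida's):

```python
def run(program, start=(0, 0)):
    # Interpret a fixed action list on a grid; heading starts "north" (+y).
    x, y = start
    dx, dy = 0, 1
    for op in program:
        if op == "F":                # step forward along current heading
            x, y = x + dx, y + dy
        elif op == "L":              # rotate heading 90 degrees left
            dx, dy = -dy, dx
    return x, y

# "Memorized" route to a goal 2 up and 3 left: no sensing, no search.
program = ["F", "F", "L", "F", "F", "F"]
print(run(program))  # (-3, 2): correct only while the goal stays put
```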

      I am very interested in this subject and get excited every time Slashdot posts a new story on this topic, but I never see any real advances vs. what I was doing in school 20 years ago. This doesn't mean advances aren't being made, but I think they are now at the level where they don't make simple easy-read stories. Real robots (not simulated ones) getting from point A to B (not just wanting to go from A to B) over rough terrain without help (Mars rovers) is much more complicated, and a required advance to put this technology into a real application. MIT, NASA, and the national labs always seem to have interesting projects going on.

      We celebrate these simple outdated advances in AI when we have hundreds of programs out there now capable of playing World of Warcraft without help simply to collect virtual gold to sell for cash.

      Another reason I hate these articles is that they don't include any real specifics. You could learn more reading Wikipedia on GA, GP, ANN... It was a video of a Koza project that got me really interested in this topic. Why don't people include something like this in the article? A couple of years ago, I decided to rewrite one of my old projects so that people could easily run it online - Ant Simulator [lalena.com]. Watching the system quickly learn or solve a problem is much more satisfying than reading an article written by someone who doesn't actually understand the field.
      • Re: (Score:2, Informative)

        by mdda (462765)

        Memory for Genetic Programming was an interesting topic back in 1995 too... And the first Koza book was an inspiration.

        One way to test out 'memory' in an experimental way is to give the individuals some 'memory cells' (or internal preserved state) to work with, and then A/B test some of the good individuals vs. the same individuals with noise added to the memory cells. In that way, one can get a handle on whether/how they're really making use of the memory. Just like adding junk code into a buggy progra

      • by drinkypoo (153816)

        If you've seen any of the truly massive demos done in Conway's game of life you will rapidly see that actually modeling a physical mechanism for memory based on simple principles is going to take a metric assload of computing time. Actually, I think the actual value is somewhere between an assload and a fuckton. At this point it seems more like a useful separate experiment.

  • by somersault (912633) on Sunday August 08, 2010 @05:43AM (#33179238) Homepage Journal

    In the late 1980s, ecologist Thomas Ray, who is now at the University of Oklahoma in Norman, got wind of Core Wars and saw its potential for studying evolution. He built Tierra, a computerised world populated by self-replicating programs that could make errors as they reproduced.

    When the cloned programs filled the memory space available to them, they began overwriting existing copies. Then things changed. The original program was 80 lines long, but after some time Ray saw a 79-line program appear, then a 78-line one. Gradually, to fit more copies in, the programs trimmed their own code, one line at a time. Then one emerged that was 45 lines long. It had eliminated its copy instruction, and replaced it with a shorter piece of code that allowed it to hijack the copying code of a longer program. Digital evolvers had arrived, and a virus was born.

    Avida is Tierra's rightful successor. Its environment can be made far more complex, it allows for more flexibility and more analysis, and - crucially - its organisms can't use each other's code. That makes them more life-like than the inhabitants of Tierra.

    Actually, organisms using each other's code sounds way more like our world than ones that can't leech off each other. They already pointed out viruses, and plenty of species exist today that need other species to continue to survive.. in fact pretty much all animals need to eat other lifeforms, because we can't draw energy from the sun directly.

    • by Dr. Manhattan (29720) <sorceror171.gmail@com> on Sunday August 08, 2010 @07:03AM (#33179484) Homepage

      In the late 1980s, ecologist Thomas Ray, who is now at the University of Oklahoma in Norman, got wind of Core Wars and saw its potential for studying evolution. He built Tierra, a computerised world populated by self-replicating programs that could make errors as they reproduced.

      I was so amazed by the results claimed for Tierra that I went and reimplemented it myself [homeunix.net]. And damned if I didn't get similar results [homeunix.net]. At the time, it blew me away that such a system could come up with novel solutions I hadn't expected or 'programmed in'. Indeed, a couple times it took me a while to even figure out how the things worked.

    • by _Knots (165356)

      I'm not sure where the claim about "can't use each other's code" comes from. Perhaps a subtle misunderstanding. While Avida does keep each virtual machine fully isolated from the others, Avida _does_ have explicit support for parasitic behaviors, in the form of code injection into neighboring organisms.

  • by PolygamousRanchKid (1290638) on Sunday August 08, 2010 @06:20AM (#33179320)

    The robots need to become spoiled, overpaid millionaires who refuse to train (France). Brag a lot (England) that their opponent is a bunch of "boys" (Germany) who are afraid of them. Then take a 4-1 shellacking from the "boys." And despite being the defending champions, and having a world-class league in their country, bow out early. Because all of the players in their first-class league are from South America (Italy), and they have no good domestic players.

    Robots with vuvuzelas? No, thanks. My next nightmare.

  • by vlm (69642)

    Eh, they can play soccer, not too impressive. Check back when they evolve their own religion, that would be impressive.

  • I'm always confused about whether these discoveries are supposed to show that we'll someday have sentient robots that will rule the world a la every sci-fi of the past decade, or whether they are trying to model biological evolution in a meaningful way. Personally, I hope the sentient robot thing is NP-complete. :P

    For modeling biological evolution, any in silico organism model needs to incorporate the fact that most mutations are "nearly neutral" (some might say slightly deleterious) with respect to the scoring algorit
  • .... uh, hmmm..... what was I on about?
