Technology

Microchips That Evolve

An Onymous Coward writes: "A scientist in Britain has apparently developed some genetic algorithms that run on microchips that can alter themselves, in other words, hardware that evolves. Pretty cool. Check out the column by Paul Gilster." The article contributes some background I'd never heard before regarding FPGAs, and even mentions the dogged Starbridge. A short but fun read.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward
    If a "true" AI is developed by allowing circuits to evolve, the result will probably be too complicated to reverse-engineer to a degree that would allow anyone to insert any meaningful modifications. Asimov's laws would only be possible if the AI were implemented with a reasoning process designed by humans. However, the current state of AI research is far from being able to design human-like reasoning. Reasoning at a level where concepts such as "hurting someone" are applied broadly to real-world situations is extremely complicated; I don't think many serious researchers ever expect to design such systems.
  • by Anonymous Coward
    "I imagine the columnist is referring to Darwin's theory of Natural Selection which he later renounced by saying 'The idea that something as complex as the eyeball was produced by natural selection makes me ill.'"

    Darwin did no such thing. Is selectively misquoting from a book which is freely downloadable the best creationists can come up with these days? Here's the correct quote, from "The Origin of Species", chapter 6:

    To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree. Yet reason tells me, that if numerous gradations from a perfect and complex eye to one very imperfect and simple, each grade being useful to its possessor, can be shown to exist; if further, the eye does vary ever so slightly, and the variations be inherited, which is certainly the case; and if any variation or modification in the organ be ever useful to an animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, can hardly be considered real.

    Darwin foresaw this line of objection and disarmed it before he even published. Using this argument nearly 150 years later, and claiming that Darwin believed it a fatal flaw in his theory, is intellectually dishonest in the extreme. You, sir, should be thoroughly ashamed of yourself.

    More modern devices are harder to destroy. If you mis-configure your I/O pins then you're in trouble, but within the array, devices like the Xilinx Virtex seem much harder to destroy (I was chatting to someone who was trying just that, and failed without using the I/O pins).

    As a flip side, here's [nec.com] a paper about destroying less secure FPGAs and ways to prevent it.

    -- Michael

  • According to Pricewatch, you can get 1GB of PC133 memory (4 x 256MB) for $180. 1GB Ramdisks are already pretty cheap. Heh.
  • Genetic programming [genetic-programming.org] has been going on quite a while. I don't understand why people get so excited when an old idea gets implemented in hardware. Doing something in hardware instead of software is just an optimization. It runs faster, but it still does pretty much the same thing.
  • No, you missed the point. We all use tools we don't understand. I haven't a clue how to fix my car. See section re:mechanics and plumbers. I couldn't possibly be expert in every field, but until now every tool I have ever used has had an expert available somewhere.

    What I am trying to point out here is that for the first time, we may be using tools that NOBODY understands -- and perhaps tools that nobody even CAN understand. We may soon be using tools that are beyond human intelligence.

    Think I'm exaggerating? Go back and read the first article that was on Slashdot about this scientist's work -- there's another link to it somewhere in this discussion. That evolved circuit, which turns a 5v output on and off in response to two specific voice commands, uses about a third of the components that any human designer would have used. In looking at it, the programmer of the evolutionary system says it is unbelievably complex -- he says it looks like God designed the chip. He doesn't understand it. It is entirely possible that nobody ever will. But it works. (within +-5 degrees F anyway -- apparently he hadn't thought to evolve for temperature variation, and not all FPGAs will run this circuit.)

    What practical difference is there between the description above and 'it's magic'?
  • In rereading this, I think it would have been much stronger if I simply omitted the last four paragraphs. They are true to some extent, but they're really stuff I've seen other places -- relatively weak, alarmist thinking.

    If I had it to do over, I'd pull those last four paragraphs out... they weren't the main point anyway.
  • Right, that is exactly it. The circuits in that original article were taking advantage of electrical effects that we do not understand. It is doing something entirely new, and it is very possible that no human designer of any intelligence can ever even understand how the circuit works, much less design a new one.

    And not all FPGAs will run this circuit either. Only certain ones will, and apparently there's no easy way to determine which ones will work.

    Again extrapolating, what happens when we start evolving one-off tools which we don't understand and can't duplicate? If the tool did something important enough, it would become instantly precious. It would be magic..... the equivalent of a holy relic.

  • by Malor ( 3658 ) on Tuesday April 10, 2001 @06:27AM (#303446) Journal
    There was another article about this technology I read a year or two ago, and I got to thinking a lot about it. I think this is very important technology, possibly the most important ever.

    Now, that's a big claim. Most important EVER?

    Consider: no other technological progress to date has ever changed the fundamental nature of Man and his tools. We are the Tool-Using Ape, and all of our technological toys and bombs are just extensions of the first person picking up a stick. He/she could use that stick in many ways, from agriculture to communication to personal hygiene to warfare. Essentially all of the other tools we have ever invented are extensions of that original concept -- specialized forms of stick.

    Our sticks have gotten very complex, to the point where most of us navigate in ignorance through a world of astonishing complexity. Very few of us understand something so simple as indoor plumbing, but we are thoroughly dependent on it. This is all old news and we are well used to it -- if the plumbing breaks, call a plumber. If the car breaks, go see a mechanic. If the computer breaks, post on Slashdot from another one and hope to god you pick out the truly informed answer from the barrage of ignorant suggestions. But I digress. :-)

    This is where this new technology is important. This isn't just a stick anymore. With all of our existing tools, someone, somewhere invented them, and someone somewhere knows how to fix them when they break. No matter how complex or intimidating the tool, if you are willing to devote enough resources to the problem, you find an expert who understands the technology you are using and can fix it. In the cases of unforeseen interactions between complex systems, like computer software, it may be necessary to consult many experts, but ultimately all of the tools we use are sticks, someone invented them, and someone can fix them.

    And that relationship is what is now changing: for the first time since that first person picked up a stick and used it to scratch, we are starting to use tools that we did not invent and that nobody, anywhere, understands.

    This is an incredibly big deal. We are talking about starting to use tools that are essentially not deterministic in their uses. Different versions of the same tool may work better than others, even though nobody quite understands why.

    Consider some of the ramifications: Perhaps your specific tool will have learned how to trace cables or diagnose car engine trouble better than other people's, and you can charge more for your services because of that. Or maybe your gizmo calibrator just doesn't work properly unless you hold it upside down. And what if your computer develops an amazing ability to pick stocks if you consult it after midnight on Tuesday, but sinks into a blue funk and refuses to even talk to you if you fail to address it as "Lord Master Xerxes of Apollo"??

    What we are talking about here is sticks with intelligence. Admittedly this is just the very beginning of what, I presume, will be a long slow (dare I say it) evolution of the tools. Taking things out to an extreme, it is not inconceivable that we might end up with tools that have moral or ethical reservations about being used for what we want to use them for. Or we might see guns that *like* to kill and look for excuses to do so. 'It was my sidearm, Your Honor, it just went off unexpectedly.'

    Arthur C. Clarke's most famous quote was, "Any sufficiently advanced technology is indistinguishable from magic." I really am starting to wonder if we're not going to end up in an age of tools that are so smart that using them resembles incantation and gesture -- in essence the casting of spells.

    What happens when we become dependent on tools that NOBODY understands or invented?

    In a weird sideways tangent, I am reminded of a wise observation that, I believe, Louis L'Amour had about the early war in America between the settlers and the Indians. He pointed out that the Indians lost not because of being outgunned, or because of the sheer numbers of European colonists, but rather because they became dependent on firearms with which to fight, and could not themselves manufacture them. That meant that to fight the colonists, they needed the colonists as arms suppliers. They couldn't drive out the colonists without the colonists' help. Their loss of cultural identity was inescapable as soon as they picked up guns.

    If we flesh intelligences end up at odds with silicon ones, and we are dependent on them and cannot duplicate their services with our old sticks and rocks -- we will lose.

    Like I said, this is a big deal.

  • The author of this article seems to have little idea of what he's talking about.

    The UK experiments used FPGAs, and Starbridge uses FPGAs, but the novelty in the experimental work lies in the use of genetically evolving cell interconnection in the FPGAs, whereas Starbridge's FPGAs are reprogrammed on-the-fly in a completely deterministic manner, not unlike the Crusoe. The Starbridge FPGA hardware isn't mutating genetically, it's merely reconnecting functional units under the control of something like a JIT analyser/compiler to deal in the best way with the incoming instruction stream.

    The author of the article has put two and two together and got five, making FPGAs look dangerous.

    FPGAs are in thousands of products all around us. If they were mutating, we'd know about it PDQ!
  • "What made it stange was the fact that the switch only had one connecting wire. How did the switch work if there wasn't a complete circuit?

    How did it work? Magic."

    How silly of me to think it was capacitance or inductance or both.

  • And get this: Evolution had left five logic cells unconnected to the rest of the circuit, in a position where they should not have been able to influence its workings. Yet if Thompson disconnected them, the circuit failed. Evidently the chip had evolved a way to use the electromagnetic properties of a signal in a nearby cell. But the fact is that Thompson doesn't know how it works.


    Yeah, I've seen source code that worked like this too (remove an unused variable and it stops working), but strangely enough we didn't consider it a great leap forwards... I seem to remember we burnt it and danced on the ashes.

    Also:
    "Hey, my hardware has stopped working..."
    "Sorry, it must have evolved, but we can't fix it cos we don't know how it worked. Have you tried resetting it back to primordial state and waiting 5 years for it to evolve back to useful again ?"

    T
    Well, note that the hardware reconfigures to perform the task to the best of its abilities.
    A part of this task is definitely going to be stability.
    If an area of the chip breaks down, the rest of the circuit will re-adapt to compensate, until it reaches equilibrium again.
    Self-repairing hardware has been studied for ages now, and FPGAs are a very good medium for this.
    As for hackers 'getting in and modifying the code'.. Well, it's a little different in this game.
    Just how, exactly, is the hacker going to alter the code? Where? What line? How the hell does this thing work in the first place?
    Hackers of this breed of computer will doubtless evolve in their own way, but they may well be a different breed to today's.. And I somehow doubt script kiddiedom will survive.
    How can you script an attack against a million machines when each one is different?
    If you step back and look at FPGAs and the tech they're likely to spin off, it looks to be FAR more reliable than what we have now, as it can 'heal' itself from even physical damage.

    Just a thought or two.. :)

    Malk
    Guess what. Discover magazine had an article even earlier: June 1998! Read it from their archives here [208.245.156.153], or search www.discover.com's archives for June 1998 in the Technology section with title keyword "machine". And this article is even MORE informative!

    Letting a computer evolve its own algorithms and reprogram its own hardware to run those algorithms isn't that big a deal. It's not as if it's going to suddenly sprout legs, grow a soldering iron and MacGyver the coffee-pot into a flamethrower with which to kill everything in sight.

    Now, if you were to take these sorts of genetic algorithms in a purely software form, give them networking libraries to enable peer-to-peer communications and propagation throughout a network, and subscribe them to Bugtraq, then you might have a problem. ;-)

    Fucking around with genetics is different. The risks are infinitely greater, due to the scope a rogue genetically-engineered organism has, when compared with a deskbound FPGA. Do a search for Monsanto on Slashdot. Go read Zodiac by Neal Stephenson. Open your mind and consider the possibilities, potential and danger.

    And then ask yourself who's benefitting?


    D.


    Yes, there would be a small problem: no more bugs in the software. MS$ would be out of business.

    That's not really what I was thinking about. But I'll refrain from saying anything further, lest anyone get any ideas...

    I recently read (I think in Forbes or /.) about fuzzy logic being used for profiling new job applicants. This I feel is unfair.

    My attitude is that companies have the right to choose their own recruitment/selection mechanism. As long as it's not illegal (e.g. if ($applicant_gender eq "female") { reject($applicant) } elsif ($applicant_sexual_orientation eq "homosexual") { reject($applicant) } elsif ($applicant_skin_colour ne "white") { reject($applicant) }), then it's up to them how they recruit.

    If they're stupid enough to employ a system that's going to automatically reject all the interesting, creative, talented people just because they didn't finish their degrees or have a gap in their work history because they went trekking in Nepal for six months, then it's probably not going to be the sort of place I want to work at anyway - after all, would you want to work with a bunch of Stepford graduates?

    Think of it like this - if they're not smart enough to realise how fucking good you are, then they're not worth working for. ;-)


    D.

    I saw this on Slashdot 1-2 years ago -- in fact, the article is almost word for word the same.

    All that aside -- this guy's idea is a very bad idea. It's not that the idea of evolutionary algorithms is bad, but rather, the idea of using the analog characteristics of digital circuits to do logical things is bad. Under all the layers of abstraction, digital circuits are actually analog, but they abstractly represent logic within certain noise margins (ie 0-1.2v==logic low and 3.8-5v==logic high). The whole point in going from analog to digital is that it's deterministic (ie easier to maintain and possible to prove its behaviour), and, because of the noise margins, we can assume that a design for a specific family of chips will behave the same on all reasonable chip implementations. In other words, if you want an analog circuit, use analog components (BTW - analog circuits are faster and smaller by nature, but the amount of work required to design and prove that a design is correct is usually far greater. This is why most computer functionality is done in digital). If you want a logic circuit, use digital design.

    Oh -- and one other thing. A *procedure* can only be called an algorithm if it can be proved that it stops at some point, and if it can be proved that it produces the correct result. If we don't know how an analog device works, we can't prove its behavior, therefore these people aren't making algorithms, they are just making devices that do stuff. If they really want to work on making evolutionary algorithms, they should concentrate on making self modifying logic (ie self modifying code in Lisp, Prolog, or self modifying digital designs, etc).
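
    For readers who haven't met the noise-margin idea, a toy illustration in Python (the 0-1.2v and 3.8-5v bands are the ones quoted above; the function itself is just an invented example, not anything from the article):

        def to_logic_level(volts):
            # Anything inside a band reads as a clean 0 or 1; the dead zone in
            # between is the noise margin that makes a digital design portable
            # across chips with slightly different analog behaviour.
            if 0.0 <= volts <= 1.2:
                return 0
            if 3.8 <= volts <= 5.0:
                return 1
            return None  # indeterminate: a sane digital design never relies on this region

        print(to_logic_level(0.4), to_logic_level(4.6), to_logic_level(2.5))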
  • First, this is old news, as many have pointed out several different sources reported on this in '98. This particular story is actually less informative.

    Second, the chips don't freakin mutate and reprogram themselves. An external agent programs the FPGAs with a 'genetic' string and tests the capabilities of that string, then tries other variations based on the previous generation. It is not as if these things just magically mutate themselves to do a particular task; there is much external intervention.

    Thirdly, when I last heard about this guy he was off exploring ways of making the chips more robust. Because the algorithms evolved appear to depend heavily on the analog, non-linear nature of the components in the chip, they were extremely sensitive to temperature variations. Also, some programs evolved on one chip lot would not work on another chip lot, even though they were functionally identical chips. He was looking into ways of testing fitness at a variety of temperatures and using different chip lots in the process, perhaps producing a result less efficient than his original attempts but more robust.

    And now we have heard nothing new from this guy for the last 2.5 years. My guess is that this was just too good for the specialized chip manufacturers to pass up. It's extremely promising technology - but I betcha most of it is patented by now.
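
    For anyone curious what "much external intervention" looks like, the host-side loop is roughly this (a Python sketch; the genome size, mutation rate and the measure_on_fpga() stub are invented stand-ins for the real hardware test the host computer performs):

        import random

        GENOME_BITS = 1800      # length of the configuration bitstring (made up)
        POP_SIZE = 50

        def measure_on_fpga(genome):
            # Stand-in: the real host downloads the bitstring into the chip,
            # plays the test inputs, and scores how well the output separates them.
            return sum(genome) / GENOME_BITS

        def mutate(genome, rate=0.002):
            return [bit ^ 1 if random.random() < rate else bit for bit in genome]

        def crossover(a, b):
            cut = random.randrange(len(a))
            return a[:cut] + b[cut:]

        population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                      for _ in range(POP_SIZE)]
        for generation in range(1000):
            ranked = sorted(population, key=measure_on_fpga, reverse=True)
            parents = ranked[:POP_SIZE // 5]
            population = parents + [
                mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))
            ]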

    -josh

    It would be faster, but it might be a more "fragile" design than chips designed the old-fashioned way. If the fitness criteria you're applying to your chips are very narrow, then they may not have developed a way to deal with inputs which aren't within your training regimen. (Remember, one of the disadvantages of using GAs is that you often can't determine exactly why a particular approach works.) So it may be tough to predict how such a chip would react to bad input, whereas with a chip you've designed yourself you can specifically allow for that.

    You can probably compensate for this problem by broadening your training regimen to include a representative sample of error cases, but for some problems you can't include them all and there may be no way to prove that your chip will be able to handle them all.
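
    One way to phrase that in code: score candidates against a training set that already includes malformed inputs, then sanity-check the winner against error cases held back from training. Everything below (the chip object, behaves_correctly(), the case lists) is invented for illustration:

        import random

        def behaves_correctly(chip, case):
            # Stand-in for applying one input vector (well-formed or not)
            # and comparing the chip's response against the specification.
            return random.random() > 0.1

        training_cases = [("normal", i) for i in range(100)] + \
                         [("glitched", i) for i in range(20)]
        held_out_errors = [("glitched", i) for i in range(20, 40)]

        def training_fitness(chip):
            return sum(behaves_correctly(chip, case) for case in training_cases)

        def robustness_check(chip):
            # Not a proof that every bad input is handled, but a failure here
            # gets caught before the chip goes anywhere important.
            return all(behaves_correctly(chip, case) for case in held_out_errors)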

  • The difference is natural evolution isn't directed to a specific end, so as a result after a few million years life forms are adapted to be survivors more than anything else. I'm just questioning whether directed evolution in a lab will really engender the same hardiness.

    Even natural life forms aren't incredibly stable - all it takes is a new type of germ or a slight change in conditions to decimate or destroy a population. Life goes on, but those individuals have failed, and if we're talking about using GA-designed chips for anything important, you really want to avoid the individuals exhibiting unpredictable failure patterns.

  • If you can combine this with software that evolves in a similar way, even better computers can be made, I think.
  • Actually, "irreducible complexity" is, to be blunt, a fantasy within the mind of Behe that gives him the warm feeling that his rejection of evolutionary theory has a valid scientific basis. You can go to www.talkorigins.org and read rather complete refutations of his work.
  • Imagine an array of these chips, all working away at a better solution to a given problem. Or several arrays, competing. Give it a week, then blank the least-performing arrays, load the software from the best-performing, and give 'em another week. Repeat until satisfied.

    Result: a faster development cycle. Then take the design to silicon, and fab it. . . .
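
    A minimal sketch of that scheme, assuming some score_array() measurement exists (here it's just a random stub) and treating each array's state as a bitstring:

        import random

        N_ARRAYS = 8

        def score_array(design):
            # Stand-in for letting one array chew on the problem for a week
            # and measuring how good its best solution has become.
            return random.random()

        designs = [[random.randint(0, 1) for _ in range(256)] for _ in range(N_ARRAYS)]
        for week in range(10):
            ranked = sorted(designs, key=score_array, reverse=True)
            best = ranked[0]
            # Blank the worst-performing half and reload it from the best design.
            designs = ranked[:N_ARRAYS // 2] + [list(best) for _ in range(N_ARRAYS // 2)]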

  • Well, nature uses a variant of the same method, and has managed to develop a plethora of stable platforms.

    I think of it as evolution in action. . .

  • "...and all you need to do is move in" ... and then freeze your ass off out in orbit when they won't let you land, you obsolete old biological. Next thing you know, they'll be winging their way back to Earth to set us up the bomb and declare that all our base are belong to them.
  • What makes me soil my shorts is how this relates to genetics.

    He said that he saw parts of the circuit that seemed to be useless, yet, when he disconnected them, the circuit failed. When we are altering our own genetic code, are we simply tossing the sections that don't seem to do anything? A lot of our genetic make-up has stuff that doesn't seem to do anything, yet every day we find out that a seemingly useless gene has vast reaches into how we work.

    I would like to see this technology used to emulate existing genomes and see how mutations affect them, before we go tinkering with ourselves. Not that I'm against genetic engineering; I just think nature is much smarter than us at times.
  • Then you core dump, and feel much better.
  • Here's the link... A Story About Magic [tuxedo.org]
    The Jargon File, under "trivia", "anecdotes" or something like that.

    In the story it is assumed that the reason for this behaviour was different ground levels on the chassis and the mainboard.
    The novel part of this article, though, is that the FPGA 'evolves' by reprogramming itself into a better version, over and over.

    That, by the way, sounds suspiciously like the iterative "code morphing" stuff in the Transmeta chips. Just caught my attention ...

    Maybe not. If instead of using hardware (i.e. an FPGA) you used a software simulation of hardware, you could eliminate the analog properties. It would be slower to evolve, but it would be digital.
  • This is quite old, but still interesting.

    See the following article [newscientist.com] from New Scientist for more info.
    It doesn't mention anything about following the restrictions that one normally puts into place regarding self-replicating machinery!

    What if some of this hardware was to evolve into a self replicating machine?

    How is it going to self-replicate without a continuing supply of Xilinx programmable FPGA chips?

    When a computer can overcome that little limitation, then we can start worrying.

    (And yes, I can imagine ways it could happen, but they're all dern unlikely....)
    --

  • From the article...
    All of this takes place not in software but hardware. The chips are called Field Programmable Gate Arrays. The ones Thompson uses come from San Jose chip-maker Xilinx. The transistors of the chip appear as an array of "logic cells," which can be changed in value and connected to any other cell on the fly. By reprogramming a chip's memory, its logic cells can be tuned for any task at hand.

    Is it me or does this sound like the construction of neural pathways in the brain?

    If the raw material of the FPGA becomes abundant enough and the original programming of the chip is sufficiently flexible, could we be looking at a possible computer that evolves billions or trillions of times faster than a biological organism?

    If you look at it this way, given that life took 4 billion years to evolve to the sentient state that humans are at, could we be looking at an AI system as intelligent/creative as we are in a few days/weeks/months? Now let this system develop for a few years; what are the possibilities then?

    just a few thoughts.

    BTW, This seems similar to the technology in Terminator 2.

  • Once again Slashdot has missed the boat.

    Actually, it was posted here [slashdot.org] before.
    ---
    True only for the early / simple GAs. Later GAs (such as messy GAs) do have variable-length chromosomes, and there has been some work done on using a synthetic genetic code as well.

  • The real problem is that evolved hardware can specialize itself very precisely for its environment, and the environment includes everything.
    Some of Thompson's early evolved designs only worked in a narrow temperature range, because that's what they evolved in (see here [newscientist.com] for the article that prompted Slashdot to look at this last time).
    "but you can still take a look at the code generated and work it out"

    If this is the research I think it is, even this "obvious" idea may not be true. As I recall, when this work was done some of the evolved "solutions" were not physically understandable. Some solutions used unexpected effects like capacitance effects from neighboring circuits. But some could not be explained. On a large enough scale you may therefore get devices that cannot be reverse engineered, because some of the side effects that you won't engineer in (because you aren't even aware of them) are crucial.

    Peter
    You don't need a 'genetic' FPGA for this, just a plain old FPGA. Any (large enough) FPGA is reprogrammable to do (almost) anything you want. In fact, kits with FPGAs on PCI cards are fairly inexpensive these days -- with a little hardware design knowledge, you can easily have what you described, at home.

    The novel part of this article, though, is that the FPGA 'evolves' by reprogramming itself into a better version, over and over.

    ----------
  • Will someone mod this guy up to a 4 or a 5 before we're crawling with invention control officers denying individuals the right to experiment in their own homes!

    Please?
  • '(1991) Can you imagine the devastation that could be wrought by malicious thugs when everyone's home computer is connected to everyone else's via this "internet"? I urge slashdot readers to boycott any PCs which use this dangerous new technology.'

    The point I'm making here is that any technology, whether it's GPFPGAs, the Internet, guns, solar power, or hydrogenation of vegetable oils, is capable of both benefit and harm. The view that anything potentially dangerous should be shunned is detrimental to everyone. The only safe way to handle potentially dangerous technology is to understand it, and spread that understanding. You can rest assured that if the open scientific community avoids a technology out of fear, the closed criminal community will only take more dangerous advantage of it. Understanding and knowledge are the only safeguard against the harm that can be caused by technology.

  • I'm not sure that anyone posted this yet:

    comp.ai.genetic [comp.ai.genetic]

    I hope that the link works...
  • The article was by Gary Taubes, and it was in the June 1998 issue of Discover.

    hussar
    Actually, if you read the longer article posted a few days back about this guy, you'll see he hopes to _improve_ reliability. How?

    With GAs. If you make the environment "unfriendly", then the result of your evolutionary process should be a very survivable chip: simulated biological evolution of fault-tolerant and self-healing systems. After all, the human body is one of the most advanced healing systems known to modern medicine.

    If this sounds way too blue-sky, it's easier to get a little more specific. Why not pick some "fault" scenarios to model, then introduce those as conditions during the evolutionary process? For instance, if you want to develop a chip that could survive internal/external cache errors (are you listening, Sun? :), you could create a cache section which emits errors at some random rate/interval, and watch what the chip does to work around it. It might build another cache.

    The important thing in doing this though (as the researcher found out) is varying your conditions a _lot_. When he first ran this on a single FPGA, the design that eventually evolved would only work on that specific chip (not that specific model -- the specific physical sample he evolved it on!), at a specific temperature, with a specific host program, power supply, etc. In other words, it over-specified itself and could not function in other environments.

    For his later experiments he started using multiple examples of the same part number, but from different foundries. He also adjusted environmental conditions somewhat. The hope was to create a more "survivable/versatile" design.

    Fascinating stuff, IMO.
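
    In code terms, "varying your conditions a lot" just means every candidate gets scored across a spread of environments and fault scenarios rather than one lucky setup. A rough Python sketch (the scenario values and the score_design() stub are invented for illustration):

        import random

        SCENARIOS = [
            {"temp_c": 10, "cache_error_rate": 0.00},
            {"temp_c": 25, "cache_error_rate": 0.01},
            {"temp_c": 60, "cache_error_rate": 0.05},
        ]

        def score_design(design, scenario):
            # Stand-in for downloading the design to a chip lot and measuring it
            # at the given temperature with faults injected at the given rate.
            return random.random()

        def robust_fitness(design):
            # Take the *worst* score across scenarios, so evolution can't win by
            # over-specialising to one temperature, one chip, or one fault pattern.
            return min(score_design(design, s) for s in SCENARIOS)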
    Genetic Algorithms technically rearrange a sequence (usually a DNA-looking array of numbers) to answer a problem. Genetic Programming rearranges logical code (or code blocks) to make source code that solves a problem.

    Two years ago I got pretty interested in the idea, and thought it might be possible to make a Pacman game where the ghost logic was generated from a genetic programming session. The idea was to make a bare-bones board and actors, then run them through a billion games and a GP. Hopefully the ghosts would get smarter; of course you need to have a smart Pacman, or teach the Pacman as well, for the ghosts to pick up any real skills.

    The problem comes down to what building blocks you give the GP, and of course the scoring. Simply giving the KillPacman() procedure access to logic gates wouldn't be enough; it would need some info, like access to some useful methods such as CanISeePacMan(), TurnLeft(), MoveForward(), etc. Also, scoring is usually a large factor -- like how much do you reward a generation if it actually kills Pacman, or maybe how fast?

    The furthest I got was making a program that attempted to find prime numbers. I only gave it plus, minus, mult, and div. It was rapidly able to find an algorithm that could generate about 13/100 prime numbers. Next I tried adding mod and exp, and got much better results, but nothing like 100/100.

    I wish it worked better, but there's a lot more to it than one initially thinks; that's why we don't have a chip constantly redesigning itself better and rapidly making super-smart processors.
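
    For anyone who wants to play with the prime-number experiment, the scoring half is the easy bit; a Python sketch (the candidate is just a function standing in for an evolved expression tree):

        def is_prime(n):
            if n < 2:
                return False
            return all(n % d for d in range(2, int(n ** 0.5) + 1))

        def fitness(candidate):
            # Score a formula by how many of its first 100 outputs are prime;
            # the "13/100" above corresponds to a fitness of 13 here.
            return sum(1 for i in range(100) if is_prime(candidate(i)))

        # Euler's n*n + n + 41 does surprisingly well under this measure.
        print(fitness(lambda n: n * n + n + 41))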

    -Jon

    Streamripper [sourceforge.net]

  • What's the deal with small story bodies? I've seen stories with 1 byte, 4 bytes, 10 bytes, and 42 bytes today.
  • Wouldn't most genetically engineered critters have trouble competing with natural lifeforms? We still can't beat evolution.
    ...I thought I had read something like this a few years ago in Discover magazine, so I did some checking and I had [208.245.156.153], back in June of '98 no less. If I remember the article correctly, the circuit displayed all kinds of weird properties... gates that were unconnected to the rest of the circuit that, if removed, caused it to stop working... it would only operate correctly in a specific 8 or 10 degree temperature range... stuff like that.

    I think this is really neat. Now if we could just figure out why the darn thing works the way it does...

    God does not play dice with the universe. Albert Einstein

  • So what happens when Microsoft buy this technology?

    We will have computers that can invent ads and new trademarked gimmick selling lines to display in all parts of your house.

    The technology is capable of spreading; without warning it is going to infect your washing machine, fridge, car and TV.

    Before you know it, Uncle Bill will dictate the way your coffee is made, and the way your pants and shirts are ironed..

    Windows (glass panes in your house, not a dodgy OS) will no longer be transparent; with all the new advertisement campaigns and desktop themes, there simply won't be room for a VIEW any more..

    The word bug will have new meaning. Gone will be the days of 'bug' meaning what Windows has when it manages to crosslink your files for no reason. This is the new bug: this bug is being burnt to death in the shower because WINDOWS decided that your water temp should be several thousand degrees. This bug burns your toast, kills your pets and, worst of all, tapes the wrong program when you vainly try to set the video..

    I believe that we should think, before we unleash such power into the world...

  • You idiot. The article talks about a chip with a fixed number of circuits that are reconnected in different patterns. No new hardware is built during this process. RTFA.
  • This is true for man-made circuits too, my friend. When you design a circuit, you must still test it against every state you can think of, right? So simply subject GA-designed circuits to the same tests you would subject your own designs to, and you're no worse off than I am right now - after all, my region boasts one or two human-designed nuclear power plants for which there undoubtedly exist untested states (and also, I might add, states that have been tested and are known to cause catastrophic failure).

    That these circuits currently only work in a 10C temperature range, and that the best model does not transfer to other chips, because it is so specific to the properties of the chip it was designed for.
    No, I find this the most exciting advancement I've heard of in a long, long time. This could finally trigger the paradigm shift that is due. After millions of iterations, these chips will show emergent properties that will put Lorenz's weather research and studies of the instinctual organizational behavior of termites to shame.

    This makes me want to go back to school and chase a few more degrees. Does it have the potential to be disastrous? Absolutely. The academic in me doesn't care.

    I remember reading an article about this kind of thing, probably in Discover, a long while back. I think they were trying to breed a minimal-gate circuit that would spark when the system heard the sound 'No' or something. Using evolutionary algorithm techniques, they got some phenomenally low gate counts... the trouble was these systems were incredibly sensitive to the environment. It's a general problem with letting circuits breed themselves: they'll end up taking advantage of the oddest things, like RF interference, or (I don't remember the term) the way electricity in one wire is likely to cause a current in a parallel wire, and once you move the circuit from your evolution/test harness, or even change the temperature or RF shielding, there's a good chance nothing will work. The solution is to build more complex test harnesses and test how the circuit responds to a variety of environments, but that starts to get more and more expensive.

    I've heard it said that in the future programming will be more like gardening than building up with legos, but I dunno...
    --
  • If it can evolve isn't it software? If hardware/software could go against its nature then wouldn't Windows be considered hardware?
  • My ex-wife has not evolved, can I get one installed into her?

    DanH
    Cav Pilot's Reference Page [cavalrypilot.com]
  • I think you're using a *very* non-standard definition of 'evolution.' I can't tell exactly where it would come from, but you're involving spontaneous generation and non-applicable biological (e.g., 'growing' an extra FPGA) concepts.

    To evolve is to change incrementally (or possibly in big leaps) over time. That's exactly what genetic algorithms do.

    Evolution as a process is most apparent in biological systems, and that's where most of our ideas come from. However, they're applied by trying to break down the essence of evolution as a process which is implemented by biological systems, rather than as something that is wholly bound within biological concepts.



    (email addr is at acm, not mca)
    We are Number One. All others are Number Two, or lower.

  • I agree with you, for the most part (which is why I enjoy genetic algorithms and complex adaptive systems in general, but rarely get involved with genetic programming, per se). GA's are used very often in function optimization and data fitting, especially in noisy systems.

    However, the little anecdote about the FPGA evolving to unexpectedly use (apparently) field effects from 'unused' gates was pretty interesting. That's something that should give people pause when considering traditionally.

    (email addr is at acm, not mca)
    We are Number One. All others are Number Two, or lower.

  • Last line should have read 'traditionally developed systems.'

    (email addr is at acm, not mca)
    We are Number One. All others are Number Two, or lower.
  • The article states, in part:

    Imagine the philosophical problem this creates.
    What if you build a critical system for, say,
    a nuclear power plant. It works and
    works well, but you don't know how to explain it.
    Can you implement it? Can you rely on it?

    I play with genetic algorithms and I believe that the author's concern is based on a misunderstanding of the nature of the beast.

    Modern software systems are incredibly complex, to the degree that no one even considers the possibility of conducting formal proofs as to the viability of the algorithms involved (I'm not talking about an algorithm like in an encryption subroutine, of course, but the program itself as an algorithm).

    Genetic algorithms, however, exercise code much more extensively, because they iterate through an unimaginable number of combinations. This leads to a programmatic robustness not generally found in traditional programming techniques.

    Can anyone guarantee that any given GA will find all of the possible failure points in a system? Of course not. A GA isn't even guaranteed to find the optimal solution to a given problem (it's more like "a good enough solution in a reasonable amount of time").

    Genetic programming (allowing software to evolve the actual programmatic structure of the application) isn't my deepest interest - I'm far more interested in artificial life. However, both of these areas have the potential to revolutionize not only software (and, from this article, hardware) as a discipline, but also make possible systems that can barely be imagined now.



    (email addr is at acm, not mca)
    We are Number One. All others are Number Two, or lower.

  • At first glance, Darwin's ideas on evolution don't seem to have much to do with computers.

    I imagine the columnist is referring to Darwin's theory of Natural Selection which he later renounced by saying "The idea that something as complex as the eyeball was produced by natural selection makes me ill."

    A computer that evolves may redesign itself in such a way that even its inventors don't know how it's functioning.

    This may be true; however, I highly doubt that the inventors don't know or couldn't figure out how it is functioning. This always brings up the idea that computers some day will pass us up and we will be working for them. My only statement to that would be that from the beginning of time, the creation (in this case the "thinking chip") has never been able to take over the creator (in this case, man).

    HAL, after all, was the machine that could think almost as well as a person,...

    The key word here is almost. Not to mention that was a science fiction story.

    Don't get me wrong, I like the ideas of genetic algorithms and I'm sure they will take us places in computing that we have never been before, and faster than we could have ever imagined. I can't wait to see what the future brings for computing and how things change on the desktop or in the server markets. I just get a little annoyed with the idea that some think this is going to be what takes over the world.

    Of course, this is just my opinion, I could be wrong.
  • The point about the Nuclear plant was interesting.

    So are we going to have an FDA-style body that tests and approves algorithms?

    Think about it: who knows what these little beasties will do? After approval you need a prescription to run one, and that gives patent protection to the "discoverer" of the "genes".

    Then after it's deemed non-harmful (or less profitable) it's released to the public.

    This could get really weird in a hurry.

    The 'unused' sections of the chip are probably contributing to slight variations in voltages and temperatures of other sections.
    This is a big problem with over-optimizing a chip without varying its environment. You end up with a design that works only on the one chip it was tested with, and only under the exact same conditions.

    --
  • ...and it doesn't seem to want to learn that my name isn't Dave.
  • could be assembled to create artificial nervous systems...

    A nervous system? My system can get quite nervous from time to time, ending in a kernel panic :)

    COMPLETELY right, and what I was about to post.
    Of course Slashdot will not give you more than a point, but I believe their rating system to be inversely proportional to value.

    I set my fucking sort order at LOWEST SCORES FIRST.

    This evolving hardware bullshit is 3 YEARS old. jesus.

    This is a rehash of the article from New Scientist a few years back. Run, don't walk to:

    Link to New Scientist article [newscientist.com]

    It talks about the unconnected cells and the way it was trained, and the fact that the circuits only worked at one temperature(!)

  • "My only statement to that would be that from the beginning of time, the creation (in the case the "thinking chip") has never been been able to take over the creator (in this case, man)."

    What, never?

    Cars have never gone faster than a human? Aeroplanes don't fly better than a human? Chess computers don't beat the person that programmed them (hint: most of the time chess computers DO beat the person, if the program they write is any good at all)?

    For that matter do students ever exceed the capabilities of their teacher? Of course.

    The old maxim, "a stream can never go above its source" is WRONG. You CAN make water go higher than its source (check out water hammers).
    The circuit is not self-modifying. An external computer makes the design, downloads it into the FPGA, and then performs the test, noting the results to control the next generation.

    BugBear

  • by donny ( 165416 ) on Tuesday April 10, 2001 @03:05AM (#303508)
    ...about the same guy.

    http://slashdot.org/articles/99/08/27/1238213.shtml [slashdot.org]

    Heck, this one's more informative.

    Donny

  • That's why the thing took 2 weeks to come up with the simple circuit. The point would not be to distribute the mutating setup - which is pretty dedicated to the specific task of the adaptive hardware. The point is to distribute the results, turn all the research into a viable design the same way nature does: we only see what's made it so far in a reasonably successful way.
    A Robot/AI must obey the agents of the US government, whatever their orders may be.

    Isn't that what screwed up the HAL-9000 in that movie?

    -Roy
    Read about a guy (don't know if it's the same one) in New Scientist who did this - it made the cover so should be fairly easy to track down. The guy used a GA to program an FPGA to detect a single tone. He managed to break the previous record for the minimum number of gates required for the task by some amount. However, altering the temperature in the room by more than 1 degree caused it to fail. As did using the same setup on a different chip. It seemed that the result was partially being worked out using minute quantum variations on the original chip. As is often the story with GAs, he managed to produce a highly efficient solution for an incredibly specific problem domain - so it was pretty useless all round. Still a very interesting read if you can find the article.

    Padark
  • What if you build a critical system for, say, a nuclear power plant. It works and works well, but you don't know how to explain it. Can you implement it? Can you rely on it?

    No. Because a state may exist in future that has not been tested. This 'unexplained' circuit may fail when it encounters this state - there is no way to identify it (maybe). Unless you can guarantee that the circuit has seen all states/conditions and you are capable of proving a desired result in every instance - in this example that would be a 'not-meltdown' ;) - then you cannot with certainty implement this solution.

    This particular subject of evolving ICs is very primitive: a lot of number-crunching effort to produce a simple function. Perhaps community computing power could be put to use, as it is in the RC5 encryption-breaking competition or public genome research or private company cancer research....

    But it certainly seems to me that the old issue of the mass of growing complexity (Tower of Babel) will also find the same solution direction as man has in the past.

    I do wonder how the following might be useful in such matters as FPGA programming.
    Knowledge Navigational Mapping thru the Virtual Interaction Configuration [mindspring.com]


    3 S.E.A.S - Virtual Interaction Configuration (VIC) - VISION OF VISIONS!
  • Just think of the ramifications of evolving computers...think, oh, I don't know, the Matrix or something...

    If you select for Matrix-style AIs, then yes. If you base your selection of the best of each generation on their ability to multiply two numbers, then all you'll ever get is a multiplier. A very efficient multiplier, probably, but still just a multiplier.

  • by cthugha ( 185672 ) on Tuesday April 10, 2001 @04:24AM (#303515)

    "What we are dealing w/ is a finite set of possibilities here and true evolution is INFINITE!"

    I always thought the evolutionary possibilities for a particular organism were constrained by their environment. It's true that the environment is quite open and it's very hard to see what some of the possibilities might be, but some things can definitely be ruled out (no organism has photo-receptors tuned to pick up gamma rays, for example, because that wouldn't confer any advantage, and would be a waste of resources).

    It's still natural selection, but in the case of these chips, we're controlling the criteria, and they're much narrower, that's all.

    nVidia's been doing this for quite some time with their GeForce chip...
    It doesn't mention anything about following the restrictions that one normally puts into place regarding self-replicating machinery!

    What if some of this hardware was to evolve into a self-replicating machine?

  • I need to trivialize the concept of an evolving algorithm to get at what a few other posts implied.

    Imagine the task at hand is to navigate a simple maze -- a maze with exactly one entrance, one exit, and no loops. Now imagine that the exit is intentionally close to the entrance and the one path is intentionally circuitous and counter-intuitive. If the algorithm to solve this maze starts evolving based on mistakes and random variations, there is a really good chance it will dwell on the wrong solution. By evolving, there will be bits of the wrong solution left behind in its algorithm.

    This effect can be minimized by saving the state of the algorithm when the deviations occur and backtracking to avoid unnecessary calculations. This requires that you know the proper solution. That is very easy with a maze, and very hard with image recognition, sound recognition, and any other task which is not completely understood from the human point of view.

    Adrian Thompson should do more trials on his evolving algorithm. There should be variations in his results. Two weeks is not a long time for a science experiment. He may get many unremarkable results, a few fantastic ones, and a couple of "just plain strange" ones.
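
    A toy version of the "save state and backtrack" idea, stripped down to a single genome with a checkpoint (the maze, the move alphabet and the scoring are all invented; a real maze would simulate the walk rather than compare strings):

        import random

        TRUE_PATH = "RRDDLDRR"          # the one correct path through a toy maze
        MOVES = "UDLR"

        def score(path):
            # Count positions that agree with the true path.
            return sum(1 for a, b in zip(path, TRUE_PATH) if a == b)

        def mutate(path):
            i = random.randrange(len(path))
            return path[:i] + random.choice(MOVES) + path[i + 1:]

        checkpoint = "".join(random.choice(MOVES) for _ in TRUE_PATH)
        best = score(checkpoint)
        for trial in range(2000):
            candidate = mutate(checkpoint)
            if score(candidate) > best:
                checkpoint, best = candidate, score(candidate)
            # otherwise we backtrack: the deviation is thrown away instead of
            # leaving "bits of the wrong solution" behind in the genome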

    I'll have you all know that I still have my appendix, thank you.

    ----------------------

  • Now, most of this depends on how complex the programmes can be for the FPGAs, but it would be nice to have a complete box loaded with cards of these puppies for specific applications. For example:

    A piece of gaming software downloads additional morphing/rendering code to GFPGA card #1, while loading the game engine into GFPGA card #2.

    Download updated compression algorithms as a compression co-processor.

    Card #1 becomes a highly optimized search engine for your corporate site.

    Card(s) #1 gets GIMP modules

    Yup - it would be nice. But seriously, I don't see it happening at my house in the near future. Although, using cards #1-#4 for rendering near perfect Natalie Portman photos, and a fabber... Hmm....

  • by Dan Hayes ( 212400 ) on Tuesday April 10, 2001 @03:07AM (#303531)
    Genetically programmed FPGAs (field-programmable gate arrays) are not new. This guy has been researching this for several years. What the article does not mention, however, is that occasionally, when his program goes wrong, it actually destroys the chips, rendering them useless.


    Now, imagine in the future you have one of these GPFPGA chips in your PC. What is to stop a malicious hacker (or is it cracker?) from planting a logic bomb in the very hardware of your PC ?


    This technology seems like a retrograde step if we are concerned with the reliability of our hardware. I urge slashdot readers to boycott any PCs which use this dangerous new technology.

  • In the article, the author states that the algorithm adapts to a form that can distinguish between two tones and uses a lot fewer gates than conventional designs. In almost the same breath, he says that there are five seemingly unconnected gates that, if disturbed, cause the system to fail. I'm a hardware designer, and where I come from that is called a BAD DESIGN.

    We had some yahoo give a seminar about this at my uni and he was gushing about how it had come up with a patented voltage regulator topology and how it would revolutionize analog design. Well, an analog design that depends on its environment is simply a sucky analog design. In practice, analog circuits have to work in the harshest, most variable environments. I don't think that us "experts" are useless quite yet.

    I'm not a biologist, but if I remember correctly from my AI class, a Genetic Algorithm always has a fixed number of variables (similar to a fixed DNA length/number of genes in a given species). You, the programmer, still have to determine which variable addresses which part of the problem to be solved, and how (similar to gene sequencing?). So a GA that is supposed to distinguish between two signals, for instance, can't evolve all on its own into a solution for a different "problem", e.g. eliminating all those pesky humans running around. GA is just a technique for obtaining a better solution for a given problem from less ideal solutions. Rather similar to a farmer breeding cows with larger milk output/better feed utilisation from existing cattle, and quite unlike a frog evolving into a prince or some such thing :-).
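
    A tiny sketch of that fixed mapping, with a made-up two-parameter problem: the genome length and what each slice of it means are chosen by the programmer up front, and the GA can only shuffle values inside that frame, never invent a new one:

        import random

        GENOME_LEN = 16   # fixed for the whole run; nothing can grow a 17th bit

        def decode(genome):
            # The programmer decides, once, that bits 0-7 are a threshold and
            # bits 8-15 are a gain; evolution never changes this interpretation.
            threshold = int("".join(map(str, genome[:8])), 2) / 255.0
            gain = int("".join(map(str, genome[8:])), 2) / 16.0
            return threshold, gain

        genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
        print(decode(genome))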
  • ... here [susx.ac.uk].
    This article is about the new combination of two older technologies rather than a totally new idea. FPGAs and GAs have been around for a good while, but combining them successfully is what makes this interesting.

    If the control system is properly programmed, then the chips cannot 'crash' and spiral into uselessness since the control system will go back a step unless the new system is better than the old one.

    It is very possible for computers programmed using evolutionary techniques to do things in ways that you don't understand. It's what is known as an irreducible system - the interactions are so complex that after a few generations you cannot trace the changes back by looking at the finished product; you need a log of the evolution. For a full genetic program, this log could be 20,000 or more generations of 100 or more programs being tested - and that's a lot of log to go through to figure out why & how it works!

    And as for the reliability issues - you don't run these things 'hot' in critical situations! You run them in simulation, throw problems at them and they create very neat solutions. You then take these solutions and combine them to make a very neat, very normal computer program which can be added to and debugged in the normal way. In terms of FPGAs, you would use small, tested circuits to control parts of a critical system like any other small circuit.

    These things aren't meant to be used 'on the fly' - it takes a lot of processing time! They're used to create highly efficient normal circuits which can then be used in normal circuitry.
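
    The "go back a step" control mentioned above is basically hill climbing with a rollback; a minimal Python sketch, where evaluate() is a stand-in for whatever simulation scores a candidate configuration:

        import random

        def evaluate(config):
            # Stand-in for testing the candidate circuit in simulation.
            return -sum((x - 0.5) ** 2 for x in config)

        def perturb(config):
            mutated = list(config)
            i = random.randrange(len(mutated))
            mutated[i] += random.uniform(-0.1, 0.1)
            return mutated

        config = [random.random() for _ in range(8)]
        best = evaluate(config)
        for step in range(10000):
            trial = perturb(config)
            if evaluate(trial) > best:
                config, best = trial, evaluate(trial)
            # otherwise the control system steps back to the previous configuration,
            # which is why the process can't crash and spiral into uselessness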


    Beg:

    The Matrix was a fun movie, and the special effects were great. It shouldn't be used as a caution against artificial intelligence or evolutionary computing though. It was a story, and the story revolved around technology gone awry. It wasn't even a terribly realistic story. Everybody who uses Jurassic Park, or a similar popular pulp-paperback movie, as a trump card against genetic engineering or cloning really needs to learn to think for themselves. It's hard work; it actually involves reading, learning and critical thought, as opposed to being spoon-fed information from an entertainment source.

    Before anybody brings up the current movie which uses a message against a technology imagine the other side of the coin. Somebody using a comic book (which is what most of these movies really are) such as Teenage Mutant Ninja Turtles as an argument in favour of genetic engineering.

    There are a lot of reasons to be cautious when extending almost any technology, as well as reasons in favour of advancing the technology. Listen to evidence as opposed to propaganda or fiction, and determine your own standing on it.

  • Re:Computers that improve themselves [newsobserver.com]

    "And get this: Evolution had left five logic cells unconnected to the rest of the circuit, in a position where they should not have been able to influence its workings. Yet if Thompson disconnected them, the circuit failed. Evidently the chip had evolved a way to use the electromagnetic properties of a signal in a nearby cell. But the fact is that Thompson doesn't know how it works."

    Not bad for a new-born.

    After being taught gah-gah goo-goo,
    it goes on to prove the computational value of quantum mechanical entanglements.

    We'll have to keep an eye on this toddler; (like that little girl in MIB with the two physics books).
    It may just go on to prove human beings are as useless as egg-shells.

    Ripley's believe it or knit.

    Hmm ... instead of this evolving microchips thing, how about inventing memory chips that can duplicate themselves? I know I for one wouldn't mind having a DIMM stick or two that breeds and multiplies like rabbits. I hear a 1GB ramdisk calling my name ...
  • I've got some silicone that seems to have a life of its own!
  • by deran9ed ( 300694 ) on Tuesday April 10, 2001 @02:58AM (#303570) Homepage
    Downside:
    Windows running this would:

    Turn your 1ghz box into a 386 that bluescreens

    Upside
    Linux running this would:

    Create its own code for its own kernels

    FreeBSD running this would:

    spin off and create a port for something other than i386 arch

    OpenBSD running this would:

    Not allow anything to ever be installed because it may be deemed insecure

    NetBSD running this would:

    Create another port, then wait a year or two to let you use it

    view the source Luke! [antioffline.com]

  • Generally, Asimov's robots weren't intelligent enough to get hung up on stuff like that. But he did a few stories with robots that had broken through to a sort of meta-level where they looked at the good of humanity as a whole...
    I would describe this work as a learning computer, not an evolving computer. You are correct in saying that true evolution means having the chips basically create something to better themselves from that which did not previously exist - AKA they grow an extra FPGA to increase their processing power. I call that impossible, which is why I also see 'evolution', in its strictest sense, as a repulsively foolish concept.

    If, on the other hand, they were given the materials to build an FPGA, and general instructions on how an FPGA works, perhaps then they could improve on the original design. Sort of like humankind does today through scientific research. We have been given a basic design (with many variations of the basic design - our bodies), and have been able to increase our life expectancy and quality of life through learning about ourselves, our world, our universe, etc. To claim this as being a vision of 'evolution' is silly. Let's remember that someone created the FPGA machines in the first place: humans. They did not just magically form themselves from a bunch of spare parts in a computer lab.

    OK, 'grow' was an ambiguous word for my argument. I was trying to point to the fact that this machine could not 'mutate' new code or new parts without a premise to work with. It could 'improve' upon, or learn better ways to implement, the parts and code it has, but creating completely new code or parts would be impossible.

    For instance, let's say this FPGA computer could alter its FPGA (or even build new ones) to increase computational speed. Fine, I agree that that is within the realm of possibility. But it would be a mathematical impossibility for this computer to spontaneously generate a new piece of software or hardware that was outside of the realm of abilities its creator bestowed upon it. Let's say we do not give this computer a device, knowledge, or method of communicating with any other computers. It cannot develop a communication protocol, because it has no concept of such a thing. Plus, it would need to develop both the software AND the hardware, at the same time, working together, to properly begin communications with other computers. So my argument is that computers will not take over the world. They're confined to the knowledge we build them with.

  • When we finally get robots that can understand Asimov's Laws of Robotics, and we learn how to program them in, they won't be the altruistic platitudes the writer handed down to us. They're going to look more like this:

    1. Make me rich.
    2. Don't fuck with me.
    3. Fuck everyone else.

    --Blair
  • <sarcasm>

    And of course there's no way around that.... No... Of course Asimov didn't spend the better part of his career writing books about unexpected effects and ways to sidestep and circumvent those laws..

    </sarcasm>

    Whilst we're on the subject, I've been thinking about Asimov's laws recently and decided that they are complete crap. How the hell is a robot supposed to determine what is human or not? In addition, how is it supposed to work out whether any action could indirectly harm a human? Give it a book on Chaos Theory and all of a sudden, picking up a can of Coke in Utah too quickly will cause an earthquake in China. All very silly.

    Just a thought. Whilst we were here.
  • Uhh...does anyone else find this as frightening as I do? Just think of the ramifications of evolving computers...think, oh, I don't know, the Matrix or something...

    When a true AI is finally developed, somebody is going to be smart enough to deeply ingrain Asimov's 3 laws of robotics (or something similar) into them.

    A Robot/AI must not hurt a human in any way
    A Robot/AI must obey all orders from humans as long as they don't conflict with the first law
    A Robot/AI must protect itself as long as this doesn't conflict with laws 1 and 2

    That just about solves our problems doesn't it?

  • This is what I waited for!

    OK, now that these machines can really reassemble themselves, let's give them the possibility to collect and produce their own resources. Construct an initial seed of nanobots, put them into a small rocket, and send them to any planet that seems to be habitable for humans. There the bots would reproduce themselves with the materials they find on that particular planet. Based on their initial "gene-code" they would be programmed to build architectural facilities for humans. Due to their evolutionary design they could adapt to regional specialities (such as gravitational and climatic issues), i.e. make very thick walls where radiation is high, etc. Just like techno-termites using their own bodies as building material for the anthill. You could fire some thousand seeds into space now, wait a few hundred years until the technoparasites have made up a small colony for you somewhere, and all you need to do is move in. (Do not forget to bring your coffee mug, Linux box, plant, etc.)

    a very humble simulation of this can be seen here [uni-weimar.de] (german)
    Translation by google here [google.com]
    Conceptual Flash-movie here [uni-weimar.de] (click first link)
