Microchips That Evolve
An Onymous Coward writes: "A scientist in Britain has apparently developed some genetic algorithms that run on microchips that can alter themselves, in other words, hardware that evolves. Pretty cool. Check out the column by Paul Gilster." The article contributes some background I'd never heard before regarding FPGAs, and even mentions the dogged Starbridge. A short but fun read.
Re:Asimov's 3 Laws of Robotics (Score:1)
Re:I love these articles (Score:2)
Darwin did no such thing. Is selectively misquoting from a book which is freely downloadable the best creationists can come up with these days? Here's the correct quote, from "The Origin of Species", chapter 6:
To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree. Yet reason tells me, that if numerous gradations from a perfect and complex eye to one very imperfect and simple, each grade being useful to its possessor, can be shown to exist; if further, the eye does vary ever so slightly, and the variations be inherited, which is certainly the case; and if any variation or modification in the organ be ever useful to an animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, can hardly be considered real.
Darwin foresaw this line of objection and disarmed it before he even published. Using this argument nearly 150 years later, and claiming that Darwin believed it a fatal flaw in his theory, is intellectually dishonest in the extreme. You, sir, should be thoroughly ashamed of yourself.
Re:I'm worried about this. --- Relax a little! (Score:2)
More modern devices are harder to destroy. If you mis-configure your I/O pins then you're in trouble, but within the array, devices like the Xilinx Virtex seem much harder to destroy (I was chatting to someone who was trying to do just that, but failed without using the I/O pins).
As a flip side, here's [nec.com] a paper about destroying less secure FPGAs and ways to prevent it.
-- Michael
Re:Can never have too much of a good thing (Score:1)
Re:I'm worried about this. It could be a disaster. (Score:1)
Re:Too many old sci-fi novels? (Score:1)
What I am trying to point out here is that for the first time, we may be using tools that NOBODY understands -- and perhaps tools that nobody even CAN understand. We may soon be using tools that are beyond human intelligence.
Think I'm exaggerating? Go back and read the first article that was on Slashdot about this scientist's work -- there's another link to it somewhere in this discussion. That evolved circuit, which turns a 5v output on and off in response to two specific voice commands, uses about a third of the components that any human designer would have used. In looking at it, the programmer of the evolutionary system says it is unbelievably complex -- he says it looks like God designed the chip. He doesn't understand it. It is entirely possible that nobody ever will. But it works. (within +-5 degrees F anyway -- apparently he hadn't thought to evolve for temperature variation, and not all FPGAs will run this circuit.)
What practical difference is there between the description above and 'it's magic'?
after sealing the envelope (Score:2)
If I had it to do over, I'd pull those last four paragraphs out... they weren't the main point anyway.
Re:Too many old sci-fi novels? (Score:2)
And not all FPGAs will run this circuit either. Only certain ones will, and apparently there's no easy way to determine which ones will work.
Again extrapolating, what happens when we start evolving one-off tools which we don't understand and can't duplicate? If the tool did something important enough, it would become instantly precious. It would be magic... the equivalent of a holy relic.
I had some thoughts when I read the original.... (Score:5)
Now, that's a big claim. Most important EVER?
Consider: no other technological progress to date has ever changed the fundamental nature of Man and his tools. We are the Tool-Using Ape, and all of our technological toys and bombs are just extensions of the first person picking up a stick. He/she could use that stick in many ways, from agriculture to communication to personal hygiene to warfare. Essentially all of the other tools we have ever invented are extensions of that original concept -- specialized forms of stick.
Our sticks have gotten very complex, to the point where most of us navigate in ignorance through a world of astonishing complexity. Very few of us understand something so simple as indoor plumbing, but we are thoroughly dependent on it. This is all old news and we are well used to it -- if the plumbing breaks, call a plumber. If the car breaks, go see a mechanic. If the computer breaks, post on Slashdot from another one and hope to god you pick out the truly informed answer from the barrage of ignorant suggestions. But I digress. :-)
This is where this new technology is important. This isn't just a stick anymore. With all of our existing tools, someone, somewhere invented them, and someone somewhere knows how to fix them when they break. No matter how complex or intimidating the tool, if you are willing to devote enough resources to the problem, you find an expert who understands the technology you are using and can fix it. In the cases of unforeseen interactions between complex systems, like computer software, it may be necessary to consult many experts, but ultimately all of the tools we use are sticks, someone invented them, and someone can fix them.
And that relationship is what is now changing: for the first time since that first person picked up a stick and used it to scratch, we are starting to use tools that we did not invent and that nobody, anywhere, understands.
This is an incredibly big deal. We are talking about starting to use tools that are essentially not deterministic in their uses. Different versions of the same tool may work better than others, even though nobody quite understands why.
Consider some of the ramifications: Perhaps your specific tool will have learned how to trace cables or diagnose car engine trouble better than other people's, and you can charge more for your services because of that. Or maybe your gizmo calibrator just doesn't work properly unless you hold it upside down. And what if your computer develops an amazing ability to pick stocks if you consult it after midnight on Tuesday, but sinks into a blue funk and refuses to even talk to you if you fail to address it as "Lord Master Xerxes of Apollo"??
What we are talking about here is sticks with intelligence. Admittedly this is just the very beginning of what, I presume, will be a long slow (dare I say it) evolution of the tools. Taking things out to an extreme, it is not inconceivable that we might end up with tools that have moral or ethical reservations about being used for what we want to use them for. Or we might see guns that *like* to kill and look for excuses to do so. 'It was my sidearm, Your Honor, it just went off unexpectedly.'
Arthur C. Clarke's most famous quote was, "Any sufficiently advanced technology is indistinguishable from magic." I really am starting to wonder if we're not going to end up in an age of tools that are so smart that using them resembles incantation and gesture -- in essence the casting of spells.
What happens when we become dependent on tools that NOBODY understands or invented?
In a weird sideways tangent, I am reminded of a wise observation that, I believe, Louis L'Amour had about the early war in America between the settlers and the Indians. He pointed out that the Indians lost not because of being outgunned, or because of the sheer numbers of European colonists, but rather because they became dependent on firearms with which to fight, and could not themselves manufacture them. That meant that to fight the colonists, they needed the colonists as arms suppliers. They couldn't drive out the colonists without the colonists' help. Their loss of cultural identity was inescapable as soon as they picked up guns.
If we flesh intelligences end up at odds with silicon ones, and we are dependent on them and cannot duplicate their services with our old sticks and rocks -- we will lose.
Like I said, this is a big deal.
Article is incorrect regarding Starbridge (Score:2)
The UK experiments used FPGAs, and Starbridge uses FPGAs, but the novelty in the experimental work lies in the use of genetically evolving cell interconnection in the FPGAs, whereas Starbridge's FPGAs are reprogrammed on-the-fly in a completely deterministic manner, not unlike the Crusoe. The Starbridge FPGA hardware isn't mutating genetically, it's merely reconnecting functional units under the control of something like a JIT analyser/compiler to deal in the best way with the incoming instruction stream.
The author of the article has put two and two together and got five, making FPGAs look dangerous.
FPGAs are in thousands of products all around us. If they were mutating, we'd know about it PDQ!
Re:Unconnected Switches (Score:2)
How did it work? Magic."
How silly of me to think it was capacitance or inductance or both.
Shurely shome mishtake (Score:2)
Yeah, I've seen source code that worked like this too (remove an unused variable and it stops working), but strangely enough we didn't consider it a great leap forwards... I seem to remember we burnt it and danced on the ashes.
Also:
"Hey, my hardware has stopped working..."
"Sorry, it must have evolved, but we can't fix it cos we don't know how it worked. Have you tried resetting it back to primordial state and waiting 5 years for it to evolve back to useful again ?"
T
Re:I'm worried about this. It could be a disaster. (Score:1)
A part of this task is definitely going to be stability.
If an area of the chip breaks down, the rest of the circuit will re-adapt to compensate, until it reaches equilibrium again.
Self-repairing hardware has been studied for ages now, and FPGAs are a very good medium for this.
As for hackers 'getting in and modifying the code'.. Well, it's a little different in this game.
Just how, exactly, is the hacker going to alter the code? Where? What line? How the hell does this thing work in the first place?
Hackers of this breed of computer will doubtless evolve in their own way, but they may well be a different breed from today's... And I somehow doubt script kiddiedom will survive.
How can you script an attack against a million machines when each one is different?
If you step back and look at FPGA and the tech it's likely to spin off, it looks to be FAR more reliable than what we have now, as it can 'heal' itself from even physical damage.
Just a thought or two..
Malk
Re:Here is a completely unrelated Slashdot article (Score:1)
Re:Umm.... (Score:2)
Re:Umm.... (Score:2)
Not really news (Score:1)
All that aside -- this guy's idea is a very bad idea. It's not that the idea of evolutionary algorithms is bad, but rather that using the analog characteristics of digital circuits to do logical things is bad. Under all the layers of abstraction, digital circuits are actually analog, but they abstractly represent logic within certain noise margins (ie 0-1.2v == logic low and 3.8-5v == logic high). The whole point in going from analog to digital is that it's deterministic (ie easier to maintain and possible to prove its behaviour), and, because of the noise margins, we can assume that a design for a specific family of chips will behave the same on all reasonable chip implementations. In other words, if you want an analog circuit, use analog components (BTW - analog circuits are faster and smaller by nature, but designing one and proving the design correct is usually way more work. This is why most computer functionality is done in digital). If you want a logic circuit, use digital design.
Oh -- and one other thing. A *procedure* can only be called an algorithm if it can be proved that it stops at some point, and if it can be proved that it produces the correct result. If we don't know how an analog device works, we can't prove its behavior, therefore these people aren't making algorithms, they are just making devices that do stuff. If they really want to work on making evolutionary algorithms, they should concentrate on making self modifying logic (ie self modifying code in Lisp, Prolog, or self modifying digital designs, etc).
A few points (Score:2)
Second, the chips don't freakin' mutate and reprogram themselves. An external agent programs the FPGAs with a 'genetic' string and tests the capabilities of that string, then tries other variations based on the previous generation. It is not as if these things just magically mutate themselves to do a particular task; there is much external intervention.
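To make that loop concrete, here's a minimal Python sketch of the kind of external search harness being described -- my own reconstruction, not Thompson's actual code; evaluate_on_fpga() and all the numbers are made-up placeholders:

```python
# A minimal sketch of the external loop: the chip never changes itself; this
# program downloads a candidate configuration string, measures how well it
# performs, and breeds the next generation from the best performers.
import random

GENOME_BITS = 1800        # illustrative size of the configuration string
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.002

def evaluate_on_fpga(genome):
    # Placeholder: in a real rig this would program the FPGA with `genome`
    # and measure the circuit's response to the test signals.
    return sum(genome)    # dummy score so the sketch runs stand-alone

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=evaluate_on_fpga, reverse=True)
    parents = population[:POP_SIZE // 5]          # keep the fittest fifth
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
```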
Thirdly, when I last heard about this guy he was off exploring ways of making the chips more robust. Because the algorithms evolved appear to depend heavily on the analog, non-linear nature of the components in the chip, they were extremely sensitive to temperature variations. Also, some programs evolved on one chip lot would not work on another chip lot, even though they were functionally identical chips. He was looking into ways of testing fitness at a variety of temperatures and using different chip lots in the process, perhaps producing a result less efficient than his original attempts but more robust.
And now we have heard nothing new from this guy for the last 2.5 years. My guess is that this was just too good for the specialized chip manufacturers to pass up. It's extremely promising technology - but I betcha most of it is patented by now.
-josh
Re:. . .but it could be a GODSEND to development. (Score:1)
It would be faster, but it might be a more "fragile" design than chips designed the old-fashioned way. If the fitness criteria you're applying to your chips are very narrow, then they may not have developed a way to deal with inputs which aren't within your training regimen. (Remember, one of the disadvantages of using GAs is that you often can't determine exactly why a particular approach works.) So it may be tough to predict how such a chip would react to bad input, whereas with a chip you've designed yourself you can specifically allow for that.
You can probably compensate for this problem by broadening your training regimen to include a representative sample of error cases, but for some problems you can't include them all and there may be no way to prove that your chip will be able to handle them all.
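As a rough sketch of what folding error cases into the fitness score could look like (entirely hypothetical -- run_candidate(), the case lists and the safe_output value are all invented for illustration):

```python
# Hypothetical sketch: penalize candidates that only handle the happy path by
# scoring them over normal cases *and* a sample of malformed inputs.
def run_candidate(candidate, test_input):
    # Placeholder for programming the device with `candidate`, applying
    # `test_input`, and reading back whatever the device outputs.
    return None

def fitness(candidate, normal_cases, error_cases, safe_output):
    score = sum(run_candidate(candidate, x) == expected
                for x, expected in normal_cases)
    # Reward a known-safe response to inputs outside the normal regimen, so the
    # evolved design isn't fit only within the narrow training set.
    score += sum(run_candidate(candidate, bad) == safe_output
                 for bad in error_cases)
    return score
```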
Re:. . .but it could be a GODSEND to development. (Score:1)
The difference is that natural evolution isn't directed to a specific end, so after a few million years life forms are adapted to be survivors more than anything else. I'm just questioning whether directed evolution in a lab will really engender the same hardiness.
Even natural life forms aren't incredibly stable - all it takes is a new type of germ or a slight change in conditions to decimate or destroy a population. Life goes on, but those individuals have failed, and if we're talking about using GA-designed chips for anything important, you really want to avoid the individuals exhibiting unpredictable failure patterns.
Even better (Score:1)
Re:Not truely Evolution...Just smart. (Score:1)
. . .but it could be a GODSEND to development. . . (Score:1)
Result: a faster development cycle. Then take the design to silicon, and fab it. . . .
Re:. . .but it could be a GODSEND to development. (Score:1)
I think of it as evolution in action. . .
Re:Let's go explore the universe (Score:1)
Re:I'm worried about this. It could be a disaster. (Score:1)
He said that he saw parts of the circuit that seemed to be useless, yet when he disconnected them, the circuit failed. When we are altering our own genetic code, are we simply tossing the sections that don't seem to do anything? A lot of our genetic make-up has stuff that doesn't seem to do anything, yet every day we find out that a seemingly useless gene has vast reaches into how we work.
I would like to see this technology used to emulate existing genomes and see how mutations affect them, before we go tinkering with ourselves. Not that I'm against genetic engineering; I just think nature is much smarter than us at times.
Re:Consequence? (Score:1)
Re:Unconnected Switches (Score:1)
Re:Unconnected Switches (Score:1)
In the story it is assumed that the reason for this behaviour was different ground levels on the chassis and the mainboard.
Re:Okay - this could be handy... (Score:1)
That, by the way, sounds suspiciously like the iterative "code morphing" stuff in the Transmeta chips. Just caught my attention ...
Re:Technique is useless for digital hardware (Score:1)
Old. See: New Scientist, 15 November 1997 (Score:1)
See the following article [newscientist.com] from New Scientist for more info.
Not to worry, yet... (Was: Re:Umm....) (Score:1)
How is it going to self-replicate without a continuing supply of Xilinx programmable FPGA chips?
When a computer can overcome that little limitation, then we can start worrying.
(And yes, I can imagine ways it could happen, but they're all dern unlikely....)
--
Sounds like the brain. (Score:1)
All of this takes place not in software but hardware. The chips are called Field Programmable Gate Arrays. The ones Thompson uses come from San Jose chip-maker Xilinx. The transistors of the chip appear as an array of "logic cells," which can be changed in value and connected to any other cell on the fly. By reprogramming a chip's memory, its logic cells can be tuned for any task at hand.
Is it me, or does this sound like the construction of neural pathways in the brain?
If the raw material of the FPGA becomes abundant enough and the original programming of the chip is sufficiently flexible, could we be looking at a computer that evolves billions or trillions of times faster than a biological organism?
If you look at it this way -- given that life took 4 billion years to evolve to the sentient state humans are at -- could we be looking at an AI system as intelligent/creative as we are in a few days/weeks/months? Now let this system develop for a few years; what are the possibilities then?
Just a few thoughts.
BTW, This seems similar to the technology in Terminator 2.
Re:Nothing New (Score:2)
Actually, it was posted here [slashdot.org] before.
---
Re:Not really *Evolution* (Score:1)
It's actually worse than that... (Score:1)
Some of Thompson's early evolved designs only worked in a narrow temperature range, because that's what they evolved in (see here [newscientist.com] for the article that prompted Slashdot to look at this last time).
Re:Too many old sci-fi novels? (Score:1)
If this is the research I think it is, even this "obvious" idea may not be true. As I recall, when this work was done some of the evolved "solutions" were not physically understandable. Some solutions used unexpected effects like capacitance from neighbouring circuits. But some could not be explained. On a large enough scale you may therefore get devices that cannot be reverse engineered, because some of the side effects that you would never engineer in (because you aren't even aware of them) are crucial.
Peter
Re:Okay - this could be handy... (Score:2)
The novel part of this article, though, is that the FPGAs 'evolve' by being reprogrammed to a better version, over and over.
----------
Fear gets 3, realist gets 2, welcome 2 slashdot (Score:1)
Please?
Re:I'm worried about this. It could be a disaster. (Score:4)
'(1991) Can you imagine the devastation that could be wrought by malicious thugs when everyone's home computer is connected to everyone else's via this "internet"? I urge slashdot readers to boycott any PCs which use this dangerous new technology.'
The point I'm making here is that any technology, whether it's GPFPGAs, the Internet, guns, solar power, or the hydrogenation of vegetable oils, is capable of both benefit and harm. The view that anything potentially dangerous should be shunned is detrimental to everyone. The only safe way to handle potentially dangerous technology is to understand it, and spread that understanding. You can rest assured that if the open scientific community avoids a technology out of fear, the closed criminal community will only take more dangerous advantage of it. Understanding and knowledge are the only safeguard from the harm that can be caused by technology.
Newsgroup (Score:1)
comp.ai.genetic [comp.ai.genetic]
I hope that the link works...
Re:Nothing New (Score:1)
hussar
A better story covering some of the same ground... (Score:1)
Re:I'm worried about this. It could be a disaster. (Score:3)
With GAs, if you make the environment unfriendly, then the result of your evolutionary process should be a very survivable chip: simulated biological evolution of fault-tolerant, self-healing systems. After all, the human body is one of the most advanced healing systems in modern medicine.
If this sounds way too blue-sky, it's easy to get a little more specific. Why not pick some "fault" scenarios to model, then introduce those as conditions during the evolutionary process? For instance, if you want to develop a chip that could survive internal/external cache errors (are you listening, Sun?), create a cache section which emits errors at some random rate/interval, and watch what the chip does to work around it. It might build another cache.
The important thing in doing this though (as the researcher found out) is varying your conditions a _lot_. When he first ran this on a single FPGA, the design that eventually evolved would only work on that specific chip (not that specific model, the specific physical sample he evolved it on!), at a specific temperature, with a specific host program, power supply, etc. -- i.e. it over-specialized itself and could not function in other environments.
For his later experiments he started using multiple examples of the same part number, but from different foundries. He also adjusted environmental conditions somewhat. The hope was to create a more "survivable/versatile" design.
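Something like the following is roughly what that kind of "vary everything" fitness test might look like -- purely my own guess at the shape of it; evaluate(), the chip list and the fault list are invented for illustration:

```python
# Hypothetical fault-injected fitness: each candidate is scored across several
# physical chips, temperatures, and randomly injected faults (like the flaky
# cache above), so a winner can't over-specialize to one cosy environment.
import random

CHIPS = ["sample_A", "sample_B", "sample_C"]   # different physical samples/foundries
TEMPERATURES_C = [0, 25, 70]
FAULTS = [None, "cache_bit_flip", "cache_line_dead"]

def evaluate(candidate, chip, temperature, injected_fault):
    # Placeholder for programming `chip`, holding it at `temperature`,
    # injecting `injected_fault`, and measuring task performance in [0, 1].
    return random.random()    # dummy value so the sketch runs stand-alone

def robust_fitness(candidate):
    total = 0.0
    for chip in CHIPS:
        for temp in TEMPERATURES_C:
            total += evaluate(candidate, chip, temp, random.choice(FAULTS))
    return total / (len(CHIPS) * len(TEMPERATURES_C))
```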
Fascinating stuff, IMO.
Genetic Algorithms vs. Genetic Programming (Score:1)
Two years ago I got pretty interested in the idea, and thought it might be possible to make a Pacman game where the ghost logic was generated from a genetic programming session. The idea was to make a bare-bones board and actors, then run them through a billion games and a GP. Hopefully the ghosts would get smarter; of course you need to have a smart Pacman, or teach the Pacman as well, for the ghosts to pick up any real skills.
The problem got into what building blocks you give the GP, and of course the scoring. Simply giving the KillPacman() procedure access to logic gates wouldn't be enough; it would need some info, like access to some useful methods: CanISeePacMan(), TurnLeft(), MoveForward(), etc. Also, scoring is usually a large factor, like how much do you reward a generation if it actually kills Pacman, or maybe how fast?
The furthest I got was making a program that attempted to find prime numbers. I only gave it plus, minus, mult, and div. It was rapidly able to find an algorithm that could generate about 13/100 prime numbers. Next I tried adding mod and exp, and got much better results, but nothing like 100/100.
I wish it worked better, but there's a lot more to it than one initially thinks; that's why we don't have a chip constantly redesigning itself better and rapidly making super-smart processors.
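For what it's worth, here's a toy Python reconstruction of that prime-number experiment -- my own guesses throughout: the operator set is integer-only, exp is left out to avoid huge intermediate values, and the "mutation" is deliberately crude:

```python
# Evolve small arithmetic expressions in n and score them by how many of their
# first 100 outputs are prime.  Simplified sketch, not the poster's original.
import random

OPS = ["+", "-", "*", "//", "%"]   # adding ** needs a guard against huge exponents

def random_expr(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["n", str(random.randint(1, 10))])
    return "(%s %s %s)" % (random_expr(depth - 1), random.choice(OPS),
                           random_expr(depth - 1))

def is_prime(x):
    return x >= 2 and all(x % d for d in range(2, int(x ** 0.5) + 1))

def fitness(expr):
    hits = 0
    for n in range(1, 101):
        try:
            v = eval(expr, {"n": n})
        except (ZeroDivisionError, OverflowError):
            continue
        if 2 <= v < 10**9 and is_prime(v):   # cap keeps trial division cheap
            hits += 1
    return hits

def mutate(expr):
    # Crude variation: splice a survivor with a fresh random subexpression.
    return "(%s %s %s)" % (expr, random.choice(OPS), random_expr(2))

population = [random_expr() for _ in range(100)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = (survivors
                  + [mutate(random.choice(survivors)) for _ in range(45)]
                  + [random_expr() for _ in range(45)])

best = max(population, key=fitness)
print(best, "->", fitness(best), "primes among its first 100 outputs")
```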
-Jon
Streamripper [sourceforge.net]
Small bodies? (Score:1)
Re:Umm.... (Score:1)
You know... (Score:1)
I think this is really neat. Now if we could just figure out why the darn thing works the way it does...
God does not play dice with the universe. Albert Einstein
What if this were to fall into the wrong hands? (Score:1)
We will have computers that can invent ads and new trademarked gimmicky selling lines to display in every part of your house.
The technology is capable of spreading; without warning it is going to infect your washing machine, fridge, car and TV...
Before you know it, Uncle Bill will dictate the way your coffee is made, and the way your pants and shirts are ironed...
Windows (glass panes in your house, not a dodgy OS) will no longer be transparent; with all the new advertisement campaigns and desktop themes, there simply won't be room for a VIEW any more...
The word 'bug' will have a new meaning. Gone will be the days of 'bug' meaning what Windows has when it manages to crosslink your files for no reason. This is the new bug: this bug is being burnt to death in the shower because WINDOWS decided that your water temp should be several thousand degrees. This bug burns your toast, kills your pets and, worst of all, tapes the wrong program when you vainly try to set the video...
I believe that we should think before we unleash such power into the world...
Re:Let's go explore the universe (Score:1)
Re:Ummmm No. (Score:2)
This is true for man-made circuits too, my friend. When you design a circuit, you must still test it against every state you can think of, right? So simply subject GA-designed circuits to the same tests you would subject your own designs to, and you're no worse off than I am right now - after all, my region boasts one or two human-designed nuclear power plants for which there undoubtedly exist untested states (and also, I might add, states that have been tested and are known to cause catastrophic failure).
Keep in mind... (Score:1)
Re:Umm.... (Score:1)
This makes me want to go back to school and chase a few more degrees. Does it have the potential to be disastrous? Absolutely. The academic in me doesn't care.
evolving hardware *very* sensitive to environment (Score:2)
I've heard it said that in the future programming will be more like gardening than building up with legos, but I dunno...
--
Wait a second... (Score:1)
Would like to install that (Score:1)
DanH
Cav Pilot's Reference Page [cavalrypilot.com]
Re:Not truely Evolution...Just smart. (Score:1)
To evolve is to change incrementally (or possibly in big leaps) over time. That's exactly what genetic algorithms do.
Evolution as a process is most apparent in biological systems, and that's where most of our ideas come from. However, they're applied by trying to break down the essence of evolution as a process which is implemented by biological systems, rather than as something that is wholly bound within biological concepts.
(email addr is at acm, not mca)
We are Number One. All others are Number Two, or lower.
Re:GA based systems unreliable? Hardly. (Score:1)
However, the little anecdote about the FPGA evolving to unexpectedly use (apparently) field effects from 'unused' gates was pretty interesting. That's something that should give people pause when considering traditionally designed systems, too.
(email addr is at acm, not mca)
We are Number One. All others are Number Two, or lower.
d'oh! posted too soon... (Score:1)
(email addr is at acm, not mca)
We are Number One. All others are Number Two, or lower.
GA based systems unreliable? Hardly. (Score:2)
Imagine the philosophical problem this creates. What if you build a critical system for, say, a nuclear power plant? It works and works well, but you don't know how to explain it. Can you implement it? Can you rely on it?
I play with genetic algorithms and I believe that the author's concern is based on a misunderstanding of the nature of the beast.
Modern software systems are incredibly complex, to the degree that no one even considers the possibility of conducting formal proofs as to the viability of the algorithms involved (I'm not talking about an algorithm like in an encryption subroutine, of course, but the program itself as an algorithm).
Genetic algorithms, however, exercise code much more extensively, because they iterate through an unimaginable number of combinations. This leads to a programmatic robustness not generally found in traditional programming techniques.
Can anyone guarantee that any given GA will find all of the possible failure points in a system? Of course not. A GA isn't even guaranteed to find the optimal solution to a given problem (it's more like "a good enough solution in a reasonable amount of time").
Genetic programming (allowing software to evolve the actual programmatic structure of the application) isn't my deepest interest - I'm far more interested in artificial life. However, both of these areas have the potential to revolutionize not only software (and, from this article, hardware) as a discipline, but also to make possible systems that can barely be imagined now.
(email addr is at acm, not mca)
We are Number One. All others are Number Two, or lower.
I love these articles (Score:1)
I imagine the columnist is referring to Darwin's theory of Natural Selection which he later renounced by saying "The idea that something as complex as the eyeball was produced by natural selection makes me ill."
A computer that evolves may redesign itself in such a way that even its inventors don't know how it's functioning.
This may be true; however, I highly doubt that the inventors don't know, or couldn't figure out, how it is functioning. This always brings up the idea that computers will some day pass us up and we will be working for them. My only statement to that would be that from the beginning of time, the creation (in this case the "thinking chip") has never been able to take over the creator (in this case, man).
HAL, after all, was the machine that could think almost as well as a person,...
The key word here is almost. Not to mention that was a science fiction story.
Don't get me wrong, I like the idea of genetic algorithms and I'm sure they will take us places in computing that we have never been before, and faster than we could have ever imagined. I can't wait to see what the future brings for computing and how things change on the desktop or in the server markets. I just get a little annoyed with the idea that some think this is going to be what takes over the world.
Of course, this is just my opinion, I could be wrong.
An FDA for Chip Designs? (Score:1)
So are we going to have an FDA-style body that tests and approves algorithms?
Think about it: who knows what these little beasties will do? After approval you need a prescription to run one, and that gives patent protection to the "discoverer" of the "genes".
Then, after it's deemed non-harmful (or less profitable), it's released on the public.
This could get really weird in a hurry.
Re:I'm worried about this. It could be a disaster. (Score:1)
This is a big problem with over-optimizing a chip without varying its environment. You end up with a design that works only on the one chip it was tested with, and only under the exact same conditions.
--
I use one of these... (Score:1)
Consequence? (Score:2)
A nervous system? My system can get quite nervous from time to time, ending in a kernel panic :)
Re:Nothing New (Score:1)
of course slashdot will not give you more than a point, but I believe their rating system to be inversely proportional to value.
I set my fucking sort order at LOWEST SCORES FIRST.
This evolving hardware bullshit is 3 YEARS old. jesus.
Check out the cool New Scientist Article... (Score:2)
Link to New Scientist article [newscientist.com]
It talks about the unconnected cells and the way it was trained, and the fact that the circuits only worked at one temperature(!)
Re:I love these articles (Score:2)
What, never?
Cars have never gone faster than a human? Aeroplanes don't fly better than a human? Chess computers don't beat the person that programmed them (hint: most of the time chess computers DO beat the person, if the program they write is any good at all)?
For that matter do students ever exceed the capabilities of their teacher? Of course.
The old maxim, "a stream can never go above its source" is WRONG. You CAN make water go higher than its source (check out water hammers).
summary is wrong (Score:1)
BugBear
Here is a completely unrelated Slashdot article... (Score:4)
http://slashdot.org/articles/99/08/27/1238213.shtml [slashdot.org]
Heck, this one's more informative.
Donny
Re:logic errors (Score:1)
Re:Asimov's 3 Laws of Robotics (Score:1)
Isn't that what screwed up the HAL-9000 in that movie?
-Roy
Heard it all before..... (Score:1)
Ummmm No. (Score:2)
No. Because a state may exist in the future that has not been tested. This 'unexplained' circuit may fail when it encounters that state, and there is no way to identify it (maybe). Unless you can guarantee that the circuit has seen all states/conditions and you are capable of proving a desired result in every instance - in this example that would be a 'not-meltdown'.
Difficulty level in using FPGAs (Score:1)
But it certainly seems to me that the old issue of the mass of growing complexity (Tower of Babel) will also find the same solution direction as man has found in the past.
I do wonder how the following might be useful in such matters as FPGA programming.
Knowledge Navigational Mapping thru the Virtual Interaction Configuration [mindspring.com]
3 S.E.A.S - Virtual Interaction Configuration (VIC) - VISION OF VISIONS!
Re:Umm.... (Score:2)
Just think of the ramifications of evolving computers...think, oh, I don't know, the Matrix or something...
If you select for Matrix-style AIs, then yes. If you base your selection of the best of each generation on their ability to multiply two numbers, then all you'll ever get is a multiplier. A very efficient multiplier, probably, but still just a multiplier.
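A tiny sketch of that point, with run_on_chip() as a made-up placeholder for programming and probing the evolved circuit -- the fitness below only ever rewards correct multiplication, so correct multiplication is the most selection can ever deliver:

```python
# The selection criterion is the ceiling on what evolves: nothing outside it is
# ever rewarded.  run_on_chip() is a hypothetical stand-in.
import random

def run_on_chip(candidate, a, b):
    # Placeholder for configuring the device with `candidate`, feeding it a and b,
    # and reading back its output.
    return 0

def multiplier_fitness(candidate, trials=100):
    correct = 0
    for _ in range(trials):
        a, b = random.randint(0, 255), random.randint(0, 255)
        if run_on_chip(candidate, a, b) == a * b:
            correct += 1
    return correct / trials   # 1.0 means "a very efficient multiplier", nothing more
```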
Re:Not truely Evolution...Just smart. (Score:3)
What we are dealing w/ is a finite set of possibilities here and true evolution is INFINITE!
I always thought the evolutionary possibilities for a particular organism were constrained by their environment. It's true that the environment is quite open and it's very hard to see what some of the possibilities might be, but some things can definitely be ruled out (no organism has photo-receptors tuned to pick up gamma rays, for example, because that wouldn't confer any advantage, and would be a waste of resources).
It's still natural selection, but in the case of these chips, we're controlling the criteria, and they're much narrower, that's all.
Old News. (Score:2)
Re:Umm.... (Score:2)
What if some of this hardware was to evolve into a self replicating machine?
Thought Experiment (Score:2)
Imagine the task at hand is to navigate a simple maze--a maze with exactly one entrance, one exit, and no loops. Now imagine that it has the exit intentionally close to the entrance and the one path is intentionally circuitous and counter-intuitive. If the algorithm to solve this maze starts evolving based on mistakes and random variations, there is a really good chance it will dwell on the wrong solution. By evolving, there will be bits of the wrong solution left behind in its algorithm.
This effect can be minimized by saving the state of the algorithm when the deviations occur and backtracking to avoid unnecessary calculations. This requires you know the proper solution. This is very easy with a maze, very hard with image recognition, sound recognition, and any other task which is not completely understood from the human point-of-view.
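Here's a rough sketch of the thought experiment (the maze, the move encoding and every parameter are my own invention): the exit sits two squares from the entrance through a wall, and a naive "distance to the exit" fitness is deceptive because the only legal path initially leads away from it:

```python
# Candidates are fixed-length move strings; fitness is (negative) distance from
# where the walk ends to the exit, ignoring walls -- which is exactly the kind
# of score that can leave a GA dwelling on the wrong solution.
import random

MAZE = ["#######",
        "#S#E  #",
        "# ### #",
        "# #   #",
        "# # ###",
        "#     #",
        "#######"]
START = (1, 1)
EXIT = (1, 3)
MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

def walk(path):
    r, c = START
    for m in path:
        dr, dc = MOVES[m]
        if MAZE[r + dr][c + dc] != "#":      # walls simply block the move
            r, c = r + dr, c + dc
    return r, c

def fitness(path):
    r, c = walk(path)                        # deceptive: rewards ending *near* the
    return -(abs(r - EXIT[0]) + abs(c - EXIT[1]))  # exit, even through a wall

def mutate(path):
    i = random.randrange(len(path))
    return path[:i] + random.choice("UDLR") + path[i + 1:]

population = ["".join(random.choice("UDLR") for _ in range(30)) for _ in range(60)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    elite = population[:15]
    population = elite + [mutate(random.choice(elite)) for _ in range(45)]

best = max(population, key=fitness)
print(best, "ends at", walk(best), "score", fitness(best))
```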
Adrian Thompson should do more trials on his evolving algorithm. There should be variations in his results. Two weeks is not a long time for a science experiment. He may get many unremarkable results, a few fantastic ones, and a couple of "just plain strange" ones.
I'll have you all know that I still have my appendix, thank you.
----------------------
Okay - this could be handy... (Score:2)
Now, most of this depends on how complex the programmes can be for the FPGAs, but it would be nice to have a complete box loaded with cards of these puppies for specific applications. For example:
A piece of gaming software downloads additional morphing/rendering code to GFPGA card #1, while loading the game engine into GFPGA card #2.
Download updated compression algorithms as a compression co-processor.
Card #1 becomes a highly optimized search engine for your corporate site.
Card(s) #1 gets GIMP modules
Yup - it would be nice. But seriously, I don't see it happening at my house in the near future. Although, using cards #1-#4 for rendering near perfect Natalie Portman photos, and a fabber... Hmm....
I'm worried about this. It could be a disaster. (Score:5)
Now, imagine in the future you have one of these GPFPGA chips in your PC. What is to stop a malicious hacker (or is it cracker?) from planting a logic bomb in the very hardware of your PC?
This technology seems like a retrograde step if we are concerned with the reliability of our hardware. I urge slashdot readers to boycott any PCs which use this dangerous new technology.
Genetic FPGA design (Score:2)
We had some yahoo give a seminar about this at my uni, and he was gushing about how it had come up with a patented voltage regulator topology and how it would revolutionize analog design. Well, an analog design that depends on its environment is simply a sucky analog design. In practice, analog circuits have to work in the harshest, most variable environments. I don't think we "experts" are useless quite yet.
Not really *Evolution* (Score:2)
Project homepage... (Score:2)
To clear up a few points... (Score:2)
If the control system is properly programmed, then the chips cannot 'crash' and spiral into uselessness since the control system will go back a step unless the new system is better than the old one.
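That "go back a step unless it's better" rule is essentially a one-parent hill-climbing loop; a minimal sketch, with evaluate() and mutate() as placeholders I've made up:

```python
# Keep the previous configuration unless the mutated one scores strictly better;
# a regression is simply discarded, so the system can't spiral into uselessness.
def evolve(initial, evaluate, mutate, generations=10000):
    best, best_score = initial, evaluate(initial)
    for _ in range(generations):
        candidate = mutate(best)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
        # else: "go back a step" -- the candidate is thrown away
    return best
```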
It is very possible for computers programmed using evolutionary techniques to do things in ways that you don't understand. It's what is known as an irreducible system - the interactions are so complex that after a few generations you cannot trace the changes back by looking at the finished product; you need a log of the evolution. For a full genetic program, this log could be 20,000 or more generations of 100 or more programs being tested - and that's a lot of log to go through to figure out why & how it works!
And as for the reliability issues - you don't run these things 'hot' in critical situations! You run them in simulation, throw problems at them, and they create very neat solutions. You then take these solutions and combine them to make a very neat, very normal computer program which can be added to and debugged in the normal way. In terms of FPGAs, you would use small, tested circuits to control parts of a critical system like any other small circuit.
These things aren't meant to be used 'on the fly' - it takes a lot of processing time! They're used to create highly efficient normal circuits which can then be used in normal circuitry.
Beg:
Re:Umm.... (Score:2)
Before anybody brings up the current movie which uses a message against a technology, imagine the other side of the coin: somebody using a comic book (which is what most of these movies really are), such as Teenage Mutant Ninja Turtles, as an argument in favour of genetic engineering.
There are a lot of reasons to be cautious when extending almost any technology, as well as reasons in favour of advancing it. Listen to evidence as opposed to propaganda or fiction, and determine your own standing on it.
gah-gah goo-goo; therefore quantum theory (Score:2)
"And get this: Evolution had left five logic cells unconnected to the rest of the circuit, in a position where they should not have been able to influence its workings. Yet if Thompson disconnected them, the circuit failed. Evidently the chip had evolved a way to use the electromagnetic properties of a signal in a nearby cell. But the fact is that Thompson doesn't know how it works."
Not bad for a new-born.
After being taught gah-gah goo-goo,
it goes on to prove the computational value of quantum mechanical entanglements.
We'll have to keep an eye on this toddler; (like that little girl in MIB with the two physics books).
It may just go on to prove human beings are as useless as egg-shells.
Ripley's believe it or knit.
Can never have too much of a good thing (Score:2)
Re:Is it the hardware or the microcode? (Score:2)
Upside, downside (Score:4)
Downside:
Windows running this would:
Turn your 1GHz box into a 386 that bluescreens
Upside:
Linux running this would:
Create its own code for its own kernels
FreeBSD running this would:
spin off and create a port for something other than i386 arch
OpenBSD running this would:
Not allow anything to ever be installed because it may be deemed insecure
NetBSD running this would:
Create another port, then wait a year or two to let you use it
view the source Luke! [antioffline.com]
Re:Asimov's 3 Laws of Robotics (Score:2)
Re:Not truely Evolution...Just smart. (Score:2)
If, on the other hand, they were given the materials to build an FPGA, and general instructions on how an FPGA works, perhaps then they could improve on the original design. Sort of like humankind does today through scientific research. We have been given a basic design (with many variations of the basic design - our bodies), and have been able to increase our life expectancy and quality of life through learning about ourselves, our world, our universe, etc. To claim this as being a vision of 'evolution' is silly. Let's remember that someone created the FPGA machines in the first place, humans. They did not just magically form themselves from a bunch of spare parts in a computer lab.
Re:Not truely Evolution...Just smart. (Score:2)
For instance, let's say this FPGA computer could alter its FPGA (or even build new ones) to increase computational speed. Fine, I agree that that is within the realm of possibility. But it would be a mathematical impossibility for this computer to spontaneously generate a new piece of software or hardware that was outside of its creator's realm of abilities that were bestowed upon it. Let's say we do not give this computer a device, knowledge, or method of communicating with any other computers. It cannot develop a communication protocol because it has no concept of such a thing. Plus, it would need to develop both the software AND the hardware, working together at the same time, to properly begin communications with other computers. So my argument is that computers will not take over the world. They're confined to the knowledge we build them with.
Asimov's Laws of Robotics for the Real World (Score:2)
1. Make me rich.
2. Don't fuck with me.
3. Fuck everyone else.
--Blair
Re:Asimov's 3 Laws of Robotics (Score:2)
And of course there's no way around that.... No... Of course Asimov didn't spend the better part of his career writing books about unexpected effects and ways to sidestep and circumvent those laws..
</sarcasm>
Re:Asimov's 3 Laws of Robotics (Score:2)
Just a thought. Whilst we were here.
Asimov's 3 Laws of Robotics (Score:2)
When a true AI is finally developed, somebody is going to be smart enough to deeply ingrain Asimov's 3 laws of robotics (or something similar) into it.
That just about solves our problems doesn't it?
Let's go explore the universe (Score:2)
This is what I waited for!
Ok, now that these machines can really reassemble themselves, let's give 'em the possibility to collect and produce their own resources. Construct an initial seed of nanobots, put them into a small rocket, and send them to any planet that seems to be inhabitable for humans. There the bots would reproduce themselves with the materials they find on that particular planet. Based on their initial "gene-code" they would be programmed to make architectural facilities for humans. Due to their evolutionary design they could adapt to regional specialities (such as gravitational and climatic issues), i.e. make very thick walls where radiation is high, etc. Just like techno-termites using their own bodies as building material for the anthill. You could fire some thousand seeds into space now, wait a few hundred years until the technoparasites made up a small colony for you somewhere, and all you need to do is move in. (Do not forget to bring your coffee mug, Linux box, plant, etc.)
a very humble simulation of this can be seen here [uni-weimar.de] (german)
Translation by google here [google.com]
Conceptual Flash-movie here [uni-weimar.de] (click first link)