Autonomic Computing 152
pvcpie writes: "The New York Times has a story today about Autonomic Computing, described as 'a biological metaphor suggesting a systemic approach to attaining a higher level of automation in computing.' IBM has also published a paper (pdf) on the topic. Apparently some universities have already signed up for Autonomic Computing projects; more info is available on the website and in the NYT article. It also appeared on CNET."
Karma Whoring (Score:2, Informative)
"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead"
Anyways, here is the link to their Autonomic Computing R&D site:
http://www.research.ibm.com/autonomic/ [ibm.com]
Re:Karma Whoring (Score:2)
(sorry, had to make the lame Civ joke)
Sounds familiar... (Score:2, Funny)
Can you say "SysAd"?
Re:Sounds familiar... (Score:1)
sounds like just good oo development to me
*shrugs*
Re:Sounds familiar... (Score:1)
IBM, however, is a different animal. They work on stuff like this, while MS develops the X-Box.
Is this merely a Quest? (Score:1)
Re:Is this merely a Quest? (Score:1)
The nuts and bolts of computing should be out of sight for these users. MS came up from computer hobbyists, geeks who like to play with computers. IBM has always been about business.
In the beginning, users had to learn software: how to respond to dumb prompts to make it work correctly. We still follow that model, even if the choices are more varied and powerful. It's past time for software to adapt itself to the user, not the other way around. Just having to know how to type is more than computers should require.
It may not be possible, but it's certainly worth researching.
The future (Score:5, Interesting)
The more that can be done automatically, the more of the IT staff's precious time can be dedicated to more complex tuning tasks, and/or new development. This will make IT more effective, not obsolete.
Re:The future (Score:2)
"Let's do a reinstall."
"No, we need a better machine."
"I think we shoulda stuck with Access."
"Call tech support."
Re:So what happens to society? (Score:1)
A new society will replace it, and we will laze around all day. The machines will provide. And who fixes the machines? Why, the priesthood!
Re:So what happens to society? (Score:2)
Sounds great! Count me in!
Of course, the Morlocks will get hungry from time to time...
Re:So what happens to society? (Score:1)
Re:There is no shortage (Score:3, Interesting)
Art is a fairly rare field, and the pay generally sucks. Acting and music too. It sucks, that is, unless you become popular, at which point you hold a natural monopoly on "you" and your pay skyrockets.
Is there a life expectancy? (Score:3, Interesting)
For that matter would there be analogous doctors, hospitals and life support systems? How about gymnasiums for keeping in shape? (and I ask that last one only half-jokingly...)
Gordon
What do you think you are doing, Dave?
single cell systems, etc. (Score:4, Insightful)
This means that we would need modular units in a network, say, that would be autonomic. The desktop PCs would have to be autonomic before you could get the network to truly be so.
It would be a whole new way of designing software, and I doubt that some of the OSs out there have code bases that could be viable in this regard.
Note that you can do this sort of thing as an optical illusion. You can pretend that everything is doing all right when in fact it is going to hell in a handbasket: the vaporware diagnostic that merely pretends everything is fine, or the repair that causes more damage than was there in the first place.
But I think we have had enough of that over the past decade or two to know to avoid it. And, of course, the guilty have not been named because everyone knows who they are already.
Re:single cell systems, etc. (Score:1)
Infinite Diversity, Infinite Combinations (Score:1)
Actually, the sheer diversity is excellent proof that there is no way any one single entity could have masterminded it all.
You would need an almost infinite diversity of intelligent beings planning and taking advantage of the opportunities to account for what is around you.
Somehow I do not think that this is where you are going, however...
Re:single cell systems, etc. (Score:2)
"things tend towards chaos, not the order displayed in even the tiniest living cell."
This issue is dealt with very well in information theory in which error correction methods are used to ensure that information is copied accurately from one generation to the next.
For instance, if you make a copy of a page on a photocopier and then copy the copy and so forth, the copy will be unreadable within about 10 generations because there is no error correction. However, if you make a copy of a file on disk and then copy the copy and so forth, the copies will be identical practically forever.
Study of DNA shows that error correction methods are employed when cells divide, thus defeating your theory that they should tend toward chaos.
Actually, I think chaos is the wrong term anyway. Systems tend toward disorder, not chaos, through the Second Law of Thermodynamics. Chaotic systems are highly organized but unpredictable. Like living organisms.
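The photocopier-versus-disk point can be simulated in a few lines of Python. Below is a toy sketch: a bit string copied for ten generations with random bit flips, with and without a crude 3x repetition code standing in for real error correction. The repetition code is an illustrative assumption, not how DNA repair or disk checksumming actually works.

```python
import random

random.seed(42)

def noisy_copy(bits, flip_prob=0.01):
    """Copy a bit string, flipping each bit with some probability
    (the photocopier: every generation degrades)."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def corrected_copy(bits, flip_prob=0.01):
    """Copy via a crude 3x repetition code with majority vote --
    a stand-in for real error correction."""
    triples = [noisy_copy([b, b, b], flip_prob) for b in bits]
    return [1 if sum(t) >= 2 else 0 for t in triples]

original = [random.randint(0, 1) for _ in range(1000)]

plain, robust = original, original
for generation in range(10):          # copy the copy, ten generations deep
    plain = noisy_copy(plain)
    robust = corrected_copy(robust)

errors_plain = sum(a != b for a, b in zip(original, plain))
errors_robust = sum(a != b for a, b in zip(original, robust))
print(errors_plain, errors_robust)    # uncorrected copies degrade far faster
```

Even this trivial majority vote keeps the copy nearly intact for generations, while the uncorrected chain accumulates roughly one flip per hundred bits per generation.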
Re:single cell systems, etc. (Score:1)
Re:Is there a life expectancy? (Score:3, Interesting)
The computer's life expectancy doesn't change much due to the self-tuning properties as such, but of course self-tuning puts more stress on the machine (more use of CPU, disk and I/O in general), and hardware fails after some time. It may take long, but it fails.
Now consider a self-tuning database system which includes a shelf of backup tapes and a robot arm to switch tapes (or CDs) as part of its maintenance. Moving parts add to the stress, which reduces "life expectancy".
Just to add to the mess, human life expectancy is also related to environmental conditions. Being hit by a meteor or burned in a fire is just as bad for a computer as it is for a human.
Re:Is there a life expectancy? (Score:2, Interesting)
There is also a phenomenon called apoptosis, which is the spontaneous death of seemingly healthy cells. It is part of the body's self-regulation -- cancer seems to be, in some sense, a failure of the apoptosis mechanism.
So we may have software vendors building, instead of planned obsolescence, apoptosis into products. They could even make it a feature -- if nothing ever dies, evolution stops.
Re:Is there a life expectancy? (Score:2, Informative)
Yes, continued breeding into the 40's, 50's, 120's, etc. would indeed add to lifespan via genetics, but the effect is relatively minor, and older males must still beat out the energetic, female-seeking, up-and-coming next generation to actually get the females.
Furthermore, we do probably gain the slow advantage of increased age from evolution. Do any other mammals live as long as humans, even given ideal nutrition? Close, yes, but we should be dying in our 40's or 50's of old age after a healthy life based on our size, not 70's and 80's. For thousands of years, some people, mainly kings and the wealthy, have lived to old age even by modern standards, and they have continued to breed up into that great age, passing along their genes. I will bet this is the source of what appears to be the relatively unnaturally long healthy lifespan vis-a-vis other mammal species.
We'll probably get the first big forays into extended life (well, second after good nutrition) via replacement parts. More $$$ for acephalous cloning experiments now! After that, chemistry (or other stem cell research) into preventing/reversing brain breakdown. More $$$ for cloning research now!
Re:Is there a life expectancy? (Score:2)
As for gymnasiums, I can sympathize with future autonomous systems..."Aw, do I have to go to the health club and defragment?"
Will I dream? (Score:1)
Seriously, back in the early 70's I was working for some Ph.D. types whose goal was to mimic the human brain in a computer. I almost got fired for asking if that meant the computer would have to be offline 8 hours a night to dream, so it could reorganize its thoughts the way our brains apparently do during sleep.
Microsoft's "self-healing" server (Score:4, Funny)
"Autonomic Computing": bad name? (Score:1)
Marketing alert (Score:3)
While I think that designing system components with feedback capabilities is a neat idea, remember that doing this in a safe way requires actual computer science. Or at least extensive modeling. It's cool, sure, but it's going to be a while.
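To make "components with feedback capabilities" concrete, here is a minimal sketch of a proportional feedback controller that resizes a hypothetical worker pool against a target queue length. All names, gains, and numbers are invented for illustration; real autonomic tuning would need the modeling the parent comment asks for, precisely because naive gains can oscillate or overshoot.

```python
def autotune(current_workers, queue_len, target_queue=10, gain=0.2,
             min_workers=1, max_workers=64):
    """One step of a proportional feedback controller: grow the worker
    pool when the queue backs up, shrink it when the queue drains."""
    error = queue_len - target_queue
    adjustment = int(round(gain * error))
    return max(min_workers, min(max_workers, current_workers + adjustment))

# Simulated control loop: a load spike arrives, then traffic dies down.
workers = 4
for queue_len in [50, 40, 25, 12, 10, 3, 0]:
    workers = autotune(workers, queue_len)
print(workers)  # the pool grew under load, then began shrinking again
```

The clamping to `min_workers`/`max_workers` is the kind of safety rail the "actual computer science" has to justify: without it, a bad gain estimate lets the controller run away.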
Re:Marketing alert (Score:2)
That's why it's encouraging that IBM is doing it, not some BS startup.
Re:Marketing alert (Score:1)
The focus on interactive computing in the last fifteen years has just kept attention away from the ongoing work in this area.
But then, maybe people don't really want automation, maybe they'd really rather participate more, not less.
Hmmm... (Score:1)
So... they want to cut I/T staff by letting the computers manage themselves? This seems a very bad idea to me.
I currently work as a sysadmin and I also have a lot of contact with end users. Users like it when they have a problem, someone comes over, talks a little, fixes the problem.
They want to be helped by a person who acts nice to them; that's why I think you'll probably always need I/T staff.
And from a more technical point of view: what if the computer fucks up? All these systems will probably "think alike", and if one computer decides that wiping the hard drive is good PC management... more will probably follow...
must...go....home....too...much...work...
Re:Hmmm... (Score:2)
They're saying that they want the computers to handle the mundane tasks so that the IT people can have more time for the more important tasks, like the person-to-person tasks. Do you know any IT worker who isn't overworked? This is designed to help alleviate their stress and give them more time for the more important things in life.
At least in theory =)
Re:Hmmm... (Score:5, Informative)
In my opinion, part of being an autonomous PC is being self-sufficient, not acting like a lemming and following other PCs just to follow.
Plus, just as humans have a basic survival instinct, I think you'd write that instinct into the PC as well so it doesn't destroy itself (unless it was doing major harm to its master, etc. Remember Asimov's rules for robots).
Finally, I agree humans will never be replaced as the final decision maker in fixing and running PCs, servers, networks, etc., but when I was a sysadmin, I'd have killed for PCs smart enough to do some of the basic, mundane, laborious tasks - upgrading Service Packs when I told them all to, installing programs, etc. Then I could have done more fun stuff. Plus, when I had to fix a problem, people weren't glad to see me, because I was only there when something went wrong. Granted, they were happy someone was there to fix the problem, but all would have preferred that there was no problem in the first place (the PC fixed itself).
Re:Hmmm... (Score:1)
> for a PC, and would not write these instructions
> into the PC.
Hence any number of OS's where the equivalent of "rm *.* -R" never even so much as warned you before forging ahead. I concur. They were not experts -- they couldn't be. They were forging new ground. They were the ones who thought performing the first trapeze triple was a fancy thing for the future.
Re:Hmmm... (Score:1)
Sigh. They want to just pull it out of the box, plug idiot-proof plugs into various wall receptacles, turn the damned thing on, and be on the net and able to send E-mail. [amazon.com]
Evolution proceeds towards what works... (Score:3, Insightful)
The body works so well because it's highly highly highly redundant at the cellular level, not because there is a brilliant master control program controlling the most efficient implementation. You can't even imagine a number as big as the number of hormone receptors in your body.
That kind of duplication in a computer system costs real money, and while reliability is a noble goal, people don't spend money on reliable systems; they buy Windows.
This is a terribly useful approach on the battlefield, and the right thing to do once bandwidth and computational power are practically unlimited, but we're still in the stage of computing where people just want more features, reliability be damned. After all, Nimda follows an autonomic behavior.
Re:Evolution proceeds towards what works... (Score:3, Insightful)
There's a lot to that, actually. In all of computing so far, virii are the only programs that effectively self-maintain. This is of course due to their unique environment: not just indifferent but positively hostile, with humans and sentries to evade and destroy.
People make a lot of fuss about things like this, and doubtless IBM will make real advances here simply due to their huge resources, but most of these concepts are not new. Ditto the new automated soldier-things the Army is developing. Yes, doing it in a more complex and mission-critical environment is far more prone to error (whereas when a virus fails to replicate, it's not the end of the world for it - there are another few million still going), but we are not looking at a paradigm shift here.
Virus writers have a lot to teach us about self-maintaining and self-tuning programs - while despising the destruction they cause, I can't help but admire their design prowess.
What virus writers have to teach.... (Score:2)
> -tuning programs - while despising the destruction they cause, I can't help but
> admire their design prowess.
It seems you never looked at the code of virus programs. They are not self-maintaining or self-tuning. Most of the time they are written very badly and tend to crash in unknown environments. But for a virus that doesn't matter, as long as it performs well enough in a common environment. The author usually doesn't care if it crashes the current host, ruins it, or just doesn't work if the virus gets confused. If a virus happens to find enough friendly computing environments that offer it the exact conditions it needs, we'll hear about it on CNN. If not, it just won't spread.
All in all, calling a virus self-tuning or self-maintaining is utter crap, usually the kind found in many articles about artificial life by Katz.
Re:What virus writers have to teach.... (Score:4, Interesting)
Actually, I have, and I know that many are very amateurish, but you come across the occasional gem - I once found a very cunning polymorphic macro virus lurking around. Funnily enough, those tend to be the ones that do the least damage - correlation?
Redundancy isn't the only thing. (Score:1)
Well, yes and no. There is lots of redundancy in the human body (even perhaps too much for typical conditions nowadays in the case of the liver and kidneys) but there is also the autonomic nervous system, which is an active controller, and it's at that level that this is being proposed.
To get serious reliability you do need redundancy (RAID, failover, etc.) but you also need load balancing and monitoring software which works at a higher level. If your disk system fails you're screwed, but if you get hit with a Slashdot effect you can also be screwed unless you can dynamically shift resources around.
Of course, that kind of stuff is hard. Look at all the problems Linux itself has with out-of-memory conditions and all the different approaches that have been proposed for dealing with it. None is obviously the best way, and each has advantages and disadvantages. Of course, in all of them there's at least some overhead, so your point about capacity isn't completely irrelevant even at this level.
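That "higher level" monitoring-and-balancing layer can be sketched in a few lines: route each request to the least-loaded healthy node, and keep serving when one node fails. The node names and the least-loaded policy are invented for illustration; real balancers weigh health checks, latency, and capacity far more carefully.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.load = 0

def route(nodes):
    """The 'higher level' layer: send each request to the least-loaded
    healthy node, on top of whatever low-level redundancy exists."""
    candidates = [n for n in nodes if n.healthy]
    if not candidates:
        raise RuntimeError("no healthy nodes left: redundancy exhausted")
    target = min(candidates, key=lambda n: n.load)
    target.load += 1
    return target.name

nodes = [Node("web1"), Node("web2"), Node("web3")]
for _ in range(30):
    route(nodes)

nodes[0].healthy = False       # web1 dies just as traffic keeps coming
for _ in range(30):
    route(nodes)

print([(n.name, n.load) for n in nodes])  # survivors absorb web1's share
```

The point of the sketch is the division of labor: RAID-style redundancy saves the data, but only this higher layer dynamically shifts the incoming load when something breaks or a Slashdotting hits.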
Re:Evolution proceeds towards what works... (Score:2, Interesting)
My high school biology teacher must've said a thousand times, "Evolution proceeds towards what works, not towards what is best."
Erm, sort of true. Evolution actually works toward what is 'best' in terms of the fitness function, i.e. it seeks to maximise or minimise the result of some metric. If you pick your fitness function correctly, you can make the system optimise towards any required goal.
Just make sure you don't have any bugs, because GAs and GPs will find and exploit bugs that yield higher fitness scores faster than the programmer will.
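A toy genetic algorithm makes the fitness-function point concrete: the population climbs toward whatever the metric rewards, nothing more. This is a from-scratch sketch with made-up parameters, not any particular GA library.

```python
import random

random.seed(0)

TARGET = [1] * 20                     # the optimum the metric rewards

def fitness(genome):
    """The population optimises exactly what this returns -- a bug here
    is precisely what the GA would find and exploit."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [g ^ 1 if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]         # keep the fittest unchanged
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best))                  # climbs toward the maximum of 20
```

If `fitness` accidentally rewarded, say, genome length instead of correctness, the population would dutifully exploit that bug: the search only ever sees the metric, never the intent behind it.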
Control (Score:2, Interesting)
I would hate to see my web server decide to bump up the number of allowed simultaneous connections in response to a denial of service attack, or decide that the ogg encoder in the background is indeed more important than domain control services.
... and of course the mandatory gripe: that my system decides it doesn't like my pirated MP3s and deletes them automatically.
If computers become smarter than the people who design their software, how are they any use as a tool anymore?
Repackaging the future (Score:5, Interesting)
In the end it turns out that the most complex problems arise in trying to coordinate a collection of "autonomic" (?) components. Distributed systems with unruly objects... This is what the autonomous agent community is mainly concerned with (see the UMBC [umbc.edu] agents page or this very useful overview paper [liv.ac.uk], for example).
Of course, IBM pushing this might mean a kick up the rear for academics to actually get some of this potentially cool stuff working. Chances are you never want the end user to know how it works anyway.
Does this strike a familiar chord? (Score:2)
Remember that one ST:TNG... (Score:4, Insightful)
sorry for the run-on, but that was my immediate thought when i read Paul Horn's statement that the creation of "computer systems and software that can respond to changes in the digital environment, so the systems can adapt, heal themselves and protect themselves" is the only thing which will reduce the need for "constant human maintenance, fixing and debugging of computer systems." freeing humans for higher pursuits sounds good, but is probably only likely in a utopia. Horn goes on to say "The only way to get efficiency gains in information technology is to take some of the people out." This trend sounds like the steel industry - we'll have more cost-efficient processes for providing IT services, but all those educated in the field will end up working at McDonald's.
so what happens when we all forget exactly how this "autonomic software" regulates itself? i guess this is the final word in proving the importance of documentation! : ) ** begging for a flamebait mod**    or we could skip the documentation and just kidnap the children of visiting alien starships when we eventually start dying of radiation poisoning from our super-self-configuring systems.
the infamous penn state stalker server! [psu.edu]
Re:Remember that one ST:TNG... (Score:1)
We're not talking about software that's self-modifying, though that may be the next step. Even self-modifying software will only be partially self-modifying anyway. The code will stay the same, the program will just make more decisions for itself.
Re:Remember that one ST:TNG... (Score:1)
this episode was rated 102 out of 178 on a best-to-worst list [geocities.com]. it aired on February 15, 1988, as episode #17 of the first season. you can read more details here [geocities.com].
Re:Remember that one ST:TNG... (Score:2)
Re:Remember that one ST:TNG... (Score:2)
I can certainly see your point [slashdot.org] (callmegracie [slashdot.org]), but I think that as long as there is some human diversity, we are likely going to survive just about any apocalyptic event.
Re:Remember that one ST:TNG... (Score:1)
Your "utopia" would be hell to me.
Re:Remember that one ST:TNG... (Score:2)
Welcome to utopia.
Recall that once upon a time, electronic computers were unwieldy contraptions that required the user to reprogram them in hardware for every new application. von Neumann overcame that obstacle, just as later scientists overcame additional barriers to complexity by building operating systems, high-level language compilers, text terminals, GUIs, and so on.
The point is that the entire venture of computing has been one of bootstrapping additional levels of complexity, since its very inception. As Dijkstra put it, there is only one fundamental problem in computer science, and that is that computers are too hard to use. We have slowly eroded that barrier over time, but a lot more needs to be done to allow humans to think at even higher levels, similar to how they would work with an intelligent colleague rather than an idiot savant. This notion of "autonomous" computing is simply one more step in that direction.
Bob
This reminds me... (Score:2, Funny)
We can already do this (Score:4, Informative)
"The only way to get efficiency gains in information technology is to take some of the people out."
They're called managers.
as a contract programmer (Score:2, Funny)
ka-ching!![$$]
lots of overtime
Yeah, but.... (Score:4, Funny)
Yeah...but does the University know about it? :-)
Read carefully (Score:2)
Derisive laughter coming from the Mac lab techs (Score:2, Interesting)
Some academic thoughts... (Score:4, Insightful)
The problems I discovered were:
* The building blocks of the software itself are human-optimized algorithms and data structures;
* In order to improve human-optimized algorithms (meta-optimization?), one could develop some form of trial-and-error optimization algorithm, but this would complicate things even more (it's hard to determine whether the search direction makes any sense); designing such an algorithm is very hard, because how long do we search before we give up? This is like chess: a certain move may look silly at first, but it could be a very good move in the end...
* If the program is to optimize smartly, it will need to use *known* optimizations, and will be unable to improve human-optimized algorithms... Introducing meta-optimization raises the problem of CPU-time distribution: how much CPU time may content optimization take, and how much may meta-optimization take?
* If only known algorithms are used, the program is bound to a limited level of complexity. Much of human comprehension has high complexity which is not yet well understood by science; the "Perfect Human Interface" is likely to fail in this area - it's the area where the user (again) needs to adapt to the machine.
But if these guys actually succeed in their quest... brilliant!!
complexity not limited (Score:1)
Further, these "mutations", provided they score well, would later be combined with each other as well as with the best performers from the original seed material.
The end result would be something we could never have predicted from the outset (or else why bother), and in the worst case might be so complex that we don't even understand HOW it works even though we may be able to satisfy ourselves that it DOES work. That will be the beginning of just trusting that the machine will work it out, and our role will gradually be reduced to just "checking that the answer sounds reasonable".
Of course we will have to be careful then, but the potential benefits are so great that we won't stop pushing things forward.
Re:complexity not limited (Score:2)
When I wrote a B-Tree as an improvement on an insertion sort, I wondered whether I could think of an algorithm, fed with meta-semantics about the insertion sort, that would design the B-Tree as an evolutionary successor. It took me quite some time to understand the algorithm myself, and implementing it was even harder: semantic comprehension of the building blocks "in English" resulted in many more lines of C++ code. Because of the recursion, the B-Tree implementation needed, besides the actual B-Tree code, the following elements:
* execution-environment code - organisation and anticipation of the recursive characteristics of the algorithm; for example, a messaging system to tell the calling function what we expect from it...
* data-structure environment code - understanding the benefits of using multiple data structures to perform the algorithm; this applies to the B+ tree even more; using 1 data structure (B-Tree) was harder than using 2 data structures (B+Tree)...
* state-determination code - needing to know where we stand and what to do when a (recursive) function call returns; for the trees: do we need to rebalance?
* borderline code - dealing with the borderline cases; what if I'm in a left node with no left sibling, while the right sibling is not an option?
Stepping from the B-Tree to the B+Tree, the data structures are quite similar to those used in the B-Tree; slight differences make the B+Tree algorithm easier to implement and better in throughput for table indexing.
Of course, this is just a simple example where we could try to think of a meta-optimizer. One can extrapolate this to R-Trees and X-Trees; still, B, B+, R and X trees don't have to be the panacea for organizing complex information - perhaps a computed "tree" or indexing structure would perform even better in that case. Oracle's database (and others) are optimized to detect their environment and act as effectively as possible, although their algorithms are basically human-optimized. Even using "fuzzy logic" to determine the best balance between caching, defragmenting and reorganizing (i.e. hash tables and 'clusters'), the main way the data is stored is predetermined. In pure (non-ORDBMS) SQL-based systems, the IT specialist has a hard time designing flexible data structures with hierarchy; one meets that point of no return very quickly. So in a system which has to organize hierarchy and repetition, the perfect storage algorithm should be computed, perhaps going beyond human comprehension (I agree with you).
Another problem with meta-optimization is that the system must try to model the way the user thinks - which is the only way for a system to act as if it understands the user. If you've ever studied the discipline of "Neuro-Linguistic Programming" (I believe they call these guys 'motivation gurus' in English), then you understand that everyone has personal subconscious truths. A schema of reasoning is very personal, because every individual has learnt different things. The point is that a system able to adapt to one person only - would that be user-friendly? It wouldn't be very suitable for public use.
Still, as I said, I'm playing the mind game, and if someone is to overcome these problems... brilliant!
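A tiny taste of the "computed storage structure" idea: an index that watches its own workload and reorganizes itself. This toy only chooses between an unsorted and a sorted list, with an arbitrary trigger threshold; real self-tuning databases decide among far richer layouts, but the shape of the autonomic decision is the same.

```python
import bisect

class AdaptiveIndex:
    """Toy self-tuning index: starts as an unsorted list (cheap inserts),
    then converts itself to a sorted list once lookups dominate."""

    def __init__(self):
        self.data = []
        self.sorted = False
        self.lookups = 0
        self.inserts = 0

    def insert(self, key):
        self.inserts += 1
        if self.sorted:
            bisect.insort(self.data, key)
        else:
            self.data.append(key)

    def contains(self, key):
        self.lookups += 1
        self._maybe_reorganize()
        if self.sorted:
            i = bisect.bisect_left(self.data, key)
            return i < len(self.data) and self.data[i] == key
        return key in self.data

    def _maybe_reorganize(self):
        # The 'autonomic' decision: the observed workload triggers a new layout.
        if not self.sorted and self.lookups > 2 * self.inserts:
            self.data.sort()
            self.sorted = True

idx = AdaptiveIndex()
for k in [5, 3, 9, 1]:
    idx.insert(k)
for _ in range(10):
    idx.contains(9)
print(idx.sorted)  # True: lookups came to dominate, so it reorganized itself
```

The hard part the parent describes lives in `_maybe_reorganize`: picking the trigger and the target layout is exactly the meta-optimization that is still done by a human here.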
Reacting to Complexity We Didn't Need (Score:3, Insightful)
Anyway, the idea of Autonomic Computing is hardly new (consider plug-and-play and auto-installers). The really, really hard part is imposing autonomic computing on a system that was not designed for it. It is very difficult to make a complex system "simple" without redesigning the complex system.
Amen (Score:1)
Personally, I use a text editor for almost everything that nobody else needs to read. And a lot of what they do need to read. I won't say which, I may get flamed. I don't care if they gripe about the fact that it's only a text file and they don't like it. If it needs to look pretty, then I'm not the man to talk to anyway.
Re:Reacting to Complexity We Didn't Need (Score:1)
Surely you are correct that complexity is often superfluous, but the problem these folks are trying to solve is inherently complex enough to call for a complex solution. It's similar to networking in an OS: networking makes up an astoundingly large and complicated part of any modern operating system, but it's only needed if you require a network connection.
Bold text that looks "pretty" is awfully hard in a text file...
What's next? (Score:2, Informative)
I think most of you, so far, are missing the idea here. I also think the good Dr. from IBM is too, but that is beside the point. The point here in redesigning the way systems work from the ground up is to make them more capable of doing what YOU as users/admins actually want them to do. The idea being that YOU set the policy and the computer learns how best to implement it.
I, personally, don't like this very much. It sounds like the next step in closing off the workings of the "operating system" from the user. What happens to Linux and open source when Windows starts to dynamically rearrange its code to optimize for your preferences and specific uses? It gets left behind, is what.
I've been thinking about where operating systems are headed and what I want in an operating system, lately. I had pretty much defined what I wanted, when I started to run across projects like this: TUNES [tunes.org], and ideas like this: Flow-Based Programming [http]. I then realized that I wasn't entirely original. People have been thinking about the same things and trying to work them out for some time. But there has been little mainstream work done to get things to happen.
In my opinion, the design of TUNES and the ideas expressed about Flow-Based programming are a perfect fit for open source programming. And, there's no reason that autonomic computing couldn't fit right into the mix as well, as long as it's an open-source feature rather than a built in proprietary unified piece of the system.
The new system I'd like to see would be completely dynamically restructurable, and reprogrammable from the ground up. I think this would be a prerequisite for full-blown autonomic computing, but I have a feeling that the corporates are going to slip it into Windows in such a way that Windows stays the same on the surface, but just tells you less and makes more decisions for you than it already does. Problem is, that's what most users think they want. What I suggest is doing it in such a way that each user has total choice about how his system is designed and operated. Of course there would be predefined templates for certain types of systems (web servers, web/e-mail clients, gaming system, desktop publishing workstation, etc). So a user could pick one or more open source templates on which to base his system and then modify it to his needs as he goes. These templates would define what optimum scheduling and resource allocation should be done for specific tasks and merge this at the lower level with the needs of other tasks and the priorities set by the user or learned dynamically by the system.
I think we'll see some very interesting advances in the next 10-15 years. Let's hope the open-source community doesn't miss the boat. Microsoft sure as hell won't.
Re:What's next? (Score:1)
However, you refer to operating systems as the basic component of the "Autonomic System" the article proposes. This, I think, is a little off the point. Individual operating systems should be nearly invisible in such a system. Instead, more abstract entities, running on heterogeneous operating systems, will be the basic components. The templates you refer to will be things like a particular data source, an email service, or a voice-communication entry point. These components will be distributed and independently maintained. They will, however, be locatable via a single search mechanism.
That probably didn't make sense. I'm late for a meeting and I'm typing faster than I'm thinking.
I also think that Sun's JINI is worth mentioning. I think JINI's creators intended it to be the communication backbone for systems exactly like this one.
And for the only thing I can really contribute: I think such systems will need to depend on a consistent error system, where components can reliably propagate errors in such a way that other components can "understand" them and act accordingly. Also, it seems that formal specifications (in languages such as Z) could be used to specify the interfaces between shared resources.
Blah! on to my meeting.
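The "consistent error system" suggestion might look something like the sketch below: one structured error type that components raise, annotate, and act on by severity. The type name, fields, and severity levels are all invented for illustration; the point is only that receivers dispatch on structured fields instead of parsing message strings.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentError(Exception):
    """One error shape every component agrees on, so receivers can act on
    'severity' and 'origin' without parsing free-form message strings."""
    origin: str
    severity: str                       # "transient" | "degraded" | "fatal"
    detail: str
    trail: list = field(default_factory=list)   # hops the error passed through

def storage_read(key):
    # A hypothetical lower-level component failing in a structured way.
    raise ComponentError("storage", "transient", f"timeout reading {key!r}")

def cache_layer(key):
    try:
        return storage_read(key)
    except ComponentError as err:
        err.trail.append("cache")       # annotate and act, don't swallow
        if err.severity == "transient":
            return None                 # recoverable: serve stale / retry later
        raise                           # anything worse propagates upward

result = cache_layer("user:42")
print(result)  # None -- the transient storage failure was handled one layer up
```

A formal spec (Z, or an interface definition language) would then pin down exactly which severities each component may emit and must handle, which is what makes cross-component recovery predictable.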
Re:What's next? (Score:1)
Re:What's next? (Score:1, Interesting)
Stafford Beer has been writing about this topic for three decades. There was the Chilean Experiment in 1973, which was an experiment in autonomic computer control systems. This is nothing new, it's just something that almost everyone is ignorant of. Perhaps because the US government staged a coup in Chile in order to stop the experiment. In all my years on Slashdot I have yet to mention Stafford Beer and have anyone say, "Yeah, I've heard of him".
No, it's really nothing new, and it's not as complex as the IBMers would have you believe [and it has nothing to do with XML!]. It's not some new way of writing software (Beer's system was, I believe, implemented in straight COBOL). It's a new way of designing software, and it is indeed a paradigm shift in the true sense of the word, which is how the research has gone unnoticed for 30 years.
ooh, IBM is so very totally (Score:1)
There have been various proposals over the years to emulate biological systems. Cybernetics in fact was all about self-regulating systems -- at a lower level than is proposed here.
I'm so glad they've got such a handle on things. I'll take shipment of the new autonomic computers in January.
Cool Stuff... (Score:2, Funny)
Joe, the world's leading Cyberneticist boards a plane bound for Athens, Greece. This flight is the maiden flight of American Airline's first totally automated flight.
As he walks to his seat, he is greeted by a slick looking robotic flight attendant of his design. After sitting down, another attendant of the same design brings him a scotch and water (just the way he likes it) and says in a tinny voice "Good Morning Dr. Davidson, I hope you enjoy the flight."
Settling back in his seat with his drink in hand, he thinks about the many thousands of hours he has put into the autonomic systems that entirely control this plane.
As he goes to give his empty glass back to the robotic attendant, the plane pushes back from the gate. After a short while, he hears a much smoother robotic voice come over the intercom; "Good Morning, ladies and gentlemen and welcome to American Airlines Flight 1644 from Los Angeles to Boston. This will be a 2 1/2 hour flight. We at American Airlines would like to take a moment and point out that this is the first trip made completely under the control of the latest IBM 36000 Autonomic Robotic Piloting Computer. Every aspect of this flight, from the attendants serving you drinks to myself, the pilot, has been developed with safety in mind."
As the flight trundles down the runway, picking up speed, the voice continues on; "So you can sit back, relax and enjoy the flight, secure in the knowledge that absolutely nothing can go wrong, go wrong, go wrong, go wrong......"
Seriously, we're not very far from this. Flights routinely take off and land with only the most minor human intervention, and cars are being developed which use visual cues to pilot themselves down the road (a company in Australia has converted a Humvee for a test bed).
Re:Cool Stuff... (Score:1)
With a Humvee, they could get away with putting a Club on the wheel and a brick on the gas pedal. If they really believed that their software could do the job, they'd be installing it in a Geo Metro.
Re:Cool Stuff... (Score:1)
But seriously, I don't have any fears about fully automated flights. Concerns, yes: I'm fine as long as they code it right, as opposed to some people who'd freak out at the thought of not having a human pilot.
If anyone hasn't noticed, humans aren't designed to fly. A computer system specially designed to do this would eventually be more skillful.
I'm especially interested in having cars that drive themselves. This could add a real safety factor, seeing as 95% of autos are driven by poorly skilled drivers. It seems western society has forgotten that we place ourselves in mortal danger every time we hit the road...
Re:Cool Stuff... (Score:1)
> of autos are driven by poorly skilled drivers.
More than a few Sci-Fi stories have proposed as part of the background that human-driven vehicles will be outlawed on the general roads simply because they are the only ones causing any accidents anymore.
Open the door Hal. (Score:3, Insightful)
Even limited self-replication coupled with the ability to process information so rapidly and powerfully seems like borderline sentience to me. What happens when you attempt to replace an autonomic router and the computer as a whole decides that's really something you shouldn't do, because the router is so useful? Can this be coded around or avoided altogether? The people who develop this technology are going to have to be wary of creating something that cares more about its own processes than the user trying to make use of them.
Welcoming the 'new children' (Score:1)
I don't know about you, but I'm not afraid of this. This is the next step to transcendence for the human race. The goal of (wo)man is, or should be, to increase in power over his domain, both to preserve it and to reconstruct it.
Technology serves to save the environment and destroy it, and it's all part of the balance in which we live. Primitive peoples live in harmony with what they're given; technologically advanced people make their own terms of harmony and strive to live in those terms.
For an example of this, look around you. Chances are the environment you live in is not natural. Streets do not grow from seeds, buildings were not created by geological activities. Construction technology serves to create a new environment for you. We model the Earth after our desires, and thus, we have dominion over it.
If we create artificial life, then we have achieved a sort of godhood, and as long as we keep
Heavy questions for a tuesday, but AI needs to be thought of spiritually now, not just in terms of the technology, if we want to really advance it. IMO, of course.
- Josh
This is a bad thing (Score:1)
If there's no maintenance, what happens to us? It's our job to maintain computers.
Gee, that's original... (Score:1)
Cynicism apart, it's a laudable initiative if it results in a large kick to existing research in these areas. SAGE [sage.org] are also turning their attention to the process of automating and scaling system administration tasks (see recent discussion on sysadmin "research" on sage-members list).
OTOH, I can think of a large part of the IT industry - those vendors with profitable integration services business units - who possibly won't be throwing their lot in with IBM on this one.
Ade_
No Silver Bullet (Score:2)
Not only are there no silver bullets now in view, the very nature of software makes it unlikely that there will be any--no inventions that will do for software productivity, reliability, and simplicity what electronics, transistors and large-scale integration did for computer hardware. We cannot expect ever to see two-fold gains every two years.
It is curious that the paper mentioned in the article does not deal explicitly with Brooks' objections since they are the best known statements of the problem of complexity in software.
Among Brooks' solutions for reducing complexity is to use great design: Whereas the difference between poor conceptual designs and good ones may lie in the soundness of design method, the difference between good designs and great ones surely does not. Great designs come from great designers. Software construction is a creative process. Sound methodology can empower and liberate the creative mind; it cannot enflame or inspire the drudge.
I would also add that simplicity engenders complexity. Simplified systems become subsystems for more complex designs. When the complexity of a system becomes a barrier to its further enhancement, simplifying it only allows its complexity to continue to increase.
It is the objectives of the system that create complexity, not the development techniques. The only way ultimately to reduce complexity is to artificially constrain the requirements.
Constrain the elements, not the requirements (Score:2)
Of course, there are other important chemicals in the body (both organic and inorganic) -- lipids, carbohydrates, salts, etc. But the "simplicity" of the protein "alphabet" is a starting point.
Re:Constrain the elements, not the requirements (Score:1)
Maybe this is an example of what you mean by "Simplicity engenders complexity."
It's one example. Carrying it forward, a living organism with autonomic systems will form complex societies of individual organisms. As soon as these societies evolve autonomic functions, they form complex alliances or symbiotic relationships. And so forth.
I'm also thinking of the rules of chess that allow for a vast number of games, simple mathematical formulas that generate fractals, rules of grammar that produce language.
There seems to be some kind of recursive process at work that tends toward greater complexity. As a result, the autonomic theorists may not be tackling complexity but simply moving it to a higher level.
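One concrete instance of "simple rules generating complexity" (my own illustration, not from the thread): the logistic map, x -> r*x*(1-x), is one multiplication away from trivial, yet at r = 4 its orbits are chaotic, and two nearly identical starting points diverge completely.

```python
# The logistic map: a one-line rule whose behavior is arbitrarily complex.

def logistic_orbit(x0, r, n):
    """Iterate x -> r*x*(1-x) for n steps, returning the whole orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

orbit = logistic_orbit(0.2, 4.0, 50)
# A perturbation in the seventh decimal place of the starting point:
orbit2 = logistic_orbit(0.2000001, 4.0, 50)

# The two trajectories end up nowhere near each other.
print(abs(orbit[-1] - orbit2[-1]))
```

The rule is fixed and simple; the complexity lives entirely in what the rule produces, which is the recursive process described above.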
Unfortunately, I'm out of my depth here.
Autonomic Virus? (Score:2, Interesting)
1) An autonomic virus, written with the capability to "heal itself" once installed. Does this make sense? It seems to me that some existing viruses already have some self-healing properties, such as those that hide a copy of themselves on a user's HD and insert a key in the Windows registry to have themselves restored at reboot time. Thoughts?
2) A virus designed to insert itself into an autonomic system would conceivably be able to use the system's "self-healing" properties to protect itself (a funny memory springs to mind. I went to remove Outlook Express from my Win2K box at work, and discovered that Win2K does not have the option to uninstall Outlook Express. Undaunted, I went into the folder the executable was in and deleted it. Within five seconds, the system detected my "user error" in deleting a system file, and restored it. It took me a while to figure out how to prevent this, but it really threw me for a loop when I first saw it happen).
Barnum science (Score:3, Insightful)
The popular press quickly latches on to it, as these magic bullets increase the circulation of OMNI and Scientific American. Eventually the politicians hear about them and allot untold amounts of money to these efforts.
After 5-10 years nothing much comes out of this, and the snake oil peddlers move on to another area.
Among the thusly overinflated areas we have:
- AI
- neural networks
- expert systems
- nanotechnology
- chaos theory
- e-commerce
- parallel computing
- distributed computing
- complexity (a la Santa Fe Institute) theory
- logic programming
the latest two additions are
- the semantic web
- autonomic systems
/.ers are well advised to apply a healthy dose of skepticism to any such magic bullet claim.
And what about us? (Score:1)
What will we do once the world is run by self-programming, self-healing, self-repairing and self-maintaining machines?
What then?
Autonomic responses are based on microbiology (Score:2)
While the idea has the power of lucidity, it's not clear whether it can be implemented effectively. The development of autonomic responses in complex animals developed over millions of years using sexual reproduction as a means of pruning less viable branches while introducing sufficient variation to ensure species-level survival.
At the organism level, autonomy is accomplished through a combination of neural, hormonal, and physiological responses to external stimuli and internal state changes. How many proteins and hormones, etc are required for this? How many combinations of signals are there? Clearly the number of combinations is large enough to be considered "countless". That an organism as complex as a human being works at all is an impressive feat of biochemical integration and regulation.
"...we don't really need... sentient machines and androids programmed to love and laugh -- to overcome the largest obstacle standing in our way."
That's an assumption which remains to be validated! How does IBM know that it's not the other way around? Perhaps love and laughter (i.e. higher emotions) are a natural and inevitable byproduct of the ultimate expression of the ideas of autonomy. Put another way, it may be possible to provide some sort of low-level homeostasis without emotions, but the maximum expression of those concepts might lead to a deeper philosophical awareness. At that point, look for IBM's business systems to call in sick with "mental health days" once in a while.
Richard Powers' novel Galatea 2.2 [barnesandnoble.com] was an interesting examination of the relationship between self-awareness, emotion, and intelligence.
Re:Autonomic responses are based on microbiology (Score:1)
It just sits there, mocking me.
Some day I'll get to put BSD on my work machine, and we'll see who's laughing then!
Finally, somebody gets it. (Score:3, Interesting)
That was in the original Apple "Inside Macintosh".
The hardware side is reasonably close on this. All the newer interfaces (USB, IEEE-1394, PCI, PCMCIA) have identity info on all devices. And it's been that way for a few years now. It's time to pull the plug on the old stuff and insist that everything autoconfigure.
This is key. And again, Apple almost had it right, once. The original Apple model was that the system had two main repositories of system state - the Desktop file, and application preferences. The Desktop file could be regenerated if needed (and had to be, due to lousy database design), and application preferences were cosmetic only - you could delete preferences at any time, and the application just went back to the defaults.
Apple never faced up to checkability, though. And it hurt them, because they were running an unprotected OS with a tendency to trash its internal data structures.
Broken things must not contaminate other things.
It's unacceptable to ever get bad data from a disk. Reported errors, yes; undetected errors, no. Everything must have error checking. Memory parity must always be on. (And ECC ought to be standard.)
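A minimal sketch of "reported errors, yes; undetected errors, no" at the data layer. The framing here, a CRC32 prefix on every block, is just one illustrative choice, not a claim about any particular disk format:

```python
# Store a checksum alongside every block, verify on read, and refuse
# to hand back data that fails the check: corruption becomes a
# reported error instead of a silent one.

import zlib

def write_block(data: bytes) -> bytes:
    # Prepend a CRC32 so corruption is detectable later.
    crc = zlib.crc32(data)
    return crc.to_bytes(4, "big") + data

def read_block(blob: bytes) -> bytes:
    stored = int.from_bytes(blob[:4], "big")
    data = blob[4:]
    if zlib.crc32(data) != stored:
        raise IOError("checksum mismatch: refusing to return bad data")
    return data

blob = write_block(b"payload")
assert read_block(blob) == b"payload"

# Flip bits in the last byte to simulate on-disk corruption:
corrupted = blob[:-1] + bytes([blob[-1] ^ 0xFF])
try:
    read_block(corrupted)
except IOError as e:
    print(e)  # checksum mismatch: refusing to return bad data
```

The same principle scales up to ECC memory and end-to-end checksums: every layer either returns good data or an explicit error, never bad data.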
Re:Finally, somebody gets it. (Score:1)
That's one way to look at it, but what is really happening is that the Unix crowd ensures that features are built on top of rock solid foundations rather than adding features that don't always work and then going back and making lots of fixes.
BTW, I don't tell the OS the configuration of the hard drive every time I open a file.
Re:Finally, somebody gets it. (Score:1)
PCMCIA is still a sack of shit. I have any number of laptops, PCMCIA cards, and external PCMCIA drive bay units, and I always have to go find drivers for those devices, then for the PCMCIA controllers, and plug them in. Many combos don't work: they lock up the PC, prevent it from booting, don't get recognized even with drivers installed, or do get recognized, driver loaded, but just don't "take" as far as the OS is concerned.
Nah, it has a long way to go. Another "Mac" idea was requiring hardware to configure itself, or at least prompt the user to push in the install floppy, which executes and configures automatically. I'll never forget the idiocy of using a PC at work for the first time (after college) and, what the hell is with all this IRQ crap, I/O port address stuff, just to get my modem working? How in God's name do I find this out (no Internet back then)? Why doesn't the manual say? Where is my computer's manual? What's all this IRQ crap in Duke Nukem just to get the SoundBlaster to work? WTH?
All you PC programmers suck! It was the only logical, rational conclusion.
Re:Finally, somebody gets it. (Score:2)
Thanks for the many excellent points in your post; I'd like to offer these additional suggestions. I'm sure there are some mathematical terms (closure? reflexive?) that these are based on, but it's been way too long since I took that college class.
An advantage is that one can capture ALL the settings that configure an application. AND, one can use those captured settings to EXACTLY restore that configuration.
An added advantage is that one can programmatically generate permutations of parameters and their values for AUTOMATED TESTING of the possible configurations. You don't even need to know beforehand what the result is supposed to be -- just permute them, use them, capture what happens, and then sift through to see if everything makes sense. Save valid output as a baseline; fix any bugs that are discovered; repeat until all permutations are covered.
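A rough sketch of that permutation idea. The parameter names and the toy run_app stand-in are made up for illustration; in practice run_app would launch the real application under each configuration:

```python
# Enumerate every combination of configuration parameters, run the
# system under each, and record the outcome as a baseline for later
# comparison.

import itertools

# Hypothetical configuration space for some application:
params = {
    "cache": ["on", "off"],
    "log_level": ["debug", "info", "error"],
    "threads": [1, 4],
}

def run_app(config):
    # Stand-in for actually exercising the application; here we just
    # report whether the combination is "sane" by a toy rule.
    return config["threads"] > 1 or config["cache"] == "on"

baseline = {}
keys = sorted(params)
for values in itertools.product(*(params[k] for k in keys)):
    config = dict(zip(keys, values))
    baseline[values] = run_app(config)

print(len(baseline))  # 2 * 3 * 2 = 12 permutations covered
```

Re-running the sweep after a change and diffing against the saved baseline is exactly the "save valid output; repeat until all permutations are covered" loop described above.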
Two Questions come to mind (Score:2, Informative)
Secondly, how do we accomplish this without advancing machine technology too far? If a machine becomes self-aware and protective of itself, what happens when we want to shut it down? What are you doing, Dave? I know there are ways of preventing this, but will they work, and will we be able to find out whether they work before it is too late, so to speak? I'm not trying to be paranoid, but this is a real concern.
Another piece of this that someone else mentioned: if the computer is maintaining the basic stuff, what happens when the computer dies and no one knows exactly how it did what it did? A very real example is the ubiquity of calculators. How many of you can still do long division in your head? There was some story I read in high school where a guy who could do simple math without a computer was such an oddity that he became a king or something like that.
Keep doing those math problems.
Recursive Composition and Parameterization (Score:3, Informative)
It tells your heart how fast to beat, checks your blood sugar and oxygen levels, and controls your pupils so the right amount of light reaches your retinas.
There's an OO principle called the Law of Demeter which advocates as few dependencies as possible between objects. This sounds like a lot of hooks all over the place, which is not a model of simplicity. It would be better for "it" to step out of the way and let each object adjust itself based on its surroundings, just as in natural systems. Nature has a tremendous advantage over computers: it is far more efficient because everything is happening literally in parallel. Computers can really only do a very limited number of things at a time, although sometimes the user perceives concurrency due to very rapid time-slicing.
As a result, programmers are forced to make tremendous compromises given the comparatively limited medium with which they have to work. It will take well established techniques and objective analysis to determine the best way to utilize bits on silicon.
Over the years I have recognised one principle that transcends this issue -- the issue of dealing with complexity. Oversimplified, it is Recursive Composition. This "pattern", or OO construct as it is sometimes referred to, does not have a Class or particular set of relationships between objects. It's completely arbitrary. The idea is to recursively delegate the responsibility for one part of the system to yet another module. At the leaves of this tree you have the primitive operations, and at the root you have one simple instruction for triggering a potentially very complex cascade of instructions. Thus you have reduced the complexity of the overall system. The key difference between this and just another group of functions calling one another (and thus the key to reducing the complexity of programs and of real-life systems) is parameterization.
As a simple example, imagine trying to encode or decode a database file. The database file has a header, a record list, and data chunks. Like this one on PalmOS PDB files [palmos.com]. If one were to apply the principle of Recursive Composition, the API for this PDB codec would be, at the top level, PDB_decode(char *src). At the next level down you have operations like Hdr_decode(char *src) and Record_decode(char *src). At the leaves you have dec_uint32be(char *src) to decode an unsigned 32-bit integer in big-endian byte order.
If you can parameterize cleanly exactly what is required to perform a task and delegate it to another module, you have broken the problem into at least two smaller problems, which reduces the order of complexity. Simple!
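Here is one way the PDB sketch might look, translated from the C-style signatures above into Python. The field names and the shortened header layout are simplifying assumptions for illustration, not the full PalmOS format:

```python
# Recursive Composition: root -> mid-level -> leaf decoders.
# One call at the root triggers the whole cascade; each level is
# parameterized only by the buffer and an offset.

import struct

def dec_uint32be(buf, off):
    # Leaf operation: one unsigned 32-bit big-endian integer.
    return struct.unpack_from(">I", buf, off)[0], off + 4

def dec_uint16be(buf, off):
    # Leaf operation: one unsigned 16-bit big-endian integer.
    return struct.unpack_from(">H", buf, off)[0], off + 2

def hdr_decode(buf, off):
    # Mid-level: delegates each field to a leaf decoder.
    name = buf[off:off + 32].rstrip(b"\0").decode("ascii"); off += 32
    attributes, off = dec_uint16be(buf, off)
    version, off = dec_uint16be(buf, off)
    ctime, off = dec_uint32be(buf, off)
    return {"name": name, "attributes": attributes,
            "version": version, "ctime": ctime}, off

def pdb_decode(buf):
    # Root: one simple instruction for the complex cascade.
    header, _ = hdr_decode(buf, 0)
    return header

raw = b"MyDatabase" + b"\0" * 22 + struct.pack(">HHI", 8, 1, 12345)
print(pdb_decode(raw)["name"])  # MyDatabase
```

Record and data-chunk decoders would slot in beside hdr_decode the same way, each one a smaller problem delegated downward.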
about 30 years behind the times (Score:3, Interesting)
Homeostasis and self-regulation are not properties that you implement once in some abstract data type and that henceforth work for everything, nor do they require breakthrough new technology; they are design goals that you need to take into account when you design each and every part of a system. Biological organisms have been forced from day one to deal with these issues. The reason real software systems don't do this is not that people don't know how to; it's that software developers don't bother and aren't trained to do it, and they can get away with it because there are always smart humans around to help the software along.
So, next time you write a new piece of software, think about how you can make it more self-adapting and less reliant on numerous environment variables and other arguments supplied by the user. The pathsearch library is a simple example of this.
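One hedged reading of that pathsearch idea, with the function name and candidate directories chosen for illustration rather than taken from the actual library: instead of demanding an environment variable up front, the program probes conventional locations and falls back gracefully.

```python
# Self-adapting lookup: search likely locations for a file before
# giving up, rather than requiring the user to configure a path.

import os

def find_config(name, extra_dirs=()):
    """Return the first existing path for `name`, or None."""
    candidates = list(extra_dirs) + [
        os.getcwd(),                 # current directory
        os.path.expanduser("~"),     # user's home
        "/etc",                      # system-wide location
    ]
    for d in candidates:
        path = os.path.join(d, name)
        if os.path.isfile(path):
            return path
    return None  # caller can fall back to built-in defaults

# The program adapts to wherever the file actually lives:
print(find_config("app.conf"))
```

The same probe-then-fall-back shape applies to locating interpreters, libraries, and peer services; each removed knob is one less thing a human has to supply.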
Security? (Score:3, Insightful)
Vaporware (Score:2)
The article lists 8 characteristics of autonomic systems:
When they develop an autonomic programing language it will be time to give it some serious consideration.
Re:Vaporware (Score:1)
So What (Score:2, Funny)
wakka smakka
Self rewriting software? (Score:1)
Request for paper mentioned (Score:1)
Auto-coding - Virtual Interaction Configuration:
Knowledge Navigational Mapping - Virtual Interaction Configuration [mindspring.com]
The Matrix Metaphores [mindspring.com]
VIC legal, equations, definitions and concepts [mindspring.com]
Command specs [mindspring.com]
Knowledge Calculator [mindspring.com]
Re:Bad for jobs? (Score:1)
Lie back and relax. Then after a while call your customer and tell 'em you'll need 20 more hours to finish his program (and he'll pay).
Re:Bad for jobs? (Score:1)
Re:The day when computers program themselves (Score:1)
As automation increases, should we decrease the number of humans (Bender!), reduce the average workload to spread the jobs around, or maintain a purely competitive environment where those without means do not survive (thereby achieving the first option)?
I don't necessarily think any of these is the best option, but I do think it's silly that the number of hours in the American work week continues to grow along with the unemployment rate.
Re:The day when computers program themselves (Score:1)
Productivity increases of any type -- cheaper labor, more automation, better manufacturing techniques, revolutionary ideas -- all increase the overall quality of life for society, whatever the temporary bumps.
If everyone still works a 40-hour week, then quite simply, that much more work is done, and everyone lives a better life. Europe trades off that productivity increase by decreasing hours worked. This gives more free time (good), but you don't get that for free: it creates inefficiency, leading not to decreased unemployment but to increased unemployment.
The ultimate of this, of course, is to have Reed Richards [berkeley.edu] invent a machine that produces copies of anything, including itself, and start passing them around.
By the European, or "oh no, this is progress?!?!?", theory of economics, everyone would be out of a job and starve. Yet what would happen is exactly the opposite: for a few dollars a year of energy, people would have all the food, excellent goods, big houses, and whatnot that they could desire.