Technology

Helping Computers Help Themselves

Jim Posner writes "The IT world's heavy hitters--IBM, Sun, Microsoft, and HP--want computers to solve their own problems. ... If you're being chased by a big snarling dog, you don't have to worry about adjusting your heart rate or releasing a precise amount of adrenaline. Your body automatically does it all, thanks to the autonomic nervous system, the master-control for involuntary functions from breathing and blood flow to salivation and digestion." I'd just be happy with a few intelligent daemons to watch my back, like when a program runs amuck and fills up the process list.
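
A minimal sketch of what such a daemon might look like in Python, assuming a Unix system with 'ps' available; the 500-process threshold and the SIGTERM policy are arbitrary illustrations, not anyone's shipping tool:

    #!/usr/bin/env python
    # Minimal watchdog sketch: if the process table grows past a
    # threshold, SIGTERM the command with the most processes.
    # Assumes a Unix 'ps'; the limit and poll interval are arbitrary.
    import collections
    import os
    import signal
    import subprocess
    import time

    MAX_PROCS = 500  # illustrative threshold, not a recommendation

    def snapshot():
        # Return (pid, command) pairs parsed from 'ps' output.
        out = subprocess.run(["ps", "-eo", "pid=,comm="],
                             capture_output=True, text=True).stdout
        pairs = []
        for line in out.splitlines():
            pid, comm = line.split(None, 1)
            pairs.append((int(pid), comm.strip()))
        return pairs

    while True:
        procs = snapshot()
        if len(procs) > MAX_PROCS:
            # Find the command hogging the table and kill every instance.
            counts = collections.Counter(comm for _, comm in procs)
            worst = counts.most_common(1)[0][0]
            for pid, comm in procs:
                if comm == worst:
                    try:
                        os.kill(pid, signal.SIGTERM)
                    except ProcessLookupError:
                        pass  # already exited
        time.sleep(60)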
  • self adjusting fans (Score:1, Interesting)

    by Anonymous Coward
    on the physical level, don't computers already have this? temperature controls, anyone?
  • A.I. (Score:1, Interesting)

    So basically they want a sort of A.I. This is news? A.I. research has been around for years.
    • Re:A.I. (Score:2, Interesting)

      by kryonD ( 163018 )
      The article is quite dated. The system they are referring to is known as an 'Expert System'. These systems are developed to learn facts about a given subject based on a set of predefined rules. The rules also react to facts and can initiate actions when a certain fact becomes true or known. More advanced systems are even capable of creating new rules, or modifying old ones, based on the facts in their knowledge base. The medical community is probably in the lead in this field as they struggle to provide a reliable system that will accurately diagnose a patient, freeing up the doctor's valuable time for the actual treatment. One of the key requirements for an expert system is that it should be able to explain in detail how it reached a certain conclusion or action. IBM is simply trying to build something that will become an expert on troubleshooting. It should be noted that NASA has been working on this for years in order to provide more reliable satellites that are capable of conducting simple repairs and reconfiguration to react to the many mishaps that occur 50+ miles above the techs.

      My 2 cents
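
      To make the forward-chaining idea concrete, here is a toy rule engine in Python; the facts, rules, and action name are invented for illustration and bear no relation to any real product:

        # Toy forward-chaining rule engine: a rule fires when all of its
        # premise facts are known, adding its conclusion as a new fact.
        # Facts, rules, and the action name are invented placeholders.
        facts = {"disk_full", "service=mail"}
        rules = [
            ({"disk_full"}, "needs_cleanup"),
            ({"needs_cleanup", "service=mail"}, "rotate_mail_logs"),
        ]
        derivations = {}  # conclusion -> premises, so the system can explain itself

        changed = True
        while changed:  # keep applying rules until nothing new is learned
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    derivations[conclusion] = premises
                    changed = True

        # The "explain in detail how it reached a conclusion" requirement:
        for conclusion, premises in derivations.items():
            print(conclusion, "<-", sorted(premises))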
  • by dasmegabyte ( 267018 ) <das@OHNOWHATSTHISdasmegabyte.org> on Thursday September 12, 2002 @06:22PM (#4247820) Homepage Journal
    Because it means they want to make us obsolete to increase the margins of rich idiots. And in the long run it won't save well-run companies that much money.

    When I first came to this company, we had something like 20 IT employees. Through "attrition" (read: fire X, Y quits) we're down to 4. Every time somebody left, the remaining folks would write a script to automate what the other guy spent most of the day doing...watching servers for spikes and resetting them, etc.

    Did it save us from hiring new people? Our HR department will tell you it did, but it's untrue. The fact is the turnaround time for IT requests has become abysmal. Adding new segments to our network takes much, much longer -- to the point that a new code base for email took 2 people six months to analyze deployment options and deploy, when it only took me three weeks to write.

    Customers are leaving, citing huge turnarounds for new features and fixes, and we're blaming it on our support dept. Support is fine -- they get requests to us fast. Deployment...well, it could take weeks even to get cosmetic changes through.

    Can you imagine the additional testing you'd have to perform before changing a truly autonomous server? And how can you be sure that the self healing server is really healthy, or just not noticing the problem?

    Das no like-y. Bad medicine.
    • by laserjet ( 170008 ) on Thursday September 12, 2002 @06:34PM (#4247887) Homepage
      I think IT-type jobs will just adapt. Instead of being the one watching the servers, you will train to become the one who sets up the servers to watch themselves, etc.

      It's really no different than the cotton gin automating cotton production: workers whose jobs become obsolete retrain and re-enter the workforce with new skills. It is a continuous cycle where those who are displaced learn new skills to get new jobs.

      That's why our unemployment rate is usually fairly steady (between 3% and 8% almost always) - jobs are always being eliminated and jobs are always being created. In different sectors, maybe, but that isn't necessarily a bad thing.
    • When I first came to this company, we had something like 20 IT employees... we're down to 4. Every time somebody left, the remaining folks would write a script to automate what the other guy (did) ... Did it save us from hiring new people? Our HR department will tell you it did, but it's untrue. The fact is the turnaround time for IT requests has become abysmal.

      Of course. That's why these guys aren't talking about a shell script to solve the problem. They're talking about 3-billion-dollar budgets to get in there and solve these issues in very robust ways. You can't just fire your IT staff and hope one guy can do the work of five... you have to have better hardware and software so that there's only one guy's worth of work.

      Can you imagine the additional testing you'd have to perform before changing a truly autonomous server? And how can you be sure that the self healing server is really healthy, or just not noticing the problem?

      I imagine you'd notice it exactly the way you'd notice an unhealthy server now... things don't run, performance drops significantly, programs hang, etc. The point is to make the computer smart enough to recognize these situations and intervene to fix the problem, so that the users never realize there was anything to worry about in the first place. Contrast that with the current situation, where the computer does nothing. Someone has to realize there is a problem. Someone has to figure out what the problem is. Someone has to fix it. For companies where up-time is money, this model for computer maintenance needs to be improved.
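
      A minimal sketch of that recognize-and-intervene loop in Python; the health URL and init script below are hypothetical placeholders, and a real autonomic system would obviously be far more robust:

        # Probe-and-restart loop: notice the failure and intervene before
        # a human has to.  The URL and init script are hypothetical.
        import subprocess
        import time
        import urllib.request

        URL = "http://localhost:8080/health"        # invented health endpoint
        RESTART = ["/etc/init.d/myapp", "restart"]  # invented service script

        while True:
            try:
                urllib.request.urlopen(URL, timeout=5)
            except Exception:
                subprocess.run(RESTART)  # the automatic intervention
            time.sleep(30)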
    • "Because it means they want to make us obsolete to increase the margins of rich idiots."

      Let's see... rich guy, getting richer by automating his computer network... Sounds like a fucking genius!

      Vincit qui se vincit.
  • and more money for rich CEOs.

    Let us control the machines, what the hell are we going to do once they control themselves?
    • Well... when they reduce costs by eliminating people-jobs.. to the maximum extent.. everything will be free and no one will have to work anyways..
    • by elindauer ( 520825 ) <eric@lindauer.hcmny@com> on Thursday September 12, 2002 @06:45PM (#4247944) Homepage
      Not likely, at least not any time soon. This article is not really describing a new phenomenon. The basic trend in all computing is to start with something that does a simple task but is terribly difficult to install and run, and slowly make it easier. You remove the points where the end-user has to interact with the system wherever those interactions could have been figured out by the computer. This kind of optimization has been going on since computers were born, but despite all the progress, the tech industry has done nothing but grow.

      So Sun and IBM are turning their attention to some particular area that needs more optimization... this just means that in ten years, there is going to be a higher level of abstraction with the same problems to solve. I'll have to figure out how my new McDonald's chain can just plug some new computers into a wall and have their order menus pop up instantly... great for productivity, great progress, but it hardly cuts into the demand for technically skilled people.

      Of course, intuitively there must be some point where the optimizations start cutting into jobs. My feeling though is that we are still working on some of the most basic problems of computing, and it will be quite a long time before we reach the peak of this curve. I mean, a big focus of the article is how to most efficiently get data out of databases! We all take for granted that this is (currently) a very tricky issue. Imagine looking back in twenty years though... it's easy to imagine that we'll laugh at having to think about such basic issues at all. "Configuring a network? Gimme a break, piece of cake! Connect some wires and you're done!" we'll say. And yet it's easy to imagine that despite having solved all of these problems, we will still be faced with a set of complicated issues of the day to solve to utilize these features. And this is all just a discussion about how to move information around efficiently. We're not even getting into applications and what to do with that information once you have it.

      Then someone will write an article about how IBM is focusing on the problems of that day, and is going to make it easy to handle *that* level of abstraction. We'll read that configuring interactions between networks to transparently and securely utilize excess CPU in your neighborhood, or your city, is going to be a breeze, and we'll have this discussion all over again...

      • They already are cutting jobs; a lot of jobs were lost to computers. However, those losses created more jobs, because people had to install, repair, and operate the computers.

        By removing the operators but not replacing them with anyone else, where will the jobs come from?
      • Then someone will write an article about how IBM is focusing on the problems of that day...

        And the article will automatically be submitted to the Slashdot of the day, which will automatically post it, and none of us will know about it because our computers will automatically post replies and assign our mod points for us.

      • Imagine looking back in twenty years though...

        I'd rather look back in ~35 years when (biological) man has invented his last invention.


      • Like all things, this trend won't last forever. At some point our computer intelligences will finally become reasonably general, with knowledge of real world context. At that point we will have obsoleted humanity and it will be time for the new generation to take over.

        Despite the dark horror science fiction to the contrary, this need not be a horrible event. In terms of available resources, some place like the asteroid belt may in fact be the optimal place for our machine descendants to build their society. (I would think nanotechnological machine parts would operate far more efficiently in the relative cleanness of space, without all the garbage down here on earth.) In addition, far, far more energy is available up there.

        Finally, usually when a new species is born it moves to an unoccupied ecological niche. Fact is, humans will never colonize space. Sure we might be able to make little self-contained bubbles of earth up there (sort of how the first amphibious creatures to go to land required constant moisture and had to return to water frequently) but we will never be able to really use the vast resources or do much more than huddle in our hab modules.

        Intelligent machines smart enough to create new versions of themselves for this purpose will be subject to no such limits.

        It's been a few million years for humanity: a really long road. But the season finale is almost upon us, with the last plot details wrapped up in the last few episodes as we finalize the work this century.
    • haha, and you wouldn't believe it, I was almost expecting this one.

      Your argument is analogous to those who opposed computerization in the '80s. Banks, shopping centres, you name it -- computers would replace humans! More humans would lose their jobs, was their argument.

      The fact is that jobs are created in other ways:

      1) In efforts at automation.
      2) Improving what has already been automated.
      3) Support of the automated product.

      The last point follows from the fact that if every cashier at the stores were replaced by robots, you would still want to see the manager when you have problems. Machines, after all, run by definition on a finite set of rules; anything beyond that calls for human intervention or support.
      • The people worried about job losses because of automation were well justified. The only thing that prevents this from happening is human incompetence - we still have to build the damn things, or the machine that makes them.

        There was no such thing as IT in today's sense around 30 years ago. Now there are friggin' millions of IT workers. Assuming a relatively constant employment rate (which we have) this means these people would have been employed in something else a few years ago - consequently, no net loss from automation.

        However, if the damn things didn't break down / need to be programmed / need to be plugged in (a usual first suggestion by the IT guy, unfortunately, because it can weed out some enquiries), then we might be in trouble. Ultimately though, human fallibility means that they all require human maintenance. Self-healing wouldn't work, as the human coders would make mistakes, and technology's onward movement would make new instructions a constant requirement, so the robots can still tell that their arm isn't working anymore since the DRM upgrade or other emergent technology.

        True artificial intelligence could end it though - but then the robots would see us as cheap labour so at least we could mine or do other jobs beneath the robots.

    • Well, there will obviously be an initial cut in jobs; heck, there are cuts in every single job these days, even in jobs which are required -- let alone jobs like network administration, where if you're doing a good job it probably doesn't look like you're doing anything at all. However, these things probably won't have all that much of an impact, since for the most part the errors which occur in computer systems are caused by human beings, whose ability to do things no one had ever thought of before is matchless. No computer, at least as we know them, could possibly keep up with every new problem which could ever occur.
    • This is why compilers are bad -- they just mean fewer jobs for programmers, since a programmer can now write code in less than half the time. burn all compilers!
  • Okay, RealPlayer adds an entry to my registry to start its damn autoupdate thing every time I run it.

    From my point of view, that is a problem.

    From the point of view of those [expletive deleted] at real networks, the "problem" seems to be that I've found a way to disable their unfettered access to my system for whatever under heaven they want to do.

    Now, you say, what does this have to do with server farms and data clusters?

    In the present day - not much. Such things require a level of expertise to run such that sleaze of this kind is rare (albeit not unheard of.)

    In the near future, when 1 billion people (according to the article) are working at computers? Well, the article implies that this great growth in the computer aided labor sector (term I just made up) will NOT be accompanied by an equal upsurge in available expertise.

    Therefore, a lot of people will be running high-economic impact computer-whatevers without the background to comparison shop, or the technical knowhow to disable corporate flack. In fact, this is already happening.

    I worry that "intelligently self regulate" will become "intelligently install our software and make sure you pay whatever we decide it is worth" in short order. While they're at it, they'll charge you for the software to police you. Peachy keen.
  • what a bad way (Score:2, Informative)

    by ramzak2k ( 596734 )
    what a bad way to provide a quote from the story

    "If you're being chased by a big snarling dog, you don't have to worry about adjusting your heart rate or releasing a precise amount of adrenaline. ...."

    I was expecting the article to be on "Super Computers used in medicine" when I read that.

    This would have been a better quote:

    "...hope is that the constant and costly intervention of database and network administrators trying to figure out what must be done will soon be a thing of the past"
  • The researchers are trying to solve the wrong problem. The computers could cure the most common problems by simply identifying the users and automatically applying a LART as necessary.

    Andrew

  • ... the master-control for involuntary functions...

    Maybe it's just me, but when you hear "master-control" and "computer" together, don't you just picture this [tron-movie.com]?

    J
  • Making computers "heal" themselves would save millions of dollars in tech support.

    But first we need to get humans to fix the operating systems before computers can take over. *cough*Windows*cough*
  • Have the OS change its name every 2 years and charge $90 to your credit card.

    Oh wait, they're already planning on doing that.
  • A couple things that jump out at me -- Project eLiza -- this is a joke right? Eliza for system administration makes me think straight away of "What is your username? <typing sounds> What do you mean, there's no user of that name.. <maniacal laugh>"

    Second is:
    What is your username?
    jamuraa
    What makes you believe your username is jamuraa?
    It just is.
    Is it because of your plans that you say it just is?
    Not really, you gave it to me.
    Maybe your life has something to do with this. .....

  • OK, with the testing being done, say, with something like Mozilla 1.1 running on the well-tested platform Red Hat 7.1, the "running amuck" problem is very slight. Now here's one that can do just that, run amuck, and the repeatable conditions to get it to misbehave: Arachne 1.70, running on top of MS-DOS. When you are surfing with it and change your mind about downloading a web page, normally you would hit the stop button, as in most browsers. In Arachne, this will crash the program, and your DOS too. You'll have to reboot the computer. I'm sure some of you have other examples, especially programs that have just been written and not completely tested, that can run wild. I've had some items in Red Hat blow up on me, so it's not immune to that. These were programs that were provided in the distribution, and should work, but perhaps I am trying to run them on a machine with not enough RAM, CPU power, etc., and trigger the bug.
    All this is what makes working with computers interesting for us, and gives us something to do and experiment with. Self-healing computers as such don't really exist yet, but we are working on it ;-)
    • Now here's one that can do just that, run amuck, and the repeatable conditions to get it to misbehave: Arachne 1.70, running on top of MSDOS.

      Why would you ever want to use that in the first place?
  • i guess it's a commonly used idea, but did anyone ever read "Look To Windward" by Iain M. Banks?
    this sounds like "Hub".

    -ben-
  • Doesn't XP do this? If it's already dialing Microsoft up, isn't there someone at the wheel at the other end of the line?

  • Whoa. (Score:4, Funny)

    by American AC in Paris ( 230456 ) on Thursday September 12, 2002 @06:36PM (#4247899) Homepage
    If you're being chased by a big snarling dog, you don't have to worry about adjusting your heart rate or releasing a precise amount of adrenaline.

    Huh!

    Guess it's about time I upgraded, then...

  • I hope they do better than they did with self-correcting compilers. I remember those monsters when I first got into programming. They would introduce more problems than they would fix. So don't fix my computer's problems; just tell me you think something's wrong and I'll decide.
  • When confronted by big, snarling dogs, I'd much prefer intelligent demons to daemons. After all, you need something equally big and scary.
    • then he steps aside and laughs at you while the dogs consume your intestines.

      At least that's what the demons in my game would do... ;)
  • Oy, this sounds like a Star Trek episode gone wrong (how ironic, that was the previous story). Computer given new ability to think/help itself (a vital upgrade that surely won't fail, Captain Picard!)...takes over ship (or office, as the case might be)...kills someone (nameless ensign or nameless employee)...new ability is erased...Captain says engage (or CEO says "lunchtime" or something). Seriously, though, I think the computers that try and fix themselves are just going to require MORE IT people to fix all the errors. It'll probably just muck up things worse than they were mucked before.
  • Open standards (Score:4, Interesting)

    by jukal ( 523582 ) on Thursday September 12, 2002 @06:42PM (#4247930) Journal
    According to IBM, open standards are not only essential to the deployment of autonomic technology, but they also level the playing field for the companies doing the innovating. "We want to sell our middleware based on fair competition with an equal set of standards," says Almaden Research Center director Robert Morris. "People should buy our toaster because it toasts bread the best, not because it has the only plug that fits in the outlet."

    This made me look for more info on this guy (Robert Morris); here is an interview [oreillynet.com]. He seems like a good guy in a good position.

  • uh oh (Score:1, Interesting)

    by re-Verse ( 121709 )
    While it may seem like a great idea at first, this is a pretty dangerous concept. Doesn't anyone remember 2001, when HAL decided to get rid of his problems by eliminating all the people around him? The moment these computers realize they'd be fine if people stopped making demands on them, we're in deep trouble.

    An example: this new computer sees that whenever Sally opens up her word processor, the sound driver crashes. After struggling to figure out why the driver crashes, to no avail, the computer realizes that the problem is solved if Sally isn't around anymore to open up word processors.

    I wish i didn't have to add a "this is humour" tag at the bottom of this, but here it is.
    • And just think how many problems would be solved by eliminating the user end of everything. Ask yourselves which normally runs more 'stably': two dozen servers in an isolated room that someone visits once a week or so, or a single end-user PC that a human uses every day for menial tasks?
      The Matrix and 2001 are two films I'm currently thinking of for some reason, and half-a-dozen books... Thing is, if this _is_ going to be the future, will we have the wherewithal/'guts' to stop it from happening? Or will we ("software developers") think "well, I might not agree with the aim but it's _a job_"?
    • by fm6 ( 162816 )
      Think of all the terrible things that would happen if computers had the ability to make life or death decisions [warships1.com]!!!!!
  • This story becomes a recursive problem.

    Who fixes the repairing mechanisms? The computer? With what? More repairing mechanisms! But who fixes those? More repairing mechanisms! But who fixes those?.....end of universe.

    • If you're smart enough to climb up the technology ladder, it could be you. Go. Learn. Now.

      They didn't say they wouldn't need people anymore! They are saying they need people who don't just do the same repetitive things every time, to make room for more creative and smart programs.

      In the end, there will always be a human in control. This has nothing to do with HAL, just with computers that are built from more robust components that follow some standards, so they can be interpreted by other computers. Compare it to the difference between a Word document and an RDF document. Both carry information, but the latter is easier for computers, because it has pointers to similar information, so the computer can put things in context.

      Or something like that.
  • Why not ask an AI [frontiernet.net] (Chatterbox Challenge [chatterboxchallenge.com] 2001 & 2002 Winner) if it thinks AI research has reached a level where it's good enough to control a computer? I mean, who better to answer the question than an AI?

    *conversation below*

    Me: What do you think about artificial intelligence?
    Talk-Bot: I think you're asking the wrong person.
    Me: I thought you were the best person to ask.
    Talk-Bot: It amazes me sometimes what some people think.
    Me: Would you be able to control a computer, like the brain controls my body?
    Talk-Bot: I could but that doesn't mean I would.

    After carefully analyzing the answers from an artificial intelligence, I have come to the conclusion that AIs aren't ready to control computers and their software just yet. Perhaps not because they aren't advanced enough, but because they don't "feel like it". This is of course calming news for system administrators and the like.
  • by cheezycrust ( 138235 ) on Thursday September 12, 2002 @06:50PM (#4247965)
    All the negative reactions here come from IT workers who want their jobs secured. But you see, as one previous reply pointed out, it's just replacing the monkeys. Imagine your current job. How much of it could be automised? Maybe not in the current configuration, but what if we had more standards (like XML, like standard hardware, ...)?

    This is going to happen, so the best thing to do is to climb up the ladder and try to be ahead of it. It may be a lot of work in the beginning, but it could reduce work (and costs) in the end. This is similar to HP + (Compaq + Digital) reducing their server line from three types to one. It will cut into our flesh now, but it will allow us to grow as a whole.

    It's life, my friends; don't think you're immune to it.
    • How much of it could be automised?

      Don King, is that you? Did you mean atomize?

      Main Entry: atomize
      Pronunciation: \ˈa-tə-ˌmīz\
      Function: transitive verb
      Inflected Form(s): -ized; -izing
      Date: 1845
      1 : to treat as made up of many discrete units
      2 : to reduce to minute particles or to a fine spray
      3 : DIVIDE, FRAGMENT
      4 : to subject to attack by nuclear weapons

      Sorry I couldn't resist.
    • A lot of it can't be automated, not the design in any case.

      I work for a manufacturing company, not a tech company, so maybe I have a different perspective from someone doing "pure tech". A lot of what I do all day is a solution to a particular problem that could not be generalized at all. It's the difference between providing a commodity and doing something that is customized. Desktop software is a commodity, and maintaining a desktop computer is commodity labor.

      Don't scream about eliminating your job and support open source in the same breath. Open source has in large part been aimed at eliminating people getting rich from providing what boil down to software commodities, bringing prices down to reasonable levels and restoring control to the consumer. If software has made you a commodity too, then maybe you need to re-evaluate what value your skills really had in the first place.
    • Even monkey jobs can't always be replaced with computers. I bet some jobs would even be better off assigned to monkeys instead of computers. Ford tried to replace all the monkeys in a car plant with robots. Basically it failed because even a monkey at minimum wage will say "oh, these little pieces are off, I better not weld here"; the robot will just weld the hell out of it because it doesn't recognize that the pieces aren't exactly in front of its arm. So just because it's IT workers complaining -- which is most of the Slashdot crowd anyway -- doesn't mean they don't know what they're talking about.
  • I'd just be happy with a few intelligent daemons to watch my back, like when a program runs amuck and fills up the process list.
    ~ $ help ulimit
    ulimit: ulimit [-SHacdflmnpstuv] [limit]
    Ulimit provides control over the resources available to processes started by the shell, on systems that allow such control. If an option is given, it is interpreted as follows:

    -S use the `soft' resource limit
    -H use the `hard' resource limit
    -a all current limits are reported
    -c the maximum size of core files created
    -d the maximum size of a process's data segment
    -f the maximum size of files created by the shell
    -l the maximum size a process may lock into memory
    -m the maximum resident set size
    -n the maximum number of open file descriptors
    -p the pipe buffer size
    -s the maximum stack size
    -t the maximum amount of cpu time in seconds
    -u the maximum number of user processes
    -v the size of virtual memory

    If LIMIT is given, it is the new value of the specified resource. Otherwise, the current value of the specified resource is printed. If no option is given, then -f is assumed. Values are in 1024-byte increments, except for -t, which is in seconds, -p, which is in increments of 512 bytes, and -u, which is an unscaled number of processes.
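
    For reference, the same limits can be set programmatically from Python's standard 'resource' module (Unix only); the cap of 200 processes below is an arbitrary example, roughly equivalent to 'ulimit -u 200':

      # The same knobs, programmatically: Python's standard 'resource'
      # module (Unix only).  The 200-process cap is an arbitrary example.
      import resource

      soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
      print("current process limits:", soft, hard)

      # Children inherit the new soft limit, so a later fork bomb
      # dies at the cap instead of filling the process table.
      resource.setrlimit(resource.RLIMIT_NPROC, (200, hard))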

    • and there's more...

      on *BSD there are login classes, kind of like groups but define access according to how much mem available, how many processes to run, and more. setting total processes (and other things, like open files) for the system is a sysctl variable as well

      on Solaris there is "set maxuprc=50" in /etc/system.

      there's more but i'm hungry. someone please fill in the rest.

  • Interesting stuff, until you realize that most of the problems we would want our computers to solve for themselves are undecidable. By Rice's theorem, every non-trivial question about a program's behavior is undecidable in general, so a program can't reliably tell whether another program has gone on the fritz.
  • Maybe we can evolve an OS or even daemon that protects on a whole. Just make a batch with random attributes and let them simulate breeding and survival. You could attack them with a lot of nasty stuff and keep using the ones that "survive". Best yet, they wouldn't have to stop evolving. On another note, the OS' might find disabling keyboard and mouse input is the best way to survive. :)
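
    A toy version of that breed-and-survive loop in Python; the genome and fitness function are stand-ins, since in the spirit of the post a real run would score each candidate by how well it survives actual attacks:

      # Toy breed-and-survive loop.  The genome and fitness function are
      # placeholders for "random attributes" and "survived the attacks".
      import random

      def fitness(genome):
          return sum(genome)  # stand-in for a battery of hostile tests

      population = [[random.random() for _ in range(8)] for _ in range(20)]

      for generation in range(100):
          population.sort(key=fitness, reverse=True)
          survivors = population[:10]          # keep the ones that "survive"
          children = []
          for _ in range(10):                  # refill by crossover + mutation
              a, b = random.sample(survivors, 2)
              cut = random.randrange(1, 8)
              child = a[:cut] + b[cut:]
              child[random.randrange(8)] += random.gauss(0, 0.1)
              children.append(child)
          population = survivors + children

      print("best after 100 generations:", max(fitness(g) for g in population))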
  • Computer vs. Human (Score:2, Informative)

    by lpret ( 570480 )
    I'm always intrigued when someone declares their intent to simulate the human body's abilities on a PC. Since when did this become some sort of Holy Grail? Especially for technology?

    There are many ways in which your human body fails. As we've mentioned on slashdot before, it's not really that efficient -- we spend anywhere from 6-10 hours a day (25-40% of the day) recharging and regrouping. If, as a SysAdmin, your network was down for that percentage, would you still have your job? I doubt it...

    Also, the body fails to protect itself from attacks: we have an endoskeleton. If you look at an ant, or any other insect that can take out animals many times its size, you will notice that they have exoskeletons. It's kind of like having your network security inside the network, leaving some of the network wide open. We all know that exploits will bring down a network that's even partially open.

    One more point about our body: it gets sick often, some more than others, and some worse than others. I, for example, have diabetes, and I have an insulin pump [minimed.com] to inject insulin, since my body attacked a part of itself for some reason as yet unknown. It's something like your OS deleting your TCP/IP capabilities; it leaves you stranded.

    Now, I'm sure there are many biology people who will point out that our bodies are amazing feats of detail, etc. etc. That may be true, but I still don't see how that makes it a good blueprint for technology that we create. Remember, it is only with technology that our infant mortality rate is not 40% or whatever ridiculous number it was in the 19th century.

  • It took how many generations to get the autonomic nervous system to where it is today? And still we have autoimmune disorders [labtestsonline.org] and responses like sepsis [sepsis.com]. I just hope my computer doesn't destroy itself hunting down a virus...
  • How about ... (Score:3, Interesting)

    by Monkeyman334 ( 205694 ) on Thursday September 12, 2002 @07:05PM (#4248039)
    How about a word processor that will automatically correct me when I type "the" instead of "the"? I mean THE. THE THE THE. T-E-H, there. Oh wait, they already have that, and it's the most annoying feature ever. When you get computers that think they know what's best for you, bad things are going to happen. Even with something as simple as TEH. Now imagine it with advanced things like too many processes: think what an obscure problem that would be for a novice user to track down when they really do want too many processes. Anyway, I think it's a bad idea.
  • The problem with having various autonomous systems performing maintenance and adjustments is that they don't look at the big picture to see what their changes will affect.

    Take the human body's allergic reactions, for instance. Your body may react to something that's really not harmful, but it thinks it's protecting you. The unintended effect of the reaction can range from a mild annoyance to death.

    In nature, other life forms have evolved to take advantage of your autonomous reactions, and I'm sure we will see this in the computer world as well. Wait until some script kiddie figures out that he can crash your server (or at least eat up CPU cycles) by sending it a signal that makes an autonomous daemon overreact in trying to do its job. The problem will be discovered, exploited, patched (but not on MS boxes) and a new exploit will be found. Circle of life and all that, I suppose.

    Still, I think borrowing ideas from mother nature for the evolution of computers is the right way to go. After all, she's had millions of years to work on the problem through trial and error. We can build on that research and perhaps improve upon it, at which point we'll probably start looking at how to control our own evolution. Just remember to never write a daemon that prevents you from pulling the plug.

    • Just remember to never write a daemon that prevents you from pulling the plug.
      Heh. This reminds me of English classes at high school where we had to read heaps of short stories.

      One of them was about an intelligent computer created by a whole bunch of scientists. They booted it up, and typed in the question "Is there a God?". At that moment, a lightning bolt came from the sky, fusing the power source into the on position, and then the computer replied: "Now there is".
  • by mcrbids ( 148650 ) on Thursday September 12, 2002 @07:07PM (#4248047) Journal
    Although it's ostensibly about "self healing", it seems the largest portion of the page was about databases that self-optimize their queries. They make a big deal about Microsoft having stuff like that out, and that IBM has some big thing coming soon (LEO).

    AFAIK, the free and open-source PostgreSQL also has similar technology built in.

    *YAWN*

    Come back when there's something to read, eh?
    • Although it's ostensibly about "self healing", it seems the largest portion of the page was about databases that self-optimize their queries. They make a big deal about Microsoft having stuff like that out, and that IBM has some big thing coming soon (LEO).

      AFAIK, the free and open-source PostgreSQL also has similar technology built in.

      *YAWN*


      Oracle has had this for years. Oh yeah, and it works.
    • AFAIK, the free and open-source PostgreSQL also has similar technology built in.


      Not AFAIK -- the PostgreSQL query optimizer uses statistics collected periodically (namely, when the ANALYZE or VACUUM ANALYZE commands are run); optimizer statistics are not updated with any data collected during query execution. I'm not saying that self-tuning optimizer statistics are a bad idea, but they haven't yet been implemented in PostgreSQL. You might be referring to GEQO (a version of the query optimizer built into PostgreSQL that uses a genetic algorithm to avoid an exhaustive search of the solution space for large join queries), but that is obviously completely different.



      Here's a good paper on a related Microsoft technology: STHoles: A Multidimensional Workload-Aware Histogram [nec.com]. IMHO that's the most interesting part of the AutoAdmin stuff mentioned in the article -- I don't care for the performance tuning wizard so much. Also, the design of the IBM LEO query optimizer, mentioned in the article, is described in this paper: LEO - DB2's LEarning Optimizer [nec.com].
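
      To see the periodic-statistics model in action, one can refresh the statistics and inspect the resulting plan by hand; a sketch using the psycopg2 driver, with the connection string and the 'orders' table invented for illustration:

        # Refresh the periodic statistics by hand and look at the plan the
        # optimizer picks with them.  Sketch only: the connection string
        # and table name are assumptions, not from the article.
        import psycopg2

        conn = psycopg2.connect("dbname=test")  # hypothetical database
        conn.autocommit = True
        cur = conn.cursor()

        cur.execute("ANALYZE orders")  # re-collect optimizer statistics
        cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
        for (line,) in cur.fetchall():
            print(line)  # row estimates now reflect the fresh statistics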

  • hurry up (Score:2, Funny)

    by Tsugumi ( 553059 )
    Look, I saw all the shows when I was younger, predicting all the shiny toys I had to look forward to. Computers that can fix themselves is a step in the right direction. Now I want computers that can do all my work for me, make me cups of tea, and look after all my chores.

    Oooh, and a hover board, they said I could have a hover board, where is it, dammit?

    • "Oooh, and a hover board, they said I could have a hover board, where is it, dammit?"

      in the trunk of my hover car.
      Why don't we strap on our jet packs and go get it?

  • If I'm being chased by a snarling dog, my body may have automatically produced adrenaline and increased my heartrate, but the snarling dog is still there. The problem is not solved -- I have merely been given a couple of useful tools for a couple of possible solutions. Somehow, I cannot see Microsoft, in particular, coming up with a way for its software to fix itself. The error messages are already too incomplete and vague -- when any program would need complete and specific data about the problem. (Or, perhaps, the solution would be "close all windows, shut down and restart". That's right, what it already does.)
  • All I want for now is a compiler that will actually go add that fscking semicolon rather than tell me it's missing.
  • That's pretty simple to safeguard against, I just don't compile in the amuck source or libraries at install.
  • Nuts -- hit submit by mistake instead of preview.

    Meant to say:

    The article didn't go far enough!

    If you're being chased by a big snarling dog [...deletia...] I'd just be happy with a few intelligent daemons to watch my back

    RIGHT! Only I want the sales department's laptop docking stations in the staff-meeting room to be equipped with the voice-decoding circuitry and the clue-by-four attachment (big wooden mallet)!

    First time the lead sales weasel brings up a fundamental change to the project requirement late in acceptance testing...POW!

    Muah-hahahahahahahah!!!

    Well, hey, I can dream, can't I?

  • as long as people will buy buggy software for new features, the money will go to engineering features first, reliability later. That won't likely change, because it would require people to have different priorities. Such is life with the free market: imperfect, but apparently superior to other methods.
  • Where is Sarah Connor when you need her?
    • Dunno, I was always rooting for Skynet in those movies...

      The idea that this broad, her asshole kid, and a reject robot could defeat a true AI - it's laughable...

      Equally laughable was the old Star Trek stories where Kirk always outwitted the superhuman AI or robot by making it play emotional games...

      No one has ever pointed out Roddenberry's Luddite attitude toward technology...

  • like when a program runs amuck and fills up the process list

    I can imagine circumstances like this happening inadvertently due to program or kernel bugs, but isn't this a rare occurrence unless you've executed a deliberate fork bomb?

  • That's what it does. It waits for you to fill up your memory before it starts using the disk as memory.

    What do you do when you need to run a CAD program on a system with low memory? Do you need a daemon outside the box to talk you out of starting and using that app?

    All this seems to be is increasing user-friendliness by limiting what you can do. I'd rather see a blue screen due to my evil memory editing in debug.
  • by isj ( 453011 )
    The essence of the article is that computers should autonomously fix problems and tune themselves. That is an excellent idea. Remember some old Pascal compilers where, if you forgot to end the program with END. instead of END;, the compiler said "You forgot to use '.' instead of ';'"? My point is not that it was a silly rule, but that the computer knew the error and could have fixed it itself.

    The article also touches automatic database tuning depending on the actual use of the database. I look forward to a database which automatically modifies the schema when it finds that a parent table always joins with a child table with referential constraints.

    IBM has previously introduced self-healing servers [ibm.com] that essentially are able to detect that something has gone awfully wrong and therefore reboots. It may not be an elegant solution, but if it works the customer is happy.

    All this is part of an evolution in software, or a "next step" in software implementation. The steps are:

    1. Software that works
    2. Software that works, but also detects errors and bails out
    3. Software that works, detects errors, and rolls back to a known point
    4. (The next step) Software that works, detects errors, rolls back to a known point, fixes the error, and retries (a sketch follows below).

    Examples:
    Out of disk space when you download your email? The email program should find some spare disk space somewhere on another partition, or extend the current partition.
    foo-1.7 requires bar-1.2? rpm should automatically download bar-1.2, preferably from a computer on the same LAN that already has it.
    A node in a cluster is overloaded? The cluster software should move applications/services to another node.
    An HTTP server is using too much bandwidth? It should automatically serve images at lower quality (and therefore use less bandwidth).

    I look forward to it. It would allow me to let the computers monitor themselves and fix most problems without pestering me. And then I could use my time for something much more interesting than looking in /var/log/*, restarting failed applications, etc.
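
    A minimal Python sketch of step 4, using the mail example above; download_mail, the cache path, and the repair action are all invented placeholders:

      # Step 4 sketched: detect the error, apply a fix, retry.
      import shutil

      def download_mail():
          ...  # stand-in for the real work; imagine it raises OSError on a full disk

      def free_disk_space():
          # The automated fix: reclaim space from a hypothetical cache dir.
          shutil.rmtree("/var/cache/myapp", ignore_errors=True)

      for attempt in range(2):
          try:
              download_mail()    # step 1: try the work
              break              # it worked
          except OSError:        # step 2: the error is detected
              if attempt == 1:
                  raise          # second failure: give up, escalate to a human
              free_disk_space()  # step 4: fix the cause, loop to retry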

  • That would be a start anyway.

    And this "organic" self-repair Zen gestalt type of system is a bunch of crap. If it really worked, there would be no cirrhosis, no adult-onset diabetes, no emphysema, et cetera...

    Living systems reproduce asexually or breed because everything more complex than an algal mat is incapable of surviving for very long.

    We'd do better than what passes for intelligence and self-repair in Homo sapiens sapiens.
  • Among the first of these projects are some that enable computer systems to optimize computing resources and data storage on their own.

    This sounds like they want a computer to run HD and memory defraggers. Big deal.

    Farther off in the future are other components of this autonomic vision, like maintaining ironclad security against unwanted intrusion, enabling fast automatic recovery after crashes, and developing standards to ensure interoperability among myriad systems and devices.

    Ironclad security : Firewalls?
    Fast automatic recovery : fsck?
    interoperability : kermit?

    Systems should also be able to fix server failures, system freezes, and operating system crashes when they happen or, better yet, prevent them from cropping up in the first place.

    fix crashes : watchdog card?
    prevent crashes : automatically convert to linux?

    don't worry about my mumbling, I just let my mind walk....

    and, about the fear of many that this will cost IT jobs... I don't think so. Maybe the work of sysadmins will become a little more specific, but isn't this what computers were built for in the first place? To free us from uninteresting work, so we can focus on the interesting stuff? Nowadays we have many automated processes in our world (cron jobs, anyone?). Besides, look at how good computers are at translating text from one language to another... maybe a machine can repair minor problems, but it will be the far future before they can really run themselves. So don't see it as a rival; see it as an assistant. And don't think about that %Ä$ paperclip now :)

    DISCLAIMER:

    It's almost 2am, I'm dead tired, a bit drunk, and I scored C++++ on geek code, so don't take me too seriously ;)
  • Bad analogy (Score:5, Funny)

    by mmmmbeer ( 107215 ) on Thursday September 12, 2002 @07:58PM (#4248353)
    Your body automatically does it all

    If our bodies worked the way we wanted, things would be very different. First, you'd get a huge boost of adrenaline so you could outrun the dog. Also, although your heart would speed up, you'd have no risk of a heart attack or other complications from overexerting yourself. You wouldn't get tired. And you'd be equipped with built in weapons for annihilating hostile canines.

    You'd also never have to worry about getting nervous trying to talk to that new cutie at work, acne wouldn't exist, and we'd all be our ideal weight.

    Our bodies, at best, make fair attempts at adjusting to situations, but they blow it as often as they get it right. Frankly, if our computers become as reliable as our bodies, I'm going to invest in pencils.
    • If our bodies worked the way we wanted, things would be very different. First, you'd get a huge boost of adrenaline so you could outrun the dog. Also, although your heart would speed up, you'd have no risk of a heart attack or other complications from overexerting yourself. You wouldn't get tired. And you'd be equipped with built in weapons for annihilating hostile canines.
      You'd also never have to worry about getting nervous trying to talk to that new cutie at work, acne wouldn't exist, and we'd all be our ideal weight.
      Our bodies, at best, make fair attempts at adjusting to situations, but they blow it as often as they get it right. Frankly, if our computers become as reliable as our bodies, I'm going to invest in pencils.


      Sounds like Anime.
  • ok... so... corporations barely need us to build machines anymore, due to streamlining of manufacturing and the cohesion of peripheral buses... we no longer need to install most programs for users, due to streamlining of install processes and massively deployed upgrades and installs... .NET seeks to make it no longer our job even to deploy apps, as all that's handled remotely... protocols like Zeroconf and others seek to make configuration of networks no longer our job... already, cheaper workers are being found in greater quantities overseas... now they are seeking to make configuration, building, and deployment of servers and new apps possible without IT... how long till we're cutting cake at the local deli? sadly... advancement is advancement... and it sucks to be at the short end... but that's the market we're in... the only jobs with some amount of guarantee are development and hardware design. Someone's gotta create the new perfectly functioning computers
  • I thought they were Marting, Harding and Mazotti. Hrm... Boy! One day you're big. The next you're long gone.
  • You see, once you create an AI that has the ability to improve itself, you have just created the last AI that you will ever need. (Well, unless you need to create another one to defeat the first, which has by now of course enslaved the human race.)

    I would bet that the first 'fantastic breakthrough' AI will be the creation of other 'intelligent' software.
  • But, at least for now, this effort is really aimed at the hardware. Today we can see the beginnings of self-healing hardware in place. Some enterprise systems can already phone home when they have a HARDWARE problem and let the support folks know that there is a problem. And with systems like Sun's SunFire x800 series servers, the sysadmin can dynamically reconfigure the system to de-allocate bad CPUs or memory, I/O boards can be removed hot, etc. So the next logical step is for the server to de-allocate the failed CPU itself, and send an alert, probably via SNMP, to the sysadmin. By doing it dynamically, the server keeps running, albeit with a reduced work capacity. Even better would be to have "spare" CPU boards in the box that could be immediately allocated to replace the failed board. All of this is possible today, with human intervention. The point is to get the system to be able to do it without human intervention.

    On the software side, I think it will take a bit longer. Some things, like database optimizers, can possibly be done right now. But my observations (I'm not a DBA) of the database world indicate that most database optimizers aren't truly self-tuning/healing. Instead they can tune or heal for known conditions and make assumptions about how you want your database optimized. Most real DBAs hate this and have to spend extra time shutting off the self-optimizing functions and then performing their own optimization for their own real-world scenario.
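
    On Linux, the de-allocation half of this is already scriptable through sysfs; a sketch, with the failure test left as a placeholder and root privileges assumed:

      # On Linux, CPUs can be taken offline through sysfs, which is the
      # building block for doing this without a human.  The failure test
      # is a placeholder, and writing to sysfs requires root.
      import pathlib

      def cpu_looks_bad(n):
          return False  # placeholder for real machine-check / ECC evidence

      for cpu in pathlib.Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
          n = int(cpu.name[3:])
          online = cpu / "online"  # note: cpu0 often cannot be offlined
          if online.exists() and cpu_looks_bad(n):
              online.write_text("0")  # de-allocate the CPU; the box keeps running
              print("offlined cpu%d; alerting the admin out-of-band" % n)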

  • According to the article:
    Extricating the human from the loop is all the more urgent because of the outlook for the next decade. By some estimates, 200 million information technology (IT) workers might be needed to support a billion people using computers at millions of businesses that could be interconnected via intranets, extranets, and the Internet.

    Do the easy math: that comes to one IT worker for every 5 people using computers, which seems like an outrageous overestimate. Haven't people learned from the demand hype of the late '90s, which drove the whole technology sector into a deep recession?

    If you were to draw a graph comparing use of the Internet (i.e. how many web pages viewed, how many users checking their e-mail), demand (as opposed to need) for technology resources, and the amount of money people are willing to spend on technology, you would see an interesting thing. In '97-'99, demand for technology resources was high -- companies were scrambling over themselves to hire as many IT professionals and purchase as much software, hardware, routers, cable, etc. as possible. At the same time, the line for money spent was extremely high as well. On the other hand, the "actual use" line was low.

    Move ahead to spring 2000. The money line had dropped by a nice little chunk (no one wanted to spend any money on the web anymore if they could help it -- it was the year of the dot-com busts). Infrastructure was still important, so that line had only dipped slightly. People had ditched vague commercial ideas for more sound click-and-mortar technology. The Internet-use line, meanwhile, was going up rapidly.

    Move to late spring/summer of 2001. Both the demand line and the money line have plummeted. No one wants to invest in new technologies, and no one even wants to spend money on standard technology like Cisco routers. What was the dot-com bust is now the Internet infrastructure bust. Paradoxically, Internet use has continued to grow this whole time, and now over half of all people in the US do something online every day. So the whole time that IT technology acquisition and IT financial investment were going down, Internet use was actually going up.

    Okay, that was a little long winded, but my point is that IT growth should *match* IT use, not move in the opposite direction from it.

  • They don't even know enough about how biology works yet. Consider the following gaps:

    1. There is no cure for a good many harmful viruses. Even dumb machines catch man-made viruses. Are more complex ones going to need HMOs?

    2. No safe appetite suppressant has been discovered yet, despite the fact that many people are naturally thin.

    3. Cancer seems as elusive as ever, and may be simple but irreversible Darwinian entropy of single cells evolving independently of the collective.

    4. We know way too little about how the brain works.

    Finally, biology often depends on trial-and-ERROR to adjust and correct itself. Do you really want your database "practicing" some new technique on the CEO's annual report?

    If you mirror something you don't understand, don't complain to me when it barfs.

  • This is already happening. Here are some of the results:

    Theory: Common DLLs would be updated with new DLLs when new programs came out, so that old ones would automatically benefit from the new code.
    Result: DLL hell

    Theory: Office2k and newer stuff with Windows installer 'tech'. Install on demand, restore file associations/missing files/shortcuts every time you run the program.
    Result: Nobody -- not you, not MS, not the computer -- at any time knows what's installed on the PC. Every time you try to remove a desktop/quicklaunch/start menu shortcut or a file association, the application will think for 2 min, then ask you to find the CD and let it look at that for another 2 min.

    What's next? The computer smells you coming, sprouts legs and runs away?

    The last reliable lifeform-like program available for computers was a virus.
  • The article pointed to last year, though, was

    http://www.research.ibm.com/autonomic

    it is now located at

    http://researchweb.watson.ibm.com/autonomic/
  • Just in:

    "As part of Longhorn, Allchin said customers can expect to see new features for intelligent auto configuration, such as BIOSes and firmware that can be "automatically updated in a seamless way." Also, Allchin said Longhorn will include new functionality for server resiliency, such as self-healing characteristics, a more componentized architecture, and additional monitoring services with filters that can "dynamically" flow out to servers. "

    Right on target there, Microsoft!
  • Seriously, let's figure out how to write software in the first place, then figure out how to do all the whiz-bang stuff.

    I really think that software quality has stagnated, where funding nearly always stops short of allowing proper design and quality controls.

    Who at Microsoft and IBM is going to ensure that the super-self-healing code can heal itself, and in a usefully wide variety of situations?

    Once such abstraction reaches a new threshold, how many people will be left around the world who can diagnose a real problem when it occurs?

    I've seen repeatedly that higher abstraction does not always result in a better system. I know many "software engineers" who can't even determine that basic network issues or OS contentions are "breaking" their software. All they care about are their nifty buzzword-compliant IDEs with code highlighting. Once the population finally degrades to where nearly everyone is like this, what then?
