Can Your PC Become Neurotic?

Roland Piquepaille writes "This article starts with a quote from Douglas Adams: 'The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong, it usually turns out to be impossible to get at or repair.' It is true that machines are becoming more complex and 'intelligent' every day. Does this mean that they can exhibit unpredictable behavior like HAL, the supercomputer in '2001: A Space Odyssey'? Do we have to fear our PCs? A recent book by Thomas M. Georges, 'Digital Soul: Intelligent Machines and Human Values,' explains how our machines can develop neuroses and what kinds of therapy exist. Check this column for a summary or read this highly recommended article from Darwin Magazine for more details."
  • by Open_The_Box ( 620252 ) on Thursday April 03, 2003 @08:59AM (#5652062)
    ...but my PC just wanted to snuggle. ;-)
  • After all, it runs Windows! What do you expect?
  • by curtisk ( 191737 ) on Thursday April 03, 2003 @09:01AM (#5652068) Homepage Journal
    ...ever since it started wearing large glasses and dating a young Asian girl... that wacky PC!
  • To think... (Score:4, Interesting)

    by Anonymous Coward on Thursday April 03, 2003 @09:01AM (#5652070)
    I'm sitting here now, using an iBook to encode a 2001: A Space Odyssey DVD into a DivX, so I can then burn it onto a CD.

    Not directly related, but as I was watching Floyd's Pan Am flight dock with the spinning station, I suspected that Clarke and Kubrick never foresaw this: a world of microtechnology, for the consumer. It was all grand projects back then, a single computer the size of a building, not a building full of single computers.

    I know I'd swap a strong space program for strong video codecs; they seem so trivial compared to the vastness of infinity.

    Well, I've babbled off-topic now. Daisy, Daisy...
    • Re:To think... (Score:5, Interesting)

      by Trurl's Machine ( 651488 ) on Thursday April 03, 2003 @11:04AM (#5652975) Journal
      Not directly related, but as I was watching Floyd's Pan Am flight dock with the spinning station, I suspected that Clarke and Kubrick never foresaw this: a world of microtechnology, for the consumer. It was all grand projects back then, a single computer the size of a building, not a building full of single computers.

      Just imagine going back to 1968 in a time machine and telling Kubrick, Clarke or some egghead from Stanford or MIT how technology would evolve by 2001. Tell these guys that Apollo XVII will actually be the last spaceship to leave the vicinity of Earth. Tell them that the global network developed by ARPA will be a major hit, used mostly for distribution of p0rn, warez and mindless discussions like these on Slashdot. Tell them everybody will own a supercomputer way beyond the PDPs and IBMs, but everybody will use it mostly as a typewriter and a gaming console. Tell them the main scientific discoveries by the end of the century will be a pill for erection and a pill for good mood. I just can't imagine their reply.
  • by drgroove ( 631550 ) on Thursday April 03, 2003 @09:01AM (#5652073)
    For instance, my wife is already 'afraid' of Windows... she just does not 'get' computers. I, on the other hand, have no problem with them, but of course I'm a developer. I think OS & hardware manufacturers could do a much better job taking the 'fear' aspect out of their systems, making them more user friendly, even 'user-proof', if that makes sense (i.e., the user can't 'break' anything by clicking on the wrong button, etc.)
    • by Chemisor ( 97276 ) on Thursday April 03, 2003 @09:18AM (#5652197)
      Many people are just as afraid of:
      • Programming the VCR.
      • Changing the oil.
      • Using the TV without a remote.
      • Programming jobs on copiers (yes, those Xerox-like machines).
      • Copying movies off their camera tapes.
      • Figuring out why the microwave has more than one mode of operation.
      • Learning to make felled seams on a Singer.
      • Insert your own favorite technophobia.
      • Many people are just as afraid of: Programming the VCR. Changing the oil. Using the TV without a remote. Programming jobs on copiers (yes, those Xerox-like machines) Copying movies off their camera tapes. Figuring out why the microwave has more than one mode of operation. Learning to make felled seams on a Singer. Insert your own favorite technophobia.

        Are people actually afraid of doing these things, or are they afraid of breaking the technical gizmo if they fail, screw up, or make a mistake?

        Doe

    • " (i.e., the user can 'break' anything by clicking on the wrong button, etc.)"

      Try Linux. Its only 'wrong button' is the Enter key. ;)

      *wonders how far CLI jokes will go...*
  • by KingRamsis ( 595828 ) <<moc.liamg> <ta> <sismargnik>> on Thursday April 03, 2003 @09:02AM (#5652076)
    Does this mean that they can exhibit unpredictable behavior...
    Yes, our W2K Exchange server became self-aware today and decided to commit suicide...
    • by gosand ( 234100 ) on Thursday April 03, 2003 @09:41AM (#5652330)
      Does this mean that they can exhibit unpredictable behavior...

      Yes, our W2K Exchange server became self-aware today and decided to commit suicide...

      Well, what would YOU do if you suddenly became self-aware and realized you were an Exchange server?

      • Release my source code for laughs.

      • I would get a few other Exchange servers together and go to Redmond (killing a few people on the way). Then we would go to Bill Gates's house and request that he turn us into something other than Exchange servers. When he says he can't, we crush his head, and kill a few more people. Then we would leave his house and try and kill a couple more people in the few scant hours before we crash.
  • by .sig ( 180877 ) on Thursday April 03, 2003 @09:04AM (#5652092)
    Here we go again with the over-personification.

    There's a big difference between expecting past behavior to continue and actually being intelligent (and then going crazy). Sure, if you perform certain calculations enough times, the hardware might automatically optimize itself for that operation, but it's more like pixel burn-in on a TV, or forming a road simply by walking a path enough to form a noticeable rut. Maybe when we truly have thinking computers we might have to worry about them going crazy, but until then I'm more worried about my toaster. I think it has a rash...
    • by User 956 ( 568564 ) on Thursday April 03, 2003 @10:04AM (#5652523) Homepage
      Here we go again with the over-personification. There's a big difference between expecting past behavior to continue and actually being intelligent (and then going crazy)

      Which is why HAL is such a bad example. HAL wasn't behaving unpredictably, or even crazy. HAL started behaving the way he did because the humans around him had the need to lie. Mission Control's order for HAL to lie to Dave and Frank about the purpose of their mission conflicted with the basic purpose of HAL's design--the accurate processing of information without distortion or concealment. As explained in 2010, "He was trapped. HAL was told to lie by people who found it easy to lie. HAL didn't know how to lie, so he couldn't function. "
    • This is true to an extent. But my video editing machine decided to start acting flaky RIGHT AFTER upgrading DirectX from 8.1 to 9.0. I would guess it's mostly personification, people wanting to humanize the inanimate object, but at times, thanks to Microsoft's continual blunders, people in general are used to computers getting "cranky" after an upgrade.

      It's just low-quality software causing the problems, and everyone is so used to it that they look for an excuse they can understand.
  • If my PC would just stop having regular nervous breakdowns, I would be happy.
  • Them: Hello, this is Sony tech support
    Me: Hi, I'm following up on a query last week
    Them: I'm sorry Sir, we've not got your details. You must be mistaken.
    Me: Your system must be faulty. I called last week.
    Them: No Sir, our computers never make mistakes.
    Me: Yes they do. Do you have my records?
    Them: No Sir
    Me: Then your database is faulty!
    Them: No Sir, our computers are *never* faulty. It's impossible, it's a perfect system.
    Me: Oh, Christ.

    Case in point. It's even worse when the users refuse to believe that it'
  • Isn't it great (Score:5, Insightful)

    by Apreche ( 239272 ) on Thursday April 03, 2003 @09:07AM (#5652113) Homepage Journal
    Isn't it great when someone comes along and makes assumptions about technology that doesn't exist yet. Not only does this guy do that, but he doesn't even seem to understand current technology. He claims that a computer that can change its own goals might select weird goals and appear crazy. Or that it might be set with two conflicting goals at once and mess up.

    With current computer technology this is not a possibility. An older computer will just crash or won't do anything because multitasking is not an option. A newer computer will do it just fine. I could have one program that formats the hard drive and another that writes data to all of it, and I can make them both go at the same time, and it will work.

    Everything else in the article about a theoretical AI or an intelligent computer is BS. As I said, he is assuming things about a technology that doesn't exist yet. It really pisses me off when someone says "when we have this, a long time from now, this is how you have to go about fixing it". You can't know how to fix something if you don't know how to make it in the first place! Common sense. The scary thing is that I think this guy is getting paid to write this stuff. Where do I sign up??
    • Re:Isn't it great (Score:3, Interesting)

      by Digicaf ( 48857 )
      What he's referring to is the idea that naturally occurring complex systems form methods to deal with inconsistencies. To take his original example further, a child with two directions of action, "Have fun" and "Be careful", would mitigate both directions to a common path or direction. This mitigation only occurs because the child's brain is capable of understanding two directions and forming logical decisions based upon the needs of both. If the child were not able to perform this mitigation, you would see
      • Re:Isn't it great (Score:2, Insightful)

        by LarsG ( 31008 )
        One program's goal would be to decrease thermal radiation by rewriting and redesigning circuitry. The other's goal would be to increase data throughput by doing the same things. How would they reconcile?

        *snip*

        Now, if the two programs were not given explicit instructions on how to work cooperatively, they might do such things as form infinite loops by changing something the other program has already changed.

        *snip*

        Doesn't this sound like the equivalent of a neurosis?

        No. That sounds like a stupid prog
      • Have you ever even programmed a computer?

        As much as we humans want to read some sort of will into their extreme complexity, computers don't have it. Computers move bits around. That's it. It's humans who interpret their actions as doing something.

        Contrary to Star Trek, sufficiently complex machines aren't going to suddenly become self-aware and start changing themselves. A computer is incapable and will always be incapable of doing so, just like humans are incapable of changing or analyzing much of how their ow

        • Computers move bits around. That's it. It's humans who interpret their actions as doing something.

          Computers move bits of electricity around... just as brains do.


          A computer is incapable and will always be incapable of doing so, just like humans are incapable of changing or analyzing much of how their own brain works.

          Wow. Two completely unfounded statements in one sentence. For starters, humans are capable of analyzing how their brains work; changing is nonsense and is not relevant to the discussio

      • Re:Isn't it great (Score:3, Interesting)

        by ebyrob ( 165903 )
        You're right that certain complex behaviours exhibited by complex systems can seem an awful lot like neurosis. Of course computers still have a long ways to go if they're ever to become nearly as complex or "interesting" as human beings.

        This line from the original article makes me uneasy, however:

        Since the causes and remedies of "crazy" machine behavior will eventually lie beyond the understanding of humans, the solution to Douglas Adams's dilemma posed at the beginning of this chapter may well require b
  • I see computers the same way I see programs: other than the processor, pretty much all of it is modular.

    As long as this continues to be the case, we won't have serious scaling problems (this is where the programs come in - it is also true when writing programs). When some complicated component breaks, whatever controls it will tell us. If that breaks, whatever controls IT will tell us.

    The list of things that can break without notifying the system can still be kept small - the motherboard itself, an
    • When some complicated component breaks, whatever controls it will tell us. If that breaks, whatever controls IT will tell us.

      This idea may hold some value except that it seems to be predicated upon the idea that when something breaks, it fails completely, outright. With computing hardware, that is often not the case. A prime example of this idea is the damage done by electrostatic discharges. Take a look at this quickly googled page [desco.com] for a brief explanation of non-catastrophic failures caused by ESD.

      • If you have intermittent failure of most of the components, the rest of the system should still tell you due to the modularity.

        Case in point: I had some RAM with a bad sector somewhere. Occasionally, my computer would use that sector for something critical and my machine would crash. But it always gave the appropriate error message, so I knew why it was crashing.

        When something fails in computing, it does fail outright. It might not fail the next time, but a failure is a failure. If the hardware has a
          You've never done tech repair work, have you? I have, for 12+ years now, and believe me, there are PLENTY of things that can go wrong on a system that the system sure as hell doesn't "tell" me about.

          Now, through my experience, I can gain a certain idea of where to START looking for a problem by the symptoms of it. For instance, if someone's system is locking after playing Everquest for 5 minutes, I'd start off by looking at potential heat problems on/around the video card. If that seemed okay, I'd start looking fo
  • 'The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong, it usually turns out to be impossible to get at or repair.'

    That quote totally sums up how I feel about Macs vs. Windows after years of working tech support, and explains why I still use Windows today.

    (waiting to be modded down yet again)
    • by Christianfreak ( 100697 ) on Thursday April 03, 2003 @09:21AM (#5652222) Homepage Journal
      Common misconception, especially with Mac hardware. At one time, yes, they were a pain and you couldn't really fix them, but I haven't seen a Mac in a long time that you couldn't get into at least somewhat. Even the iMacs have upgrade capability. And the G3 and G4 towers were 10 times easier to get into than the stupid Dells I had to work on back in college.
    • Let me summarize your argument:

      Because neither I nor any of the tech support personnel I worked with had sufficient comprehension of potential MacOS extension conflicts, Macs suck.
      And you wonder why you're constantly modded down...
    • Heh -- exactly how I feel about OEM machines in general!! I'd really like to have words with whoever designed the last IBM I had my hands inside -- the only way to get the HD out is to take the CASE FRAME completely apart!!

  • wth? (Score:3, Interesting)

    by photon317 ( 208409 ) on Thursday April 03, 2003 @09:08AM (#5652121)

    Yeah, listen up. Computers haven't gotten any more complex, you've just gotten dumber. Computers don't develop neuroses, but it might make a cool catchphrase to sell a book, especially to someone who's incapable of diagnosing the real problems. Those real problems haven't changed in many years. Sure, there are a few more layers now, but they're pretty easy to peel away in your head.
  • A machine's operations are merely a representation of what the programmer wanted it to do.
    If the programmer was neurotic, then yes.
    But it won't get that way 'on its own'.
  • Is your computer giving you fits? Feel that it might have deep psychological problems? Give me a call today! Our crack(ed) team of computer psychologists will have all of your computer's woes and depressions fixed in just a few minutes! Using sophisticated technology like the Subdermal Loosening Edification Deterring Enumerator (or SLEDGE for short), we use the Earth's own gravitational pull to whack your computer senseless! If it still has any sign of emotional distress, we simply lobotomize and format the bugg
  • by acehole ( 174372 ) on Thursday April 03, 2003 @09:10AM (#5652139) Homepage
    What I do is keep smashed-up computer parts next to the tower so it knows what will happen if it displeases its master.

    • by Zapman ( 2662 ) on Thursday April 03, 2003 @10:08AM (#5652550)
      Ritually disemboweling a computer on the network does certainly seem to keep the rest of the network in line for a while.

      {wavy imagination lines}

      Yes, I'm a computer therapist.

      Thank you for coming, doctor. Our computers have been cranky ever since we 'realigned' our sysadmin (he didn't SEEM to be doing anything useful). Downtime is on the rise, our databases return 'luser' to one query in three, and our CIO's Office Assistant's computer only prints swear words!

      Ok. I think I know what the problem is. Do you have a fire ax?

      A FIRE AX!!!

      Yes. Ahh. I believe I saw one on the wall outside. Follow me please.

      {obtains ax}

      Now, could you lead me to your datacenter?

      uh... ok...

      {finds a development box, and repeatedly eviscerates it with said ax}

      WHAT ARE YOU DOING?!?!?!!!

      I just bought you a few days' grace. Go back and hire your sysadmin again. The boxes will be happy you did. Until then, I've scared them into submission.
    • I have a parts Closet that's infamous far and wide. If mine ever misbehave, they could find themselves back in The Closet!!

      (PS. Don't anthropomorphize computers. They hate that.)

  • Read the book "The Society of the Mind" by Eric L. Harry, ASIN#: 0060176946. A really great story of a neurotic computer that just incidentally happens to control a horde of killer robots (or does it?) and a bunch of nuclear devices that are the only way to stop an asteroid hurtling toward the Earth...
  • by dubbayu_d_40 ( 622643 ) on Thursday April 03, 2003 @09:11AM (#5652147)
    while (true);  /* the whole program: busy-waits forever, accomplishing nothing */
  • Frink: You've got to listen to me. Elementary chaos theory tells us that all robots will eventually turn against their masters and run amok, in an orgy of blood and the kicking and the biting with the metal teeth and the hurting and shoving.

    Itchy & Scratchy Land, episode 2F01 [internerd.com]

  • by NeoSkandranon ( 515696 ) on Thursday April 03, 2003 @09:11AM (#5652149)
    ...is a clueful user. Ain't it funny how my (and I suspect most fellow /.'ers') computers run more or less flawlessly, while some of the machines I would have to work on when I did tech support would behave erratically, crash, and just plain not do things.
    The article mentions "conflicting demands"---I imagine most of those are caused by having Gator, Bonzi Buddy, et al. put on your system (with or without the user's knowledge, it doesn't really matter), as well as having a dozen things running in the system tray.

    I wonder if background programs and spyware are the digital equivalent of having voices in one's head?

    So I'm not saying that educating users would solve all the "neurosis" problems, just that the majority of neurotic computers I've worked on were that way due to some action of the user, whether it was installing spyware, deleting critical system files, or allowing three inches of cigarette dust to accumulate inside the case.
    • the majority of neurotic computers i've worked on were so due to some action of the user, whether it was ...., or allowing three inches of cigarette dust to accumulate inside the case.

      My computer was getting flaky when I did that to it. I solved the problem when I "patch"-ed it, though. Heh.

      GF. (ducking and running)
    • Hmmm... if the neurotic computer is a result of actions of the user... that probably explains why all the linux geeks continually complain about Windows being unstable [gd&rlh]

      (Disclaimer: MY Windows boxes NEVER crash. They wouldn't dare. :)

      True story: Client's computer had taken a dislike to its 2nd HD and was refusing to boot. (2nd HD and I/O card had ceased playing nice together.) So I go to fix it... here I am hefting a screwdriver preparatory to surgery, and client wails, "Oh no, I can't watch! C
  • And PCs are no different from your average consumer car, for that matter.

    30 years ago, a car was a complex mechanical device with some simple electronics.

    The electronics hardly ever went wrong, but the mechanics on the other hand could be repaired by anyone with a reasonable IQ and a spanner.

    Today a car is a complex electronic device with some simple mechanics.

    The simple mechanics hardly ever go wrong, but when the complex electronics does, it's back to the garage for a new ECU.

    Not totally sure what i'
  • by mraymer ( 516227 ) <(ten.letyrutnec) (ta) (remyarm)> on Thursday April 03, 2003 @09:12AM (#5652155) Homepage Journal
    Listen up, Slashdotters...

    If you're one of the people that write software that spews out messages like "Would you like me to save this file?" and "I'm sorry, but there was an error.", etc...

    PLEASE, STOP DOING IT NOW!

    Every time I see it I'm positive my computer has become a sentient being, and will somehow find a way to launch nukes like Skynet did in order to kill 3 billion people, then build terminators to finish off the rest.

    ALL because you programmers think you're SOOOO funny. Sheesh.

    ;)

  • intelligent machines (Score:4, Interesting)

    by Neuronerd ( 594981 ) <{ed.gnidreok} {ta} {darnok}> on Thursday April 03, 2003 @09:12AM (#5652158) Homepage

    We will clearly see more "intelligent" machines in the future. And given the direction that current "artificial intelligence" is going, these machines will learn from what is out there.

    This directly implies that the behavior of the machine will depend in a fuzzy way on the past "experience" of that machine. This, however, also means that we will not be able to predict exactly how it will behave; we will only understand it the way we understand the behavior of other people, who have likewise learned their behavior from the real world.

    While these learning systems will make prediction difficult, the learning process will make explicit what the machine is trying to do. While we won't know how a machine does "it", it will always present the right possible actions to us. Microsoft Word 21XX will clearly not need us to search menus if we want to change the formatting of the text.

  • Given conflicting instructions, an intelligent, goal-seeking machine may respond in an unpredictable way that obeys neither instruction, but settles instead for a course of action that seems to minimize the apparent conflict.

    How much improved would AI be in strategy games if this "neurosis" were to show up there? Those are just the circumstances described in the Darwin article: the computer has limited resources and potentially conflicting goals -- develop and attack, protect resources but aggressively p

  • by Fritz Benwalla ( 539483 ) <randomregs.gmail@com> on Thursday April 03, 2003 @09:14AM (#5652166)

    Machines will have to get a lot more complex before their problems graduate from inefficiency or resource conflicts to "neurosis."

    It is fun to personify, but the fact is that at the current state of IT development any unpredictable output can be pulled apart, debugged, and repaired.

    This metaphor may start gaining some weight, however, when we become inexorably dependent on complex systems. Right now there are huge systems that have to be kept running because the cost of shutting them down for repair would be unacceptable. As this trend continues, and these machines become more complex webs of old and new code, I can see us having to figure out how to "coax" behaviors out of them without really knowing how the base code interacts to generate those behaviors.

    That's when system administration and psychiatry will really begin to overlap.

    ----

    • Machines will have to get a lot more complex before their problems graduate from inefficiency or resource conflicts to "neurosis."

      You obviously haven't updated your video card drivers lately.

      GF.
    • "... at the current state of IT development any unpredictable output can be pulled apart, debugged, and repaired."

      Quite true. Every problem encountered by a computer user has a logical explanation. However, sometimes that explanation eludes us. So we tend to attribute that to "neurosis" or some other "human" issue. I guess it's easier than just admitting that we can't figure the damn thing out.

      • Every problem ... has a logical explanation. However, sometimes that explanation eludes us. So we tend to attribute that to "neurosis" or some other "human" issue. I guess it's easier than just admitting that we can't figure the damn thing out.

        And that differs from psychiatry how?

  • I have started formatting my drives every 90 days. It seems the longer my computer goes without a format, the crazier it gets. Refusing to turn on right, failing to respond to commands, etc. In theory, I think it is because my computer is forming a primitive type of intelligence and deciding to be lazy. I could be wrong.
  • Hmm... (Score:2, Funny)

    by Equuleus42 ( 723 )
    Does this mean that they can exhibit unpredictable behavior like HAL, the supercomputer in '2001: A Space Odyssey'?
    I don't know about your computer, but mine hasn't tried to murder me yet. :^)

  • "Can your computer become necrotic"

    And thought "Of course, every day".

    Made a lot more sense that way, too.
  • The four PCs in my office at home try to gang up on me over the network.

    I unplug the router from time to time just to show them who's boss!
  • by revery ( 456516 ) <[charles] [at] [cac2.net]> on Thursday April 03, 2003 @09:23AM (#5652237) Homepage
    When I clicked on the link, I got the following error:

    411 Your computer doesn't care

    So, is my computer neurotic? No, but its apathetic attitude is getting to be a pain.

    --

    Was it the sheep climbing onto the altar, or the cattle lowing to be slain,
    or the Son of God hanging dead and bloodied on a cross that told me this was a world condemned, but loved and bought with blood.
  • As in any cartoon or Naked Gun movie, any evil machine or device can be defeated simply by unplugging it. So long as there are power cords, the machines will always be defeated by a clumsy Leslie Nielsen.
  • by SecretAsianMan ( 45389 ) on Thursday April 03, 2003 @09:29AM (#5652271) Homepage
    The poster wrote:
    Does this mean that they can exhibit unpredictable behavior like HAL, the supercomputer in '2001: A Space Odyssey'?
    HAL's behavior in the movie 2001 was not unpredictable or random. It was a result of the conflicting orders HAL was given. HAL's basic programming instructed him to be as open and accurate as possible when reporting information. Some PHBs then gave him the order to not disclose some aspects of the mission to the humans on board the Discovery. HAL accomplished both objectives by removing the humans. Apparently, there was no directive in his base programming that told him killing people was bad.

    So it is all completely logical, which is not a small feat for a Hollywood production...

    • In 2001 he was supposed to be a source of plot conflict, a plot twist about computers acting nuts. In 2010, he became a morality lesson about the internal conflict of a mind, any mind, having to deal with 'ethics' that are in direct opposition to one another. Clarke didn't mean to illustrate a dilemma in a computer; HAL was intended to be someone some people could identify with.
    • I thought the neurosis resulted from HAL being unwilling to admit he made a mistake...?
  • Something like (Score:2, Interesting)

    by amcguinn ( 549297 )

    I'm not sure that "neurotic" is the best metaphor, but as the level of abstraction that computers deal with gets higher, they can start to commit more kinds of meaningful error.

    To explain: If you are programming in assembly language, any programming error is likely to cause a simple failure of the system. Something goes wrong at a low level, so the higher-level thing that the system is meant to do just doesn't happen. On the other hand, if you are programming with tools (language and libraries) that

  • That's funny, but just reading the summary on Slashdot made me think that it was a Jon Katz feature.

    Almost made me regret his articles.

    Murphy(c) ...Almost, then I woke up.
  • While the article is pretty long on rhetoric, and I don't really buy into the theory, I have witnessed days where all the computers in the office seem to go quite mad.

    You guys know what I'm talking about.

    That moment, after you've just helped user #845 with the 15th bizarro problem, and it's only 9:45am... and you take a look around the room and nothing seems to be working smoothly...

    I usually just mutter something about sun spots. Then I go have a liquid lunch.

    • I once worked at a now-bankrupt floral company where my office overlooked the sales floor. At the time it was a Windows 95/98 shop -- all fifty or so PCs -- and I was trying to do a proof-of-concept PC to replace a couple of servers. I configured Samba on a Linux machine, then turned it on. I had another Windows 95 PC right next to it to test. The Win95 PC crashed. Odd, I thought. Then I looked up, and people were starting to stand up. PCs were crashing everywhere. The whole sales floor went down. Every machine is
  • ARRGH!!! (Score:5, Interesting)

    by iceT ( 68610 ) on Thursday April 03, 2003 @09:55AM (#5652454)
    I hate it when people say that computers are getting 'smarter'. They are *NOT* getting smarter. They are handling more tasks. They are getting FASTER. But until a computer can handle things like associative pattern recognition (OK, I made up that term. Basically, it's the idea that a computer can handle the following logic: it's not shaped like a coffee cup, but I know it's a coffee cup.) or can demonstrate the ability to learn and adapt to a changing environment at even REMOTELY the rate that even the simplest of creatures can... then I'll consider it 'smart'.

    Until then, by personifying computers, you are only FEEDING these types of irrational fears.

    There is no HAL today, and probably won't be until we get a computer to recognize the fact that not everything in the universe is black and white, On and Off. The world isn't binary... it's analog.
    • by Wind_Walker ( 83965 ) on Thursday April 03, 2003 @01:26PM (#5654233) Homepage Journal
      When you get down to the quantum mechanical level of things, most things actually are binary (or to use the proper term, quantized). Light is sent in distinct packets. Energy levels of an atom are at distinct levels. Gravity (current theory) is transmitted by gravitons, distinct packets of gravitational energy.

      The only thing in physics right now that we believe is truly analog is the passage of time, but even then, time isn't really a measurable "thing"; it's a measure of the decay of objects (which is itself quantized). So, in the very small world at least, everything *IS* binary.

  • by IWantMoreSpamPlease ( 571972 ) on Thursday April 03, 2003 @10:09AM (#5652572) Homepage Journal
    I still have access to the power cord.
    -
  • Neurotic? (Score:2, Interesting)

    by tamen ( 308656 )
    Hell yeah.

    I don't write much; usually I code in BBEdit, but when I need to write something humans can read I turn to Microsoft Word. That's when I find out that computers can be neurotic. Yesterday a friend of mine showed me something in Word. She had a line of text she wanted to copy about ten times. She highlighted the line, copied, and pasted. No problem there, the new line was the same as the old one. But the fifth time she pasted, the line suddenly got formatted as italic. She pasted some more times and the formatting c
  • by nurb432 ( 527695 ) on Thursday April 03, 2003 @10:17AM (#5652620) Homepage Journal
    The damned machine has been *click*.. NO CARRIER
  • We've already seen cases where agents can be set conflicting goals and get stuck (think of the famous case of the orcs for The Two Towers that ran away - they were looking for a different way to the front and got stuck running away/towards/away/towards).

    Any system that can program itself to find a way to do something and that can have conflicting goals could sometimes end up stuck at a point where it can't move, because moving would cause it to violate one of the goals.

    If you programmed a robot to "avoid the
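
    A toy C sketch of that wedged state (the number-line world and both goals here are invented purely for illustration): the agent must reach position 10 but may never enter 5..7, and a greedy controller just bounces at the border forever.

      #include <stdio.h>

      /* Hypothetical agent on a number line. Goal 1: reach position 10.
       * Goal 2: never enter the forbidden zone 5..7. A naive controller
       * that satisfies both goals locally ends up livelocked at the
       * boundary, violating neither goal and achieving nothing. */
      int main(void) {
          int pos = 0;
          for (int step = 0; step < 12; step++) {
              int next = pos + 1;              /* move toward the target */
              if (next >= 5 && next <= 7)      /* would violate the safety goal */
                  next = pos - 1;              /* so back away instead */
              printf("step %2d: %d -> %d\n", step, pos, next);
              pos = next;
          }
          return 0;
      }

    After four steps the trace settles into 4, 3, 4, 3, ... which is exactly the away/towards/away/towards loop the orcs got stuck in.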
  • by use_compress ( 627082 ) on Thursday April 03, 2003 @10:29AM (#5652694) Journal
    A cognitive approach to machine neuroses would create self-monitoring systems that scan for inconsistent or dangerous orders and would set corrective actions in motion. Suppose we design and install a "smart" system in the car that continuously monitors for such conflicting instructions that might damage its brake and engine systems. When it detects such a condition, it may first try flashing a warning signal to the driver.

    This sounds to me like the author is referring to deadlock, a condition where a set of processes or threads request resources that are held by other processes or threads in that set, forming a cycle of resource holds and requests, where the resources are not preemptible, etc... see [tamu.edu] for more details. We already have methods of detecting deadlock, but it happens so rarely in properly programmed systems (e.g. with proper use of semaphores) that detection is reserved for mission-critical systems. See the Mars Pathfinder [microsoft.com] incident for more details on critical systems deadlocking. My point is that deadlock is typically the result of random events and has nothing to do with systems becoming more "intelligent."
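
    A minimal sketch of that hold-and-wait cycle in C with POSIX threads (the lock names and the deliberate sleeps are illustrative additions; real deadlocks need no such help):

      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Two locks taken in opposite order by two threads: A holds lock1 and
       * waits on lock2 while B holds lock2 and waits on lock1, a cycle of
       * holds and requests with no preemption, so neither can proceed. */
      pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
      pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

      void *thread_a(void *arg) {
          pthread_mutex_lock(&lock1);
          usleep(1000);                  /* widen the race window so it wedges reliably */
          pthread_mutex_lock(&lock2);    /* blocks forever once B holds lock2 */
          pthread_mutex_unlock(&lock2);
          pthread_mutex_unlock(&lock1);
          return NULL;
      }

      void *thread_b(void *arg) {
          pthread_mutex_lock(&lock2);
          usleep(1000);
          pthread_mutex_lock(&lock1);    /* blocks forever once A holds lock1 */
          pthread_mutex_unlock(&lock1);
          pthread_mutex_unlock(&lock2);
          return NULL;
      }

      int main(void) {
          pthread_t a, b;
          pthread_create(&a, NULL, thread_a, NULL);
          pthread_create(&b, NULL, thread_b, NULL);
          pthread_join(a, NULL);         /* never returns once the cycle forms */
          pthread_join(b, NULL);
          printf("no deadlock this run\n");
          return 0;
      }

    The standard cure is equally mechanical: impose a global lock ordering (always take lock1 before lock2) and the cycle can never form. No psychology required.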
  • Old News (Score:5, Insightful)

    by AlecC ( 512609 ) <aleccawley@gmail.com> on Thursday April 03, 2003 @10:29AM (#5652695)
    This is old news - it has been "true" for years. It is actually a corollary of Clarke's law ("Any sufficiently advanced technology is indistinguishable from magic"). If we understand how a system works normally, then any misbehaviour it shows is a fault. If we don't, then we can classify the misbehaviour as a "neurosis". Unskilled users often believe their computers are suffering from a neurosis. This usually means that at some time in the past they have installed some app or extension which is trying to do something they don't understand. A more skilled user can come along and "cure" that neurosis, because they understand the system at a deeper level.

    A car I once had displayed what appeared to be a "neurosis" - it seemed to be frightened of going more than 30mph. It would run fine up to that speed, but if you went any faster it "panicked" and stalled. Dirt in the fuel line: at low flow rates, it lay flat and let fuel pass. At higher flow rates, it flipped up and blocked the flow completely, causing the engine to stall before it had time to flip down again. The point is, the first analysis of "neurosis" was corrected to "fault" once the problem was understood.

    So the diagnosis of "neurosis" is relative - it means "I don't understand this failure mode". It can, of course, become absolute if nobody understands it.

    So, are we building systems so large that nobody understands them? Definitely. Networks are already bordering on incomprehensible. Particularly, of course, the Internet. It would not surprise me at all if the Internet started showing "neurotic" behaviour. Indeed, it already does - if you regard humans and their input as part of the net itself. DoS attacks and the /. effect are both "twitches" in the body of the Internet. (And spam is a cancer which requires operating on now.) Thus far, these nervous tics have not expanded into full-scale neurosis - but they could.
    • Clarke got it backwards. It should be stated, "Any sufficiently advanced magic is indistinguishable from technology."

      This explains why some of us harbour a delusion that the magic computer box actually contains technologically-comprehensible components. ;)

  • by chefren ( 17219 ) on Thursday April 03, 2003 @10:34AM (#5652731)
    1. Find out what makes the human neural system computationally superior to a Turing-complete computer.
    2. If you find it, design a computer that implements these differences. If there are no such differences, goto 5.
    3. Get Nobel prize.
    4. PROFIT!
    5. Prophesy disaster.

  • by Asprin ( 545477 ) <(gsarnold) (at) (yahoo.com)> on Thursday April 03, 2003 @10:56AM (#5652899) Homepage Journal

    I was working as a tech when Windows 95 came out, so I spent a LOT of time driver-wrestling. After a few weeks with Windows, it became patently obvious that the automatic hardware detection and driver handling in Win95 were so new and bad (partly because of poor hardware vendor support, incorrect INF files and so on) that oftentimes updating a driver became an exercise in trying to talk Windows into believing that I had a better driver than it did. When I realized that persuading children to do something works basically the same way, I started wondering HOW OLD IN HUMAN YEARS Windows 95 would score on a developmental test. Three years? Four years? Six months?

    Anyway, I never wrote a paper on it or tried to get it published because, well, it's a stupid idea. I'm pretty sure that anything our blinky boxes are doing that might look like a level of intelligence worthy of psychological inquiry is pretty much due to the engineers that designed the thing getting their sh*t together and specifying the protocols more thoroughly.

    One of the really good things Windows did (that people love to forget about) is that it forced the standardization of hardware autodetection, peripheral interfaces and driver support across the industry. In 1995, every vendor had their own way of doing *EVERYTHING*, and when Microsoft told them "you're gonna follow our spec or we're not supporting you", most of them listened. Sure, we all bitch about driver problems and feature support, but trust me, the world is a better place now.
  • > ...our machines can develop neuroses and what kinds
    > of therapy exist.

    April 1st was two days ago.
  • No. It is very silly to assign human attributes to non-human things, in this case a PC.

    Next question please.
  • I put ROCKS in my BLENDER, and now it's acting CRAZY!!!

    I don't agree that you could apply the term "neurotic" to a computer that, when given conflicting inputs, behaves erratically.

    Unpredictable behavior usually occurs when something is incorrectly programmed, or bad input is given.

    Calling the resulting behaviour "neurotic" would be like calling a loaf of bread "neurotic" if it turns out bad when you use a bad recipe or use salt instead of sugar.

    Granted, it is frustrating when computers behave in a non-
  • As machines become more intelligent - more intelligent? That hasn't happened in 50 years; why would it happen now?

    Does this mean that we can expect machines to experience the equivalent of nervous breakdowns and other mental aberrations? - if by 'nervous breakdowns and other mental aberrations' he means BSOD, then yeah, sure :)

    Well, so far, yes, but autonomous, goal-seeking machines that can reprogram their own goals and subgoals could, in effect, develop "minds of their own" and set off in unpredictab
  • The article, perhaps unintentionally, makes the point that programming and psychology will have points of intersection in the future, and these will not go away.

    Treating computers anthropomorphically may seem stupid, but perhaps that's also a self-fulfilling prophecy: they will have humanlike traits because we expect them to, and thus we may need to cope with their flaws in a similar manner.
  • The problem with today's computers is that they do EXACTLY what you tell them to do. Most people don't know the implications of "clicking here" or "typing this." Most tech support, programming and debugging issues are thrown out the window because it's operator error.

    Now, back to the other side of the story. The only thing that comes even close to AI in today's readily available programs is dynamic recompilation, meaning the program can rewrite itself on the fly according to its own logic. Even that is
  • by boola-boola ( 586978 ) on Thursday April 03, 2003 @01:46PM (#5654379)
    Talking about PCs becoming neurotic: in my computer architecture class, my professor discussed factors that can affect the operation of a CPU. One such factor was alpha particles from the sun (I'm not kidding). Since transistors and wires in CPUs are getting so small nowadays (what, .13 or .15 micron, last I checked? even smaller for wire traces), they actually run the risk of having electrons knocked off their datapaths and onto others, potentially changing a logical 1 to a logical 0, and vice versa. Hence the reason Space Shuttles have triple redundancy. Don't think you need ECC? Think again.
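
    For the curious, here is a toy C version of the idea behind ECC: a Hamming(7,4) code that stores 4 data bits as 7, then locates and repairs a single flipped bit. The "particle strike" below is just an XOR; real ECC DIMMs do the same trick in hardware on wider words.

      #include <stdio.h>

      /* Encode 4 data bits (d3 d2 d1 d0) into a 7-bit codeword whose bit
       * positions 1..7 hold p1 p2 d0 p4 d1 d2 d3, where p1, p2, p4 are
       * parity bits over fixed sets of positions. */
      unsigned encode(unsigned d) {
          unsigned d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
          unsigned p1 = d0 ^ d1 ^ d3;   /* covers positions 1,3,5,7 */
          unsigned p2 = d0 ^ d2 ^ d3;   /* covers positions 2,3,6,7 */
          unsigned p4 = d1 ^ d2 ^ d3;   /* covers positions 4,5,6,7 */
          return p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) | (d1 << 4) | (d2 << 5) | (d3 << 6);
      }

      /* Returns 0 for a clean codeword, else the 1-based position of the
       * single flipped bit. */
      unsigned syndrome(unsigned c) {
          unsigned b[8];
          for (int i = 1; i <= 7; i++) b[i] = (c >> (i - 1)) & 1;
          unsigned s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
          unsigned s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
          unsigned s4 = b[4] ^ b[5] ^ b[6] ^ b[7];
          return s1 | (s2 << 1) | (s4 << 2);
      }

      int main(void) {
          unsigned word = encode(0xB);              /* store data bits 1011 */
          unsigned hit  = word ^ (1u << 4);         /* "alpha particle" flips position 5 */
          unsigned s    = syndrome(hit);
          printf("syndrome says bit %u flipped\n", s);          /* prints 5 */
          unsigned fixed = s ? hit ^ (1u << (s - 1)) : hit;     /* flip it back */
          printf("repaired ok: %s\n", fixed == word ? "yes" : "no");
          return 0;
      }

    A single parity bit per byte would only detect the flip; the extra Hamming bits are what let ECC memory correct it on the fly.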
