Technology

Can Your PC Become Neurotic? 336

Roland Piquepaille writes "This article starts with a quote from Douglas Adams: 'The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong, it usually turns out to be impossible to get at or repair.' It is true that machines are becoming more complex and 'intelligent' every day. Does this mean that they can exhibit unpredictable behavior like HAL, the supercomputer in '2001: A Space Odyssey'? Do we have to fear our PCs? A recent book by Thomas M. Georges, 'Digital Soul: Intelligent Machines and Human Values,' explains how our machines can develop neuroses and what kinds of therapy exist. Check this column for a summary or read this highly recommended article from Darwin Magazine for more details."
This discussion has been archived. No new comments can be posted.

  • by drgroove ( 631550 ) on Thursday April 03, 2003 @10:01AM (#5652073)
    for instance, my wife is already 'afraid' of windows... she just does not 'get' computers. I on the other hand have no problem w/ them, but of course I'm a developer. I think OS & hardware manufacturers could do a much better job taking the 'fear' aspect out of their systems, making them more user friendly, even 'user-proof', if that makes sense (i.e., the user can't 'break' anything by clicking on the wrong button, etc.)
  • Isn't it great (Score:5, Insightful)

    by Apreche ( 239272 ) on Thursday April 03, 2003 @10:07AM (#5652113) Homepage Journal
    Isn't it great when someone comes along and makes assumptions about technology that doesn't exist yet. Not only does this guy do that, but he doesn't even seem to understand current technology. He claims that a computer that can change its own goals might select weird goals and appear crazy. Or that it might be set with two conflicting goals at once and mess up.

    With current computer technology this is not a possibility. An older computer will just crash or won't do anything because multitasking is not an option. A newer computer will do it just fine. I could have one program that formats the hard drive and another that writes data to all of it, and I can make them both go at the same time, and it will work.

    Everything else in the article about a theoretical AI or an intelligent computer is bs. As I said, he is assuming things about a technology that doesn't exist yet. It really pisses me off when someone says "when we have this a long time from now, this is how you have to go about fixing it". You can't know how to fix something if you don't know how to make it in the first place! Common sense. The scary thing is that I think this guy is getting paid to write this stuff. Where do I sign up??
  • by Fritz Benwalla ( 539483 ) <randomregs@@@gmail...com> on Thursday April 03, 2003 @10:14AM (#5652166)

    Machines will have to get a lot more complex before their problems graduate from inefficiency or resource conflicts to "neurosis."

    It is fun to personify, but the fact is that at the current state of IT development any unpredictable output can be pulled apart, debugged, and repaired.

    This metaphor may start gaining some weight, however, when we become inexorably dependent on complex systems. Right now there are huge systems that have to be kept running because the cost of shutting them down for repair would be unacceptable. As this trend continues, and these machines become more complex webs of old and new code, I can see us having to figure out how to "coax" behaviors out of them without really knowing the way the base code interacts in order to generate those behaviors.

    That's when system administration and psychiatry will really begin to overlap.


  • by Chemisor ( 97276 ) on Thursday April 03, 2003 @10:18AM (#5652197)
    Many people are just as afraid of:
    • Programming the VCR.
    • Changing the oil.
    • Using the TV without a remote.
    • Programming jobs on copiers (yes, those Xerox-like machines)
    • Copying movies off their camera tapes.
    • Figuring out why the microwave has more than one mode of operation.
    • Learning to make felled seams on a Singer.
    • Insert your own favorite technophobia.
  • by Christianfreak ( 100697 ) on Thursday April 03, 2003 @10:21AM (#5652222) Homepage Journal
    Common misconception. Especially with Mac hardware. At one time yes, they were a pain and you couldn't really fix them, but I haven't seen a Mac in a long time that you couldn't get into at least somewhat. Even the iMacs have upgrade capability. And the G3 and G4 towers were 10 times easier to get into than the stupid Dells I had to work on back in college.
  • by User 956 ( 568564 ) on Thursday April 03, 2003 @11:04AM (#5652523) Homepage
    Here we go again with the over-personification. There's a big difference between expecting past behavior to continue and actually being intelligent (and then going crazy)

    Which is why HAL is such a bad example. HAL wasn't behaving unpredictably, or even crazy. HAL started behaving the way he did because the humans around him had the need to lie. Mission Control's order for HAL to lie to Dave and Frank about the purpose of their mission conflicted with the basic purpose of HAL's design--the accurate processing of information without distortion or concealment. As explained in 2010, "He was trapped. HAL was told to lie by people who found it easy to lie. HAL didn't know how to lie, so he couldn't function. "
  • by LordDragonstar ( 569420 ) on Thursday April 03, 2003 @11:24AM (#5652665)

    Many people are just as afraid of: Programming the VCR. Changing the oil. Using the TV without a remote. Programming jobs on copiers (yes, those Xerox-like machines) Copying movies off their camera tapes. Figuring out why the microwave has more than one mode of operation. Learning to make felled seams on a Singer. Insert your own favorite technophobia.

    Are people actually afraid of doing these things, or are they afraid of breaking the technical gizmo if they fail, screw up, or make a mistake?

    Doesn't this fear come from the fact that they don't understand how to do it, or that they just don't understand the gizmo itself?

    So, do they fear any of these actions specifically, or do they just generally fear their own ignorance towards technology (we fear what we don't understand)? Perhaps we can be as user friendly as we want, but if the user chooses to remain ignorant, they will remain in fear regardless of how savvy we are when we design a system. Just a thought.

  • by amcguinn ( 549297 ) on Thursday April 03, 2003 @11:25AM (#5652678) Journal

    Every problem ... has a logical explanation. However, sometimes that explanation eludes us. So we tend to attribute that to "neurosis" or some other "human" issue. I guess it's easier than just admitting that we can't figure the damn thing out.

    And that differs from psychiatry how?

  • Old News (Score:5, Insightful)

    by AlecC ( 512609 ) <aleccawley@gmail.com> on Thursday April 03, 2003 @11:29AM (#5652695)
    This is old news - it has been "true" for years. It is actually a corollary of Clarke's law ("Sufficiently advanced technology is indistinguishable from magic"). If we understand how a system works normally, then any misbehaviour it shows is a fault. If we don't, then we can classify the misbehaviour as a "neurosis". Unskilled users often believe their computers are suffering from a neurosis. This usually means that at some time in the past they have installed some app or extension which is trying to do something they don't understand. A more skilled user can come along and "cure" that neurosis, because they understand the system at a deeper level.

    A car I once had displayed what appeared to be a "neurosis" - it seemed to be frightened of going more than 30mph. It would run fine up to that speed, but if you went any faster it "panicked" and stalled. Dirt in the fuel line: at low flow rates, it lay flat and let fuel pass. At higher flow rates, it flipped up and blocked the flow completely, causing the engine to stall before it had time to flip down again. The point is, the first analysis of "neurosis" was corrected to "fault" once the problem was understood.

    So the diagnosis of "neurosis" is relative - it means "I don't understand this failure mode". It can, of course, become absolute if nobody understands it.

    So, are we building systems so large that nobody understands them? Definitely. Networks are already bordering on incomprehensible. Particularly, of course, the Internet. It would not surprise me at all if the Internet started showing "neurotic" behaviour. Indeed, it already does - if you regard humans and their input as part of the net itself. DOS attacks and the /. effect are both "twitches" in the body of the Internet. (And spam is a cancer which requires operating now.) Thus far, these nervous tics have not expanded into full-scale neurosis - but they could.
  • Re:Isn't it great (Score:2, Insightful)

    by LarsG ( 31008 ) on Thursday April 03, 2003 @11:56AM (#5652900) Journal
    One program's goal would be to decrease thermal radiation by rewriting and redesigning circuitry. The other's goal would be to increase data throughput by doing the same things. How would they reconcile?

    *snip*

    Now, if the two programs were not given explicit instructions on how to work cooperatively, they might do such things as form infinite loops by changing something the other program has already changed.

    *snip*

    Doesn't this sound like the equivalent of a neurosis?

    No. That sounds like a stupid programmer who wrote two incompatible programs working at the same time on the same data without proper locking or arbitration (a minimal sketch of the difference follows after this comment).

    An illogical program or system will behave illogically, no surprise there.
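    To make the "proper locking or arbitration" point concrete, here is a minimal Python sketch (mine, not LarsG's; the counter and worker names are just illustrative): two threads with opposing goals hammer the same shared value. Without a lock their read-modify-write steps can interleave and updates get lost; with a lock each update is applied atomically and the outcome is predictable.

    ```python
    import threading

    counter = 0
    lock = threading.Lock()

    def worker(delta, use_lock):
        """Apply `delta` to the shared counter 100,000 times."""
        global counter
        for _ in range(100_000):
            if use_lock:
                with lock:
                    counter += delta
            else:
                # Unsynchronized read-modify-write: the other thread can slip in
                # between the read and the write, so updates may be lost.
                counter += delta

    def run(use_lock):
        global counter
        counter = 0
        threads = [threading.Thread(target=worker, args=(d, use_lock)) for d in (+1, -1)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter

    print("without lock:", run(False))  # often nonzero: the two goals clobber each other
    print("with lock:   ", run(True))   # always 0: arbitration makes the result predictable
    ```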
  • seems like a sign of specialization... the fear that everything takes an expert.

    which, for cars, computers, and sewing machines that do embroidery - is not too crazy a gut reaction.
  • by Anonymous Coward on Thursday April 03, 2003 @02:33PM (#5654287)
    The Asimov story "Runaround" pretty much proposed, exposited, and summed up this idea much better than, and way before, the Kubrick movie or this essay. Sorry, no spoilers in this post... buy the collection "I, Robot" and read it. In general: read the classics, especially the sci-fi classics if you are an engineer, or be forever doomed to having a lack of perspective and saying things that others have said before.
  • by boola-boola ( 586978 ) on Thursday April 03, 2003 @02:46PM (#5654379)
    Talking about PCs becoming neurotic: in my computer architecture class, my professor discussed factors that can affect the operation of a CPU. One such factor was alpha particles from the sun (I'm not kidding). Since transistors and wires in CPUs are getting so small nowadays (what, .13 or .15 micron, last I checked? even smaller for wire traces), they actually have a risk of having electrons knocked off their datapaths and onto others, potentially changing a logical 1 to a logical 0, and vice-versa. Hence the reason for Space Shuttles to have triple redundancy. Don't think you need ECC? Think again. (A toy majority-vote sketch follows below.)
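    As an illustration of why that redundancy helps (my toy sketch, not something from the comment or the article), here is triple modular redundancy in a few lines of Python: keep three copies of a value and take a bitwise majority vote on read, so a single flipped bit in one copy is outvoted by the other two.

    ```python
    def majority_vote(a: int, b: int, c: int) -> int:
        """For each bit position, keep the value at least two of the three copies agree on."""
        return (a & b) | (a & c) | (b & c)

    value = 0b1011_0110
    copies = [value, value, value]

    # Simulate a single-event upset: a stray particle flips bit 3 in one copy.
    copies[1] ^= 1 << 3

    recovered = majority_vote(*copies)
    assert recovered == value
    print(f"stored {value:08b}, corrupted copy {copies[1]:08b}, recovered {recovered:08b}")
    ```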
  • by Rick.C ( 626083 ) on Thursday April 03, 2003 @03:08PM (#5654529)
    As explained in 2010, "He was trapped. HAL was told to lie by people who found it easy to lie. HAL didn't know how to lie, so he couldn't function. "

    That "explanation" in 2010 was revisionist in the extreme. In 2001 HAL had predicted that an assembly would fail soon. The ground-based backup computer (identical to HAL) predicted otherwise. One of the computers was wrong, but which one? The solution was to put the part back in service and see if it failed. If so, HAL is vindicated and can continue the mission. If the part doesn't fail, HAL is wrong and his future is uncertain.

    Then HAL reads Dave's lips as he suggests that if HAL is wrong, he will have to be shut down. Faced with a possible "death penalty", HAL decides that self-preservation is top priority and the means to ensure it is to kill the crew.

    Very logical, but not very ethical.

    Many humans have found themselves in just such a position and as a result, many other humans have died.
  • by rodrigo_braz ( 215167 ) on Thursday April 03, 2003 @04:19PM (#5655214) Homepage
    I agree that in that context HAL was not crazy but working properly. However, back in the 60's people believed you could eventually have machines that could perfectly process information all the time. Mistakes were assumed to be the result of faulty human biological hardware, not a theoretical necessity. Today we know that making perfect inferences all the time is intractable, no matter how powerful your hardware is. In order to process information in a tractable fashion, intelligent machines and humans alike have to take the risk of making mistakes now and then. For some of us, the outcomes of these little bets are so bad that those people actually do go crazy, but that is a small enough portion of the population. The same will happen with intelligent machines, and some of them will make huge mistakes or even go consistently crazy. Which doesn't mean they will be useless, only that the proper checks and balances will have to be put in place, like they are in place for people today. (A sketch of one such accuracy-for-tractability trade-off follows below.)
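    One well-known, concrete instance of that accuracy-for-tractability trade-off (my example, not the poster's) is probabilistic primality testing: the Miller-Rabin test answers "probably prime" quickly, accepting an error probability of at most 4^-k over k rounds, where an exact general-purpose check would be far slower. A minimal Python sketch:

    ```python
    import random

    def probably_prime(n: int, k: int = 20) -> bool:
        """Miller-Rabin: returns True if n is prime, with error probability <= 4**-k."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2^r with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(k):  # each round is fooled with probability at most 1/4
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # definitely composite
        return True  # almost certainly prime

    print(probably_prime(2**127 - 1))  # True: a 39-digit Mersenne prime, checked in milliseconds
    ```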
