Can Your PC Become Neurotic?
Roland Piquepaille writes "This article starts with a quote from Douglas Adams: 'The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong, it usually turns out to be impossible to get at or repair.' It is true that machines are becoming more complex and 'intelligent' every day. Does this mean that they can exhibit unpredictable behavior like HAL, the supercomputer in '2001: A Space Odyssey'? Do we have to fear our PCs? A recent book by Thomas M. Georges, 'Digital Soul: Intelligent Machines and Human Values,' explains how our machines can develop neuroses and what kinds of therapy exist. Check this column for a summary or read this highly recommended article from Darwin Magazine for more details."
it depends on the user's technical level (Score:5, Insightful)
Isn't it great (Score:5, Insightful)
With current computer technology this is not a possibility. An older computer will just crash or won't do anything because multitasking is not an option; a newer computer will do it just fine. I could have one program that formats the hard drive and another that writes data to all of it, make them both go at the same time, and it will work.
Everything else in the article about a theoretical AI or an intelligent computer is BS. As I said, he is assuming things about a technology that doesn't exist yet. It really pisses me off when someone says, "when we have this a long time from now, this is how you'll have to go about fixing it." You can't know how to fix something if you don't know how to make it in the first place! Common sense. The scary thing is that I think this guy is getting paid to write this stuff. Where do I sign up?
While it's a nice metaphor. . . (Score:5, Insightful)
Machines will have to get a lot more complex before their problems graduate from inefficiency or resource conflicts to "neurosis."
It is fun to personify, but the fact is that at the current state of IT development any unpredictable output can be pulled apart, debugged, and repaired.
This metaphor may start gaining some weight, however, when we become inexorably dependent on complex systems. Right now there are huge systems that have to be kept running because the cost of shutting them down for repair would be unacceptable. As this trend continues, and these machines become more complex webs of old and new code, I can see us having to figure out how to "coax" behaviors out of them without really knowing how the base code interacts to generate those behaviors.
That's when system administration and psychiatry will really begin to overlap.
----
Technophobia is not confined to computers. (Score:5, Insightful)
Re:Why I hate macs. (Score:4, Insightful)
HAL is a bad example. (Score:5, Insightful)
Which is why HAL is such a bad example. HAL wasn't behaving unpredictably, or even crazy. HAL started behaving the way he did because the humans around him had the need to lie. Mission Control's order for HAL to lie to Dave and Frank about the purpose of their mission conflicted with the basic purpose of HAL's design--the accurate processing of information without distortion or concealment. As explained in 2010, "He was trapped. HAL was told to lie by people who found it easy to lie. HAL didn't know how to lie, so he couldn't function. "
Re:Technophobia is not confined to ignorance. (Score:3, Insightful)
Many people are just as afraid of: programming the VCR, changing the oil, using the TV without a remote, programming jobs on copiers (yes, those Xerox-like machines), copying movies off their camera tapes, figuring out why the microwave has more than one mode of operation, learning to make felled seams on a Singer. Insert your own favorite technophobia.
Are people actually afraid of doing these things, or are they afraid of breaking the technical gizmo if they fail, screw up, or make a mistake?
Doesn't this fear come from the fact that they don't understand how to do it, or that they just don't understand the gizmo itself?
So, do they fear any of these actions specifically, or do they just generally fear their own ignorance of technology (we fear what we don't understand)? Perhaps we can be as user friendly as we want, but if the user chooses to remain ignorant, they will remain in fear regardless of how savvy we are when we design a system. Just a thought.
Re:While it's a nice metaphor. . . (Score:2, Insightful)
Every problem ... has a logical explanation. However, sometimes that explanation eludes us. So we tend to attribute that to "neurosis" or some other "human" issue. I guess it's easier than just admitting that we can't figure the damn thing out.
And that differs from psychiatry how?
Old News (Score:5, Insightful)
A car I once had displayed what appeared to be a "neurosis" - it seemed to be frightened of going more than 30 mph. It would run fine up to that speed, but if you went any faster it "panicked" and stalled. The cause: dirt in the fuel line. At low flow rates, it lay flat and let fuel pass. At higher flow rates, it flipped up and blocked the flow completely, causing the engine to stall before it had time to flip down again. The point is, the first analysis of "neurosis" was corrected to "fault" once the problem was understood.
So the diagnosis of "neurosis" is relative - it means "I don't understand this failure mode". It can, of course, become absolute if nobody understands it.
So, are we building systems so large that nobody understands them? Definitely. Networks are already bordering on incomprehensible - particularly, of course, the Internet. It would not surprise me at all if the Internet started showing "neurotic" behaviour. Indeed, it already does - if you regard humans and their input as part of the net itself. DoS attacks and the
Re:Isn't it great (Score:2, Insightful)
*snip*
Now, if the two programs were not given explicit instructions on how to work cooperatively, they might do such things as form infinite loops by changing something the other program has already changed.
*snip*
Doesn't this sound like the equivalent of a neurosis?
No. That sounds like a stupid programmer who wrote two incompatible programs working at the same time on the same data without proper locking or arbitration.
An illogical program or system will behave illogically, no surprise there.
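The locking point above is concrete, not metaphorical. Here is a minimal sketch (in Python, with hypothetical names) of two threads hammering the same shared value: with the lock, the read-modify-write steps are serialized and the result is deterministic; delete the `with lock:` line and updates can silently be lost, which is exactly the kind of "inexplicable" behavior that gets mistaken for neurosis.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        # Without the lock, the unsynchronized read-modify-write here can
        # interleave with the other thread's, losing increments (a race).
        with lock:
            counter += 1

t1 = threading.Thread(target=worker, args=(100_000,))
t2 = threading.Thread(target=worker, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()

print(counter)  # 200000 every run, because the lock serializes the updates
```

Nothing mysterious remains once the arbitration rule is explicit - the "coaxing" the parent post describes is only needed when nobody wrote that rule down.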
Re:Technophobia is not confined to ignorance. (Score:3, Insightful)
Which, for cars, computers, and sewing machines that do embroidery, is not too crazy a gut reaction.
Asimov story "Runaround" (Score:1, Insightful)
Kind of interesting... (Score:3, Insightful)
Re:HAL is a bad example. (Score:2, Insightful)
That "explanation" in 2010 was revisionist in the extreme. In 2001 HAL had predicted that an assembly would fail soon. The ground-based backup computer (identical to HAL) predicted otherwise. One of the computers was wrong, but which one? The solution was to put the part back in service and see if it failed. If so, HAL is vindicated and can continue the mission. If the part doesn't fail, HAL is wrong and his future is uncertain.
Then HAL reads Dave's lips as he suggests that if HAL is wrong, he will have to be shut down. Faced with a possible "death penalty", HAL decides that self-preservation is top priority and the means to ensure it is to kill the crew. Very logical, but not very ethical.
Many humans have found themselves in just such a position and as a result, many other humans have died.
Re:HAL is a bad example. (Score:2, Insightful)