Can Your PC Become Neurotic?
Roland Piquepaille writes "This article starts with a quote from Douglas Adams: 'The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong, it usually turns out to be impossible to get at or repair.' It is true that machines are becoming more complex and 'intelligent' every day. Does this mean that they can exhibit unpredictable behavior like HAL, the supercomputer in '2001: A Space Odyssey'? Do we have to fear our PCs? A recent book by Thomas M. Georges, 'Digital Soul: Intelligent Machines and Human Values,' explains how our machines can develop neuroses and what kinds of therapy exist. Check this column for a summary or read this highly recommended article from Darwin Magazine for more details."
To think... (Score:4, Interesting)
Not directly related, but as I was watching Floyd's Pan Am flight dock with the spinning station, I suspected that Clarke and Kubrick never foresaw this: a world of microtechnology for the consumer. It was all grand projects back then, a single computer the size of a building, not a building full of single computers.
I know I'd swap strong video codecs for a strong space program; codecs seem so trivial compared to the vastness of infinity.
Well, I've babbled off-topic now. Daisy, Daisy...
Well, my toaster is seeing a shrink (Score:4, Interesting)
There's a big difference between expecting past behavior to continue and actually being intelligent (and then going crazy). Sure, if you perform certain calculations enough times, the hardware might automatically optimize itself for that operation, but it's more like pixel burn-in on a TV, or forming a road simply by walking a path enough to wear a noticeable rut. Maybe when we truly have thinking computers we might have to worry about them going crazy, but until then I'm more worried about my toaster. I think it has a rash.....
wth? (Score:3, Interesting)
Yeah, listen up. Computers haven't gotten any more complex, you've just gotten dumber. Computers don't develop neuroses, but it might make a cool catchphrase to sell a book, especially to someone who's incapable of diagnosing the real problems. Those real problems haven't changed in many years. Sure, there are a few more layers now, but they're pretty easy to peel away in your head.
Book: The Society of the Mind (Score:2, Interesting)
The only "therapy" a computer needs... (Score:5, Interesting)
The article mentions "conflicting demands." I imagine most of those are caused by having Gator, Bonzi Buddy, et al. put on your system (with or without the user's knowledge doesn't really matter), as well as having a dozen things running in the system tray.
I wonder if background programs and spyware are the digital equivalent of having voices in one's head?
So, I'm not saying that educating users would solve all the "neurosis" problems, just that the majority of neurotic computers I've worked on were that way due to some action of the user, whether it was installing spyware, deleting critical system files, or allowing three inches of cigarette dust to accumulate inside the case.
intelligent machines (Score:4, Interesting)
We will clearly see more "intelligent" machines in the future. And given the direction that current "artificial intelligence" is going, these machines will learn from what is out there.
This directly implies that a machine's behavior will depend in a fuzzy way on its past "experience." It also means that we will not be able to predict exactly how it will behave; we can only understand it the way we understand the behavior of other people, who likewise learned it from the real world.
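To make that concrete, here's a toy sketch (Python; the class and the menu actions are all invented for illustration). Two copies of the same learning code, fed different histories, end up suggesting different things:

    # Toy sketch: identical learning code, different "experience",
    # different behavior. All names here are invented for illustration.
    from collections import Counter

    class MenuLearner:
        """Suggests whichever action the user has picked most often."""
        def __init__(self):
            self.history = Counter()

        def observe(self, action):
            self.history[action] += 1

        def suggest(self):
            return self.history.most_common(1)[0][0] if self.history else None

    a, b = MenuLearner(), MenuLearner()
    for act in ("bold", "italic", "bold"):
        a.observe(act)
    for act in ("italic", "bold", "italic"):
        b.observe(act)

    print(a.suggest())  # bold
    print(b.suggest())  # italic

The code is fully deterministic, yet to predict its suggestion you have to know its entire history, just as with a person.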
While these learning systems will make prediction difficult, the learning process will make explicit what the machine is trying to do. We won't know how a machine does "it," but it will always present the right possible actions to us. Microsoft Word 21XX clearly won't need us to search menus when we want to change the formatting of the text.
Something like (Score:2, Interesting)
I'm not sure that "neurotic" is the best metaphor, but as the level of abstraction that computers deal with gets higher, they can start to commit more kinds of meaningful error.
To explain: if you are programming in assembly language, any programming error is likely to cause a simple failure of the system. Something goes wrong at a low level, so the higher-level thing the system is meant to do just doesn't happen. On the other hand, if you are programming with tools (language and libraries) that deal in high-level abstractions, a programming error can result in the system successfully manipulating those abstractions in the wrong way. If the "rm" program works correctly, your script might delete the wrong files. The bugs that such a high-level system might have are more likely to look like "bad behaviour" or even insanity than the simple malfunctions of older systems. We are already seeing this. Pressing the wrong button can cause a personal email to be sent to a group of people, for example -- behaviour that looks almost malicious.
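A hypothetical sketch of exactly that (Python; the function, directory layout, and glob pattern are made up). Every low-level call succeeds, yet the script misbehaves, because the bug lives in the selection logic, not the mechanism:

    # Hypothetical sketch: nothing fails at the low level, but the
    # high-level intent is wrong. Names and patterns are invented.
    import glob
    import os

    def clean_build_dir(path):
        # Intended: remove compiled objects (*.o).
        # Bug: the pattern matches the C sources instead.
        for f in glob.glob(os.path.join(path, "*.c")):  # should be "*.o"
            os.remove(f)  # succeeds perfectly, on the wrong files

No error is raised and nothing crashes; the failure exists only at the level of intent, which is exactly the kind of "bad behaviour" described above.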
I used to think that the SF fears of machines "turning on their creators," as in "2001" or "Terminator," were just silly. "A computer can only do what it's programmed to do," I would say. I have long since seen the flaw in this. A computer that is programmed to use weapons, for instance, can use them on the wrong people due to a programming error (or a user error) at a higher level. (Worth knowing if you're an RAF pilot overflying a Patriot battery.) A computer that was programmed (correctly) to create strategies (I think this is still SF, or at any rate early research) might create strategies with the wrong objectives due to higher-level programming errors. That is the level of "bug" appearing in the plots of "2001" and "Terminator".
Re:Isn't it great (Score:3, Interesting)
Now, in complex systems that are not naturally occurring, these mitigation directives must be literally designed into the system itself. For instance, let's say you have two programs running with mutually exclusive goals. One program's goal would be to decrease thermal radiation by rewriting and redesigning circuitry. The other's goal would be to increase data throughput by doing the same things. How would they reconcile? Generally, a major way of decreasing thermal radiation is to reduce electrical input. But this also has the side effect of increasing the chance of data errors when the receiving component cannot distinguish a signal from the background noise, which decreases total data throughput. Now, if the two programs were not given explicit instructions on how to work cooperatively, they might do such things as form infinite loops by changing something the other program has already changed. One might find areas of circuitry that it has exclusive access to and change that circuitry with impunity. To anyone watching this process, the resulting circuitry would be hard to explain. Some components might become extremely fast while giving off enough heat to melt the surrounding board, yet others would be slowed while running at a nice pleasant room temperature. Doesn't this sound like the equivalent of a neurosis?
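You can see the shape of that infinite loop in a toy model (Python; the shared "voltage" knob and the step sizes are invented). Two uncoordinated "optimizers" share one parameter and undo each other forever:

    # Toy model: two goals, one shared knob, no coordination.
    voltage = 1.0  # invented shared parameter

    def cooler_step():
        """Goal: less heat, so lower the voltage."""
        global voltage
        voltage -= 0.1

    def throughput_step():
        """Goal: fewer signal errors, so raise the voltage."""
        global voltage
        voltage += 0.1

    for step in range(3):  # bounded here; uncoordinated, it never settles
        cooler_step()
        print(f"after cooler:     voltage = {voltage:.1f}")
        throughput_step()
        print(f"after throughput: voltage = {voltage:.1f}")

Each program is locally rational; the system as a whole just thrashes. That is about as close to a mechanical neurosis as anything we build today.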
Anecdotes: (Score:1, Interesting)
An earlier employer would just get upset when I would get to a unit for repair. I wouldn't do anything special; I'd just be near it and it'd start working.
This also happened when I was a tech in the military.
I don't know why, and really, you get a reputation from a few minor miracles (even if all you do is clean cola from the back of a CRT with earth-friendly, certified-for-electronics chemicals that are FLAMMABLE, and Mr. Static makes his day).
So in answer to the article's question: yes, I see that all the time, and I can usually fix it if it's just age or binary cruft; a reset and a swift kick are your friends, along with more tools and parts than I care to name.
I really should say it more emphatically: just look at the source code for any large project. It is not humanly possible to understand it all at once, and one tiny mistake gives you MechaMozilla rather than a friendly browser.
Hmm, I want to send MechaMozilla to browse Bill's playpen in Seattle so it can't be all bad.
ARRGH!!! (Score:5, Interesting)
Until then, by personifying computers, you are only FEEDING these types of irrational fears.
There is no HAL today, and probably won't be until we get a computer to recognize the fact that not everything in the universe is black and white, On and Off. The world isn't binary... it's analog.
Neurotic? (Score:2, Interesting)
I don't write much; usually I code in BBEdit, but when I need to write something humans can read, I turn to Microsoft Word. That's when I find out that computers can be neurotic. Yesterday a friend of mine showed me something in Word. She had a line of text she wanted to copy about ten times. She highlighted the line, copied, and pasted. No problem there; the new line was the same as the old one. But the fifth time she pasted, the line suddenly got formatted as italic. She pasted a few more times and the formatting changed again on lines 9 and 10, back to normal. So lines 1-4 were fine, lines 5-8 were italic, and the rest were normal.
If an app thinks it's smarter than the user, it had better really be smarter.
Deadlock is not an "intelligent" behavior (Score:3, Interesting)
This sounds to me like the author is referring to deadlock, a condition where a set of processes or threads request resources held by other processes or threads in that set, forming a cycle of resource holds and requests, where the resources are not preemptable, and so on; see [tamu.edu] for more details. We already have methods of detecting deadlock, but because it happens so rarely in properly programmed systems (e.g., with proper use of semaphores), detection is typically reserved for mission-critical systems. See the Mars Pathfinder [microsoft.com] incident for more details on critical systems locking up. My point is that deadlock is typically the result of random events and has nothing to do with systems becoming more "intelligent."
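For reference, the classic cycle takes only a few lines to reproduce (a sketch in Python threads here; any language with locks will do):

    # Minimal deadlock sketch: two threads take the same two locks in
    # opposite order, forming the hold-and-wait cycle described above.
    import threading
    import time

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def worker_1():
        with lock_a:
            time.sleep(0.1)   # give worker_2 time to grab lock_b
            with lock_b:      # blocks forever: worker_2 holds lock_b
                pass

    def worker_2():
        with lock_b:
            time.sleep(0.1)
            with lock_a:      # blocks forever: worker_1 holds lock_a
                pass

    # Daemon threads so the demo exits instead of hanging the interpreter.
    t1 = threading.Thread(target=worker_1, daemon=True)
    t2 = threading.Thread(target=worker_2, daemon=True)
    t1.start()
    t2.start()
    time.sleep(0.5)
    print("both workers are now deadlocked; join() would wait forever")

Note how nothing here is "intelligent": the cycle is a perfectly mechanical consequence of lock ordering, which is the point.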
Hey! I thought of this, too! (Score:4, Interesting)
I was working as a tech when Windows 95 came out, so I spent a LOT of time driver-wrestling. After a few weeks with Windows, it became patently obvious that the automatic hardware detection and driver handling in Win95 were so new and so bad (partly because of poor hardware vendor support, incorrect INF files, and so on) that oftentimes updating a driver became an exercise in trying to talk Windows into believing that I had a better driver than it did. When I realized that persuading children to do something works basically the same way, I started wondering HOW OLD IN HUMAN YEARS Windows 95 would score on a developmental test. Three years? Four years? Six months?
Anyway, I never wrote it up as a paper or tried to get it published because, well, it's a stupid idea. I'm pretty sure that anything our blinky-boxes are doing that might look like a level of intelligence worthy of psychological inquiry is pretty much due to the engineers who designed the thing getting their sh*t together and specifying the protocols more thoroughly.
One of the really good things Windows did (that people love to forget about) is that it forced the standardization of hardware autodetection, peripheral interfaces, and driver support across the industry. In 1995, every vendor had their own way of doing *EVERYTHING*, and when Microsoft told them "you're gonna follow our spec or we're not supporting you," most of them listened. Sure, we all bitch about driver problems and feature support, but trust me, the world is a better place now.
Re:To think... (Score:5, Interesting)
Just imagine going back to 1968 by time machine and telling Kubrick, Clarke, or some egghead from Stanford or MIT how the technology will have evolved by 2001. Tell these guys that Apollo XVII will actually be the last spaceship to leave the vicinity of Earth. Tell them that the global network developed by ARPA will be a major hit, used mostly for distribution of p0rn, warez, and mindless discussions like these on Slashdot. Tell them everybody will own a supercomputer way beyond the PDPs and IBMs, but will use it mostly as a typewriter and a gaming console. Tell them the main scientific discoveries by the end of the century will be a pill for erections and a pill for a good mood. I just can't imagine their reply.
Re:Isn't it great (Score:3, Interesting)
This line from the original article makes me uneasy, however:
Since the causes and remedies of "crazy" machine behavior will eventually lie beyond the understanding of humans, the solution to Douglas Adams's dilemma posed at the beginning of this chapter may well require built-in mechanical psychologists and psychiatrists.
I think this over-estimates the usefulness of much of psychology and under-estimates the ability of humans to understand what they have created, at least on the modern technological scene. It also seems to under-estimate human intelligence while at the same time over-estimating the artifacts created by that intelligence...
I have yet to see humans create something completely new that they cannot understand. In fact, the question has existed for quite some time whether this is even possible: to create something that *cannot* be understood by its creator. A similar long-standing question, whether a thing can fully "understand" itself, would seem important in this discussion.
Re:I kill some systems through 'normal' use (Score:3, Interesting)
Bah, I have a friend like you; just make sure you always have four times as much memory as the next guy. More than likely you're always opening more apps at once than you need and ignoring system resource limitations... Personally, I wouldn't let you near one of my computers! (Well, maybe my wife's. It needs reinstalling anyway.)
It actually *IS* binary (Score:5, Interesting)
The only thing in physics right now that we believe is truly analog is the passage of time, but even then, time isn't really a measurable "thing"; it's a measure of the decay of objects (which is itself quantized). So, in the very small world at least, everything *IS* binary.
Re:I kill some systems through 'normal' use (Score:3, Interesting)
I don't know why you'd want to
Myself, I'm more of a command line junkie... I tend to fit in wherever I can and inconvenience myself before inconveniencing my system. This grows out of the idea that I can generally do things quicker by hand than write a tool to do them. For some reason, my own brain is still easier to program than a computer no matter how much I practice on the computer.
So, I'm always on the lookout for good and useful tools, but I seldom write them myself, unless dire need arises (or I can't squelch the desire!).