Guess what? They aren't.
"Can Androids Feel Pain?" Dr. Irving John Good of Trinity College, Oxford, asked in an essay published a few years ago.
It's a good question, one that year by year seems less rhetorical, less the stuff of fantasy, and more an ethical and social concern.
Inventor and author Ray Kurzweil projects that computers will match the computational functions of the human brain early in the next century, and that soon afterwards humans and computers will merge to become a new species.
Since as early as 1891 (in an article in the Atlantic Monthly), scholars and sci-fi writers have been writing about what many see as the inevitable fusion of men and machines.
Fantasists have also been drawn to aliens and the Space Age, themes still flourishing in epically popular evocations like "Star Trek" and "Star Wars." But if a new species arrives to dominate the earth, it probably won't come from distant galaxies. We're making it in labs and universities and teenagers' bedrooms.
Good believes that humanity's survival depends on building Artificial Intelligence (AI) machines. More intelligent than we are, they'll answer our questions and solve many of our problems.
The great sci-fi novelist and essayist Arthur C. Clarke takes this idea still further in an ultra-brilliant collection, "Greetings, Carbon-Based Bipeds!" just published by St. Martin's Press.
The evolution of UltraIntelligent (UI) machines is imminent, Clarke predicts. Today's kids will witness the evolution of a species that's part machine, part human being, and eventually some combination of the two.
"Perhaps 99 per cent of all the men who have ever lived have known only need; they have been driven by necessity and have not been allowed the luxury of choice," Clarke philosophizes. "In the future, this will no longer be true. It may be the greatest virtue of the UltraIntelligent (UI) machine that it will force us to think about the purpose and meaning of human existence. It will compel us to make some far-reaching and perhaps painful decisions, just as thermonuclear weapons have made us face the realities of war and aggression, after five thousand years of pious jabber."
Clarke imagines AI machines taking over all but the most creative and trivial human work, inserting themselves into the loop between humans, work, creativity and entertainment.
To co-exist with UltraIntelligent (UI) machines and hold our own, Clarke posits, the entire human race, without exception, must reach the literacy level of the average college graduate within the next 50 years.
"This represents what may be called the minimum survival level; only if we reach it will we have a sporting chance of seeing the year 2200," Clarke says.
This also represents something that isn't going to happen. Except for the most technologically advanced countries - those in Scandinavia come to mind - even prosperous industrial societies like those in the United States, Western Europe and parts of Asia haven't begun to make education about new information technologies - or technology itself - universally available to citizens.
In the United States, primitive politicians and journalists citing safety and moral issues argue for less, not more, access to technology. The only presidential candidate to make the Internet a major political issue is Elizabeth Dole, and she argues for more restrictions on youthful access to sexual imagery. This isn't a country trying to get to the minimum survival level Clarke writes about.
If Clarke is right, then for the first time we can begin to imagine a future in which the human race is no longer the planet's dominant species.
As he was thousands of years ago, man will again become a fairly rare animal, probably a nomadic one. Towns may still exist in places of unusual beauty or historic importance, but most homes will be self-contained and completely mobile, relocatable to any spot within hours. The continents will have reverted to wilderness; a rich variety of life forms will return.
It becomes clearer daily that we aren't going to be turned into alien pod people or probably even obliterated by the dread weapons we've been building. We are likely instead to simply become dumber, less durable, and less efficient than the computer-based machines we're creating.
A more concrete and hard-headed look at this evolution appears in Steven Levy's "Artificial Life: A Report From the Frontier Where Computers Meet Biology," now in paperback from Vintage. Levy opens his book describing creatures that cruise silently, seeing, reproducing, dying, even cannibalizing one another for nourishment. The name of the ecosystem he describes is PolyWorld, located not in some jungle or forest but in the chips and disk drives of a Silicon Graphics Iris workstation.
Levy calls this new species "a-life" (AL), and he argues that we're fast approaching the point where a-life will surpass our ability to control and shape it. As far back as 1980, he reports, the members of the NASA Self-Replicating Systems (SRS) unit confronted the possibility that artificial life would drive natural life out of existence.
Writes Levy: "The almost innate skepticism about whether it could happen at all, combined with the vague feeling that the entire enterprise has a whiff of the crackpot to it, assures that the alarm over what those scientists [making a-life] are doing will be minimal. The field of artificial life will therefore be policed only by itself, a freedom that could conceivably continue until the artificial-life community ventures beyond the point where the knowledge can be stuffed back into its box. By then it may be too late to deal with the problem by simply turning off the computers."
And what, exactly, are the problems? Will computers become conscious? Will they replicate our personalities and souls? Will they seek to push us and our inadequate and inferior ways aside? Will there be room enough for Us and Them? Will all this God-playing wreak havoc with the nature of human existence, as Mary Shelley warned a couple of hundred years ago?
Scientists, computing and otherwise, are hopelessly divided about the urgency of confronting the implications of a-life. Most don't think UI machines pose great danger to the human race, as long as we can turn them off when we want to.
"But can we?" scientist Norbert Wiener asks in Levy's book. "To turn a machine off effectively, we must be in possession of information as to whether the danger point has come. The mere fact that we have made the machine does not guarantee that we shall have the proper information to do this."
Leaders of the artificial life movement are well aware of questions like this. But society at large has paid no attention whatever to the staggering ethical and other issues surrounding the science of artificial life. For most Americans, technology - as presented by a shallow political and media structure - is IPOs and start-ups, software and games, e-auctioning and e-trading, pornography or brain-damaging Net games. But AI threatens to alter human life more than all of them combined.
As much as or more than any other social aspect of computing and science, AI, UI and AL suggest a monumental social and cultural story, however ignored it is at the moment. They won't be much considered until human beings discover a new life form imminently threatening to dominate the planet, or at least to carve out its own space and behavior.
Pop culture, as usual, does a better job of raising these questions than journalism. Clarke's own "2001: A Space Odyssey" took a more malevolent view of computing's ultimate intentions than his non-fiction writing. And the looming conflict between humans and the AI machines they have made was at the heart of the evocative movie "The Matrix," which depicts a cataclysmic battle for survival between the human and mechanical species of the future. In fact, "The Matrix" asks the very question posed by Levy's scientists: will humans be able to turn the things off once they make them?
As the Space Age fizzles and the Digital Age takes shape, the sci-fi futurists and novelists are abandoning the alien invasion scenarios of the last half-century and turning their dark visions towards the evolution of the spiritual machines Kurzweil and others have been writing about.
The evolution of AI-life makes it even clearer why the great sci-fi writers - Clarke, Verne, Asimov and Bradbury - have always had such a hold on the imaginations of bright people. They weren't imagining the future so much as they were describing it.