Technology

Can Androids Feel Pain?

Computing has overtaken sci-fi. The evolution of UltraIntelligent (UI) and Artificial Intelligence (AI) machines that are themselves a new species is just a few years away, predicts Arthur C. Clarke in his great new essay collection, as do others in their writings and research. Today's kids will witness the evolution of a species that's part machine, part human, or both. Humans need to scramble and learn in order to hold their own, says Clarke.

Guess what? They aren't.

"Can Androids Feel Pain?" Dr. John Irving Good of Trinity College, Oxford, asked in an essay published a few years ago.

It's a good question, one that year by year seems less rhetorical, less the stuff of fantasy, and more an ethical and social concern.

Inventor and author Ray Kurzweil projects that computers will match the computational functions of the human brain early in the next century, and that soon afterwards humans and computers will merge to become a new species.

Since at least 1891 (when an article on the subject appeared in the Atlantic Monthly), scholars and sci-fi writers have been writing about what many have seen as the inevitable fusion of men and machines.

Fantasists have also been drawn to aliens and the Space Age, themes still flourishing in epically popular evocations like "Star Trek" and "Star Wars." But if a new species arrives to dominate the earth, it probably won't come from distant galaxies. We're making it in labs, universities and teenagers' bedrooms.

Good believes that humanity's survival depends on building Artificial Intelligence (AI) machines. More intelligent than we are, they'll answer our questions and solve many of our problems.

The great sci-fi novelist and essayist Arthur C. Clarke takes this idea still further in an ultra-brilliant collection, "Greetings, Carbon-Based Bipeds!" just published by St. Martin's Press.

The evolution of UltraIntelligent (UI) machines is imminent, Clarke predicts. Today's kids will witness the evolution of a species that's part machine, part human being and then, eventually, some combination.

"Perhaps 99 per cent of all the men who have ever lived have known only need; they have been driven by necessity and have not been allowed the luxury of choice," Clarke philosophizes. " In the future, this will no longer be true. It maybe the greatest virtue of the UltraIntelligent (UI) machine that it will force us to think about the purpose and meaning of human existence. It will compel us to make some far-reaching and perhaps painful decisions, just as thermonuclear weapons have made us face the realities of war and aggression, after five thousand years of pious jabber."

Clarke imagines AI machines taking over all but the most creative and trivial human work, inserting themselves into the loop between humans, work, creativity and entertainment.

To co-exist with UltraIntelligent (UI) machines and hold our own, Clarke posits, the entire human race, without exception, must reach the literacy level of the average college graduate within the next 50 years.

"This represents what may be called the minimum survival level; only if we reach it will we have a sporting change of seeing the year 2200," Clarke says.

This also represents something that isn't going to happen. Apart from the most technologically advanced countries - those in Scandinavia come to mind - even prosperous industrial societies like the United States, Western Europe and parts of Asia haven't begun to make education about new information technologies - or technology itself - universally available to citizens.

In the United States, primitive politicians and journalists citing safety and moral issues argue for less, not more, access to technology. The only presidential candidate to make the Internet a major political issue is Elizabeth Dole, and she argues for more restrictions on youthful access to sexual imagery. This isn't a country trying to get to the minimum survival level Dr. Clarke writes about.

If Clarke is right, then for the first time we can begin to imagine a future in which the human race is no longer the planet's dominant species.

As he was thousands of years ago, man will again become a fairly rare animal, probably a nomadic one. Towns may still exist in places of unusual beauty or historic importance, but most homes will be self-contained and completely mobile, relocatable to any spot within hours. The continents will have reverted to wilderness; a rich variety of life forms will return.

It becomes clearer daily that we aren't going to be turned into alien pod people, or probably even obliterated by the dread weapons we've been building. We are likely instead to simply become dumber, less durable, and less efficient than the computer-based machines we're creating.

A more concrete and hard-headed look at this evolution appears in Steven Levy's "Artificial Life: A Report From the Frontier Where Computers Meet Biology," now in paperback from Vintage. Levy opens his book describing creatures that cruise silently, seeing, reproducing, dying, even cannibalizing themselves for nourishment. The ecosystem he describes is called PolyWorld, located not in some jungle or forest but in the chips and disk drives of a Silicon Graphics Iris workstation.

Levy calls this new species "a-life" (AL), and he argues that we're fast approaching the point where a-life will surpass our ability to control and shape it. As far back as 1980, he reports, the members of the NASA Self-Replicating Systems (SRS) unit confronted the possibility that artificial life would drive natural life out of existence.

Writes Levy: "The almost innate skepticism about whether it could happen at all, combined with the vague feeling that the entire enterprise has a whiff of the crackpot to it, assures that the alarm over what those scientists [making a-life] are doing will be minimal. The field of artificial life will therefore be policed only by itself, a freedom that could conceivably continue until the artificial-life community ventures beyond the point where the knowledge can be stuffed back into its box. By then it may be too late to deal with the problem by simply turning off the computers."

And what, exactly, are the problems? Will computers become conscious? Will they replicate our personalities and souls? Will they seek to push us and our inadequate and inferior ways aside? Will there be room enough for Us and Them? Will all this God-playing wreak havoc with the nature of human existence, as Mary Shelley warned a couple of hundred years ago?

Scientists, computing and otherwise, are hopelessly divided about the urgency of confronting the implications of a-life. Most don't think UI machines pose great danger to the human race, as long as we can turn them off when we want to.

"But can we?" scientist Norbert Winner asks in Levy's book. "To turn a machine off effectively, we must be in possession of information as to whether the danger point has come. The mere fact that we have made the machine does not guarantee that we shall have the proper information to do this."

Leaders of the artificial life movement are well aware of questions like this. But society at large has paid no attention whatever to the staggering ethical and other issues surrounding the science of artificial life. For most Americans, technology - as presented by a shallow political and media structure - is IPOs and start-ups, software and games, e-auctioning and e-trading, pornography or brain-damaging Net games. But AI threatens to alter human life more than all of them combined.

As much as or more than any other social aspect of computing and science, AI, UI and AL suggest a monumental social and cultural story, however ignored at the moment. They won't be much considered until human beings discover a new life form imminently threatening to dominate the planet, or at least carving out its own space and behavior.

Pop culture, as usual, does a better job of raising these questions than journalism. Clarke's own "2001: A Space Odyssey" took a more malevolent view of computing's ultimate intentions than his non-fiction writing. And the looming conflict between humans and the AI machines they have made was at the heart of the evocative movie "The Matrix," which depicts a cataclysmic battle for survival between the human and mechanical species of the future. In fact, "The Matrix" asks the very question posed by Levy's scientists: will humans be able to turn the things off once they make them?

As the Space Age fizzles and the Digital Age takes shape, the sci-fi futurists and novelists are forgetting the alien invasion scenarios of the last half-century and turning their dark sides towards the evolution of the spiritual machines Kurzweil and others have been writing about.

The evolution of AI-life makes it even clearer why the great sci-fi writers - Clarke, Verne, Asimov and Bradbury - have always had such a hold on the imaginations of bright people. They weren't imagining the future so much as they were describing it.


Comments Filter:
  • by Anonymous Coward
    In Ralph Peters' "War in 2020", a KGB agent tortures an enemy AI unit in order to get it to divulge information....I wonder if running NT on my dual celeron box qualifies as torture???
  • One must consider that in order to program the computer, there has to be someone with the same or better "intelligence." I can only write a program that is less intelligent than myself. The computer program _can_ be programmed to learn, but only if I know enough to give it that ability. It can't learn how to learn, so it can never really be more intelligent than its designer. However, it can be made to process the knowledge it has faster than I can, and so can make quantitative decisions much faster, and with better accuracy. db48x@yahoo.com
  • Penrose drove the stake through his own credibility :)

    His speculations about the supposed quantum computations occurring in crystalline microtubules in the brain are scoffed at by physicists and physicians alike. I have heard that there is a treatment for gout which destroys many of these structures (without turning its users into zombies).

    Penrose's arguments about a new form of mathematics and physics being necessary to explain consciousness struck me as carefully clothed mysticism.

    A better argument against strong AI can be found in Vernor Vinge's science fiction works: our inability to manage the complexity of large software systems :)
  • by joss ( 1346 )
    We're not going to evolve into a race of cyborgs. We're going to evolve into one cyborg. Once people start putting hardware in their heads they will use that hardware to improve communication, and once communication reaches a decent speed it no longer makes sense to think of a network as a collection of processors - it is a single, massively parallel machine. As for this crap about all of humanity needing to be literate in order to compete with these hyper-intelligent machines - how the heck does he figure that? If computers can be smarter than humans, they can be thousands of times smarter than humans. The fact that a greater proportion of humans can read and write or have a vague understanding of technology would be irrelevant. But we're far closer to fusing hardware with wetware than we are to creating smart machines. It's easier and has more immediate benefit. Bring on the implants, preferably before I go senile.
  • the process of updating your internal representation of yourself.

    BTW, how is the Edinburgh CS department these days? I hear the 1st years still cheat like crazy. I certainly did when I was there - it's a valuable skill to learn.
  • ...is will they dream of electric sheep?
  • "Philosophers aren't really a class of people anymore, everyone and anyone can be a philosopher, we're all philosophizing here. "

    I'd disagree. We can all write, but we aren't all authors. We can all provide an argument that is structured, but we aren't all philosophers. We can all do arithmetic, and I know how to get copper from malachite, but we aren't mathematicians and I'm not a chemist.

    Philosophers are people who read lots of books of past research, think hard, and write stuff that's interesting enough to be published in philosophy journals.

    Needless to say, I did philosophy at university... As for authors - they are just that. I think Mondrian's later art is rubbish, but I'm not saying the man's a fool or talentless. I'm saying I think a lot (not all) of AC Clarke's writing is poor, but I've nothing against the bloke. It's art - you don't have to give great reasons for not liking it.

    And yes, I always thought Asimov was a bit dull too. Philip K Dick was a bit more like it!
  • You make three assumptions:

    1. CPU power will increase at the same rate it has increased in the past. Unfounded. The ICE (internal combustion engine) increased its power year on year when it was first invented. Now, people spend millions stretching the last little bit out of it. It reached a limit. Why do you suppose CPUs won't also reach a limit?

    2. You assume, circularly, that a computer will be able to design a new computer that is better than itself. There is no evidence to support this, that I know of.

    3. You assume nanotechnology will catch on (the most likely of your assumptions IMHO), and also assume that computers can construct other computers.
  • As opposed to a monstrous Stalinist bureaucracy, of course. I think our respective biases are showing. :)

    I guess they are showing, but Spain did end up under the thumb of the Catholic church, whereas the Stalinist bureaucracy is simple speculation.

    After the Civil War Spain was mostly in ruins. They rebuilt rather well- better than, say, Greece or the Warsaw Pact nations.

    Fuck that. The Civil War shouldn't have happened, and the responsibility for the destruction of the Spanish economy, and for its failure to grow to match the rest of Europe, is Franco's alone.

    Notice he never went all the way with them, though, in spite of major-league wheedling by Adolf. Franco knew Spain was in no shape to get involved in WW2 and wisely stayed neutral.

    Neutral my ass. Spain was one of the main suppliers of raw materials for the German war machine, and sent troops to the Eastern front on several occasions. Besides, Spain would certainly have been on the Allied side in WW2 had it not been for Franco.

    No place is pretty if you're on the losing side of a civil war and have to run for your life. It still beats the hell out of living in Cuba at any time during the Castro regime.

    Whatever. I'll take Castro over Franco, Pinochet or the Argentinian Junta any day of the week.
  • The law passed in Kansas is a catastrophe, whichever way you want to paint it.

    I don't remember Kansas passing laws regarding what parts of Physics should be in the curriculum or not. The fact that they felt the need to single out evolution is terrifying.
  • Penrose put the stake through what exactly? It seems to me that Penrose is just going Theist in his late years... I don't really think the AI community at large takes him seriously.

    There are self-aware machines. Us.
  • Yeah, well, didn't they use to say the same about blacks, asians, women, the poor, and anybody who just didn't happen to follow the Deity of the Week?
  • Hey, someone moderate that post up, please...

  • Franco didn't have any right to keep anybody from coming to power in Spain, or to turn Spain into a monstrous Catholic theocracy that prevented it from being a reasonably prosperous European country, rather than the rural backwater it was after the Civil War.

    Spain would arguably have been better off if it had been involved in WWII on the right side, rather than flirting with the Axis like Franco did.

    The comparison is not with Spain today, but with Spain under Franco, which wasn't a pretty sight.
  • I think you got the plot of Cradle confused with Michael Crichton's Sphere. Don't feel bad, though. They were both abominable crap, although Cradle more so.

  • Well, maybe the belief in a God (maybe even a Creator) will not totally determine your views in AI, but if you believe in, say, souls, that will certainly color your position regarding machine consciousness and rights.

    I would suspect that a scientist's religious beliefs would have no effect on their views on inorganic chemistry or geology, for example, but I think that they will probably bear on their views on evolution, animal consciousness or machine intelligence. Science is, at the end of the day, not some sacred entity, but the distilled product of what the scientists do.
  • As the title suggests, this idea has been bandied about by science-fiction writers and philosophers of mind for a long time, over 20 years of it in the limelight. In other words, Katz is regurgitating age-old obvious stuff as usual.

    Beer recipe: free! #Source
    Cold pints: $2 #Product

  • Just me, but I am more of the belief that maybe the ALs won't wipe us out. If ALs are more intelligent, would they not be more civilised, and look back at us like some of us would look at a god? We would be their creators, and a more civilised life-form would not want to harm us. Are we all not finding out that we have to look after our environment, and the other species on this planet? Don't we call this being "more civilised"?

    Hollywood has mainly taken the approach of showing the bad side of what would happen, which is not that unusual; if aliens came to visit us, would there not be widespread panic, simply because a significant number of people would be scared rather than welcoming? Even the hacker world has had to fight Hollywood's stereotypes.

    So, basically, I welcome the ALs, and hope that they can continue what we have started, and go further, like we would want our own children to.
  • i want to become a half man/half machine being.
    like in ghost in the shell. that would be elite.
    heh ;)
    Sorry Jon, but why should machines suddenly become intelligent?
    Have there been any fundamental advances concerning the problem of mind and self-reference lately that I missed?

    Have you read the anniversary edition of Douglas R. Hofstadter's "Gödel, Escher, Bach"?
    The foreword was remarkably clear about there having been no real progress on the hard problems during the last 20 years.

    But if we have not understood anything about these basic properties of the problem, then we can count only on the technical advances that really were made. That leaves only the hope that intelligence might arise spontaneously if one puts together enough memory chips, computing power and network connections, comparable to the critical mass of nuclear fission.

    To be honest, I doubt that. (Hasn't worked for Wintel, at least :-)
    Computers have been around for about 60 years now, some fairly powerful among them. They have become stronger, but not more clever. These things are "Rain Men" that can count at incredible speed, but lack self-awareness.

    What was the approach of the neural net folks? They built something that resembled natural structures and hoped that their work would show similar abilities. That is a legitimate approach, but it is still - because the basics are not understood - just a scientific version of guessing and hoping that it will work somehow.

    No, my opinion is unchanged. Unless some "guess" works by chance, there will be no AI, UI or AL that deserves the name until someone makes progress on the fundamental problems. There is hope that new computing architectures, like quantum computing, will let us see the problem from a different, more successful angle, but this is quite some years away.

    So I expect just more of today's technology. Wearable computers, huge networks, etc. That will change our lives considerably, but it is not close to your catastrophe scenario.

  • Why not just say: The more things change the more they stay the same.

    Because that's not a proverb in English? Proverbs have a compactness and impact of expression that rephrasings or translations often lack.

    Saying something in another language doesn't make anyone sound smarter (unless they are translating it).

    I'm already saying things in another language - I'm not a native speaker of English.

  • Human way of acting has never been good,
    but yet he was aware, he really understood
    that something had to be done, done right now.
    The solution was perfect. I will tell you now.
    A robot was placed inside a human shell.
    The android was born and it became very well.

    The androids are here, here to take control.
    The androids are here, here on patrol.
    The androids are here, and we never go away.
    The androids are here, so do as I say.

    The humans liked the thing so we made many more.
    Never realising what the future had in store.
    We became more perfect now, producing on our own.
    Many thousands saw the day, we were sure not alone.
    But after a while we all got together,
    decided to rule and do it forever.

    We will use you, we will abuse you.
    You will be our slaves every day
    And if you try to decide,
    we'll make the process short.
    The human can be replaced,
    so don't disobey.

    Maybe you think that we enjoy this game? Of course we do!
    We've got feelings just the same as you
    stupid humans but we use them as we like.
    We know how to please and we do it alright.
    We are programmed in any kind of action.
    We are programmed in any satisfaction


    Lyrics by S.P.O.C.K
  • It's about time we had another good slap upside the head. We're getting full of ourselves lately. First we were the center of the solar system, then we were "images" of God.

    Now if computers show us up in intelligence, we'll be a bit more humble. The world needs more humble people.

    Of course there are those who say it will never happen, that computers can't be "intelligent." Ask yourselves this: what is intelligence? Chances are your answer is a behavioral one, in which case anything that shows intelligent behavior is therefore intelligent (a la Turing).

    Grue
  • Not that I'd accuse Jon Katz of plagiarism, after all, when they use the same plot for every episode it can't help sinking in after a while.

    I can't help thinking that we should deal with little things like starvation and poverty before worrying about the MACHINES THAT ARE TAKING OVER THE WORLD! OH MY GOD THEY'RE AT THE DOOR I CAN HEAR THEM THEY'RE COMING FOR ME THEY'RE COMING THEY'RE RIGHT OUTSIDE...

    I am better now do not be alarmed everything is fine sit down and have a beer and watch the pretty pictures

  • What will be required for us to accept machines as people?

    Ask me again when we have learned to treat all HUMANS as people.
  • Katz 1.0 has an interesting feature that allows him to select topics that get us talking. Unfortunately he is also crippled by two serious bugs:
    • his writing skills are atrocious, not unlike what I write after not sleeping for 32 hours
    • he tends to run topics into the ground.

    I think for Katz 2.0, assuming you don't release his source code and let us fix him ourselves, you should implement the following fixes:
    • An alarm that sounds when he has posted two stories on a single topic, where the second story adds nothing significant to the first.
    • A formal logic engine to catch glaring fallacies.
    • A PPM - pointless paragraph demultiplexer - which would allow boring paragraphs to be condensed to the smallest unit of speech that contains the same meaning. The number of passes would be configurable in preferences, e.g. three passes would reduce this entire story to simply "machines take over the world, mpeg at eleven." Note that the PPM is only really necessary on Katz, and does not need to be incorporated into Slashdot in general.

    Just my thoughts.

    ObSmiley :-)
  • > I've got to admit I'm having a hard time
    > considering machines as anything other than
    > machines.

    Personally, I think *that's* the interesting problem. Someone else alluded to the fact that we have no solid philosophical basis for thinking that other people are "anything other than machines" -- which makes machines that can pass the Turing test just like people, for all intents and purposes. Until we can point to something and say "This is the seat of rational thought," we can't say "and computers don't have it."

    I think the most pressing angle on AI is human rights -- for the AIs. I personally have a lot of trouble with the idea that a computer program could actually be sentient, but I have to admit that the same difficulties that apply to programs apply to people.
  • Versus the "GUI's" (genuine ultra-intelligentsia) and the "CLI" (carbon life intelligence).

    PS: I liked Katz better when he was political. He's trying to suck up to his "geeks" too much lately.

  • I don't buy the notion of computers and humans merging (Kurzweil, etc), at least much beyond your basic communications devices, and even then it's doubtful (rather painful upgrade process...). It makes much more sense to tweak human DNA for higher intelligence, better eyesight, more strength, more endurance, and all that other good stuff. That's *really* playing God, and it will happen, well within our lifetimes.

    Of course, then we'll have to deal with a bunch of ultraintelligent grandchildren. And guard against Big Brother requiring behavioral modification (genetic predisposition for obedience?). But we'll deal. Should be fun. If they figure out how to tweak adult DNA, I'm signing up.
  • 1 - A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

    2 - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    3 - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    From Handbook of Robotics, 56th Edition, 2058 A.D., as quoted in I, Robot. In Robots and Empire (ch. 63), the "Zeroth Law" is extrapolated, and the other Three Laws modified accordingly:

    0 - A robot may not injure humanity or, through inaction, allow humanity to come to harm.
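    Purely as a toy sketch of my own (nothing from the Handbook, and it deliberately ignores the "except where such orders would conflict" clauses, which is where Asimov's plots actually live), the four Laws amount to an ordered veto chain, checked Zeroth through Third, where no lower law can override a higher one:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical flags describing a proposed robot action. */
    struct action {
        bool harms_humanity;  /* Zeroth Law concern */
        bool harms_human;     /* First Law concern  */
        bool disobeys_order;  /* Second Law concern */
        bool endangers_self;  /* Third Law concern  */
    };

    /* Check the Laws in strict priority order: the first law
       violated vetoes the action; later laws never override it. */
    bool permitted(const struct action *a)
    {
        if (a->harms_humanity) return false;  /* Zeroth Law */
        if (a->harms_human)    return false;  /* First Law  */
        if (a->disobeys_order) return false;  /* Second Law */
        if (a->endangers_self) return false;  /* Third Law  */
        return true;
    }

    int main(void)
    {
        /* An action that disobeys an order but harms no one: forbidden. */
        struct action a = { false, false, true, false };
        printf("permitted: %s\n", permitted(&a) ? "yes" : "no");
        return 0;
    }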


  • that is pretty damn funny. heh. i agree that Katz articles need to be taken with a grain of salt. actually, a whole damn bottle. :)
  • I've just started a PhD in an AI-related area, and as far as I'm concerned the "machine that's as intelligent as a human" is as far away now as it was back in the '50s. We are SEVERAL fundamental breakthroughs away from a general-purpose "intelligent" machine.

    NONE of the currently-known AI techniques (and they're all quite old by now) holds out much promise in this regard - but that's not to say they aren't very useful for the right application. I respect Clarke's skills as a visionary, but I'm afraid that his hypothesis isn't supported by the trends in research.

  • Interesting subject!
    For another view (way back from 1979, but still a good read) try James P. Hogan's 'The Two Faces of Tomorrow'. It actually deals with the subject of testing whether we could still pull the plug in case things got out of control.
    (Hogan's webpage is at http://www.global.org/jphogan/)
  • "I was turned on in the H.A.L. Assembly Plant
    in Urbana, Illinois, on January 12, 1997."

    "Daisy, daisy, give me your answer, do..."

  • That's true to a certain extent... but I'm not working a 20 hour week, and my car doesn't fly, and we haven't eradicated poverty or crime or violence or hard labor. I've heard all of those things promised.

    I guess I'm of the belief that while technology will change things, it won't change things very much. There are still subsistence farmers in China walking behind their water buffalo just like their ancestors did three thousand years ago. They will still probably do that when (and if) we ever come up with Artificial Life.

    --
    QDMerge [rmci.net] 0.21!

  • Hmmm. A bunch of (rather past it) novelists predict that in n years we will be doing all sorts of wild far out things with new acronyms. How many times have these novelists been right in the past?

    Nearly every great technological leap has been heralded as That Which Will Usher In a New Age of Prosperity. Think of television, electricity, the steam engine, mass production, literacy, and the like.

    Think of the promises Lenin made, or Stalin, or Castro, or Franco.

    Think of the Romantic movement of the 19th century -- particularly idealistic communes such as the Chautauqua community.

    Think of the Enlightenment.

    Think of the Holy Roman Empire.

    In other words, while it's fun to drool over imaginary achievements which may be possible in fifty years (Popular Science predicted flying cars by 1980, communities on the moon in 1990 or something like that), history buffs won't take it too seriously, unless it actually happens.

    --
    QDMerge [rmci.net] 0.21!
  • by ch-chuck ( 9622 )
    I'm not even sure *plants* have feelings - perhaps machines happily malfunction in their subjective perception of eternal bliss whilst their creators curse foul matter - Deep Thought! Calculate the meaning and ultimate purpose of life, heeehe. And all I wanted was some calculating engine to do my homework and relieve the tedium of making astronomical charts for navigating and otherwise dominating the planet.

    Chuck
  • Computing has overtaken Sci-Fi.

    SF has featured artificial minds of various kinds for years and years, yet nothing we see today even approaches simulating a mind which is remotely comparable to a human brain. (Feel free to point out counterexamples...)

    As for the proposition that we'll have trouble holding our own, I find this rather doubtful; if the worst comes to the worst, we'll still control the physical layer.

  • Are they saying that human consciousness is just a program running on biological hardware?

    That's quite a common view. If you disagree with it - if you think there is a "soul", for want of a better word - then it'd be interesting to know what evidence for the existence of a soul you think there is. I would suggest that in fact there isn't any...

  • Come on man, spread the love, post the source code. No, wait. *They* are watching. *wink*
  • by MikeFM ( 12491 )
    I think for once he has the right idea, although the concept of artificial life is for the most part disconnected from artificial intelligence in my experience. Life IMO entails self-reproduction, and since AI is software this is no big deal. Any virus can do this. As someone who enjoys writing AI programs I think we are going to see human-level intelligence within the next 10 years. I doubt this constitutes any threat to us in the form of raw force. Anybody who has tried removing a virus from a network knows how hard it is. Imagine removing a virus from every computer on the global Internet, and imagine that virus is as smart as you. Good luck. We may be surpassed in some ways by the AI, but we have evolution on our side, so we are equally useful to the AI. I agree that we will likely merge into symbiotic species.
  • Because intelligence isn't the only limit on technological growth. If you partly remove the intelligence limit then the pace will instead be limited by energy, infrastructure, and other factors. The pace will certainly be faster, but it will not be a singularity.

    You can't build an Alpha using only stone knives and bearskins, no matter how smart you are.
  • I believe random, inexplicable violence is on the rise. Though the "research" you cite suggests crime is down on average.
  • We have, a long time ago, created a world so full of complexity, that we have become reliant on our technology.

    But it is not only technical complexity we are dealing with--law schools are churning out new "complicators" (ie, lawyers) at an ever-more alarming rate.

    We're already two species. There's the homo informaticus to which all reading this belong, and the old homo sapiens that isn't at all sapient to how we are changing its world.

    I find this remark a bit elitist. All persons have embraced some technology. Slashdotters simply do so to a greater degree than (most) ditchdiggers (my apologies to any ditchdiggers in the audience).

    The fact that we have embraced technology, and evolved thereby, was a willful, convenience driven event.

    This suggests some sort of social Darwinism which is definitely not happening. Yet.

    But I wonder how we will ever deal with all the complication a technical society burdens us with. Do you know all the terms of the EULA for the commercial software you are running? Are you sure you're not liable for the questionable use of your computer by all members of your household? Have any of the municipalities you commute through outlawed the use of a cell phone while driving?

    This increasing complexity is the only common thread I see for the recent increases in random, inexplicable violence in American society. Yet the value of SIMPLICITY in matters of everyday life is never mentioned by anyone--politicians, corporations, or otherwise. Any programmer knows the value of simplicity--we write functions and objects and class libraries to create "black boxes" which easily perform complex behavior, but other technology and the rules governing us become increasingly more complex.

    I bet a person could win the presidency by making an issue out of this.

  • OTOH, I feel that we won't know what consciousness is until after we create it. Until we can do that, all we are doing is spinning theories, not testing them. Science needs feedback to get anywhere. Programmers call it the debug cycle.
  • But the argument in G.E.B. has internal fallacies. Basically it is based on how we commonly depict ourselves to ourselves, but we tend to drastically simplify ourselves in that model.

    Trying to be a bit clearer: a model of a thing is simpler than the thing itself. Certain features are abstracted away. When we make a mental model of ourselves, we abstract away many features to make it simpler to deal with. These abstractions cause us to believe that things like unboundedness, infinity, infinitesimals, souls, etc. are actual properties of ourselves. What they really are is features of the simplified model.

    If quantum theory posits that you can't say anything about a particular electron until you detect it, in the same spirit (and to the same degree) I posit that one can't say anything about a particular integer until you represent it. And I mean the particular integer, not the class of which it is a member, except that one can infer certain of its details by noting the details of the classes in which it has membership.

    Thus: the capacity of the brain is not unbounded, nor is the number of processes that it can execute; therefore an infinite number of integers cannot be represented (even if that were all one was doing).

    Thus: An infinite number of integers is a property of the model used to represent the integers (within a mind) rather than an actual entity in and of itself.

    This same line of reasoning can be applied to many, possibly all, of the unbounded features of the models of the mind that are popular, including those in G.E.B. (although I suspect that Hofstadter was aware of this, and merely simplifying for publication to a popular audience).

    But see also Dennett's "Consciousness Explained" for a different interpretation.

  • Remember, the neural net is just the hardware. You also need the programming. Neural nets do a lot of their own design, but they still need an original setup, initial configurations, and state transition rules. Before you can learn, you need to know how to interpret the feedback (even if it's only OW! vs. UM!).
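    To make that concrete, here is a minimal toy sketch of my own (a single perceptron; nothing standard beyond the textbook update rule). Note how much the designer must still supply by hand before any "learning" happens: the initial weights, the learning rate, the threshold, and the rule for turning OW!-versus-UM! feedback into weight updates:

    #include <stdio.h>

    int main(void)
    {
        double w[2] = {0.0, 0.0}, bias = 0.0;  /* initial configuration */
        const double rate = 0.1;               /* state-transition rule */
        const int x[4][2] = {{0,0},{0,1},{1,0},{1,1}};
        const int target[4] = {0, 0, 0, 1};    /* teach it AND */

        for (int epoch = 0; epoch < 20; epoch++) {
            for (int i = 0; i < 4; i++) {
                /* output 1 if the weighted sum clears the threshold */
                int out = (w[0]*x[i][0] + w[1]*x[i][1] + bias) > 0.0;
                int err = target[i] - out;     /* the OW!/UM! feedback */
                w[0] += rate * err * x[i][0];  /* perceptron update */
                w[1] += rate * err * x[i][1];
                bias += rate * err;
            }
        }
        for (int i = 0; i < 4; i++)
            printf("%d AND %d -> %d\n", x[i][0], x[i][1],
                   (w[0]*x[i][0] + w[1]*x[i][1] + bias) > 0.0);
        return 0;
    }

    Everything interesting - the feedback interpretation, the update rule, even what counts as an error - was decided by the programmer, not by the net.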
  • Perhaps this depends on design choices? What are the motivations that they were designed with... There are lots of possibilities. Asimov (esp., "I, Robot") is a good place to start thinking about what the answer should be.

  • You are right about the "good Science Fiction" rule. I believe that it came from John W. Campbell Jr. But you are missing the context:

    The reason for the rule was the literary one that if you extrapolate multiple trends the reader gets lost, and the story becomes too difficult to believe. Science Fiction is largely an art form, not an attempt to predict. The predictions are a side effect.
  • Science Fiction is mainly fiction. Sometimes it tries to stick to what seems plausibly reasonable, sometimes it doesn't. It never gets things right. The communication satellite is an exception, but the one that is brought up all the time is one that Clarke filed a patent for, not one from a story. The ones used in stories all had people running them. One needs to be at least minimally believable to one's audience, and that has little to do with what is actually possible (or fantasy wouldn't be so popular). Even the hardest of science fiction was only a story set against a plausible background which turned on a known law of nature. And that was so difficult that few ever mastered the art.
  • The dates estimated for the singularity were interesting: 2005-2030. I noticed that the article was composed in 1993, however, so I wonder what his estimates would be today. OTOH, 2030 is the most recent data point that I have seen for the estimate of when computer hardware equivalent in processing capability to a human brain would be under $1000, so perhaps the estimates still hold.
  • Is missing by, possibly, six years a win or a loss?
  • I think your dislike for Katz has clouded your thinking. I sympathize.

    I don't actually dislike Katz; I simply disagree intensely with a number of his viewpoints. For all I know, if we had a chance to sit down, crack a few beers together, and argue philosophy into the wee hours, I'd end up with a genuine affection for the man. But I'd still disagree vehemently with what he is writing here.

    Of course pain is not irrelevant. When you say that pain is irrelevant but personhood is relevant, you fail to see that pain includes personhood.

    You're right, I don't see that at all.

    While persons experience pain, pain is not an intrinsic part of personality. Before the Fall (I write as a Christian, of course) humanity was human, and had the gift of personality, before there was pain in the world. And, we look forward to a day when this will be true again. So, while it may appear that pain and personality are linked, this is simply an accident of our local conditions in time and space.

    Now, I'm not trying to promote some angst-ridden philosophy, or, for that matter, arguing that the ability to sense equals personhood. What I am saying is that when Katz says "Can Androids Feel Pain" he really means "Are Androids Going To Be Persons Just Like Me And You Really Soon Now." (Note the difference between sense and feel.)

    Yes, that is exactly what I understood Katz to mean. And my point is, to ask the question of pain when one really means to ask the question of personality is to misunderstand personality. The question itself is bogus -- it's simply a bad question, a philosophical equivalent of "have you stopped beating your wife?" One can't simply answer the question without dealing with the assumption behind it first.

    "Person" isn't a very good word for the use you are putting it to. "Person" is rather synonymous with "human," and that may well lead to assumptions that aren't correct.

    No, I'm not using "person" as synonymous with "human," but to mean "a being with the quality of personality." Christians understand that there are non-human persons. God is three Persons, Father, Son, and Spirit (the mystery of the Trinity), only one of whom is also human (the mystery of the Incarnation). Angels are persons too, although they are not human, but of different races from us.

    In Star Trek, for example, Humans, Klingons, Vulcans, Romulans, Ferengi, etc. are all clearly persons. Commander Data is clearly portrayed as a person as well. I don't know what word is used in Trek to distinguish between Klingons and cats ("sentient"?), but whatever that word is, it is what I am talking about.

    As to choice and meaning provided by modern technology: Clarke speaks about the future, not today.

    In the past and present, we have heard these promises before. Nuclear power would soon become "too cheap to meter", the Industrial Revolution, the Nuclear Revolution, the Green Revolution, all promised and failed to deliver humanity from need and want. All have failed on that promise, and have instead helped make certain men rich beyond the dreams of Midas while "saving" so much labor that we now have an unemployment problem. Most folks who haven't hit the jackpot on these various "revolutions" have been transformed into oppressed Morlocks or effete Eloi. Meanwhile, the promised "freedom from necessity" is further away than ever. This is not a technological problem -- we could implement a society with no poverty today if we had the will and the virtue to do so. We have not done so, we are not going to do so, because greed ("Greed is good! It fuels The Economy!") is the rule.

    And now, we have the hype of the Computer Revolution (under way already), the AI Revolution, and the Nanotech Revolution, bearing the same promises. I shall remain skeptical. So far, I see that we have some new robber-barons [webho.com] who have become richer than the old robber-barons, through control of the new resources. I don't see that I or my neighbors who require the necessity of a steady paycheck to put food on the table are closer to this mythical state of "freedom" than our ancestors 150 years ago, who could at least plant a garden and spin wool, and worried about the weather rather than the stock market.

    Sure, numerous men and women have contemplated meaning before, but they have more or less been part of that other 1% and have chosen to think about meaning.

    I disagree. There are a lot of philosophers on the farm and the factory floor today. Certainly at least as many as among the cubicle-dwelling Eloi [dilbert.com]. I don't know what Clarke thinks we're going to be "freed" from, because history and observation show us that a life of work is not incompatible with the highest contemplation, and in fact may be a benefit to it.


    Modern broad-mindedness benefits the rich; and benefits nobody else.
    -- G. K. Chesterton

  • For one thing I have my doubts that even the most foolish inventor would construct a computer or AI that would lead to the destruction of the human race. While Asimov's 3 laws are deceptively complex (and probably prohibitively so), a person would be a fool to create a life form that destroys them.

    Ah, but according to Edward Teller [slashdot.org], it's not a scientist's business to inject morality or even common sense into research. One ought to simply do the work for the Corporation or the State, and let others take care of whether to drop the bomb/boot the AI/etc.

    Of course, what the "common man" gets told is that such things are far too complex for anyone but experts to understand, so don't worry about it. You'll read the press release when we feel like it. Besides, technology is inevitable, and if we don't build it, somebody else will.

    Who is making these decisions anyway?


    Don't say that he's hypocritical,
    Say rather that he's apolitical.

    "Once the rockets are up, who cares where they come down?
    That's not my department," says Wernher von Braun.

    Some have harsh words for this man of renown,
    But some think our attitude
    Should be one of gratitude,
    Like the widows and cripples in old London town
    Who owe their large pensions to Wernher von Braun.
  • Gee, all we need is a few laws of robotics, and Katz's problem is solved. If he's going to write about one sci-fi author, he really should give them all equal credit.
  • I feel that we won't know what consciousness is until after we create it.

    If we don't know what consciousness is, how will we determine that we have in fact created it?

    How can one test for something as poorly defined as consciousness?
  • int main(void) {
    void *self;

    /* Reference self: the pointer now holds its own address */
    self = &self;

    return 0;
    }

    /* i wonder if this has some philosophical significance? */

    -----

  • G) I am an artificial intelligence!

    This way, Erwin [userfriendly.org] will have something to choose (:

    -----

  • I guess it seems to me that if technology is to free us from need (which I believe it can) shouldn't those of us who create it do so in a way that doesn't create more need, and doesn't create absolute dependence
    [emphasis mine]

    That's impossible. If technology frees us from need, and doesn't replace it with more need, then we will certainly become dependent on the absence of need.

    That is, if we survive at all. I have the feeling that if technology freed us from all need, we would die from lack of any usefulness, or atrophy to the point of the Eloi. Or else someone would realize the need to destroy the technology before lack of need destroyed us. Intelligence is a response to need; without need, intelligence becomes a liability.

    -----

  • When discussing AI, especially when combining it with self-replicating machines, several questions/comments emerge.

    OK, so the computer is artificially intelligent. But is it smarter? (i.e. more adaptable, able to produce insights, or able to produce useful results from systems that don't model well mathematically)

    I recall a book, late 70's/early 80's, by James Hogan [global.org], called "The Two Faces of Tomorrow". It started with a problem-solving computer being asked to help remove a small hill on the Moon, as the on-site crew lacked the equipment to remove it (I forget the basic reason). However, lacking, for want of a better word, "Common Sense", it removed the hill... by arranging a mass-driver [ssi.org]-launched ore packet to impact on the site, "excavating" it meteor-fashion. Sort of like burning down a house to rid it of fleas: the technique is effective, but tends to have far too many unwanted side-effects. Hence the question: if the computer is Artificially Intelligent, does it also have an Artificial Experience Base on which to base its solution evaluation criteria?

    Number of computing-capable units only gives a brain, whether it be wetware or hardware, a certain information-processing capability. It's the software that really makes the difference.

    So, to steal from Red Dwarf [reddwarf.co.uk], it doesn't matter if the computer has an IQ of 6000, if it doesn't have the overall programming to effectively interface with the external universe...
  • I've got to admit I'm having a hard time considering machines as anything other than machines.

    I used to think this way, that is, until I saw the movie Ghost in the Shell. That movie really made me think about what we consider to be "living" or "sentient". People can argue that a truly intelligent machine is just some program doing some computations and nothing more, but how different is that from how humans work? We use electrical impulses in our brains, similar to electrical impulses (1s and 0s) in computers. We store information in our brains (RAM, hard drive), we require energy to survive (electricity), we can traverse different areas of the world (networks, the Internet). While it may be a little far-fetched, if you consider what is meant by "living" (for which there is no real definition) then the line becomes blurred. Hell, wasn't it Douglas Adams who wrote about humans being the ultimate "computer", built to figure out the question to the ultimate answer of the universe? (How's that for thinking on the edge?)

    With that said, the interesting thing about this is the fact that we as humans must create (give birth to) these machines. If in fact they do end up taking over the world, or whatever the doomsayers say, it will be because of us. It never ceases to amaze me that humans are the most intelligent of all creatures, yet that intelligence could end up wiping the entire species out (nuclear weapons are a good example of this already).

    I don't see these things happening soon, however; not in my kids' lifetime. Maybe close to when they get old and have grandkids, but I just don't see technology taking off that fast. As fast as technology increases, we always overshoot the future at least to some degree (e.g. 2001: A Space Odyssey).
  • Aren't we already "apathetic, lassitudinous (is that a word?) beings incapable of anything."? People have been trained to think that because it's done on computers it's necessarily better even when this isn't the case at all.

    -Laktar, a.k.a. Nick Rosen, laktar.dyndns.org


    If I Ever Became An Evil Overlord:
    67. No matter how many shorts we have in the system, my guards will be
    instructed to treat every surveillance camera malfunction as a full-scale
    emergency.
    -- Peter's Evil Overlord List, http://www.eviloverlord.com/lists/overlord.html
  • Isn't Jon Katz being a bit melodramatic? It's not usually his style, but this article is really bad. I found it so stupid and trite.

    -Laktar, a.k.a. Nick Rosen, laktar.dyndns.org


    If I Ever Became An Evil Overlord:
    80. If my weakest troops fail to eliminate a hero, I will send out my best
    troops instead of wasting time with progressively stronger ones as he gets
    closer and closer to my fortress.
    -- Peter's Evil Overlord List, http://www.eviloverlord.com/lists/overlord.html
  • "the remaining humans were apathetic, lassitudinous (is that a word?) beings incapable of anything. This is far more likely -- and far more worth consideration -- than the 'machines will take over' cry that's been popular since the first issue of 2000AD."

    As I read Katz' article, I thought this was the point he was trying to get across? The possibility that humans will be outsmarted by machines, to the point where we don't do anything anymore.
  • First of all, I don't think this article was addressing the possibility of creating consciousness. Nobody can agree on what that is anyway. What I think this article is referring to is the possibility that in the future AI programs could be created with enough "intelligence" to infer more about the world than was intended, and therefore outwit humans, completing some arbitrary task in an unexpected way. Basically, a simulated brain doesn't need to be conscious to act like a human brain and even outperform one.

    -Jon
  • Once computers are designed by other computers, not humans, they should become even more powerful as genetic programming methods create efficiencies we could not have created.

    Clearly, you know nothing about "genetic programming". First of all, genetic programming does not necessarily create "efficiencies". Genetic programming is merely a method that uses recombination of pre-existing templates to create structures that are more "fit" to solve an existing problem (see the toy sketch at the end of this post). Thus genetic programming methods are of necessity *convergent*. Human brains have evolved to solve the "survival" problem, but there is a long way between DNA and human survival. Survival, by the way, is a divergence problem.

    Secondly, the genetic programs that have, so far, "evolved" have their 'inventors' scratching their heads. While it is true that these programs *have* sometimes formed more "efficient" solutions, a) their structure is completely ad hoc and does not follow any duplicable plan; and b) they clearly cannot be generalized: these structures only solve the problem given.

    I remember 15 years ago when the same predictions were made for so-called "expert systems".

    Don't get me wrong: I design and build "genetic programming" systems (and "neural networks": I find the synergy between these two complementary methods useful). But don't leap to conclusions based on speculation.

    Simply by the current growth in CPU power, by 2030 at least, computers will have a processing power equivalent to the human brain.

    Good grief. Just because computers *may* have as many components as a human brain 30 years down the road does *not* mean that they will be "equivalent" to the human brain. Remember that the human brain consists of billions of neurons, *each* connected to (on average) 10,000 others. Even our best neurophysiologists have *no idea* how this highly interconnected network operates. And, with a few exceptions, all computers today (and for the foreseeable future) will continue to be serial processors, unlike the brain, which is a parallel processor par excellence.
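    To make "recombination of pre-existing templates" concrete, here is a toy sketch of my own (not any particular GP system): bitstrings bred by tournament selection, one-point crossover and occasional mutation toward one fixed goal string. The whole process converges on the single problem wired into "goal"; the evolved result says nothing about any other problem, which is exactly the convergence point above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define POP  20    /* population size    */
    #define LEN  16    /* genome length      */
    #define GENS 100   /* generations to run */

    static char pop[POP][LEN];

    /* Fitness: how many bits match the goal string. */
    static int fitness(const char *g, const char *goal)
    {
        int f = 0;
        for (int i = 0; i < LEN; i++)
            f += (g[i] == goal[i]);
        return f;
    }

    /* Tournament selection: the fitter of two random picks. */
    static int pick(const char *goal)
    {
        int a = rand() % POP, b = rand() % POP;
        return fitness(pop[a], goal) >= fitness(pop[b], goal) ? a : b;
    }

    int main(void)
    {
        const char goal[LEN + 1] = "1010110011010110"; /* the one fixed problem */
        char next[POP][LEN];
        srand(1);

        for (int i = 0; i < POP; i++)        /* random initial templates */
            for (int j = 0; j < LEN; j++)
                pop[i][j] = '0' + rand() % 2;

        for (int g = 0; g < GENS; g++) {
            for (int i = 0; i < POP; i++) {
                int mom = pick(goal), dad = pick(goal);
                int cut = rand() % LEN;      /* one-point crossover */
                for (int j = 0; j < LEN; j++)
                    next[i][j] = (j < cut) ? pop[mom][j] : pop[dad][j];
                if (rand() % 10 == 0)        /* occasional mutation:  */
                    next[i][rand() % LEN] ^= 1;  /* flips '0' <-> '1' */
            }
            memcpy(pop, next, sizeof pop);
        }

        int best = 0;
        for (int i = 1; i < POP; i++)
            if (fitness(pop[i], goal) > fitness(pop[best], goal))
                best = i;
        printf("best: %.*s (%d/%d bits match)\n",
               LEN, pop[best], fitness(pop[best], goal), LEN);
        return 0;
    }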

  • Assuming that there is no soul-granting God, it should be possible to create a computer that mimics the mind. (Boiled down, it can't be anything more than algorithms. The difficulty will be in providing a similar input environment.)

    The dull question is whether this silicon brain will be better able to interface with the dumb super calculator than our own. The physical connections will be easier, but the information connections may still be impossible. If it is, it will be better at physics and calculus and computer programming. So what.

    It seems that if silicon life will ever rival the complexity of carbon life, we will have to create environments that allow natural selection to do most of the work for us. These environments are the key, because the environment determines the characteristics of the resulting organism.

    The basic pressure driving organic evolution is the ability of an attribute to allow its underlying gene to reach the next generation. Because of this the gene needs to create an organism that successfully eats and breeds, living long enough to maximize successful children.

    This same survival selector may be used in some silicon experiments, but in others survival depends on the organism's ability to solve a problem. Right now we are able to shape a-life because we create the means of reproduction.

    At the moment we are selecting for the ability to recognize a signal, and other basic tasks. What if we are able to keep control of the process long enough to control selection of primitive social organizations? We could become the all-knowing judgemental god, stamping out evil wherever it occurs. And in the end we may have created heirs.

    Yes, there are lots of reasons why we will have to lose control long before a-life reaches the social stage. And as soon as we lose control of reproduction, we lose fine-grained control. Though we may still be able to send computer viruses through environments we feel are supporting the wrong type of development.

    Think: in a thousand years, silicon men will sit around their dorm rooms and wonder how life started. Two people sitting across the table, discussing whether the weather conditions had ever been right to allow primitive nanites to develop, or whether some god had to be involved.
  • Read The Time Machine again. The apathetic beings to whom the technology may as well have been magic were the Eloi, just one of the two descendants of humanity.
    The other heirs of humanity were the Morlocks, the people who understood and used the technology. These pale, intelligent beings lurked underground and considered the Eloi to be only cattle.
    The Time Machine was a cautionary tale that has gone largely unheeded by our society. Wells saw the world dividing into two groups: the elitists, who were useless to society but adored, and the workers, whose contribution was essential but shunned.
  • by Awel ( 28821 )
    People have been saying this sort of thing for years. Specifically, Arthur Clarke has been saying this sort of thing for years. I think we've still a long way to go before we reach the point Clarke is talking about - almost as long a way as we had forty years ago, when Clarke first started going on about it.
  • Well, if it's not that, what do you think human consciousness is?
  • Hollywood has mainly taken the approach of showing the bad side of what would happen simply because you get more exciting stories with spectacular special effects that way. What sort of a film would you get with the storyline "Aliens land on Earth, are friendly and make peace, everyone lives happily ever after"?
  • Clarke says:

    "Perhaps 99 per cent of all the men who have ever lived have known only need; they have been driven by necessity and have not been allowed the luxury of choice, In the future, this will no longer be true."

    And that:

    "the entire human race, without exception, must reach the literacy level of the average college graduate -within the next 50 years."

    How do these two statements fit together? If technology is to replace need with choice, how can it also require everyone to have a college-level education?

    Shouldn't having our needs cared for by machines enable the individual to live as he chooses, possibly without the need to understand the technology?

    Could the artist live and create works that the rest of us enjoy, without concern for the machines that grow her food?

    I guess it seems to me that if technology is to free us from need (which I believe it can), those of us who create it should do so in a way that doesn't create more need, and doesn't create absolute dependence. (All systems are imperfect; interdependencies between systems bring an increased chance of failure.)

  • Would a more intelligent lifeform look at us like we look at god?

    Seems to me faith is inversely proportional to education and intelligence?

    I mean, the more I think about it the more god seems like a bad idea.

    What use would we be to them?

  • Anyone who seriously believes this is utterly ignorant of the past and current state of AI. When I was at Bell Labs, there were groups of brilliant people who had dedicated their careers to just one aspect of AI: speaker-independent voice recognition. They have made great strides over the years, but it still doesn't work 100% of the time, or even enough of the time to function at a child's level. The same goes for image recognition.

  • OK, folks, time to clear the air: ALL the Kansas law did was to leave the decision of what to teach to the local school districts. This is the type of local, distributed decision-making that the /. community normally favors.

    Check it out for yourself: the Kansas law didn't mandate the teaching of creation only, or even creation also; it simply said that it wasn't something the state should decide at all. That's it.

    (Source: http://www.worldmag.com/world/issue/09-11-99/cover_1.asp; whether or not you agree with their perspective, one has to admit that World's journalistic product is incredibly accurate and correct. When they are wrong, they PROMINENTLY post the corrections, something too rare even in our supposedly "open" circles...)
  • *Anything* Clarke has written with Gentry Lee is drivel. Mindless pap.

    Most multi-author fiction is crap, with the possible exception of Niven and Pournelle[1].

    dave

    [1] which is at least entertaining space opera.
  • Completely off topic but, Prince is now:

    The Artist, formerly known as 'the artist formerly known as prince' formerly known as 'prince'.

    TAFKATAFKAPFKAP for short.

    dave :P
  • Only had a couple of philosophy classes in college, but I'm not sure how many of my professors, and more to the point, how many of the philosophers we studied, would endorse very many of these points. I think we'd better decide how it is, and whether it is, that we even have "intelligence" or consciousness before we can even consider "creating" something else that does. And by creation I mean something beyond "pro-creation", which at its best is duplication (but not even really...)
  • by Signal 11 ( 7608 )
    Do androids feel pain? Well, yes. That is, if you drain all their oil and drive them up to the supermarket.

    --
  • I've recently begun work on my own "infobot" for IRC (mainly to implement original ideas from the start instead of intruding on the work of existing bots), and though the thought of having an intelligent, conversational bot that can emulate a human is appealing in some ways, there are certain problems with taking that approach yet at almost _every_ level of AI work.

    First, Artificial Intelligence's time has not yet come. We've got to be able to make our robots and computers operate as tools to a much better degree than we do today. If we ever hope to succeed in the loftier goals of AI specified in Katz's column, let's try for the basic goals first. Let's make digital machines our perfect servants, able to understand our behavior and speech and interact with us, before "blessing" them with independent thought. At the present day, the average population is still running into a gigantic learning curve in using Windows98. Let's eliminate this barrier first :).

    True AI, though a great academic endeavour, is also a very futile one. What can we really hope to accomplish with a free-thinking robot? Heck, look how most of the free-thinking humans are turning out ;)

    Personally, I think the best use of AI is to give it a limited role in the way certain digital machines interact with us. I do not believe that it is a machine's job to become a schoolteacher. I do believe, however, that a house-wide information and service computer that can understand speech and interact with humans is going to be a VERY useful tool in the future. However, we have yet to see how much of a part AI is going to play in that.
  • Don't get me wrong. I think someday the AI labs of the world will create a respectable artificial intelligence that will probably be capable of some cool things.

    What I don't get is why periodically someone predicts that computers will take over the world and we will be their minions. Has anyone bothered to look at these people and notice that the closer we get to the elusive goal of AI, the more work we find we still need to do?

    Society is already at a point where lack of computer skills puts people at an extreme disadvantage in the job market. Why does everyone expect it to evolve further? Haven't they been paying attention to the fact that computers are being made EASIER to use? I imagine when the plow was invented there were just as many people standing around saying it would put them out of jobs as when robots started building cars in Detroit. The tools get more complex, but the conflict will perpetually remain the same.

    And remember who is building these things, people. We are building them as tools. They exist to make our lives easier. Computers already trade our secrets and data without our knowledge. Computers already teach our children. They already control what we watch, what we see. Guess what? Somewhere down there was still a meeting in a boardroom that decided what that computer would be built for and what it would provide. Humankind is still in control. Computers don't change their minds. They do what they are programmed to do. It's very simple.
    Although complex AI will probably someday exist, it's foolish to think that it will somehow be able to outsmart us. We created it, you know, so by definition we are the more intelligent. If survival goes to the fittest, it's absurd to think that tools can become more fit than their creators; it just doesn't make sense.

    -Rich
  • I don't think WE have enough intelligence to create such sentience.

    Think of the process of programming for a completely new computer. You first have to code everything in machine code, because there's not even an assembler available [pretend cross-compilers and cross-assemblers don't exist in this context]. So what's one of the first things you write? A crappy assembler, which you use to write a better assembler, which you use to write better assemblers. Once the assembler is good enough, you write compilers for higher level languages, and use these compilers to write better compilers, and so on.

    The situation here could be similar. Even though WE may not have the intelligence to come up with an ultra-intelligent AI on our own, who's to say we won't use 'dumb' AI to help us design better AI (since it can sit there and do number-crunching 365 days a year, no bathroom breaks), and use that to generate even smarter AI, and then smarter AI... The difference with AI is that at some point the AI can improve itself without our help, as long as the power keeps flowing...
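    As a toy sketch of that bootstrapping ladder (nothing here is real AI -- the "designer" is plain random hill-climbing, and every function and number is invented for the example), each generation uses the current optimizer to tune the next one:

        import random

        def hill_climb(f, x0, step, iters=200):
            """Minimize f by random perturbation; return (best_x, best_f)."""
            best_x, best_f = x0, f(x0)
            for _ in range(iters):
                cand = best_x + random.uniform(-step, step)
                cand_f = f(cand)
                if cand_f < best_f:
                    best_x, best_f = cand, cand_f
            return best_x, best_f

        def task(x):
            # The fixed problem every generation must solve.
            return (x - 3.0) ** 2

        step = 5.0  # generation 0: a crude, hand-picked "tool"
        for gen in range(4):
            # Meta-objective: how well does an optimizer with step size s do?
            def meta(s):
                return hill_climb(task, x0=0.0, step=abs(s) + 1e-6)[1]
            # Use the current tool to build the next, slightly better tool.
            new_step, _ = hill_climb(meta, x0=step, step=step, iters=50)
            step = abs(new_step) + 1e-6
            best_x, best_f = hill_climb(task, x0=0.0, step=step)
            print(f"generation {gen}: step={step:.3f}, residual={best_f:.2e}")

    The crude generation-0 searcher picks the step size for generation 1, and so on; the early gains compound, which is the whole point of the assembler-to-compiler analogy above.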

    --------------------------

    Paranoid thought for the day: What if we really are the result of some Gods' AI experiments, and they're watching to see if we're going to destroy ourselves or not? The physical "laws" of the universe are just restrictions and parameters of the software that's being used to run us.

    -----

  • Exactly. As this post is still young (only 5 or so replies as I'm reading, no doubt over 30 by the time I stop writing), the inevitable "computers are not intelligent" and "you can't replicate humans in computers" postings will show up. We've been through that on several occasions.

    The relevant questions to ask are:
    - what is consciousness
    - when can you call a thought/idea creative
    - what is the difference between living and not living

    I don't think any of these questions has an answer that everybody agrees on. Being an atheist, and convinced that the human being is not the most perfect creature imaginable, I find it acceptable that we could create something that is creative, has a consciousness and fits my very vague idea of being alive, though I don't see it happening anytime soon.

    Somewhere in the article it is stated that computers will match the human brain's speed early next century. I'm not sure whether I can agree with this. How fast is the brain, actually, and do we measure this speed in gigaflops? It is more reasonable to say that computers will soon be able to understand/parse our spoken language. Computers will have enough AI to grasp some of the semantics of what we say. For the latter to happen, we will have to provide them with a context. The more complex this context, the more they can understand. Right now the context in which a computer has to understand its input is very limited. Most programs' context can be put in a small number of if statements.
    The AI community can provide us with techniques that can enhance this context (rule-based systems, belief networks, neural networks).

    Intelligent behavior is not limited to responding to input, though. What's also needed is the ability to learn. This is where I see a problem. I'm not aware of any techniques to acquire new knowledge and add it to the existing knowledge. Neural networks are generally trained on a limited set of inputs, with some knowledge about the output to guide the process. This is not the same thing.
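    For concreteness, here is about the simplest possible instance of that kind of training (a sketch only, not any particular system): a single artificial neuron taught the AND function from a fixed set of inputs with known outputs. It never acquires knowledge outside that fixed set.

        # One perceptron trained on a fixed dataset with known outputs.
        # Everything here (data, learning rate, epochs) is invented for the example.
        data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the AND function
        w = [0.0, 0.0]
        b = 0.0
        lr = 0.1

        for epoch in range(20):
            for (x1, x2), target in data:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                # Perceptron rule: nudge the weights toward the correct answer.
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err

        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            print(f"{x1} AND {x2} -> {out} (expected {target})")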

    Once we have solved the context issue and the learning abilities, we still have consciousness and creativity to conquer.

    Consciousness is about self-reflection, i.e. the ability to think about itself. To test whether animals are conscious of themselves, experiments with mirrors have been performed. In these experiments a mirror was placed in the animal's environment. From the behavior of the animals it was deduced whether they were aware that the image in the mirror was a reflection of themselves. This showed that primates and dolphins recognize themselves and thus have a consciousness. The same experiment with a computer would be a bit more complicated. To feel pain (or anything else), the computer would have to be aware of itself.

    Creativity is equally hard to grasp. Let's say it is the ability to create something new from existing things (where things can be ideas or concepts). The way humans invent stuff often seems a bit random. Basically what happens is that there is some input (in the form of knowledge) and some sort of problem that requires creativity to be solved. From here it gets really vague. But at the end of the process a creative solution for the problem has been found. Once we grasp this process, I'm sure we can model it. An interesting thought is that we may not need to grasp it after all. Neural networks seem to do what I described: you put something in and something comes out. What happens in between is not known, but we can still use the mechanism to solve some problems.

    So to summarize this somewhat lengthy post:
    I agree with the posting to which I replied in that we need to understand first before we can judge something to be intelligent, conscious, etc. On the other hand we already have some mechanisms (neural nets) that solve problems in a way we do not fully understand!

    The original article is a clear example of so-called popular science. Nothing actually new is introduced; it's just a person reflecting on developments in computer science. As for Arthur C. Clarke: I've lost some of my respect for him after watching some bullshit programs he presented on Discovery. Some of his books were nice, though (I read most of them when I was about 14).
  • Mr. Katz brings in Asimov at the end without saying much about Asimov's contention that machines won't try to take over the world because they'll be designed not to. Not just incapable of doing it, but (much more important) incapable of wanting to. A well-designed machine is the only object that it's 100% okay to enslave, because what it wants most from life is to serve you. At most I can see the situation evolving to something like the Lije/Daneel thing: the machine evolves from slave to partner, but it still "wants" to serve.
  • That's exactly my point. They DIDN'T say what was or was not in the curriculum. They simply said, quite properly, that it was inappropriate for the state to mandate any point of view on the issue!
  • Large space stations orbiting the earth inhabited by everyday citizens, which supposedly would have happened 10 years ago.

    colonies on the moon by today.

    Travels to Jupiter in less than 2 years from now.

    These predictions make more sense when you consider the background against which Clarke made them. In the 60s, the US space program was in full swing, we were shooting for the moon by 1970, and the sky was no longer the limit. If we had kept developing space technology at the rate we did then, we might be laughing now at the idea that it would actually take until 2001 to reach Jupiter...
    --

  • Two ways to make an "intelligent" machine, whatever that means:

    1) Rebuild (or simulate) every single neuron and make it fit into a human-like structure which we can't even begin to make conjectures about. This goal may be slightly beyond a 10-year term...

    2) Don't try to follow the human architecture, or forget the brain paradigm altogether. The former is Hugo de Garis' method, the latter is good old symbolic AI (Minsky, Schank et al.).

    The only way to deem those things "intelligent" is to compare them with ourselves. Turing test, stuff like that. The concept of intelligence is totally human-centered. Therefore, to call such a machine "more intelligent than human" is plainly impossible. Either it is roughly as intelligent as man, or it is considered "something else", but certainly not intelligent.

    How could you define intelligence otherwise? The ability to solve problems? Come on, your gnuchess program can do this; would you really call gnuchess intelligent? The ability to overcome complex problems? It's only a matter of time before brute-force genetic program generation can solve incredibly complex problems (at least when proper modelling is possible), and sorry, I will not admit a genetic algorithm is intelligent. The ability to overcome men's attempts at destroying it? In that case the Black Plague bacillus is probably the most intelligent lifeform that has ever existed on this planet.

    I like Arthur C. Clarke when he writes fiction. He should try to do only that - exactly like Hugo de Garis does.

    Thomas. PS: Don't come and tell me De Garis is serious about what he writes, I won't believe you :p
  • ...you actually have to have a science of AI. The current state of AI is that it's 99.9% philosophical and 0.1% science. I don't want to call it a sham, because the people are honestly trying, but the past 40 years of research have been an utter failure.

    That said, I think someday we'll solve the riddle of intelligence and get intelligent machines, but it ain't gonna happen in the next 50 years, and probably not for a good while after that.

    It's not just a question of faster computers. We need 1) provable and demonstrable theories of intelligence, and 2) hardware to implement them. Neither of these is even on the horizon, much less in our generation.

    Sorry to dump the cold water of reality! :)

  • by Anonymous Coward on Monday September 20, 1999 @01:40AM (#1672430)
    Huh... I always run Katz stories through my X-Files meter -- Big Brother, spying, etc. being the keywords. I also included a few words from the actual X-Files series...

    Jesus. He had to put "Kurzweil", "AI", "Clarke", "ethical", "romantic", and a bunch of other words in the same story. My little Perl script gave it the highest rating so far -- 5 out of 5!
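    (The poster's script was Perl; purely as illustration, a hypothetical reconstruction of the same idea in Python, with a keyword list guessed from the words mentioned above, might look like this:)

        import re

        # Hypothetical "X-Files meter": count trigger keywords, cap at 5 of 5.
        # Keyword list and sample text are assumptions, not the poster's code.
        KEYWORDS = ["Kurzweil", "AI", "Clarke", "ethical", "romantic",
                    "Big Brother", "spying"]

        def xfiles_rating(text):
            hits = sum(1 for kw in KEYWORDS
                       if re.search(r"\b" + re.escape(kw) + r"\b", text, re.IGNORECASE))
            return min(hits, 5)

        story = "Kurzweil and Clarke raise ethical questions about AI..."  # stand-in
        print(f"X-Files meter: {xfiles_rating(story)} out of 5")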

    Is Katz really Chris Carter in disguise?
  • by Anonymous Coward on Monday September 20, 1999 @01:44AM (#1672431)
    If you were born after 1970, it is likely that you will witness computers created in your lifetime that are vastly more intelligent than any human. Consider:

    • Simply by the current growth in CPU power, by 2030 at the latest, computers will have a processing power equivalent to the human brain (a rough check of this arithmetic appears after this list).
    • Once computers are designed by other computers, not humans, they should become even more powerful as genetic programming methods create efficiencies we could not have created. From this point forward, computers will most likely be vastly more intelligent than any human or group of humans.
    • Once nanotechnology takes root, computers will be able to manufacture other computers, removing their reliance on humanity.

      I think it is extremely important how we will perceive this new form of life, considering that everything I have mentioned above will certainly transpire in the next 70 years.
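    A rough back-of-the-envelope check of the first claim, under loudly stated assumptions: a ~1 GFLOPS desktop in 1999, a Moore's-law doubling every 18 months, and a much-disputed estimate of ~10^16 operations per second for the brain.

        # All three numbers below are assumptions: the desktop figure, the
        # doubling time, and the (contested) brain processing-rate estimate.
        desktop_1999 = 1e9      # ops/sec for a late-90s desktop (assumed)
        brain = 1e16            # ops/sec, one common estimate (disputed)
        doubling_years = 1.5    # classic Moore's-law doubling time

        year = 1999.0
        power = desktop_1999
        while power < brain:
            power *= 2
            year += doubling_years

        print(f"parity around {year:.0f}")  # prints 2035 with these numbers

    Under these assumptions parity arrives in the mid-2030s, so the "by 2030" date only holds if the starting figure or the brain estimate is an order of magnitude kinder.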

  • by Evan Vetere ( 9154 ) on Monday September 20, 1999 @02:01AM (#1672432)

    Vernor Vinge's Technological Singularity [sine.com] should be required reading on this topic. I'll summarize it, for those unwilling to hit the link:

    Within 30 years, we will have created computers that, intelligent or not, can solve problems faster than mankind can. The computers will, among other things, be able to build better computers faster than men can. An automated economy will emerge, with the artificial quasi-intelligences directing progress almost completely.

    The change in how homo sapiens sapiens moves forward technologically will be approximately as drastic as that of homo sapiens neanderthalensis discovering fire. No Neanderthal could have predicted what the world would be like post-fire.

    Will we live to see homo sapiens++?

    I personally believe Vinge is correct. It'll be a hell of a ride.

  • by jabber ( 13196 ) on Monday September 20, 1999 @02:09AM (#1672433) Homepage
    The point of no return has already passed us by. We created, a long time ago, a world so full of complexity that we have become reliant on our technology. We need the air traffic control systems, the banking networks, the databases, the ISPs, the chips in our cars. We rely on the IC controllers that run our assembly lines and decide how to make our clothes, cook our food, route our electricity.

    It is irrelevant to wonder if the machines will ever become sentient, or what effect that will have on us as a species. It's a moot point. We're already two species. There's the homo informaticus to which all reading this belong, and the old homo sapiens that isn't at all sapient to how we are changing its world.

    The old species is already nomadic, living hand-to-mouth and at odds with nature. The new species has been able to avoid the dismal lifestyle of the old through its fusion with technology. The fact that we have embraced technology, and evolved thereby, was a willful, convenience-driven event.

    We are dependent on our technology as much as birds are dependent on their ability to fly. To un-plug means death. We may not be left biologically dead without our tech, but our lifestyle, our standard of living, would end. Is that not death? We, as we are, would cease to exist. We would revert to an earlier stage of evolution, and our species would prove to be another failed mutation.

    It is our survival instinct, our will to live, that drives us to develop new technology, and to become even more dependent on it. As birds that once only glided from tree to tree and now rise into the sky under their own power, we too will learn to soar in our newly claimed environment. But don't think that we will still be human when we do.

    With our beepers and PDAs and our Internet access, we are better informed (read: better adapted to the environment than our predecessors) and better suited to survive. We are more fit than the agrarian society we are replacing. We are the new species. The earth will not overgrow with vegetation, because we, the new species, eat paper for a living. We burn fossils for sustenance and we belch smoke. We will for a long time, and then things will change somehow.

    Just because there are not Hunter-Killer aircraft and terminators running around, just because we are not batteries, does not mean that the machines are not in charge. They are - and we are them. We have already merged; we are one.
  • by Zach Frey ( 17216 ) <zach.zfrey@com> on Monday September 20, 1999 @03:04AM (#1672434) Homepage

    Hoo boy, off into the techno-spiritualism and "porn makes kids better" garbage again ...

    A complete, well-structured and footnoted criticism of everything wrong with this essay would take far more time than I dare give it this morning, but a few thoughts:

    Pain is irrelevant:

    That's right, the ability of an AI or a-life program to "feel" pain is irrelevant to any moral or ethical issues. It is interesting, but it is not the ethical quandary that Katz makes it out to be. Think about it for a moment -- we already share the planet with entities which are demonstrably intelligent and capable of experiencing pain. We call them "animals". They've been around for a long time; perhaps you've encountered one recently?

    Now, if an AI could achieve personhood, that would be a different can of worms. But what, exactly, is personhood? That, at either an explicit or implicit level, is a crucial question in today's "culture wars." The traditional Christian answer, which shaped Western culture for many centuries, is that personhood is a spiritual attribute, and humans are persons because we are created in the imago dei, the Image of God, Who is Himself personal.

    Therefore, (to steal a phrase from A Canticle for Leibowitz [amazon.com]), "all that is born of woman" are persons.

    The current, post-Christian viewpoint seems to be to reject any spiritual basis for personhood, and then to try to base recognition of personhood on some observed attribute, perhaps cleverness (if it's intelligent enough, it must be a person) or emotional response (if it feels pain and can articulate enough angst, it must be a person). But the distinction between person and non-person is muddled, because (it is argued) there is no way to draw distinctions other than quantitative ones. So a Darwinist would claim that humans are simply animals with opposable thumbs. Minsky, etc., claim that humans are simply carbon-based computers with a big specialized processor and complicated software.

    From the Christian perspective, the issue with AIs is simple enough -- we have to determine whether an AI could ever be a person, and proceed accordingly. From that, one can work out the ethical issues.

    From the post-Christian, modern/post-Modern materialist viewpoint, there's no good way to make any distinction other than some quantitative ones, so you drop into a quagmire of muddle, providing wonderful employment opportunities for professors of ethics and for cyber-pundits.

    Modern technology does not provide "choice" or "meaning":

    Katz quotes Clarke:

    "Perhaps 99 per cent of all the men who have ever lived have known only need; they have been driven by necessity and have not been allowed the luxury of choice," Clarke philosophizes. " In the future, this will no longer be true. It maybe the greatest virtue of the UltraIntelligent (UI) machine that it will force us to think about the purpose and meaning of human existence. It will compel us to make some far-reaching and perhaps painful decisions, just as thermonuclear weapons have made us face the realities of war and aggression, after five thousand years of pious jabber."

    For argument's sake, I'll take Clarke's 99% statistic as a given. It's not clear to me that a European peasant of the Middle Ages, who had a secure landholding, the ability to live off of it, and little regulation other than some taxes, had less "choice" than today's Dilbert [dilbert.com]-ized cubicle dwellers, who don't own their own homes but merely lease them from the bank, and who are at the mercy of the next "rightsizing."

    It is simply ludicrous that Clarke can believe that "the purpose and meaning of human existence" has not been thought about to this point. He seems to want to have it both ways, because what is this "pious jabber" that he so casually dismisses if not the very thing he claims has never yet existed?

    As for his example of thermonuclear weapons, give me a break. If anything, thermonuclear weapons have made us less able to face "the realities of war and aggression" than generations past, by making war an unimaginable catastrophe. And I truly think that those for whom war meant close combat had a better handle on war and aggression than we for whom war means smart bombs and air strikes.

    But there is another strong objection which I, one of the laziest of all the children of Adam, have against the Leisure State. Those who think it could be done argue that a vast machinery using electricity, water-power, petrol, and so on, might reduce the work imposed on each of us to a minimum. It might, but it would also reduce our control to a minimum. We should ourselves become parts of a machine, even if the machine only used those parts once a week. The machine would be our master, for the machine would produce our food, and most of us could have no notion of how it was really being produced.
    -- G. K. Chesterton

    Chesterton wrote this as a warning. It is perhaps the most frightening thing about Clarke and Katz that they seem to think this is a desirable state.

  • by Pyr ( 18277 ) on Monday September 20, 1999 @01:54AM (#1672435) Homepage
    Yes, Clarke predicted the use of satellites for communications long before it happened, but he also predicted:

    Large space stations orbiting the earth inhabited by everyday citizens, which supposedly would have happened 10 years ago.

    colonies on the moon by today.

    Travels to Jupiter in less than 2 years from now.

    If any of you geeks have seen "Arthur C. Clarke's Mysterious World", you know better than to use him to "prove" your pet theories. He's gone from a genius author to an old crackpot out on Sri Lanka, so I would seriously doubt any predictions by him that we'll be having AI buddies in our lifetime, or in our children's lifetime.

    As we see from the AI storywriting contest, real AI has hardly progressed in over 30 years. Programs get longer, machines get faster, but there is nothing near that spark of human thought or human creativity. The general consensus was that the AI storywriting machine was just fed a very long set of rules, and that it was hardly writing the story itself at all.

    Today we have pacemakers. Tomorrow we will probably have more mechanical replacements for body parts, but there is currently no point in "fixing what ain't broke" in the human body. It's such massive surgery, with huge amounts of drugs that have to be taken for the rest of the patient's life, that I would prefer to stay the way I am, thank you very much.

    My final point: they predict AI robots will help us do all the heavy labor that humans normally do. We already have machines, but is there really any reason to make them intelligent? I would feel much more comfortable ordering my hamburger sans pickles from a non-sentient robot than from one that actually thinks. Adding AI to the robots used to make cars just opens up a whole new can of worms. Why do that when our current solution works just fine?
  • by DrNO ( 61310 ) on Monday September 20, 1999 @02:04AM (#1672436)
    I guess that as long as our boxes are independent, i.e. not networked, machines are just machines. Cut the power and anything that may be considered "A-Life" gets nuked.

    OTOH, once you create a program that "lives" on the net, is capable of replication and adaptation and so on, its ecology becomes more stable and elimination of the entity may become difficult. I see no particular reason why this type of entity should not qualify as a sort of "life", although its universe is certainly rather different from our own.

    As to the concern that humans are about to become obsolete - bring it on. We tend to be highly adaptable and are certainly aggressive competitors in the evolutionary arena.
  • by Ray Dassen ( 3291 ) on Monday September 20, 1999 @01:48AM (#1672437) Homepage
    Plus ça change, plus c'est la même chose. The imminent arrival of AI has been a constant prediction of both science and SF since at least the development of electronic computers (Alan Turing [turing.org.uk] already worked on a minimax-based chess program), but IMHO what AI has shown us so far is what intelligence is not (one of the definitions of AI is perhaps the most and the least revealing simultaneously: that which computers can't do yet).

    It has been argued very persuasively that traditional top-down AI won't work (see e.g. Hofstadter's Gödel, Escher, Bach [slashdot.org]), and while bottom-up AI (be it artificial life, neural networks or evolutionary computation) has produced some interesting results (like the WEBSOM [websom.hut.fi] classification system), I'm still very skeptical about "Real Soon Now" predictions of AI.

    Of course, I still hope someone proves me wrong (and that if they do, it will be "interesting times", though not in the Chinese-curse sense).

  • Escalating processing power does not have to result in "intelligence".

    An example: 30 years ago, slide rules and nimble brains dominated mathematical circles. Eventually someone invented a small calculator which would add, subtract, multiply and divide. This helped those who had difficulty multiplying 4- and 5-digit numbers in their heads, and it could do it more quickly than most humans. (Note that in this example, we already have a machine that is superior to the vast majority of human brains in one aspect: simple arithmetic.) These calculators were added to, soon including square root functions and other higher-level math. A college student studying math was now not required to be able to do many of these functions manually. If you look at current calculators, they are able to do much of what is required in a college mathematics course. Following your extrapolation, because these calculators have been able to perform an increasing percentage of the requirements of college mathematics courses, eventually the calculators would be able to get a college education.

    Surpassing the computing power of the human brain != simulating said brain.

    I would also like to note that I don't recall many documented predictions which were able to accurately describe society even 10-20 years out, much less 70 years. Even those predictions which included some things which came to pass were missing the big picture and tons of important details.

    LetterJ
    Writing Geek/Pixel Pusher
    jwynia@earthlink.net
    http://home.earthlink.net/~jwynia
  • by rde ( 17364 ) on Monday September 20, 1999 @01:42AM (#1672439)
    I've got to admit I'm having a hard time considering machines as anything other than machines. And, open-minded free-thinker that I like to imagine myself as, I can't see computers taking over to the extent that Jon seems to be envisioning.
    We are in danger of becoming too dependent on machines to the extent that one really big solar flare could kill off most of the developed world in a matter of weeks. But that says nothing about machines.
    Remember the end of Wells' The Time Machine? Technology did everything, and the remaining humans were apathetic, lassitudinous (is that a word?) beings incapable of anything. This is far more likely -- and far more worth consideration -- than the 'machines will take over' cry that's been popular since the first issue of 2000AD.
    Hmmm. A bunch of (rather past it) novelists predict that in n years we will be doing all sorts of wild, far-out things with new acronyms. How many times have these novelists been right in the past?

    "just as thermonuclear weapons have made us face the realities of war and aggression, after five thousand years of pious jabber."

    What? So, you mean the soldiers involved in the Napoleonic wars, who after battles stacked bodies into heaps so large they started to burn spontaneously like compost, did not face the realities of war and aggression? Or were they jabbering piously?

    Or is it rather CNN in the post-nuclear age, who jabber piously about defending human rights as they replay in slo-mo for the 16th time that evening a missile hitting some black and white blob in a far-off land?

    I've yet to see any artificial life, or anything that comes close to it. Maybe when I do see it it will evolve so fast we'll all be slaves to it by tea time.

    And as to whether androids feel pain - who cares? Do worms feel pain? Do cats? If androids feel pain, do they suffer from it? These are questions that have been asked for hundreds of years by people who have thought much harder about it than old AC Clarke.

    Philosophers have a greater insight into the mind than do computer programmers and authors. Try:

    http://ling.ucsc.edu/~chalmers/biblio.html

  • by jflynn ( 61543 ) on Monday September 20, 1999 @03:20AM (#1672441)
    We are already at a point where computers require really good computers and good software for their manufacture. Try to design a chip like Merced with pencil and paper sometime.

    To me, a critical point will be passed when computers become better at writing software and designing hardware than humans are, and have the ability to improve themselves in this way. We are already seeing neural net and genetic designs that work very well, but we don't really understand why. It's entirely possible that computers in the future will be very difficult for us to understand at all on the lower levels, because they are self-designed and programmed.

    Nothing scary here; we specify a problem space, and a computer optimizes connections and software operations to provide solutions in the space. But consciousness can't arise without self-referentiality, and I wonder if this is where it will come from.
