Jaron Lanier Takes On "Cybernetic Totalists"

Stefan Jones writes: "VR pioneer Jaron Lanier has written "Half a Manifesto" -- a long and considered rant taking on notions favored by Extropians, Singularity fans and others -- on the exclusive salon for long-hairs, 'Edge.' Lanier believes that the totalists are not only promulgating an irresponsible and inhuman ideology, but indulging in bad science. The site also features fascinating and spirited reactions by a slew of luminaries, including George and Freeman Dyson, Bruce Sterling, Lee Smolin, Rodney Brooks and Kevin Kelly. Good stuff, no matter where you stand on this issue." Oh c'mon -- no one around here would fetishize technology per se, would they?
This discussion has been archived. No new comments can be posted.

  • uh..huhhuh...huh...he said `tittites`...

    Sorry, but this thread violates my `if you can't explain a concept in just a few sentences, using words most people can understand, then i don't want to hear it` rule.

  • In the manifesto he says cybernetic totalism is (partially):
    2) That people are no more than cybernetic patterns.
    3) That subjective experience either doesn't exist, or is unimportant because it is some sort of ambient or peripheral effect.
    4) That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all creativity and culture.

    Even though lots of people seem to think that 2 => 3, I think this is a fallacy. This is especially sad since this fallacy causes many people to reject 1).
    I must admit that I see humans in general, and consciousness in particular, as "just" biological creatures.
    But this doesn't dismiss the subjective experiences. When I eat an apple I have a subjective experience of the apple which is different (hence subjective) from the next man's. But this doesn't change the fact that my perception of the taste of the apple is the product of some chemical interaction between my tastebuds and the apple, followed by some electrical activity in my brain.
    When I fall in love it's also a chemical/electric reaction, and so is my free will (which I insist on having).

    I took a course in philosophy of science (perhaps I didn't learn enough - you be the judge of that), where some of the arguments against the idea that consciousness is the product of purely biological mechanisms were that it:
    Discounts free will
    Trivializes love
    Doesn't leave room for the "soul"

    I don't want to sound like an Ayn Rand follower, because they go to the other end.
    My claim is that the biological explanation leaves room for the mystical, religious and other out-of-this-world experiences. Can't we agree on a standpoint between the extremes?
  • But then we won't be human any more. We are defined by both our capabilities and our limitations, and any attempt at altering the mechanisms of our consciousness is invariably going to alter the very nature of who we are and how we think.

    So what if we do things to ourselves that make us cease to be human? That would only be a bad thing if "human" were the best we could be. I have this doubt that human beings are the optimum form of life, and if we are, then that's pretty sad.

    And should people start making changes to themselves, it doesn't mean they'll be alone afterwards. If nothing else, you'll have groups making the same changes to continue to be together as that group, to make sure there isn't the problem of being alone.

    Look, if any of you biological fundamentalists feel that changing to something other than what would be called "human" is negative, then you don't have to do it, should the opportunity ever arise. If you think that it's the equivalent of "sailing off the edge of the world", then don't get in the boat. Just don't prevent anyone else from exploring what may be out there.
    ---
  • by Admiral Burrito ( 11807 ) on Thursday September 28, 2000 @10:45AM (#747965)

    I have a hard time taking this guy too seriously. I'm sure he's bright and all, and may even have some interesting things to say, but he pretty much turned me off in the early 90s when he was going on about how the net would (could?) bring about this "utopian" age.

    Read the article. Apparently he's learned from his utopian-fantasy mistakes. The article questions the transhumanist theories/fantasies.

    In a nutshell: Computers aren't going to change the world that radically because we don't know how to write the required software.

    I'm kind of undecided on that one. Most of the extropian/transhumanist/singularitarian ideas assume strong AI. It's obviously possible to produce intelligence - we are living examples (Aren't we? Aren't we?!?! :). But there is no clear path from what we have now to strong AI.

    I'll take the wait-and-see approach. :)

  • I have now added The Guy I Almost Was to my bookmarks. Memetic replication successful.

    ACKK! there I go again..
  • I never said the world was going to end.

    eschatology
    n.
    1. The branch of theology that is concerned with the end of the world or of humankind.
    2. A belief or a doctrine concerning the ultimate or final things, such as death, the destiny of humanity, the Second Coming, or the Last Judgment.

    (from http://www.dictionary.com/cgi-bin/dict.pl?term=eschatology [dictionary.com])

    Read the post replying to the post above yours. Humanity is defined by what we can and can't do, and these parameters are what shape our conscious and unconscious minds in the way we call "humanity". By altering these parameters we alter how we think, and thus become something other than human.

    My fault for not reading all your drivel, but I disagree. I don't think that defines humanity. I also think that even if it did, anything we do to alter our own course can't possibly be argued as diverging from humanity.

    Moreover, hasn't this been going on since the discovery of cannabis? Probably further back than that. People have been entering altered states of consciousness through asphyxiation, starvation, meditation, and whatever else they can do to adjust their alpha waves since well before biblical times.

    That we continue to advance without trying to throw away that which we are and that which has gotten us this far? Not too much to ask IMHO.

    I think it's foolish to assume that anything is being thrown away. It's being built upon.

  • Henry VIII started the Church of England because he was a randy git who wanted to trade in his wife for a newer model...
    I always love the fact that the Church of England was founded to satisfy the sex drive of the monarchy....


    Hacker: A criminal who breaks into computer systems
  • I took a course in philosophy of science where some of the arguments against the idea that consciousness is the product of purely biological mechanisms were that it:
    Discounts free will
    Trivializes love
    Doesn't leave room for the "soul"


    Free Will doesn't exist. It's merely a word device created by religious philosophers to make sure you continue to feel guilty enough for your wrongdoing to either choose "right" behavior or pay the church for an indulgence.

    We are a product of not only our genes but also our environment and experience. When it comes time for you to make a choice - any choice - you make it based on your past experience, the way you were raised, and the environment you were raised in. You can argue that a person is "free" to choose the other alternative, but this is clearly not true. We make the only choice we could possibly make given the circumstances at the time of the decision.

    Once we have made a decision and experienced the outcome of that decision, we can look back in our minds and say "In retrospect, I should've taken the other path." but if you were to somehow rewind time, and put yourself right back into the situation again, with the same experiential data you'd make the same choice over again, because it was the choice that your experience and environment designated as the correct one.


    -The Reverend (I am not a Nazi)
  • And who needs wired magazine when you have a T1 connection and access to "News for Nerds. Stuff that Matters"?
  • This is another classic response when Technocrat wannabes find their sacred cow in danger of being tipped.

    It's not a matter of where we're going to be or what we'll do when the Sun overheats this planet in a billion years or so. It's what dangers we face to the quality of life in the next century. I'm a bit older here than the average poster or reader hereabouts and remember a lot of the promises made during the Atomic Age about "trips to the Moon", "automated grocery shopping" and my personal favorite, "the 4 day work week." Despite the promise of enablement, we find ourselves working harder to make a basic living than most of us remember. There was a time when most of us could take care of rent with just one week's paycheck. That was about a decade ago.

    Then of course there are the increasing stresses placed on the environment and natural resources, and the complete inability to remember the lessons pounded into us during the '74 energy crisis.

    But most of all, in our rush to abandon the analog for the digital, we overlook one important fact. Organics run circles around cybernetics in the most important, all-encompassing area: adaptability. Computers, AI, etc. are still confounded by the simplest errors. A single wrong digit totally incapacitated a supermodern battleship. The immense work involved in getting a cybernetic arm to lift an egg without crushing it pales beside the average human's ability to cope with the loss or replacement of one or both arms.

    Then there is the plain matter of quality. The new anthem these days draws largely from the ongoing refrain: Analog Bad, Digital Good. I still remember quite vividly the raging debate of CD versus Vinyl. CDs make possible incredible fidelity, long unattainable at these prices. But slicing up the analog signal and breaking it down to a range of discrete choices has a price tag. Now, you won't notice this on the cheap audio equipment most people buy, but play an album on a good turntable audio system and listen with a sensitive ear, and you'll pick up what was thrown away on the digital cutting floor.
    Another example that's even easier for most to see is to compare Laser Discs vs. DVDs. DVDs give us lots of great options, but compare the video quality with their admittedly bulkier forebears, especially in low-light scenes. The artifacts of DVD images are part of the price we pay for DVD features. Again it's a choice we made.

    And lastly, let's keep another thing in mind. At present, humans require a substantial amount of abuse and effort to brainwash. But how easy will it be for me to control you when you're receiving your information by direct download to your brain, and I'm the guy running the 22nd century version of Time Warner?

    These are my three laws from the Book of Lazar:

    1. Embrace the future with an open mind.

    2. Keep both feet on the ground.

    3. When one hand offers you the keys to the kingdom, keep an eye on what's hidden in the other.
  • > No combination of bio/nano/computing technology is ever going to be me for the simple reason that it will have different boundaries from me, and our idea of self is shaped from these boundaries.

    Last year I couldn't swim more than about 200m. Now I can do at least five times as much without difficulty; I don't actually know what my boundary is now. Have I lost my self?
  • > part of me desperately wants to suggest the humour in imagining what kinds of possible religions might be created by sentient machines capable of 'experience,' but the rest knows that this is beyond any foreseeable event horizon.

    From Red Dwarf:

    "No Silicon Heaven? Preposterous! If there were no Silicon Heaven, where would all the calculators go?"

    > if you think that any amount of modelling, branch prediction, and hypotheses can ever actually 'know' something, especially something as individual as experience, then perhaps you've forgotten about the cat.

    If you're talking about Schroedinger's Cat, I'd say that's a bad example - quantum mechanics is probably the best-tested model we have going. Just because it conflicts with "common sense" doesn't mean it conflicts with observed behavior. The experiments are pretty conclusive proof that, yes, particles do exist in superpositions of states, the state vector collapsing upon observation.

    Manipulating the superpositions of states and building quantum computers is a difficult - and possibly insurmountable - problem, but the reason people are interested in quantum computers is precisely because we have the mathematical understanding to talk about what sorts of things we could do with such a system if one existed.

    Even if we never develop the technology to build a quantum computer, the theory's fascinating. (If I were 20 years younger, I'd forget all about learning about transistors and figuring out truth tables and just start trying to wrap my head around how to do useful computation with qubits.) There's a small numeric sketch of the qubit idea at the end of this comment.

    I'll segue out to an earlier "does [spoken] language affect the design of computer languages" Slashdot thread... I don't find that particular issue terribly interesting, but I do think that the design of computer languages is strongly affected by the type of machine you're programming.

    Functional programming is a big jump from strictly procedural languages. You have to change the way you think, not the way you code. If someone develops a "usable" quantum computer, I can guarantee you that most of us are out of a job. Our generation has too much Von Neumann on the brain.

    And I'll bet the hard AI folks will have a field day with it. "The brain's not a VN machine, it's a quantum computer! That's why the first attempts at hard AI failed!". ;-)

    Back to subjective experience and consciousness again. My personal belief is that while Godel may well stop us from understanding (at the level of "run this code to get this result") conscious experience, it's no barrier to creating machines which have it.

    I'll even go so far as to live down to Jaron's bit about how cybernetic totalists like to annoy people on the other side of the debate by saying: "Proof: Humans have been creating machines that experience consciousness, without understanding the mechanism by which consciousness arises, by mating with each other for about a million years. Other species have been creating similar machines by similar means for possibly much longer periods of time."
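
    (Not from the original post - just a minimal numeric sketch of the superposition-and-collapse idea mentioned above, done with plain numpy linear algebra. The state names and the Hadamard gate are the standard textbook ones; this is an illustration, not a claim about how a real quantum computer would be programmed.)

      import numpy as np

      # A single qubit is a length-2 complex vector of amplitudes over |0> and |1>.
      ket0 = np.array([1, 0], dtype=complex)
      ket1 = np.array([0, 1], dtype=complex)

      # The Hadamard gate takes a basis state to an equal superposition.
      H = np.array([[1, 1],
                    [1, -1]], dtype=complex) / np.sqrt(2)

      state = H @ ket0              # (|0> + |1>) / sqrt(2)

      # Measurement probabilities are the squared magnitudes of the amplitudes.
      probs = np.abs(state) ** 2    # [0.5, 0.5] -- a fair coin until observed

      # "Collapse": sample an outcome; the state becomes that basis state.
      outcome = np.random.choice([0, 1], p=probs)
      state = ket0 if outcome == 0 else ket1
      print(outcome, probs)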

  • I haven't yet been unconvinced of what Daniel Dennett's various approaches to this putative "hard problem" originally convinced me of, namely that the reason why the "hard problem" of subjective experience does not have a solution is that it isn't a real problem.

    It only ever gets to be a problem because it's persistently invoked as one. If people didn't smuggle it into their discussions of the "soft problems" of neural functioning, etc. (all that "mechanistic" stuff), it would never come up of its own accord. What would a solution to it actually solve? What new things would one be able to account for? ESP? Near-death experiences?

    Subjective experience can do all the things it needs to without ever checking out of meatspace, including being that thing about me that I can't imagine being me without. You only need the uber-physical properties of quantum this and microtubular that to explain things I don't believe any consciousness has ever accomplished anyway. This isn't to say that having a quantum supercomputer built into my brain wouldn't be a cool thing: sure I'd like to be able to intuit non-computable things about the nature of ultimate reality. But the only thing that can normally make me do that is THC, and THC only makes me think I'm doing it...

  • that was what the (at least) was supposed to convey. I totally agree with you. I wish you'd responded to a little more of the comment, perhaps read a little more of the stuff that I've written (like my background info), to realize that I very well know it's here and it has been here for a very long time...

    Please, at least take the letter as a whole - taking that comment alone completely misses the point, and only serves as a poorly constructed flame.
  • You're right, the Act of Supremacy was passed in 1534, three years before the Pope's Sublimis Deus. Prior to that, however, Henry VII refused to acknowledge Spanish and Portuguese claims to the New World under the papal bull of 1493. In 1530 Las Casas started writing De Unico Vocationis Modo, denouncing violent conversion.

    So, yes, Henry didn't create the Church of England directly in response to the Pope's Sublimis Deus. That's a factual mistake on my part. However, England did have previous grievances with the Catholic Church over the Pope's partitioning of land, and for the most part everybody ignored Sublimis Deus anyway.

    Whether or not the Moors were driven out by the time Columbus set sail, the point is that Spain desperately needed to refill its coffers to recover from, if not fund, its wars with the Moors.

    And I believe the Black Plague was still around for a while. At this point Europe had not known life without the plague. That is the mindset I was trying to establish: centuries of war with the Moors coupled with random and meaningless death, which caused Europe to be very jaded and cynical by the time they discovered the New World.

    It still seems we are trying to conquer something.
  • Don't let them immanentize the eschaton!


    Hacker: A criminal who breaks into computer systems
  • by Alien54 ( 180860 ) on Thursday September 28, 2000 @06:51AM (#747978) Journal
    So far it is not bad at all. Being a technology type, I presume that you are used to reading books without pictures, etc. This definitely requires at least that level of literacy. Also, there is a specialized vocabulary that needs to be sorted out.

    Since this is "regular English", people do not realise that the fancy words used there form a jargon of their own. In a tech manual, you know how to handle this. But in this context it is easy to blank out and watch your eyes glaze over. Snap out of it and grab yourself a philosophy tech manual so that you can at least get a hold on the jargon. [smile]

    - - - - - - - -
    "Never apply a Star Trek solution to a Babylon 5 problem."

  • Switchblade Knife by Pro-Pain

    (fires up gnapster ...)

  • Change is good; it keeps everything fresh. The last thing we want is artificial limits to be placed on what we can achieve. The ones who want to meld themselves with machines should be allowed to (it will be futile to try to stop them).

    Anyway it's not as if there are going to be any sudden changes. It will happen very slowly over many generations. The inter-generational steps will be small and acceptable to people.

    Take an Average Joe from three thousand years ago into today's world and he'll probably be horrified by what we have become.
  • I took a course in philosophy of science (perhaps I didn't learn enough - you be the judge of that), where some of the arguments against the idea that consciousness is the product of purely biological mechanisms were that it:
    Discounts free will
    Trivializes love
    Doesn't leave room for the "soul"


    Hmph. It's like arguing against the theory of relativity by saying "but that doesn't make sense to me", or arguing against evolution by saying "I don't want to think I'm related to monkeys". Those arguments are not really arguments, they're just effects of the theory that people don't like.

    If you want to show that consciousness isn't purely the product of biological mechanisms, then find something more effective than "but I don't like what it implies".

    I don't know your personal feelings about the worth of the arguments, as it wasn't clear from the posting, but I had to add my 2 cents.

    In the meantime, I'll continue to believe in free will because it was predestined for me to do so. :)
    ---
  • Our boundaries have only been expanding with any velocity for a brief period - perhaps a few dozen millennia. This is not a stable state, and will probably lead to real problems not too far down the line.
  • Luddite.

    You heard me. You're just a slightly more up to date version of the folks who speak out against ATM machines, electro-magnetic radiation from home appliances, and the Evil Internet, yet accept technologies like refrigeration and internal combustion engines, because we've had that technology "forever".

    Why is it so common to predict the end of the world (what do you think eschatology is about?) based on a pretty graph that shows steadily increasing technology?

    How do you define this "Humanity" that you propose we are abandoning in favor of technology? How, exactly, are we going to go about abandoning it?

    What humanity is doesn't change, and hasn't changed. We are still the same race, with the same problems, the same hopes, and the same daily social trials as humanity had one thousand years ago.

    You people love to propose that as a whole we are less than we were because we can do something differently. I say we are not.

    I mean, what do you expect?

  • by w00ly_mammoth ( 205173 ) on Thursday September 28, 2000 @04:38AM (#747984)
    I used to read Wired magazine some years ago, and I used to take it seriously. After some time, I began to notice a trend in its articles.

    There's this clique of "digerati" who keep popping up on its pages and in similar forums/magazines/books, explaining the future in all its robotic nanotech cybernetic glory. The same names keep appearing over and over, repeating visions of a future so vague and full of popcorn sci-fi visions that you can't quite pin down anything specific, but can debate about it for weeks.

    You may have seen these names before, waving hands and talking about the amazing future - Nicholas Negroponte, Marc Andreessen (even makes the cover of Wired recently, all for having co-written Mosaic w/ Eric Bina), Lanier, Kurzweil, Ted Nelson, Gelernter, etc. Most or all of these are "has-beens" who never quite produce anything useful, except visions of the future that are lapped up by journalists and viewed as gospel.

    Sure, it was interesting to read about futuristic visions of tomorrow, but after 5 years of this crap, and hardly any progress in bandwidth, usability, AI, speech rec, home automation, etc., I have had enough of reading about this thing. Anybody can write vague gee whiz stuff, full of buzzwords that nobody quite agrees upon.

    Trust me, after you've read this sort of stuff for a while, it begins to enter the territory of the social sciences - it's always full of controversy, you can never prove anything wrong with it, but its proponents can always make their point with something so vague that it sounds profound.

    Give it a rest. It's not even as entertaining as campy 70s futurism.

    w/m
    PS - Funny how REAL contributors to the technology of the future never write articles like this, maybe because they have something at stake? For instance, you don't see crap like this from Drexler, Peter Shor, or Seth Lloyd.

    And for all the people falling over themselves trying to write serious posts: if you think you're l33t at using buzzwords, try Alan Sokal [nyu.edu]

  • by cr0sh ( 43134 )
    I will say something for this article - it made me think. It made me question what I believe will happen. It made me step away and consider other alternatives.

    For that alone, the article has merit.

    However, other things Jaron writes about seem to imply on his part a lack of knowledge of certain subjects. One instance that I know of is his questioning why nature hasn't come up with the wheel in the course of evolution. My answer would be it probably has something to do with the disconnect between the wheel and what it rotates on. Being able to supply chemical nourishment to the "moving" part is difficult - the best you could hope for in a bio-wheel would be some form of membrane transfer (like that which occurs in our lungs, to get oxygen into the blood cells); however, the wheel would have to rotate slowly enough for this to actually work (now, if I am wrong about all of this, someone please correct me). But we do have "almost wheels" in nature - certain bacterial flagella, and of course the ball-and-socket joints used throughout the body.

    I don't know - what Lanier writes about seems to make sense and non-sense - but at least it will keep me thinking...

    I support the EFF [eff.org] - do you?
  • Yeah, pretty pricey. That'll change. Think about how expensive and difficult to use contact lenses used to be - now they throw them away. Think about how expensive car phones were in the 1970's - now they give away cell phones free with the service.

    I wouldn't place bets on that price going down all that far. Prices don't just go down because of generally increasing technology; a variety of factors are involved:
    - contacts went down because there was an obvious HUGE demand for them and lots of money was spent on manufacturing techniques to reduce the cost. Plus it is basically a simple small piece of plastic.
    - car phones went down for the same reason computers have gone down: they can put more and more functions on a single chip, thereby reducing production costs. Plus cell phones are basically simple walkie-talkies with a few more functions.

    Prosthetic limbs, on the other hand, are pretty complicated mechanical constructions. Just look at cars -- they have not gotten any cheaper. Prosthetic limbs will for the foreseeable future remain in a constant state of R&D, and very little time and money will be put into trying to make them easier and cheaper to manufacture. This is because people don't necessarily want more limbs, they want better limbs. So limbs will continue to slowly improve but will remain expensive.
  • you're welcome.
    in the time since my first bbs, i've done (and still occasionally do) a lot of anonymous flaming, and i think the thrill of treating others viciously is really a harmless hobby that can be sometimes amusing. what's great about public message boards is that the relative privacy means we can say whatever we want to without consideration for anything -- instead of wasting time trying to understand and be understood, the act of speaking becomes all about Me. Saying things like, "you're a complete moron" allows me to briefly pretend that i, and i alone, really know what it's all about. there's a tremendous amount of power in that, and it's something i think everyone should do just so they can understand how base and petty they are (it worked for me).

    anyway, to go ahead and act out my assigned role as Defender-Who-Seeks-Public-Justification-After-Being-Oh-So-Dramatically-Flamed:
    1)it is true that i am not generally inclined to capitalize the beginning of sentences. the strength of language is that we readily understand poorly constructed phrases like "you please help?" or "no like bad man". in context even single words like "restroom?" are understood to mean, "Would you please direct me to the nearest restroom?"
    that's good, because it gives us an enormous degree of freedom to form things in whatever way suits our purpose, and everyone can use whatever level of syntactic precision they feel is necessary.
    2)i won't protest the label of pseudo-intellectual, since i'm sure parts of me fit into that category quite easily. still, you have to be grateful that i am, because otherwise you wouldn't have been given this splendid opportunity to "get your ya-yas".
    3)as far as the phrase "give rise to verbal rhythm, mental ambience, subtexture, supertexture" being meaningless, i'm not terribly concerned. i chose those words because to me they possess very definite, descriptive connotations out of which i can try to externalize what i'm thinking. other people, naturally, have association-sets of their own which may or may not be similar to mine. the more dissimilar they are, the more meaningless my speech becomes to the listener. since i wasn't exactly expecting to change the world by posting on slashdot, i can accept your attribution of meaninglessness to my post as a failure (on my part) to communicate. but it still sounds good to me, and since i'm quite happy with just that, perhaps that really does make me:
    4)a first-class wanker.
    yes, you're right again. i do like myself. i do enjoy my own company. i am often pleased by things i say and do. yep....

    hmmm.. well, i guess i gave in on all four points there. so let's keep playing our roles and declare you the dashing, irresistibly sharp chap who has single-handedly made a fool out of yet another of those awful pseudo-intellectuals.
    i can only hope this has been as entertaining for you as it was for me.

    ---
    the problem with teens is they're looking for certainties
  • by Tackhead ( 54550 ) on Thursday September 28, 2000 @06:53AM (#747988)
    > [because the things we build out of silicon today break down sometimes, and thus aren't the idealized Turing machines we claim they are] We kid ourselves when we think we understand something, even a computer, merely because we can model or digitize it.

    Nope, we don't understand conservation of momentum because we can't build frictionless surfaces. The pucks on the air-hockey table aren't good enough.

    Nope, we don't understand Newtonian gravity because we can describe it by an inverse square law.

    Nope, we don't understand special relativity because we can describe how clocks slow down when put into accelerating frames of reference.

    Nope, we don't understand consciousness when we can model it and build a machine that passes the Turing test.

    The first three statements ring hollow because we developed the technology to test our models. The fourth statement is untested because we have neither the models nor the technology.

    I said untested, not untestable.

    To make a long story short, a cybernetic totalist believes that at some point, we will develop models of consciousness that allow us to describe machines that possess it, and that we will also develop technology that will allow us to build said machines.

    I'm one of 'em. At present, I take that last paragraph on faith - I'm on the Minsky side of things; a brain is a computer made of meat. It should be possible to build one.

    Where I take issue with Jaron Lanier (aside from his IMHO preposterous assertion that not being able to build "ideal" computers equates to our not understanding computer science!) is that he believes that CTs see the eschaton as immanent. I don't. I see it as a possibility, but we have so many serious technological hurdles to jump over between "here" and "there" that I don't worry about it. Building nanopaste is highly nontrivial.

    (I do agree with him that the blind acceptance of criticality is a problem - I'm also one of those people who "looks forward to it" - but even I acknowledge that criticality might not be as good a thing as I think it will be.)

    So finally on to subjective experience.

    I have never seen a credible argument that subjective experience doesn't exist. I believe it exists. I experience it 24/7.

    But likewise - I have never seen a credible argument that subjective experience requires anything other than a sufficiently complex network of inputs, outputs, and some sort of feedback going on in the middle.

    I can't explain how that works. Lanier would call it "soul". I call it "mind", and view it as an epiphenomenon of "brain". The fact that it is an epiphenomenon of "brain" doesn't make it any less real.

    When Jaron says "But don't you experience your life? Isn't experience something apart from what you could measure in a computer?", I'd counter with:

    "Yes I do, but as for the second question, I don't know, because I lack the tools to measure experience. It may be, as suggested in Godel, Escher, Bach, that my brain lacks the capability to understand said measurements, owing to some Godelian "loopiness" in that it's hard to emulate a brainputer on a brianputer. But at present, that hypothesis is untested, and I have to proceed as though it were measured.

    As for ad hominem arguments, the notion that Turing developed the notion of machine sentience in order to deal with his own personal anguish is almost beneath contempt (I say "almost" because on Edge, "where ideas come from" is a legitimate topic for discussion), and I won't dignify it with a reply. Whatever the origin of the idea, the idea is IMHO valid, and Lanier does himself a disservice when attempting to criticize it on the grounds of its origins, not its merits.

    Feh. A long rambling rant from Tackhead.

    LionKimbro, regardless of our agreement or disagreement about Lanier's paper (I, too, found it a wonderful read; while I've taken a few slices out of him here, it's an excellent articulation of the non-CT point of view, contains much that is of merit that I've ignored in this post, and provides lots of food for thought), we're in solid agreement on one thing:

    Those of you taking pride in not understanding the paper should seriously reconsider your position. Ignorance is not something to take pride in.

    To those saying that Edge is "just a load of intellectual crap", would you also agree with some skript kiddie saying "That RMS guy at gnu.org, hes stupid, all he duz is write lots of stupid essays talking big intellectual crap about free beer versus free speech! Wut da fuk he talking about? N-E-1 with cl00 know that free software means #warez, d00d!!!"

    1. Such a bright boy
    2. Turing committed suicide
      • Therefore the Turing test is flawed
    3. Extropians and former Art Bell listeners believe in a singularity beyond which everything will change, in the year 2020 or thereabouts
      • Therefore people who believe that there will be progress in computers or that there will ever be machine intelligence are nutty
    4. I am a VR weenie who got overly excited about polygons floating in simulated 3-d space and thought everyone else would like to jerk off in a simulated cyber environment like me
      • Therefore I know what I'm talking about with these dern computers that betrayed my fantasies

  • Luddite.

    You heard me. You're just a slightly more up to date version of the folks who speak out against ATM machines, electro-magnetic radiation from home appliances, and the Evil Internet, yet accept technologies like refrigeration and internal combustion engines, because we've had that technology "forever".

    Right. And the reason Picard resisted the Borg is because he's a Luddite and is resisting progress.

    Don't hamper progress by thinking about what you are doing! Don't start asking impertinent questions! Just accept and conform.

    Or else risk being labeled a Luddite. People who express NEW ideas and concerns must be branded so they can be easily identified. Otherwise, they might influence people. Worse, they might influence the whole intellectual landscape! I guess that makes you a "thought-Luddite", since you are trying to hamper intellectual progress.

    Intellectual progress must not interfere with technological progress! It's Luddite VS Luddite in a 'winner takes all' showdown! Let's get ready to rumble!

    Name-calling is fun!

    James.

  • People think in pretty much the same way today as they did when the Ice Age finished

    You say that as a fact, and yet it remains an open question to this day and has been fiercely debated for centuries. That's one of the reasons people study the history of culture, literature, art, technology, etc.: to try to get glimpses into how people used to think and feel, to try to take their measure as humans.

    While it is possible to assert (as people do) that "we're as we have always been", or "we are more advanced than our ancestors", or "we are degenerate", there simply isn't enough extant evidence to propose a conclusive proof one way or the other.

  • I hope quite a few people here read Lee Smolin's comment [edge.org] in the feedback to Lanier's article, in which Smolin describes different scales of problem solving (CLASS 1, simple optimization, through CLASS 5, creating entirely new fitness landscapes and simultaneously functioning within them). Lanier's article didn't say much new to me, but Smolin's analysis of what we are up against if we hope to really speed evolution is very persuasive and enlightening.

    I think Lanier's whole discussion of subjective experience is a red herring. Personally, I do think I have subjective experience, but I don't see why a computer -- or some other evolved-partly-by-us creation -- couldn't also have subjective experience. Arguing the point seems to be a fundamental distraction from the deeper issue: what kind of impact ever-evolving technology will have on human society, and whether we will (eventually) create beings comparable or superior to ourselves.

    I tend to agree with Lanier that one of the worst risks we face is increasing social inequity. Think about some of the nightmare totalistic scenarios -- that we will create a race of super-beings that will look down on us and use us as fodder. Now, put on a very cynical hat, and think about the world we live in now, where the "first world" consumes most resources; where the richest 1% of the population has over 30% of the wealth; where entire social classes are disenfranchised and economically shut out. Seems to me we already live in the technological dystopia people like Joy and Lanier are afraid of! (I think Lanier acknowledges this, on some level.)

    I am still an associate of the Foresight Institute, because it is a good resource for tracking developments in advanced technology. But I haven't been to one of their functions in some time, largely because the prevailing meme pool there doesn't contain enough skepticism for my taste. (Cryonics in particular seems to me to be substituting overwhelming technological optimism for a real confrontation with issues of life and death.)

    Putting the optimist hat on, I would hope that newer, smaller, greener technology will eventually reduce humanity's ecological impact on the planet, and that as we do start to evolve creations (creatures) that have greater independence and autonomy, that it helps us widen the "circle of empathy" Lanier mentions. You could take an almost Buddhistic view -- that compassion is the essence of the universe -- and realize that it's our moral responsibility, as creative and intelligent beings, to evolve creatures that are themselves compassionate. Whether we can do this when we are so often lacking in compassion ourselves is another question... in fact (finishing off by wearing the theist hat), that may be the question that God created us to answer.

  • by Wellspring ( 111524 ) on Thursday September 28, 2000 @07:06AM (#747994)
    You may have seen these names before, waving hands and talking about the amazing future - Nicholas Negroponte, Marc Andreessen (even makes the cover of Wired recently, all for having co-written Mosaic w/ Eric Bina), Lanier, Kurzweil, Ted Nelson, Gelernter, etc. Most or all of these are "has-beens" who never quite produce anything useful, except visions of the future that are lapped up by journalists and viewed as gospel.

    I think that the worst of the bunch are the anti-technological lit crit types. Alan Sokal's experiment last year came to mind while I read this, and I am glad you linked to it, since I couldn't remember Sokal's name. Here is the whole archive [nyu.edu].

    The problem is that the technological process is composed of normative and positive components. Normative components are pure opinion and value setting (eg that education is more important than national security, or that spam is something to be discouraged). Positive components are purely factual (eg that a technology can be projected to earn X dollars for the industry, or that a black hole's event horizon is at radius Y. Or that a component's magnitude cannot exceed that of its whole).

    Ideally, we use Positive techniques to achieve Normatively determined goals. Positive methods are evaluated by reason and experiment. Normative beliefs are evaluated using persuasion and politics. We need both! But as Sokal's experiment warns us, we also need to keep them separate. Science, like journalism, is our society's information-gathering apparatus. Politics, religion, the marketplace, and the media (including slashdot) are our decision-making apparati.

    It is really, really tempting to try to mix the two. In fact, it is pretty much inevitable, since much of what we think is fact is really widely-accepted opinion. Facts, like opinions, are often in dispute. However, I think much of the lit crit world is intentionally blurring the distinction for two reasons:

    1: Facts have a special weight in society. When these become subject to revision, one can manipulate opinions by manipulating the perceptions of what a 'fact' is. Also, anyone can have an opinion. By placing policy judgements into the domain of positive analysis, you make opinions the exclusive province of the Credentialled Academic. Noam Chomsky, for instance, is a giant in the field of linguistics. However, many people who agree with him seem to think that his academic standing makes him a better economist, for instance, than real live economists. This is an extreme example, but often people who are good at describing a phenomenon are considered somehow to be specially endowed with the power to judge something. You don't need a PhD in military history to be able to say that wars are terrible. Or an economist to say that economic growth is good. Walling something away from the generally educated public is not just bad, it is a form of subtle tyranny.

    2. The very lit-crit people who are trying to remove the objectivity from science are themselves just trying to move all the sciences under their banner (or as Lanier puts it, 'campus imperialism'). On a more meta level, the lit crit set actually study persuasiveness and persuasion for a living. They are experts in its techniques and uses. So why not make persuasiveness the basis for all scientific discourse? If everything in the sciences becomes a matter of who has the most witty barbs in the social scene, or whose critiques are the most cleverly worded, or whose syllables-per-word average is the highest, then of course the lit crit people would win. You can't blame them for trying.

    Our prototype for the Real Scientist should be Richard Feynman. He was a total iconoclast. He was a fiercely creative, but intellectually disciplined person-- willing to throw almost any notion away in the face of hard evidence. He wasn't political, and resigned from the National Academy of Sciences because he saw them as an organization devoted entirely to determining who was worthy of being a member. I'd say that that describes the lit crit crowd pretty well. And while everyone has opinions, when it comes to science and factual data, you have to bend over backwards to ensure your own objectivity. Lanier's article challenges one part of this threat, and I hope that people recognize the problem.

  • We are headed for a time of technological Singularity whether anyone likes it or not, unless we plunge first into major warfare or otherwise pretty totally bite the dust. Extropians and so on did not invent this, nor are they responsible for it being so. They are mainly just a bunch of people who recognize it is coming earlier than most folks recognize it. Beating them up or saying they are all this or that type of whatever is irrelevant and simply flat incorrect. There is about every type of viewpoint among such folks that you could imagine.

    What is really important is making the future as good for all of us as is possible. To do that we need to stop acting like a bunch of gossiping ninnies.
  • The author lost credibility because he put down your operating system of choice? He must have been talking about your religion. Relative to something like Genera, UNIX is primitive. UNIX just often happens to be the best choice of a bad lot.

    P.S. I hope that you use gets() extensively in your C code.

  • I think there is a real problem here: Our social progress is not keeping pace with our technological and intellectual progress.

    You guys know this already. You call it 'shitty management'. Well, guess what? All management is shitty.

    So, what is 'good management'? Well, it's not called 'management'. It's called 'collaboration'. You see, collaboration is something that you actively do, while management is something that is done to you. I hope you recognize the difference, and the importance of the distinction. After all, what is our democracy if not a collaboration?

    Why is our education system based on a model of top-down coercion? Why do our schools emphasize rote memorization and behavioural compliance? Why don't they emphasize conceptualization, raw experience and collaboration? A large majority of research supports the second approach, yet most of our schools fail to reflect this research.

    Have you ever known a guy who, after being promoted to manager, starts acting like a prick? That's because he's trying to manage you. He's wearing his 'teacher's hat', as many teachers like to call their presumption of authority.

    Good teaching is collaborative, good managing is collaborative, and good government is collaborative (ie democracy). But what if our government, in charge of education, decides not to teach children how to collaborate? Instead they focus children on competition. Doesn't intellectual competition encourage people to hoard the 'secrets of their success', breaking down communication, an essential element of collaboration? Does this approach nurture a child's ability to govern effectively as a citizen in a democracy? Does he become more dependent upon his elected leaders as a result of this omission?

    Do you ever wonder where apathy comes from?

    The prospect of extreme technological power scares people: like the prospect of a child playing with a machine-gun in a schoolyard.

    Let's solve our social problems. It takes only one thing: collaboration. Collaboration IS the solution. Just make sure it's REAL collaboration:

    Collaborators don't reward or punish each other.

    Collaborators don't win or lose.

    Collaborators don't organize themselves hierarchically.

    Collaborators don't impose ideas or rules on people.

    Collaborators choose to collaborate.

    James.
  • Larson has correctly identified the currently popular model of the world, "Cybernetic Totalism."

    Do you mean Lanier? Or do you mean Larson [netsurf.de]?

  • wtf are any of these?
  • As soon as we are able to modify ourselves and transform ourselves into something else - we'll do just that, and this will be the beginning of the end of the human race - and ultimately - our civilization. The cataclysm will happen. It is only a matter of time. It can't be prevented. The funny thing about technological civilization - once you've started there is no way back. Every solution creates its own problem, its solution in turn creates yet another problem, etc.
  • Have you ever had a bad camping trip? Say, one where you lose your food to animals, you lose the spikes for your tent... That really sucks.

    That's probably one of the closer experiences you could have to living like people did in the past. Why did people make progress? Because life SUCKED. Spending all your days making sure you don't die SUCKS. That is why there was progress. Now that life doesn't (in general) suck so much, we have time to do other things... like think! (which you apparently didn't do before you wrote the response...). I don't know about you, but I would very much like to see what the surface of a different planet looks like in person. That is the reason for progress now.

  • Don't know about you, but when I get to go to bed with a hot chick (a rather hypothetical example given my current life ;-) ), I feel very happy to live like an animal. Same thing when I eat some really good food, when I get a bit drunk or stoned. I enjoy feeling the heat of the sun on my skin. There are good sides to having a body, don't you think?

    Of course the sleep, death and toilet stuff is quite a drag (huh, especially death!), and I'd welcome any improvements technology can offer me in these areas, but sorry, I'm not ready to leave my body behind. And anyway, I haven't seen technology making many steps towards me not having to go to the bathroom anymore 8-) . Still, I think that being a pure mind would just be too fucking boring.
  • I see anyone that believes in, or even worse desires, such a dramatic and unnecessary change as being at best a foolish idealist and at worst someone unable to accept the reality of their own existence.

    I accept the reality that our existence does not have to be what we have been handed. We can reshape ourselves into anything that physics allows.

    There is a very good novel that talks about this subject called Distress [amazon.com] by Greg Egan. I highly recommend it.
  • I agree. We should try to "analyze" where technology is headed. And we should try and stay aware of the downside of "ill-informed" deployments of technologies with hidden social consequences. That said, I wonder whose record is better: the decisions of scientists and engineers who seek to improve social conditions (e.g., the green revolution, antibiotics, anti-cancer treatments, etc.), or those of self-appointed "analysts" of scientific and technological change? Such as your signature-mate Mr. Nader...

    I leave it to the reader to evaluate what they think their "answer" is.

  • You're right! I was about to check that out! They're watching us! Help!
  • I can't stand these retarded pseudo-intellectuals that invent ideology to disagree with. The introduction to his paper requires the reader to agree with his assumption that Marxism is a totally flawed ideology. Well, his assumption there is wrong, undermining his entire piece. Not to mention the fact that he is a little paranoid and not exactly clear on what he would like to condemn. Please, the next time one of these so-called intellectuals goes on a rant, can someone have them rant on something relevant?
  • I think that one of Jaron Lanier's "analysts of scientific veracity" must have decided that it was against the public interest to see it.
  • And using a simple utility that Linux invented called sed,
    Really? I wonder what the program called "sed" was that I was using back in the late '70s...
  • As much as I detest the Extropians and social Darwinists this author criticizes, I have so far found nothing interesting in this sloppy and vague rant. The author lost all credibility when I read the following passage:

    This breathtaking vista must be starkly contrasted with the Great Shame of computer science, which is that we don't seem to be able to write software much better as computers get much faster. Computer software continues to disappoint. How I hated UNIX back in the seventies - that devilish accumulator of data trash, obscurer of function, enemy of the user! If anyone had told me back then that getting back to embarrassingly primitive UNIX would be the great hope and investment obsession of the year 2000, merely because it's name was changed to LINUX and its source code was opened up again, I never would have had the stomach or the heart to continue in computer science.

    Ok zealots, flame away! ;)
  • "Sure, it was interesting to read about futuristic visions of tomorrow, but after 5 years of this crap, and hardly any progress in bandwidth, usability, AI, speech rec, home automation, etc., I have had enough of reading about this thing. Anybody can write vague gee whiz stuff, full of buzzwords that nobody quite agrees upon. "

    That reminds me of Mondo 2000! Does that still exist? Jesus, what a pile of crap!

    I still think that nuking MIT would be a good starting place! All this rubbish starts off with wearable computers and ends up suggesting we'll actually BE computers!

    These bozos must be paid by the word!
  • by Electric Angst ( 138229 ) on Thursday September 28, 2000 @04:43AM (#748011)
    This is a wonderful article. I remember my first experience with "Extropians" was at a dinner with some of the members of a large, international Extropian mailing list. I think I realized exactly how far off the ideology was when I sat there and watched three men badger another man to sign a "Cryogenics Contract" where he would agree to be frozen when he died. Not only was it bad science, but it was the same kind of groupthink atmosphere that permeates the type of movements I was trying desperately to get away from.

    On a related note, you might want to check out The Guy I Almost Was [e-sheep.com] by Patrick S. Farley. I think it gives one of the best descriptions of techno-fetishism among geeks, and of the ways we are being manipulated by it... (Also, it's a damn good story.)
  • However, the technoskeptics are right about at least one thing: it isn't a given that such technology will be sued wisely.
    Agreed. We see unwise suits reported here on /. all the time. We need wiser lawyers!
  • MacLeod and Vinge both got the terminology from elsewhere. I think the extropians may have invented the term.

    I'm not sure Lanier thinks the singularity is a Bad Thing so much as he thinks it's silly. As he says in the article (you have read the article?), the requirements for it to happen have not been verified. The idea that a mind can exist in a computer has been mooted but never proven. The broader conception that the mind can be moved into some kind of machine is almost certainly correct, but it's possible such a machine would not be capable of the self-enhancing explosive growth that a computer might. And so on. Read the first five points of the article.

    As to whether it's good or bad: if it's possible, it's rather beyond that, no? What's certainly true is that posthuman entities, were they to emerge quickly enough, would not have much time for humans, any more than we have for ants.

    And no. I don't want my mind in a computer. Not if it means sacrificing my current brain and body, as initially it almost certainly would.
  • What is the difference between "progress" and "Progress"?
  • Not much fun plugging in points and taking stupid derivatives. Math is not something that I feel is integral to most courses on programming, unless you are programming a math program.
  • What other reasons can there be for what amounts to turning yourself into some(one|thing) else?
    Most of us want to turn into something else. If I was exactly the same now as I was at the age of twenty, ten years would have been wasted!

    We want to learn new things; we want to improve our bodies; we want to change elements of our personality and emotional makeup. That's why people go to school, read books (or even /.), go to the gym, try different diets, see therapists, take anti-depressants, whatever. Many of these means are misguided, and one might even argue that the goal itself is a mistake (after all, you are already a Buddha), but there's no denying that part of the human condition is the desire for personal transformation.

    It's a scary prospect for the future, one in which people will willingly rush wholesale into abandoning their humanity for the promise of an artificial dream.
    This doesn't have to be some Faustian bargain. Using nanotechnology to augment my physiology doesn't take away any of my "humanity"; nor would providing me with an AI butler/manservant/gal Friday.

    However, the technoskeptics are right about at least one thing: it isn't a given that such technology will be sued wisely. The example given about people borrowing money they don't need in order to give good inputs to a credit-rating algorithm is an excellent one. OTOH, that's not so much an issue of technology as of bureaucracy - the algorithm could just as well be implemented by men with ledgers and quill pens as by big iron.

  • I'm willing to bet life expectancy during the industrial age was much lower than in the glory days of, say, the Chinese civilization, or the Mayan civilization, or the Iroquois civilization, or Arab civilizations. Our life expectancies are only now coming back *UP* because we have high tech medicine to fix our broken parts. MOST of modern medicine is reparative, NOT preventative. Do you run your car until it breaks down THEN get everything changed? No, you check your oil, you check your fluids, you get it inspected, etc.
  • I'm not talking about cave people here. I'm talking about past civilizations which existed in harmony quite happily for eons. These weren't people who lived in caves and threw stones at rabbits. The Chinese civilization, the Egyptian civilization, Mesopotamian civilizations, American civilizations. These people didn't have *all* that hard a life - certainly not in the proportion that people in third-world countries have today, largely due to pressures exerted by Western countries. Life was pretty damn good (unless you were a slave, I suppose, and even then it's not like wild animals were just roaming around waiting to eat you). Sure, we can say life is good in Western countries due to all this "progress"...but look at the countries we call underdeveloped. They got shafted. On the whole, I think we really need to question where we are actually trying to go. In many cases, for every great new solution and marvel of Western progress we come up with, we come up with many more problems.
  • w00ly_mammoth wrote:
    There's this clique of "digerati" who keep popping up on its pages and in similar forums/magazines/books, explaining the future in all its robotic nanotech cybernetic glory. The same names keep appearing over and over, repeating visions of a future so vague and full of popcorn sci-fi visions that you can't quite pin down anything specific, but can debate about it for weeks.

    You may have seen these names before, waving hands and talking about the amazing future - Nicholas Negroponte, Marc Andreessen (who even made the cover of Wired recently, all for having co-written Mosaic with Eric Bina), Lanier, Kurzweil, Ted Nelson, Gelernter, etc. Most or all of these are "has-beens" who never quite produce anything useful, except visions of the future that are lapped up by journalists and viewed as gospel.

    So you prefer, for example, to read the opinions of Bill Joy, because he's a successful technologist? (Myself, I thought Bill Joy's cautionary manifesto showed an amazing lack of forethought... he's had his head in the guts of the tech for so long, it's only just *now* occurred to him that there may be some serious long-term problems with it.)

    But the real reason I'm writing is that you've thrown "Ted Nelson" into the list of people who are (a) trumpeted by Wired and (b) have never done anything useful. For one thing, Wired is if anything hostile to Ted Nelson, and for another, whatever Nelson's failings as a coder or a manager of coders, he *did* succeed in writing some fairly influential books. You may not have read any of them, but Tim Berners-Lee has. The web might be a better place if more people understood what Nelson was after with Xanadu: here's a new Nelson paper on the subject [keio.ac.jp].

    I'm as annoyed with Wired-style fluff as anyone (take a look at the New York Times "Technology News" headlines: it's all about who-bought-whom this week. Technology?). But there is no simple rule of thumb for finding brilliant writing. You can't just refuse to read anything by someone who isn't rated a Master on Advogato. Is there any reason to care more about, say, Linus Torvalds's opinions on globalization than about those of some nameless writer at the Economist?

    And just to see if we can start a little trend here and actually talk about the article: Jaron Lanier is not at all sounding like the reincarnation of Hugo Gernsback here. Is there any reason at all you attached this rant to this article, except that you've heard Jaron Lanier's name in Wired magazine?

  • Moreover, hasn't this been going on since the discovery of cannabis? Probably further back than that...
    Homo sapiens isn't the only animal to ingest plants that alter mental functioning. Drug usage predates humanity.
  • I second "The Guy I Almost Was". I read it a few months ago (I devoured all of Electronic Sheep [e-sheep.com], actually) and it had a profound impact on me. Very deep.
  • I've been wondering a lot about cryo myself lately. It's so damn optimistic! Is anyone here planning to go through with it? Ever since I read about those characters in Cryptonomicon...

    That idea of long life being a rich-guy-only thing. Suddenly not being rich seems like a huge philosophical or metaphysical failing! Yikes.

  • In the responses Henry Warwick said,

    "Fact: Machines don't and can't think. Existence precedes essence. Computers pass voltages. Period. They don't remember anything. They don't think about anything. Everything we discuss or sense about them is secondary and something we bring to it. Saying that computers think is like discussing the political persuasions of rock formations.

    Once we see the Turing Test for what it really is, the real CT/Turing Test project is now revealed:

    Can we make machines operate in such a way that we can - deceive - ourselves into thinking there's actually a sentient human in it? And can we deceive/bludgeon others into agreeing with us?

    The Turing Testers know that machines aren't sentient, as they wait for the next rev of some machine to trick them. And once "tricked" - what makes them think they or anyone else wouldn't know it's a trick - every time?

    "Gee - last week, the HAL 9000 passed the Turing test. Well wuddya know - that last algorithm really did the trick. Let's check it out now. UhOh. Today it's not passing the Turing test...so I guess it isn't sentient anymore..."

    That we so deceive ourselves does not mean the condition of sentience is or ever was actually present - it simply means that the required conditions to our test have been met at a particular historical juncture - on a given day, the machine has "fooled" us into thinking it can think. It's been programmed in such a way that we are led to believe it has a mind. This doesn't mean it actually has one. With the Turing Test, the machine must simply be able to do what we expect of a human within a certain range of activity. But is it Sentient? Hell No. It doesn't take Albert Einstein to see how nekkid that Emperor is."

    I think the Turing test is a really neat milestone, but not really a test of anything but ability at a typical clerk job. I recently saw an AOL commercial touting AIM in which the perky actress said, "AOL instant messaging is better than talking on the phone!"

    How thoughtless do you have to be to believe that?

    People forget that when they talk to someone on the phone or in person, they are collecting FAR more information than just the words (or, in a typed conversation, the spelling, grammar, and the speed with which the words are typed). Or maybe a person who would believe that is just very unobservant.

    Unlike AOL's fictional character, most people I know collect enormous amounts of information during an analog (perhaps digitally carried) conversation. For important decisions I may dwell on the information obtained from just one conversation for days. Every facial muscle, every breath, arm gestures, tone and waver of voice, and so on.

    I have a friend, who will remain nameless, who keeps going on dates with girls he meets online. Meeting people online is no problem, but basing a friendship on typed conversations with little or no analog interaction is flawed. Needless to say he keeps getting hurt, even more so than he did in purely meatspace endeavors.

    I propose a new test. Create a human voice and/or form and if I or someone else who isn't asleep can remain friends with said robot for a year and not know, give their creators the prize.

    There is a reason that trust in a good friendship is earned over time. Friendships are far more complex than beauty-parlor chatter held over a teletype. We don't need computers to talk about the weather or how bad politics sucks. If AI is to be useful it must be trusted... like a friend.

  • At last. A comment to end this Lanier lunacy.

    I think, Rob, that you should moderate this comment up to 1000, and archive the former material. Let's move on.

    As quickly as possible.
  • So what if we do things to ourselves that make us cease to be human? That would only be a bad thing if "human" were the best we could be. I have this doubt that human beings are the optimum form of life, and if we are, then that's pretty sad.

    Exactly. It is a really irritating cliche in science fiction that whenever a character becomes "more than human" (whatever that means), that character only gets to enjoy the benefits of that state for a brief while before descending into madness and death. There never is a rational justification for this; it is just the standard "don't mess with forces beyond your ken" nonsense. The fact is technology doesn't stand still -- and I, for one, intend to keep pace with it.
  • If my humanity depends on having my mentality trapped in this eighth of a ton of slowly rotting red meat, then I will abandon my humanity with glee. Perhaps you're happy with living like an animal, but I have higher aspirations.
  • <sarcasm>there's nothing so frustrating as trying to read a 12K word manifesto in a font so large.</sarcasm>
  • by pallex ( 126468 )
    Does he go on about cybersex too, or is that just a little too early-'90s?
  • Apply now, space is limited.

    The REAL jabber has the /. user id: 13196

  • Then those are your goals. They aren't inevitable. And even if they are progress for you, they still aren't Progress (note the capital P), since there is no such animal.
  • Well what about something simple like replacing a limb with a prosthetic.
    This is simple? A pegleg or a hook, maybe, but there's nothing simple about modern prosthetics.
    The cost of prosthetic legs is, according to one man I know who has one, roughly $10,000 per leg, and those are the older models.
    Yeah, pretty pricey. That'll change. Think about how expensive and difficult to use contact lenses used to be - now they throw them away. Think about how expensive car phones were in the 1970s - now they give away cell phones free with the service.
    Computers fail so much that it's impossible to be at one with the computer. Take a program I am writing now. I try to program the computer to do what I want, and I look over the line that the compiler is barfing on and it just doesn't make sense.
    Sorry, but your lack of programming skill is not the computer's fault.
  • I think Lanier makes some good points. Yes, this is a rant rather than a well-thought-out refutation, but as the Communist Manifesto shows, you can go a long way with a rant.

    The case against AI is pretty good these days. We are beginning to understand how intelligence emerges from systems, and one of the conclusions we can draw is that human-like intelligence seems unlikely to emerge from anything that isn't human. Johnson and Lakoff's exposé of embodied intelligence makes that case convincingly.

    The real clue here should be our inability to define intelligence in a way that distinguishes it from being human. Quite possibly we will never be able to identify anything non-human as intelligent.

    Belief #3 is interesting. I thought only the loony corner of the libertarian movement still stuck to objective semantics. Putnam convincingly dismantled objectivism in the '70s by showing that it is a contradictory system incapable of providing a theory of meaning.

    Embodied semantics are all the rage these days. Perhaps that won't last forever, but I don't see any other solution that makes sense in light of what we know about biology and computers.

    Belief #4 is true of too many of today's technocrats. However, I think the power of Darwinian thinking is very strong and very real. What I reject are the conclusions many amateur Darwinists draw from it. Cyberselfish, for example, takes the bionomics people to task for their poor economic thinking. Darwinism is increasingly used to justify the worst aspects of capitalism, ignoring the obvious Darwinian analysis of government. The great failure of Darwinian social thinkers is to view their domain in isolation. Just as no responsible ecologist describes the lifecycle of a single animal without considering the other organisms that share its environment, it is stupid to try to understand economics without considering government, society, culture and environment.

    A good Darwinian analysis would show that regulatory action and environmental effects form a part of the economic ecosystem. It's never just survival of the fittest organism, even in nature.

    Belief #5 is an admission that faster CPUs won't solve any problems. I'm glad to hear it. Commercial computing is still rooted in the 1970s and we need to move on.

    And that brings us to belief #6. Millennialist thinking. Things have changed before, and they'll change in the future. But I have seen nothing to lead me to believe that nanotechnology or any other kind of technology will solve any fundamental human problems. Computers have done a great deal for us, but have they really brought any kind of millennial change? No, not really. We had machines before to help do things, and we have machines now. Sometimes they still don't work. We still can't take the human out of the loop. We don't appear to be on the verge of uncovering immortality; we can't transfer our brains into computers and may never be able to. Computers can do a lot for us, including making us more intelligent and more adaptable, but this is the same trend that's been in place for millennia.

    The year 2000 has come and will soon be gone. And nothing big happened. It's nice to see the technical class getting back to reality.
  • by JimMcCusker ( 27543 ) on Thursday September 28, 2000 @04:49AM (#748037) Homepage Journal
    I have this doubt that human beings are the optimum form of life, and if we are, then that's pretty sad.
    Of course, that's assuming that there is an optimum form of life. Most people look at evolution and natural selection as having some purpose or goal. They don't. It's simply organisms reacting with the environment, trying to continue to exist. There is no optimum form of life, just like there is no external meaning to it. Meaning and goodness are applied to life by intelligence, trying to make some sense of it. There is nothing intrinsically better about being a human, or being a cyborg. If a human thinks they are better off being a human, then that's fine. If a human thinks they're better off being a cyborg (or a pure computer), then that's fine. Just don't force everyone else into it! This singularity may be able to think faster, or be more creative, or whatever. It doesn't matter. Progress is defined by ourselves. It's not needed, and it isn't even really very important. It's just a filter through which we see change. Yes, it's a good way to measure movement towards a goal, but always remember that these goals are never external, never fated, always created by ourselves.
  • If the guy needs an elaborate disclaimer just to say that he doesn't empathize with computers, and then states that his position is going to be "unpopular and resented by his professional and social environment"... all I can say is that his social environment must be a really warped corner of the world, where common sense is singularly scarce.

    Seriously, most of what he says makes so much sense that it's scary. There have been documented cases of end-of-the-world theorists since at least 2000 years ago, and people still believe it when new ones come along with new and shiny "singularity" scenarios...

  • > I agree, but we've never had the chance to alter the way we think before.

    I assume you didn't go to university then. Altering the way I thought was what I spent most of my time doing ;)

    > It's an entirely new situation in our history.

    right...

    best wishes,
    Mike.
  • As I see it, Lanier doesn't say that transforming ourselves into trans-humans by 2020 is immoral. Rather, he says that it's unlikely to happen, because just throwing more MIPS at these technical problems is not going to solve them.
  • Luddite.

    ... how I've missed you.

    You heard me. You're just a slightly more up-to-date version of the folks who speak out against ATM machines, electro-magnetic radiation from home appliances, and the Evil Internet, yet accept technologies like refrigeration and internal-combustion engines because we've had that technology "forever".

    Eh? Technology has been in most respects a positive driving force in our society for the last five hundred years or so. And whilst it has had a lot of negative side effects, these are due to us using technology excessively or before we understood the consequences. Technology in itself has the potential to offer us exciting new vistas, and is the only way we'll ever survive beyond the lifespan of the Sun.

    Why is it so common to predict the end of the world (what do you think eschatology is about?) based on a pretty graph that shows steadily increasing technology?

    I never said the world was going to end.

    How do you define this "Humanity" that you propose we are abandoning in favor of technology? How, exactly, are we going to go about abandoning it?

    Read the post replying to the post above yours. Humanity is defined by what we can and can't do, and these parameters are what shape our conscious and unconscious minds in the way we call "humanity". By altering these parameters we alter how we think, and thus become something other than human.

    What humanity is doesn't change, and hasn't changed. We are still the same race, with the same problems, the same hopes, and the same daily social trials as humanity had one thousand years ago.

    I agree, but we've never had the chance to alter the way we think before. It's an entirely new situation in our history.

    I mean, what do you expect?

    That we continue to advance without trying to throw away that which we are and that which has gotten us this far? Not too much to ask IMHO.

  • Extropy is my favorite joke ideology. People who coin the name of their ideology because it "sounds good, kinda like the opposite of entropy, but not really" instead of its literal meaning are rather hard to take seriously.

    ex- : out of; away from.
    -tropy : indicates condition of turning.

    So, "Away-turning", not as the self-named extropians [extropy.org] claim, "the extent of a system's intelligence, information, order, vitality, and capacity for improvement".

    Steven E. Ehrbar
  • When the revolution comes - who will be the first against the wall?

    As horrid as it sounds, what future do you think we are really working towards?

    Star Trek? I'd say idealistically yes. People make money being merchants and such in that future, but I've never seen anyone ever pay for something; there's no indication that a captain or an ensign earns a salary or goes to the bank. It's communistic at best, but probably unrealistic, as we can't even get along for five minutes with ourselves - let alone our neighbors.

    Car Wars? Well, road rage is high enough - and there was that stretch in the '80s when guns were being fired on the California highway system.

    Blade Runner/Cyberpunk? Dark dismal and cold - run by mega corps... sounds more realistic as the MPAA and other big companies are (at least) beginning to run the government.

    The Running Man? Heck, we'll watch Survivor (a piss-poor example of surviving - let them kill and eat whatever they want, including each other. Don't tell them how long they'll be stuck there. Lose them ONLY when they are dead, eaten, or in a medical emergency. Last man standing, sort of thing - that is survival... they are "living uncomfortably" - and yes, I'd put my money where my mouth is on that one). Anyways... back to the Running Man... We had the short-lived American Gladiators, Roller Jam is getting big, and professional wrestling (right up there with extreme violence) is huge now. America eats up violence... this is becoming a viable form of entertainment.

    Do you want to talk about patent laws or the MPAA or the DMCA and their effect of outlawing technology? The recent suck.com article about geeks not understanding politics and law? We're screwed.

    We still have race riots (Rodney King), we have gone to war once over oil (Gulf War) - we can't get along. As Midnight Oil sang "The rich get richer and the poor get the picture."

    Now, does anyone here want to propose that nanite or cybernetic technology is going to be a good thing? Folks - it's only a good thing if we ourselves, the whole human race, are good. We're not. Period. At best we'll ruin ourselves; at worst, we kill everybody.

    The revolution is here, folks... On a local scale it looks small; on a global scale, we're there - and we're all against the wall.
  • by flatpack ( 212454 ) on Thursday September 28, 2000 @04:01AM (#748069)

    As enamoured as we all are with the state and progress of modern computing, I think we need to take a step back and really examine the underpinnings of our beliefs about computers and the very paradigm of computing that has pervaded so much of our cultural and scientific thinking over the last few decades. This article raises some excellent points about things that many of us hold to be self-evident, even when we don't consciously think about them in these terms, and these beliefs affect the way we think and act in everyday life.

    The "eschatological cataclysm" that Lanier talks about occurring in the near future is truly a bleak picture for mankind as a whole, and yet it is one that I see talked about as if it were inevitable and a good thing. Just because we like computers are we really ready to throw away our humanity for a set of perceived benefits? We are what we are, and any gross changes in the state of our existance cannot help but fundamentally alter who we, as a species, are. There is no "intangible transition" between what we are and what we will be.

    I personally am rather fond of the current modality of human existence, and I see anyone that believes in or even worse desires such a dramatic and unnecessary change as being at best a foolish idealist and at worst someone unable to accept the reality of their own existence. What other reasons can there be for what amounts to turning yourself into some(one|thing) else? No combination of bio/nano/computing technology is ever going to be me, for the simple reason that it will have different boundaries from me, and our idea of self is shaped by those boundaries.

    It's a scary prospect for the future, one in which people will willingly rush wholesale into abandoning their humanity for the promise of an artificial dream. I truly hope that I won't be around to see the fragmentation and then the end of humanity's promise.

  • ...that this guy thinks we're already computers and can easily scan 4-point text on a 21-inch monitor, and still comprehend what we're reading...
  • by British ( 51765 ) <british1500@gmail.com> on Thursday September 28, 2000 @04:05AM (#748073) Homepage Journal
    OOh, -1 penalty for not mentioning "The Well" or "Abbie Hoffman". C'mon, there's about 8 words here I have never seen before. Does ANYBODY understand this? If so, please post a translation.

    Stefan Jones writes: "VR pioneer Jaron Lanier has written "Half a Manifesto" -- a long and considered rant taking on notions favored by Extropians, Singularity fans and others -- on the exclusive salon for long-hairs, 'Edge.' Lanier ....
  • No kidding. You should see my cats get all hopped up on catnip. I'm waiting for them to start raves in my basement and carry pacifiers around with them.
  • I think the author has a point. Computers are getting exponentially faster but are not improving (exponentially) in exception handling. For instance, one would like to link together all the elements everyone knows about you to build every single web page you view. That way the page wouldn't waste your time with trivialities you aren't interested in, unless they were necessary to the page's format.

    But his thesis seems to be: programming is very hard, getting harder, and evolves slowly. Even faster and faster computers (assuming they can be built) can't do the job well enough. I couldn't agree more.

    Still, whatever campy sci-fi future we're heading to, I'll bet it will eerily resemble the present. After all, the present is the future of the past.

    And people are sh*t's way of making more sh*t.
    And we're talking about a planet of help desks.

    -Ben
  • It amazes me how easily posts on complex topics diverge into rants on style, ideology, and media rather than debating and expounding the basic ideas in the original post.

    Does anyone think that Lanier and the luminaries on the Edge may have overlooked the innate factor that it is simpler to destroy than to create? While nanotech is not at a self-replicating, harvesting level, if that is the ultimate evolution, not molecular assembly, is that not the real threat Bill Joy asserts?

    If a small robot of some variation could be imparted a simple capability to create itself from sugars and other simple biological molecules, and rapidly reproduce, why is concern about its "perception" or "experience" an issue? Like a plague, this scenario would be devastating and quick, and the vector would not know of its actions any more than a viral strain does.

    There is a middle ground between Lanier's socialist optimism and elitism, and Bill Joy's doom-inducing Moore's Law extrapolation. The next generation, computer viruses becoming physical, is the hybrid.

    Lanier talks of brittle technology. It is that brittleness that engenders the widespread, rapid failures induced by viruses. If a small mech-virus, for lack of a more compact moniker, were crafted that didn't replicate, but would propagate its effects along a power grid by diverting energy into itself as quickly as possible to power the capability to continue doing that, would that not have the potential for extreme effects on our society? Possibly. But expand that into any sort of hybrid where the initial vector may be mechanical and the final is digital, and now many of our barriers drop.

    The ideas are not fully developed, but driving to one intellectual extreme or the other moves us in a direction akin to the total cybernetic movement Lanier refutes. We are not exploring the full causal spectrum of the ideas we present.

    Especially on Slashdot, which was founded on ideas and discussions. How odd.

  • by sethg ( 15187 ) on Thursday September 28, 2000 @05:12AM (#748081) Homepage
    Back in the Victorian era, a number of highly respected physicians said that a woman who went to college would become infertile, because her brain would become more developed at the expense of her reproductive organs. Why did otherwise intelligent people fall for such bogosity? Because it used the law of conservation of energy -- one of the, ahem, hot new scientific theories of the age -- as a metaphor for what was going on in a woman's body. These doctors treated the metaphor as sufficient proof of the theory, instead of looking for hard evidence that would demonstrate its truth or falsity.

    I think a lot of flag-wavers for memetics, evolutionary psychology, etc. have fallen into the same trap. It's easy to construct a Just So Story to link your pet idea with the latest scientific trend, and then you can plaster your story all over the Internet and accuse doubters of being trapped by outmoded ways of thinking. It's much harder to do your homework, collect evidence that supports your theory or refutes alternatives, and then convince skeptical and educated peers.

  • You hit it right on the head: what started 500 years ago is only shifting gears, right into our collective consciousness.

    There are more people on the planet today than EVER before, and, correspondingly, more people live in abject misery than ever before. The underlying racism of this--these poor unfortunates just don't know what's right for them; let US decide what is best--is not even hidden from public airing by the West's so-called "intelligentsia". They click their tongues at the backward nature of these people: if only THEY had computers, cell phones, ad nauseam, they would be better off, they would be like US. Rarely over the last 500 years has the question been raised that maybe THEY know what's best for them; no, you have to abide by the Kissinger Rationale for command and control.

    This elitism is now pervasive in this realm, where "progressivists" justify every little "advance" as the means to godhood and brand every soul that resists their proclamations a Luddite. The fact that many of us have lived without these modern conveniences--and continue to do so in some "backward" places--is merely a statistical anomaly that will quickly be corrected once these toys are ubiquitous.

    Humans are only more comfortable today than they were 200 years ago, not happier. Just look at all this "extreme sports" crap to see how empty modern life is, to see how desperate people are--even in this, history's most powerful, most advanced hegemon--for any sensation that will convince them that they are truly alive and not some sleepwalking zombie.

    I've lost faith in technology's ability to lift us up. If it cannot make us happier, perhaps we should look elsewhere for the Next Revolution.

  • I initially thought Mondo 2000 was cool, but upon further reading I found it to be a confusing mass of BS. If some woman wore a toaster on her head, they'd write an article calling it the latest fashion. And I don't think an issue went by without mentioning Timothy Leary or The Well (which I am sick to death of hearing about) at least once. I haven't seen a new issue in years, and I'm glad of it.
  • Oh yeah? And going from horses to cars has made a "new world" just how? It's the same old world with the same old deep problems, except that we can move things (and people) around considerably faster now. Same with computers: we now have them all over the place, and sure, they've changed a lot about how we spend our hours (most of us /.ers should know, being paid to spend our days in front of computer screens). Yet I see no evidence that the introduction of computers deserves anything like the name of a "new world". It's easy to cry "revolution" when you're in the middle of it.

    I have nothing against technology; in fact, I love to play with it. But I'm deeply convinced that the major bottlenecks in human society and evolution are *not* technical, which means that technology isn't going to solve them, nor (I hope) make them substantially worse. Which is fine, really, as that's not its job. Computers are all about easily doing things that we could already do, but which were (hugely) prohibitive in cost and time. What gets branded as "innovation" in the computer world (and I don't only mean MS here) is just ridiculously simple compared to the real human problems that each of us has to grapple with, and I see no evidence at all of that changing.

  • Sounds like Jaron saw the attention that Bill Joy got with his article last spring predicting techno-apocalypse, and wanted some for himself. Forecasting the next depression/World War/Judgement Day is often a good way to get some subset of people to pay attention to you. Funny how with all those predictions of doom over the centuries, we're still here.

    Imitation may be a form of flattery, but it's not terribly creative.

  • by peter303 ( 12292 ) on Thursday September 28, 2000 @04:07AM (#748093)
    Why do people project more into technological innovation than may be there?
    (1) $$$$$ Money attracts hype. People call themselves prophets. Not all that different than new age religion, health fanatics, etc.
    (2) Generational rebelliousness: young guys understand tech and old farts don't. Nah-nah-nah-nah-nah.
    (3) Religious instinct: people search for the ultimate beyond themselves. The old religions are dead. Technology has the answer.

  • I have a hard time taking this guy too seriously. I'm sure he's bright and all, and may even have some interesting things to say, but he pretty much turned me off in the early '90s when he was going on about how the net would (could?) bring about this "utopian" age. Never trust anyone who says something new is either
    a) the greatest thing on earth, or conversely
    b) the greatest blight on the earth
    It usually means they're either very, very naive or they're selling something.

    Natham

  • by interiot ( 50685 ) on Thursday September 28, 2000 @04:09AM (#748096) Homepage
    Our boundaries are expanding all the time. Ability to go really fast. Ability to fly. Ability to see and measure distant stars. Ability to leave the earth. Ability to sort through an entire library of information in a second. Ability to send a message at light speed to the other side of the world. Ability to karma whore from thousands of people all over the globe.

    Okay, it's a large leap from "our boundaries are constantly expanding" to "it's okay if our only boundaries are memory and CPU usage". But, unless you give a strong argument as to why we should force our boundaries to remain static, I don't know why it would be bad to change them arbitrarily.

  • fewer restrictions? You can call someone on the other side of the world, why not just appear there?
  • by Apuleius ( 6901 ) on Thursday September 28, 2000 @04:14AM (#748101) Journal
    Extropians: Randists, only [extropy.org]
    more so.

    Singularity: Explained here. [caltech.edu]

    Read all of that and digest. It's fun.
  • by LionKimbro ( 200000 ) on Thursday September 28, 2000 @06:04AM (#748106) Homepage

    I am elated to finally find a paper that so clearly elucidates my position and observations. I am also saddened to realise that people who would benefit the most from it [uni-osnabrueck.de] are also the least likely to read it and understand it.

    [edge.org]
    3) That subjective experience either doesn't exist, or is unimportant because it is some sort of ambient or peripheral effect.

    Item 3 in particular hit home; I have had the exact same conversation and thought process ("Perhaps the person I'm talking with doesn't have a subjective experience?"), for the last five years. The last time I had it was a few days ago while talking with a fellow engineer here at LithTech [lith.com].

    Subjective experience is not an easy problem; in fact, it is a very hard problem [zynet.co.uk], but there is something in too many scientists' minds that makes them want to treat the subject as superstition, and to treat those who find subjective experience difficult to fit within a computational framework as religious or spiritual zealots. Lanier has correctly identified the currently popular model of the world, "Cybernetic Totalism."

    By the way: not understanding his paper is not something to be proud of. Ignorance about *anything* is not something to be proud of. Use a dictionary [webster.com] or a search engine [google.org], whatever it takes, and understand these words.

  • Exactly. It is a really irritating cliche in science fiction that whenever a character becomes "more than human" (whatever that means), that character only gets to enjoy the benefits of that state for a brief while before descending into madness and death.
    Ironheart didn't. Neither did Obi-wan or Dave Bowman. Hell, even Wesley Crusher didn't, more's the pity.
  • I don't think Martians would necessarily be able to distinguish a Macintosh from a space heater.

    I knew my parents were Martians! I knew it!!!!

  • Blah, blah, blah. Whatever. When the rest of us become cybernetic hyperintelligent machine-men with laser beams and superhuman powers, I don't want to hear you complaining.

    Bruce

  • Our boundaries are expanding all the time. Ability to go really fast. Ability to fly. Ability to see and measure distant stars. Ability to leave the earth. Ability to sort through an entire library of information in a second. Ability to send a message at light speed to the other side of the world. Ability to karma whore from thousands of people all over the globe.

    These are all external boundaries - things we can do or things we can manipulate. None of them are boundaries in the fundamental structure of our mind, which is a boundary of a wholly new kind. People think in pretty much the same way today as they did when the Ice Age finished, and yet we're coming up to a point where it may be possible to alter this constant.

    But, unless you give a strong argument as to why we should force our boundaries to remain static, I don't know why it would be bad to change them arbitrarily.

    But then we won't be human any more. We are defined by both our capabilities and our limitations, and any attempt at altering the mechanisms of our consciousness is invariably going to alter the very nature of who we are and how we think.

    Will this be for the better? I doubt it, I think that it will merely be different. But the point is, it won't be human in a very real way. And what's the point of improving yourself if you end up alone and apart from all you knew and believed in?

"It takes all sorts of in & out-door schooling to get adapted to my kind of fooling" - R. Frost

Working...