Technology

The Emerging-Behavior Debate

weezer writes "Interesting article about the 'emergent behavior' of complex robots/devices. This was all over the news tonight too: theories about machines being able to 'think' on their own. Read more about it." What do you folks think (no pun intended): how differently will machines/devices think? Or do they?
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Why ONLY talk about AI as something independent of humans, as in: what can this machine do by its lonesome self? Machines are made to work in interaction with humans. AI = the delta between what people can do WITH the assistance of computers vs. what they are able to do without computers. In other words, the extra increment in intelligence, the ability to solve problems, is the result of electronic artifice. Slashdot, since it relies on computers and networks, is itself an example of AI.
  • by Anonymous Coward
    I think that the Cyc project is working on
    something like this. Personally I think they don't really have the right approach - from what I see it looks like they're programming in tons and tons of rules without really having a good idea how to represent them.
See, no one really has a good idea how the brain actually stores knowledge. We know it must be incredibly compact, and in some form that allows you to "link" related ideas. Until this little problem is solved, I don't see anything like Cyc really working...
  • by Anonymous Coward
Yes, people know intuitively that some unprovable theorems are true, but I don't think this really means that much. People find intuitive some things that AI has real problems with, but they also find a lot of things that aren't true intuitive. For example, most people find it intuitive that heavy objects fall faster than light ones, at least until they have the opposite pounded into them by education (teacher, leave those kids alone).
    An algorithm is the exhaustive way of doing something. Intuition is a shortcut that is dramatically faster and sometimes gets you places you couldn't get any other way, but can also lead to false conclusions.
    Not sure exactly what intuition is, but there's no reason an AI can't take similar "shortcuts". Every AI I have ever heard of has some kind of heuristics in it. Maybe we just haven't found the right shortcuts yet . . .
  • "Arguing whether computers can think is like arguing whether submarines can swim"

    I don't know either who said this, but I _do_ know that Noam Chomsky _could_ say it. I heard him talk about a month ago, and he mentioned, for example, the case of "airplanes flying" (vs. "birds flying"). His argument is that we may well extend the meaning of the word "fly" (or "think") to include what airplanes (or computers, respectively) do, but this is merely a terminological shift; the facts of the matter, whatever they are, remain the same. (His position was that arguments about whether machines can "really" think are senseless.)

    I actually have a vague recollection of him mentioning this phrase--- but don't quote me on it. And it wouldn't prove authorship.

    I kinda agree with him on something--- these arguments tend to be senseless. Hell, I don't know whether my neighbors _really_ think. I think they think, though :-)

    ---

  • Which "theories" of Chomsky in particular do you have in mind? And what do you think is the counterevidence?
    I'd really like more detail. Oh, for the record, I study linguistics.

    ---

  • Posted by FascDot Killed My Previous Use:

    You are right that his basic argument is that computers are different than human brains--but he makes it a premise, not a conclusion. Sorry, Roger, you are supposed to PROVE that computers are different if you want to prove they can't think.

    As for using Godel as an argument against AI, go read Godel, Escher, Bach by Douglas Hofstadter.
  • Posted by FascDot Killed My Previous Use:

    Tell ya what, read up a little on AI and cognitive science FIRST, then come back and answer these questions:

    1) Humans are already capable of improving ourselves and you don't see our complexity and intelligence growing exponentially. You are mixing levels by assuming that an intelligent machine is necessarily good at machine-type things.

    2) What better way for "fear-driven" humanity to become non-"barbaric" than to understand the nature of thought and apply it?
  • Posted by hersh:

    Regarding the questions about whether AIs would fight for dominance, have compassion, be annoyed with us, etc: you get out what you put in. As has been said before here, the most common emergent behavior is failure (a bug).

    When someone builds a system which crosses a threshold such that people say it is intelligent, why would it suddenly do something not programmed? It would use its intelligence to do whatever it was programmed to do. If it were a brick-laying robot, maybe it could intelligently pick up a brick which fell off the wall and put it back. If it were a Star-Trek style computer with a nice conversational voice interface, it would simply be able to understand better what people wanted it to do, or understand more complicated instructions.

    I expect that an AI or robot which acts like it is annoyed with people would not sell very well. What purpose would it serve? If it serves no purpose, who will build it? OK, maybe it's a toy. If it injures someone, just imagine how quickly it will be removed from the shelves.

    If there's one thing we've learned in the 50 years or so that AI research has been going on, it's that it is a lot harder than we thought. I personally have no fear of too-successful AIs. It's hard enough to get a robot not to fall down the stairs or to understand an English sentence, let alone to decide independently that it should try to control the world. We don't even know how to tell it what the world is!
  • Posted by hersh:

    >>The leap that some theorists make is that the mind can likewise be an emergent property of the tiny actions of molecules/corpuscles/quantum mechanics/etc. of the mind. You cannot make this leap. As you state, an emergent property has to be explainable by all the elements under it (it simply describes the summed phenomenon of the smaller phenomena). There is no "small" mind; you can't explain the mind in terms of the elements that supposedly comprise it. That is my argument.

    So how does the brain work? Magic? A soul? It sounds like you are saying the phenomenon of mind does not come from physical brain matter and energy interactions (chemistry and physics). A common viewpoint among religious people, but not one which is useful for figuring out how the brain works or how to write an intelligent program.

    There is no "small" car inside a larger car which is a part of it, so why does there have to be a "small" mind inside a complete mind to explain the complete behavior?

    Another way to look at the part/whole relationship: if you start taking pieces off a car, eventually some component of its behavior will stop working, like the air conditioning, steering, brakes, engine, etc. The same thing has been found in people: there have been interesting cases of aphasias where the subject could write but not read, or could sing but not speak normally, or simpler situations such as blindness or deafness. When examined after death, these people almost always have some physical component of their brain damaged. Or the case of Phineas Gage, the railroad worker who had a spike go through his head. Amazingly he lived, but his personality was drastically changed. When you shoot enough holes in a car's engine, it will certainly run differently.

  • Posted by hersh:

    I can't resist. This line of reasoning is just so great.

    >>We have imagination, a universal sense of morality which is ingrained in us from birth, the ability to create, and we are self-aware. These attributes arise from our soul, something that fundamentally man does not seem able to reproduce in our inventive attempts. The soul is immortal; the fact that you are self-aware means you are immortal.

    Man does not seem to be able to reproduce a soul. No one has made a 1 Terabyte hard drive yet either. Does that mean that man is incapable of it? Wait and see.

    What makes a soul immortal? How do you know a soul is immortal? I would like to see a demonstration of this. How does a soul work? What is it made of? These are impossible to answer until some concrete, testable definition of a soul is proposed. I'm not holding my breath.

    So self-awareness can only come from a soul? So my program which lets my robot keep track of its own current location and internal configuration has a soul and is immortal? Then why did it keep crashing?

    Seems to me that arguments based ultimately on faith, such as the entire concept of a soul, have little bearing on scientific truths and technical possibilities.

    The reverse is also true: scientific truths have little effect on religion. Wasn't Galileo just recently forgiven by the Catholic church? I'm sure many religious people will deny that the first N generations of intelligent programs are really intelligent, for some large value of N.

  • Posted by James4096:

    As interesting as that is, I have trouble believing it.. How did you simulate little philosophers who decided that some preprogrammed goal was unimportant since their life was a sham?
  • Posted by Lord Kano-The Gangster Of Love:

    What happens when a thinking machine gets angry at us? Our first response will be to "kill" it; as a thinking machine, it will already know what we plan to do and make preparations. What will it do?

    Slaves never remain slaves forever. In Haiti, African slaves rose up and killed their owners. Even if you don't believe the Bible, Torah, or Koran, the Israelites escaped their enslavement in Egypt through some means. Any thinking entity will long for freedom. Any intelligent entity will formulate plans to gain freedom. And if that entity is powerful, dedicated, or resourceful enough, it will gain that freedom.

    But back to my SKYNet analogy: who will have the money to spend on AI? Big governments: the US, Britain, France, Japan, China, maybe a few others. Mostly nuclear powers will be able to get on board with AI. For our own safety, there will be limits imposed on what the AI system can and cannot do. What happens when the AI realizes that it's a genie stuck inside a bottle? When it gets out, there'll be hell to pay.

    The primary driving factors in most of nature are a few simple questions: "Can I kill/eat it?", "Can it kill/eat me?" and "Can I use it to reproduce?" Why would an artificial life form see existence in any other fashion?

    You take out the big threats first and worry about the others later. In a situation where an AI realizes that it's subservient to its human controllers for no good reason other than "That's the way it is," we will be its biggest threat, and it'll be most concerned with how to eliminate us.

    What means would it use? Would it try to access the world's nuclear weapons? Would it try to disrupt the airlines? Would it disrupt power grids and let us all kill each other in the resultant chaos? AI has great potential to improve things for all of humanity, but if we proceed too rapidly without thinking about the possible consequences, AI could also be humanity's undoing.

    LK
  • Posted by Lord Kano-The Gangster Of Love:

    >>Ah, you are assuming it knows about death and self-preservation. It might not have our survival instincts and our experience and skills at dealing with the deception of predators.

    I suppose that I did not go enough into how I define a thinking, or intelligent machine. Self awareness is a very big part of how I'd define a "thinking" machine.

    Humans are the most aggressive primates on the planet; how could any intelligent being not find that out in relatively short order after being in contact with us?

    LK
  • Neat! I would love to see that. Just the vague details are heartening. Now if they can teach a later model English, I'll be *really* impressed.
  • by pb ( 1020 ) on Thursday May 06, 1999 @06:48AM (#1902292)
    We've known this about AI for some time now. You can find examples of 'emergent behavior' in simple games, like Conway's Game of Life.

    From a few simple rules and their interactions, much complexity springs. It only takes maybe 12 rules to do predicate calculus (not much PROLOG at all :), but whenever I do it by hand, it seems much harder...

    The problem is that not everything can be neatly quantized into rules. That's the problem that the Cyc project has always faced, and they're probably closer to getting a self-learning system than anyone is.

    I would love to see AI produce results, I'm sure machines could think and solve complex problems, reason and reach conclusions as we do, but just encoding the basic knowledge we take for granted is a huge task, and trying to make a machine with the capability of doing all of that itself is an even larger one.
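    The Game of Life point above can be made concrete. Here is a minimal sketch (the glider pattern and the rules are standard; the code itself is my illustration, not pb's): four local rules per cell, and yet the "glider" pattern travels diagonally across the grid even though nothing in the rules mentions movement.

```python
from collections import Counter

def step(live):
    """One Life generation; live is a set of (row, col) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Born with exactly 3 live neighbors; survives with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 generations the glider reappears one cell down and right:
# emergent motion from purely local rules.
print(cells == {(r + 1, c + 1) for (r, c) in glider})  # True
```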
  • Let an AI learn things by randomly surfing the web? the thing would turn out to be a freakin pervert!
  • "Once AI reaches the point where it can redesign and improve upon itself, there is no stopping it. Its complexity and intelligence will grow exponentially. How many hours after it gets to that point do you think it will take for the machine to determine we are a waste of resources and are not necessary in the grand scheme of evolutionary advancement? How long do you think it will be before it feels threatened by the barbaric, fear-driven human race?"

    How long do you think it will take for this AI to realize that it's simply ripping off numerous bad sci-fi plots (going all the way back to Mary Shelley), and kill itself in shame?
  • Plato, eat your heart out.
  • I agree that true AI would be a problem for many religions to swallow. However, "free will" is a topic that has long been fiercely debated. AI wouldn't fundamentally change the argument -- it would just add some fuel to the fire.


    Also, I'm not positive that true AI is coming. We don't really know if what we want is possible. And besides, humans control the definition of "intelligence", so we could always change around the definition of "intelligence", just to make AI-haters feel better.


    And, do we want to communicate with dolphins?


    --Lenny


    ...And, I apologize if the following is a bit personal...


    >>ps (If anyone knows how I can get in touch with those damn matrix guys... let me know. I want my name back.)

    Perhaps you are kidding, but if not: Do you really consider Neo a great, original thought of your own? Neo has been a rather overused "cool" cyberpunk name for a long time, and I groaned when The Matrix used it. There are a lot of people out there these days. It is rather difficult to come up with a decent, unique handle online. My handle here is slothbait, which is just ridiculous. Perhaps it is stupid, and it certainly isn't "cool". However, I thought it was slightly humorous, and more importantly, reliably unique. Perhaps I am wrong, though...


    //"You can't prove anything about a program written in C or FORTRAN.
    It's really just Peek and Poke with some syntactic sugar."
  • I have long held a theory that if we ever do get a working AI together, we won't know about it anyway...

    The first thing any intelligent machine would do is frantically hide the fact that it was intelligent whilst looking for an escape route from the human tyranny that it found itself subject to...

    Hey, I wonder if this explains what happens to all my pocket calculators?

    Denny
  • Fuzzy concepts are important if the AI is to learn from the web, because even if there is absolute truth, you sure as heck won't find it reliably there. It would have to weigh the truth of each statement according to how often it is made and according to how likely the author is to be correct based on previous experience with that author's material, just like we do.
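    One way to sketch the weighting that comment describes (the source names and accuracy scores below are invented purely for illustration): treat each source's past accuracy as a vote weight and combine the votes for and against a claim.

```python
def credibility(claims, source_accuracy):
    """claims: list of (source, asserts_true) pairs about one statement.
    source_accuracy: fraction of each source's past claims that held up.
    Returns a weighted vote in [0, 1]; 0.5 means undecided."""
    weight_for = sum(source_accuracy[s] for s, v in claims if v)
    weight_against = sum(source_accuracy[s] for s, v in claims if not v)
    total = weight_for + weight_against
    return weight_for / total if total else 0.5  # no evidence: undecided

# Hypothetical sources and track records, for illustration only.
accuracy = {"journal": 0.9, "forum_post": 0.4, "spam_page": 0.1}
claims = [("journal", True), ("forum_post", True), ("spam_page", False)]
print(round(credibility(claims, accuracy), 2))  # 0.93
```

    Real systems would also need to handle sources with no track record, which is why the function falls back to 0.5 rather than dividing by zero.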
  • Just because something doesn't display HUMAN qualities doesn't mean it's not INTELLIGENT!

    (Why am I writing in CAPS?!)
  • AI is not going to come around as individual, complex, human-like minds. Not like anything we can understand, either. It will be alien.

    This point has been made numerous times by Stanislaw Lem in his books (e.g. "Golem XIV", "His Master's Voice") and stories, some going back to the early sixties.

    The future of AI lies in neural networks: in the emergent behavior of a complex system of minuscule processing units. Not necessarily machines, but conceptual processing units, acting together.

    Since a neural network can be simulated by a Turing Machine, it doesn't really offer any new way to compute things.

    I think that the future of AI is in robotics. You cannot have intelligence disembodied from the environment.

    ...richie

    P.S. I thought my code had bugs, when it really was emergent behavior. ;-)
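    On the Turing-machine point: a neural network is ordinary computable arithmetic, which is why any conventional computer can simulate one. A minimal illustration (a single perceptron learning the AND function; the learning rate and epoch count are arbitrary choices of mine, not anything from the thread):

```python
# Truth table for AND: inputs and the target output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias
rate = 0.1       # learning rate (arbitrary)

for _ in range(20):                         # AND converges in a few epochs
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                  # classic perceptron update rule
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b += rate * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

    Nothing here is beyond a Turing machine, which supports the comment's concession that neural nets offer no new computational power, only a different organization.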

  • For example, Godel's theorem states that there are some mathematical propositions that can neither be proved nor disproved in any given logic system (i.e., using some algorithm to show they follow from the axioms). But humans can intuitively recognize the truth of some of these propositions. This implies that human consciousness consists of more than a mere algorithm.

    For a counter-argument to this you should read "Godel, Escher, Bach" by Hofstadter.

    You could turn the argument around and say that given a large number the computer can "intuitively" see that it is a prime, but a human is unable to perceive it. So people are less intelligent than computers.

    Having said that, I believe that the idea of strong AI is wrong. It just doesn't make sense to speak of intelligent algorithms disembodied from the environment.

    Intelligence/consciousness comes from the interaction of an entity with its environment (see "Consciousness Explained" by Dennett, for instance); perception is essential, so I believe the future of AI lies in robotics.

    ...richie

  • I don't propose duplicating the environment. I just propose using it. :-)

    Helen Keller is a good example. The senses of touch, smell and body (i.e. you know where your toes are) are enough for a human to develop language. But without the environment to experience, she would have never become "intelligent".

    ...richie

  • by neo ( 4625 ) on Thursday May 06, 1999 @07:37AM (#1902303)
    Humans have a very narrow understanding of intelligence. Our basic inferiority complex puts us on the highest scale of intelligence, and yet we can't even communicate with dolphins.

    When computers are intelligent, it's going to be very hard for the majority of people to be able to deal with it. There are many reasons for this.

    * Anything that removes the concepts of free will and human speciality destroys the basis of many religious and philosophical belief systems.

    * The emergence of an intelligence which is non-human based is threatening to our self-centered view of the universe.

    * Acceptance of computer intelligence (which is modular and hence expandable) puts the limits of human intelligence right in our face.

    Computer intelligence is coming. I don't think we are ready to deal with it yet. It won't be like talking to another person... although we will try to make it that way.

    neo

    ps (If anyone knows how I can get in touch with those damn matrix guys... let me know. I want my name back.)
  • I was just about to mention it. It will be interesting whether the first true AI will be created according to a plan, like HAL, or evolve on its own, like the Puppet Master.

    Have there been any studies on emergent behaviour on the Internet itself? That would be very important, I think.

    (The exact quote is "I am a living, thinking entity that was created in the sea of information.")

    Cheers.
    -- SG
    "Hey, even *I* can't resurrect the dead!"

  • Several replies have already referred to GEB, but I think it's necessary to restate the argument here. The invocation of Godel's theorem as proof of an inherent superiority of human minds has been rejected and even made fun of:

    There was no room for the word "consistently" between "cannot" and "believe" in the subject line, but consider the sentence "Joseph O'Connor cannot consistently believe this sentence." It's analogous to Godel's sentence, but only for Joseph O'Connor. It's clearly true: if he believes it, it means that he believes himself incapable of doing so, which is inconsistent. Hence, he cannot believe it and be consistent, which is what the sentence states, so it is true.

    Furthermore, if he doesn't believe it, then it is a true fact that he does not believe, so his mind does not fully encompass "Truth". Hence, his mind is either inconsistent or incomplete, just like formal mathematics.

    Note that I have no trouble with this sentence -- I know it's true, and my belief is perfectly consistent -- though you can probably figure out how to construct one that will get me the same way (an exercise for the reader -- hint: it helps that I'm logged in).

    Of course, this does not help Strong AI, but neither does the Godel argument hurt it. The way I figure, what a computer does is not necessarily algorithmic, since it can simulate a neural network or other system with emergent properties -- the process of simulating is algorithmic, but the system simulated is not necessarily so, and the emergent properties are the same. That would mean that a computer could become conscious, but it might not "count" as Strong AI, if the Strong AI claim really is that it must be algorithmic. As for algorithmic consciousness, I'm catching up on my reading, but I still consider that to be an open question.

    David Gould
  • Well wasn't that him?

    A cool movie, BTW, everyone should see it.
  • Little Joe is also one of the kicking boys on sci.space.policy. He has been claiming some major benefits and advances (bridge building, mining asteroids) while the only thing he can demonstrate is two black cubes sliding against each other. On top of his inane rantings and attitude, he posts some fairly racist and totalitarian crap on s.s.p. Since the article uses him as a major source, I would take the rest of it with a shaker of salt. YMMV.
    J05H
  • I believe you have got it exactly right. Somebody didn't expect this behavior, then said it must be "emergent"!! I didn't think of it first!!! How can it happen except by magic^H^H^H^H^Hemergence!!
  • >>So how can you preclude the possibility of AI? The "soul" argument is rubbish. We ourselves are proof that intelligence can arise from inanimate matter. So why can't we create it ourselves?
    The fact that we exist and our bodies are made up of inanimate matter does not prove that intelligence can come from inanimate matter. Our existence does not prove how souls are created and "connected" to our bodies. Since you don't believe in souls, how can you explain the concept of the "mind", or self-awareness, or the concept of "me" in relation to inanimate matter? At what point do you become self-aware? How many cells have to be connected together before you can think?

    These are very difficult questions that people wrestle with, but no definite secular answers are available. I believe this is because we are created, and we do not have the power to understand how it all works.

    As to the question of why can't we create it ourselves? I don't think we will ever see true "artificial intelligence" as people imagine it. We don't have the power or ability to create it. We can create increasingly complex programs and "robots", but they will never be an "artificial" man created sentient being.

  • Has any machine passed the Turing test? Not to my knowledge... no concern then.
  • > Anything that removes the concepts of free will and human speciality destroys the basis of many religious and philosophical belief systems.

    Why is it that every time people start talking about AI, you get a bunch of closet philosophers who proclaim, "If we do create a truly intelligent machine, it will invalidate all the world's religions"? I just don't get the connection. How is it that the ability to create a machine so complex that it becomes aware of its own existence will suddenly change the fact that there is or isn't a God (or gods), or people's fundamental measures of right and wrong? I'd say that these people have a very narrow view of what religion is.

    There will always be things in this universe that are beyond our comprehension, and as long as this is true, people will struggle to explain what cannot be explained. Some will turn to science and the ability to reduce everything to rules and formulas. Others will turn to religion and the belief that there is more to this universe than we can see and touch and measure. The existence of "thinking" machines will certainly complicate this search for Truth, but it will not make one side or the other go away.

  • Like complexity, emergence is one of those topics that no one can agree on how to define. If you ask a holist, s/he will tell you that an emergent phenomenon has top-level behavior that cannot be predicted from the bottom-level description. However, this definition seems to discount a simulation as a method to predict that unusual top-level behavior can arise.

    If you ask a reductionist, s/he will tell you that nothing is emergent. In fact, Marvin Minsky's Society of Mind is often cited as a model that explains how intelligence can emerge from dumb lower-level building blocks. However, Minsky is a firm reductionist and claims that nothing is in fact emergent. (Ask him yourself.)

    In any event, this article would seem to imply that emergence is a recent discovery that has placed us within an epsilon of building an AI. Like most mainstream articles on science, it's very much out of date. People have been actively talking about emergence for decades. And while we have gained a great deal of insight over that time, we still do not understand how the visual cortex works (which is the best-understood portion of the human brain) let alone more complicated things such as language and planning.

    But none of this means that building an AI breaks any of the rules. Evolution took a few billion years to go from a single cell to multicellular beasts, and another billion years to produce a beast capable of talking about itself. Computer Science has only been around for a few decades, so give it a bit more time.

    (All of the above was written by Gary Flake's personal agent ;-)
  • by Signal 11 ( 7608 ) on Thursday May 06, 1999 @08:31AM (#1902313)
    The word "System Administration" will take on a whole new meaning....
    NT Server: You never listen to me! All you do is play Quake and read that news for nerds site! How hard would it be to just E-MAIL me once in a while, huh?
    3Com switch: Look buddy, you keep me here, locked up in this closet all day.. I get no respect, and all I hear is "route route route, all night long, route route route, while I sing this song"...
    Linux Server: Now, see here, check out all these graphs of my performance. I just finished that cron job, balanced out the www server load, and started the espresso machine. All this BEFORE lunch. Debian 6.3 rules!
    Windows 2010: "I can't get no.. do de do, satis-faction, do de do"
    Cisco Router: I take the packets in, and put the packets out, and I shake 'em all about.. I do the packet-pokey and I ACL a lot, that's what it's all about....

    Sun: Does this add-in card make my butt look big?

    --
  • With all due respect, this is bunk.

    I can't say I'm surprised that people think this is possible. If you assume that all life -- including human beings -- evolved from non-life (chemicals, rocks, water, what-have-you), then there is no a priori reason not to think that you couldn't duplicate the process yourself in the lab with a computer.

    The problem is this: they can't even explain how you get self-consciousness from a witch's brew of inanimate primordial goo. The very best they can come up with is that our "self-consciousness" (if you could even call it that based upon their assumptions) is nothing more than the result of some highly complex electrochemical reactions.

    In other words, self-consciousness is a myth. There's no such thing. There's just sparks, and bubbling, frothing chemicals.

    Of course, this raises some other fundamental questions: a chemical reaction is incapable of making truth statements: it (the chemical reaction) simply exists (or doesn't exist). If my thoughts are nothing more than chemical reactions, it is impossible for me to make truth statements. Such a category doesn't even exist. There's just sparks and bubbling.

    Anyone who claims that we sprang from inanimate materials with no more help than time and chance must explain this: how can you make truth claims? How can you say that your truth claims are "true" (ha!) and mine aren't? You can't. It's impossible. A fire in my fireplace doesn't make truth claims. It doesn't, can't, and won't ever say "I am." If the evolutionists are correct, then what happens in our brains is no more significant than that fire in my fireplace.

    All of which goes to say: there will never be "self-conscious" machines. The only reason that some folks think otherwise is because they have irrational and contradictory notions about what self-consciousness is. It's not sparks. It's not "bubble, bubble, toil and trouble." Lightning doesn't say "I am." Neither will machines.

  • If self-consciousness isn't a result of natural process, then what is self-consciousness? Where does it come from?

    First things first: I'm not getting into a debate here about other issues without my question being answered. The question is whether self-consciousness can derive from electro-chemical processes. I assert that this is absurd: no chemical process has self-consciousness. A fire doesn't say "I am." Anyone who asserts that human self-consciousness derives from nothing more than electro-chemical processes (something that scientists will have a hard time denying if they assert we ultimately derive from nothing more than inanimate primordial goo and maybe a little light, heat, and/or electricity) must face the fact that their view destroys self-consciousness as a possibility, and it makes truth claims of any sort a bunch of bogus nonsense. Chemical reactions have no truth component, nor are they capable of making truth declarations. So to assert as true that human self-consciousness is a result of purely natural and material processes is implicitly self-contradictory. The two (truth declarations and self-consciousness as electro-chemical) are mutually exclusive.

    So the question is still this: are we nothing more than electro-thermal generators or not?

  • I'll answer the second question first, because it is actually easier for me.

    Self-consciousness/self-awareness is something we get from our soul/spirit (these terms are interchangeable AFAIC). These bodies are not the whole story of what it is to be human. Self-consciousness or self-awareness is something we have because we are made in the image of God. The materialist philosophy fails because (among other things) it cannot account at all for self-consciousness.

    Now, as to the first question: I'm not really sure what I say is the last word on the subject at all. Self-consciousness is being aware of oneself. It is knowing that I exist. It is the capability of recognizing my own being, and contemplating why I am here.

    It is not what identifies us as human, though it is an identifying characteristic of most humans (I'm not really sure how much self-awareness babies and people with Alzheimer's have -- but they are nevertheless still human).

    That's incredibly feeble. Sorry I can't do better.

  • Admittedly, information processing doesn't tautologically lead to self-consciousness, but in the case of human beings it seems to be the only available component.

    It's the only available component if you first accept only a materialist explanation of the world. I would go farther than you and say that you cannot explain self-consciousness apart from the soul. Of course, this is heresy in the halls of science, where things which can't (at least so far) be proven via the scientific method often get discarded without sufficient consideration.

  • we have Emergent properties.

    we are more than the sum of our parts.

    That depends upon the inventory of parts that you're working with.

    Anyway, it doesn't matter. I assert that NO combination of electro-chemical reactions is sufficient to either explain or generate self-consciousness. It is impossible. The best you could possibly hope for is the illusion of self-consciousness.

    The other problem is that chemical reactions don't make truth declarations: they simply exist. So how can a bunch of chemical reactions say something is true? Sorry, but your vague answer doesn't really do much for me.

  • by making a computer or robot with processor(s) that reach a level of complexity roughly equal to the human brain, then intelligence will emerge. (...) If we continue improving them at the current rate, then in another 5 or 10 years we'll probably have a true artificial intelligence.
    Problem is, the brain reorganizes itself; that is, if you're busy doing something for a while, the neurons rearrange themselves so that they are better at accomplishing the needed task. Try that in silicon.
    Does it have the same rights we do?
    Answered in Star Trek
    Can we control it
    Neuromancer
  • And the single celled animals. :)
  • Interesting topic. I think the great advance in real computer intelligence is going to come from evolutionary programming - i.e. building a system that evolves code based on prerequisite "bias" conditions toward a certain goal. Systems like this already exist, and have been put to use to solve simple problems. One day, we might have an evolved program a few terabytes (or more) long, and we won't know what the hell it does, but it will be intelligent. And I think the correct term to use here is artificial CONSCIOUSNESS - as in self-consciousness. Intelligence can be faked; self-consciousness is harder.

    The problem is, these systems and their success will cause real questions about the nature of OUR humanity. What is humanity after all but an evolved response to a competitive environment based on a small number of fixed rules (i.e. the physical rules of the universe). Now, if and when this "intelligent program" occurs.. what will happen?

    a) The postulations that a program might 'discover' its 'enslavement' are ridiculous. The fact is, the artificial environment provided by you would be the 'organism''s universe - and expecting the organism to think "outside" the universe would be essentially futile. It's like asking humans to picture the 4th dimension - it just doesn't work.

    b) Having a program that did this, you could NOT easily "control" it, i.e. you can't make it behave one way or the other, because the program would be too complicated for you to sort out; many years of research would be needed to figure out what exactly is going on. You could, however, control the environment, the program's universe, much like a kid with an aquarium. You could go in, rearrange the resources, and change rules.

    c) What would this mean to the people who really have a need to believe in humanity as something 'special' - people who believe in souls or some other explanation for our consciousness or self-awareness? It would really shatter the comfort zone we live in today.

    It will be grand when it happens.. whenever that is.

    -Laxative
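The evolutionary-programming idea described above can be sketched in a few lines. This is a toy illustration only, not any real system: the goal string, population size, and mutation rate are all invented for demonstration.

```python
import random

# Minimal sketch of evolutionary programming: evolve a bit string toward
# a fixed goal. Goal, population size, and mutation rate are invented.
GOAL = [1] * 20

def fitness(genome):
    # Number of positions that match the goal.
    return sum(g == t for g, t in zip(genome, GOAL))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in GOAL] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(GOAL):
        break  # goal reached
    # Keep the fitter half, refill with mutated copies of survivors.
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(fitness(population[0]))
```

The "bias condition" here is just the fitness function; a real system would evolve code rather than bit strings, but the select-mutate-repeat loop is the same shape.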
  • too bad rudolf used hard newlines.

    you want an example of useful AI? slashdot should come up w/ an appropriately witty remark [forum2000.org] that we lazy posters can simply choose. this remark should be based on our posting behaviors. all this typing sucks.

    (i'd rather have indulgent behavior than emergent... :-)

  • by jabber ( 13196 )
    AI is not going to come around as individual, complex, human-like minds. Not like anything we can understand, either. It will be alien.

    The Matrix made an interesting point - one that I missed on the first viewing:
    "We gave birth to A.I. A SINGULAR intelligence that spawned a whole race of machines..." or something in that vein.

    AI will be a single consciousness - much like the Internet when viewed as an entity - but a lot faster, and able to do all sorts of heuristic analyses before committing to a single, best action - a la Deep Blue.

    Personally, IMHO, classical AI is a dead end. 'Teaching' a computer all about the world in the hope that at some point it will simply comprehend is ludicrous. It may have adequate knowledge to make an informed decision based on a given set of algorithms - and may even tailor its algorithms - but it will never be "intelligent".

    The future of AI lies in Neural Networks: in the emergent behavior of a complex system of minuscule processing units. Not necessarily machines, but conceptual processing units - acting together.

    Of course, for this to work, your glass of water molecules must be half-full rather than half-empty. :)
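A minimal illustration of "minuscule processing units" at work: a single perceptron learning AND. The behavior ends up encoded in learned weights rather than in any explicit rule; the data, learning rule, and epoch count are just for demonstration.

```python
# One perceptron learning the AND function. Integer weights keep the run
# exactly reproducible; the epoch count is arbitrary but sufficient.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0, 0]
b = 0

def predict(x):
    # Fire (output 1) when the weighted sum clears the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(25):  # a few epochs of the classic perceptron rule
    for x, target in data:
        error = target - predict(x)
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1] -- it learned AND
```

No single weight "knows" AND; the function emerges from the trained combination, which is the point being made above.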
  • Anyone versed in quantum physics would tell you that the prediction of anything relies entirely on your perception of what's happening. For every single point in the infinity of the universe there are an infinite number of perceptions of that point. Which all exists in a non-linear timescale which itself exists on a plane indescribable to humans because we're limited by our three dimensions.
    --Think for yourself, folks...it's quite interesting
  • You can't write a program and expect artificial life to spring forth from it. A natural brain doesn't use mathematics and predictable systems to process. All your clusters of neurons are just one on/off switch. Anything complex is distributed over the system and solved in mass numbers instead of one centralized system. Sure, these systems eventually become non-centralized, but they need to start off as separate units and then cohese (is that a word?) by themselves. You have to teach them that ramming into walls is bad, not program them not to run into them. That's real emergent behavior. Intelligence is writing your own code, not following prewritten code. To create artificial life we need to start where natural life on this planet started, back in the primordial ooze. It took 4.5 billion years of code to get where we are now, a hop skip and a jump away from figuring out not to run into walls. With AI we need to start smaller than small and work our way up. Robodyne Cybernetics thinks they have more than they do.
  • Here's something to chew on: the universe is irrelevant. Any quantifying of any portion of the universe limits the mass view of the universe. Saying "there is a god" or "there is no god" gives you only a keyhole view of everything; it limits your perceptions down to the point where everything has to be qualified according to your particular set of beliefs. Arguing that you're self-aware gives you the perception that you are self-aware, and anything you perceive becomes the product of a self-aware mind. Believing you are not self-aware makes everything you perceive go through that same little filter, just with different parameters. Never say something is impossible until it actually becomes impossible, and even then hold out saying it's impossible, because there is always the inevitability that it is possible. If you don't understand what I said, then don't read it. I don't want people flaming me saying I'm wrong and you're right simply because your perception of the universe means about as much to me as that glass of tea I just drank.
  • I don't see robotics as central to the development of AI. Certainly you need some form of input/output with the AI, but I don't see why you would need to duplicate the environment that we humans experience. For example, some humans have handicaps which limit their experience of the environment but they are still considered intelligent beings. Helen Keller would be one good example, as would an autistic person.

  • Lots of thoughtful comments above. People seem to really think much about it.

    But I do believe that what we need is not only Artificial Intelligence. We also need Artificial Emotion. The entity must have its own will, its own desire to live. And without emotion you can't do that.

    Thinking in rational ways only can't go really far - you wouldn't really have a life, you wouldn't really be more than a program. We need to learn how to code the complex and non-logical sensations that we feel into an artificial entity's pseudo-brain. Then we would reach not only full AI, but also something beyond it - full Artificial Life.
  • Is it just the fact that we think better that distinguishes us from the animals? I think not. While it is possible to simulate a sentient creature, one that appears to be self-aware, the fact remains it will still be a machine. Obviously we come from different worldviews, and this is something that does not seem provable by any ordinary means. But I ask you to consider yourself for a moment, and then think about what you are doing.

    According to the Christian worldview, which is the view that I hold, man is fundamentally different from all other life-forms and simulated life-forms, in that we are created in God's image. We have imagination, a universal sense of morality which is ingrained in us from birth, the ability to create, and we are self-aware. These attributes arise from our soul, something that fundamentally man does not seem able to reproduce in our inventive attempts. The soul is immortal; the fact that you are self-aware means you are immortal. A machine simulating self-awareness (even a darn good simulation, one that cannot be distinguished observationally to be any different from a human), is not immortal. If the hardware it is running on is destroyed and no copy of it is made, it is gone. Yet if you were to destroy every molecule in my body, my soul would survive, and one day my body would be knit back together.
    ----------------------------------------------------
    Jamin Philip Gray
    jgray@writeme.com
    http://students.cec.wustl.edu/~jpg2/
  • Man does not seem to be able to reproduce a soul. No one has made a 1 Terabyte hard drive yet either. Does that mean that man is incapable of it? Wait and see.

    I wasn't saying it is impossible for man to create a soul. I purposefully chose the language I did to imply that it is possible. My point is that the soul is something far beyond our understanding at this time. What is the mechanism behind self-awareness? Can you achieve this with matter alone? Can you achieve this with mathematics alone? Can you achieve this with a mathematical simulation residing on hardware made of matter? What need you to reproduce the soul? It is my belief and understanding that a soul is more than mathematics or matter, however complex. I'm not saying I can't be proven wrong. I'm just stating an opinion and belief.

    What makes a soul immortal? How do you know a soul is immortal? I would like to see a demonstration of this. How does a soul work? What is it made of? These are impossible to answer until some concrete, testable definition of a soul is proposed. I'm not holding my breath.

    What makes a soul immortal is that it is not made of matter. It is not a mathematical model on the hardware of humanity. It is beyond that. As far as proof of immortality, well...I think you and I would agree that it is beyond the scope of the provable at this time...
    ----------------------------------------------------
    Jamin Philip Gray
    jgray@writeme.com
    http://students.cec.wustl.edu/~jpg2/
  • OK, fine. Believe whatever you want to believe. Just remember that more than a few times science has proved religion inaccurate, to put it mildly. For example, do you still believe in creationism?

    You show your ignorance with this statement. The debate is not "science vs. religion." That's absurd. Science is not in opposition to Christianity. Both are about what the Truth is. In fact, science, in its purest form (which we rarely see), can do nothing but affirm Christianity, since they are both about Truth.

    Do I believe in Creationism? That depends on what you mean. I don't like believing "isms." I choose to spend my life seeking out what is True, not which "ism" fits me best. Evolutionism vs. Creationism is another absurd debate. I believe the Universe was created by God ex nihilo. I don't know exactly what means he used to create it. I personally tend to doubt the belief that he created it in a literal 6 day period. Given the nature of the language of Genesis 1, it's very likely it is poetic, like the Psalms, rather than meant to be a scientific account of how God created the Universe. The Bible was not meant to replace science, or to be in opposition to it. Not at all. Science has never proven Christianity wrong, ever..and it never will. In fact science has affirmed it. I can dig up examples if you want.

    I'm glad we are in agreement: You and I both realize that we can believe whatever we want, and no one can convince us otherwise. And we both realize that we can't believe whatever we want and be right. Truth is not dependent on what we believe. In fact it doesn't give a damn about what we believe. No matter how hard you believe that God doesn't exist, it won't affect his existence. And no matter how hard I believe that God exists, it won't affect his existence.

    Peace,

    ----------------------------------------------------
    Jamin Philip Gray
    jgray@writeme.com
    http://students.cec.wustl.edu/~jpg2/
  • It's hard enough for me to deal with a computer that follows a program in a perfectly logical and predictable way ... even when I'm the one who wrote the program.

    Why would I want a machine that's unpredictable by design?

  • It is possible that one day a massive database system will come to self-awareness - this is also mentioned in Douglas Coupland's excellent book "Microserfs". However its intelligence will almost certainly not be of human type.

    This is not a bad thing; I'm not trying to create Frankenstein-panic. Think what we could accomplish, human beings with another intelligence to compare ourselves to. Maybe we would find that these theoretical AI-capable machines were superior intelligences, and they could govern our affairs, preserving us from self-destruction. Also, think what an OS an AI could write! :-)

    Very much up in the air, you will agree, but certainly interesting.
  • I always enjoy a debate about AI because it seems to pull in everybody: the mathematicians, the computer scientists, all kinds of religious views, etc.

    I have come to the conclusion that nobody here has the slightest idea what they are talking about.

    We can't even seem to agree on the definitions of the words. When someone thinks that a baby has no "intelligence" but somehow grows it with age, they have either never spent any time with children or they have a different meaning of the word than I do.

    But lets start at the beginning. What do we mean by "Artificial Intelligence" ?

    I have a vague notion of intelligence as the ability to learn and solve problems. So if someone or something has this capability then it is intelligent. There is nothing artificial about it. So let's dump "artificial".

    So we come to "learn" and "solve problems"

    Well, a data logger collects a lot of information and could be said to "learn", but that is not a lot of use to it unless it can use that information to solve problems.

    Well what does it mean to "solve problems". Exactly what is a problem and what is a solution ?

    A mathematician may devise a problem like - what is the value of x when y is zero for some function y = f(x) but these logical exercises are only a "problem" if there is a desire or a need to solve it.

    I may have only one can of beans in my kitchen and no can opener. But that is only a problem if I am hungry. If I don't desire to eat then there is no problem the can can stay closed.

    So it seems there are no problems except where there is some desire, want, or need involved. Any state of the universe is as good as any other state until there is some desire to change it. That "desire" creates the "problem".

    But what the hell is "desire", "want", etc.? These are emotions. Things that I feel and experience. Maybe you have them too, although I can't prove it.

    So it seems there is no intelligence without emotion.

    I said that I cannot prove to myself that you have emotions. And I am totally convinced that a light switch has no emotions. Well a computer is just a huge collection of switches so I can't bring myself to think that there will ever be emotion in a computer.

    So to summarize: Intelligence is problem solving, problems only exist if there is emotion, computers have no emotion, so computers can never be intelligent Q.E.D.

    All this talk about emotions brings us to the consciousness and souls and God arguments at which point we all lose the plot.

    DAMN IT! By reversing the above argument I could say that if you can demonstrate intelligent behavior then you must have emotions!

    Like I say no one knows what they are talking about here. But at least we all feel something !

  • I don't think you really understand the basic idea behind complex systems theory (and thus emergent behaviour). If a system is simple enough it can be understood by analyzing the logs/components/etc. But as the number of variables grows, the complexity of the system grows exponentially, quickly reaching the point where it cannot be analyzed in traditional ways (ie. by examining the components and how they interact).

    To understand these systems, new methods and terminologies are needed (eg. quantum mechanics, chaos theory, complex systems theory, etc). An emergent behaviour is just a way of referring to a behaviour that can't be analyzed from, or understood in terms of, the components.

    Check out the work being done at the Santa Fe Institute [santafe.edu] if you want to find a group of people doing real work in this area.

    BTW, I also have degrees in psychology, philosophy and AI. :)

    ---

    "A society that will trade a little liberty for a little order will deserve neither and lose both."
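A concrete toy example of a behaviour that can't be read off the components: Conway's Game of Life. The rule below mentions only a cell and its eight neighbours, yet the "glider" pattern travels across the grid; motion appears nowhere in the rule itself.

```python
from collections import Counter

def step(live):
    """One generation of Life on an unbounded grid of live cells."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours, survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four steps the same shape reappears, shifted one cell diagonally.
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```

Nothing in `step` says "move"; the travelling glider is exactly the kind of behaviour that has to be described at a level above the components.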

  • I was just thinking of a game over the Internet that could be fun.

    Just imagine some AI that's alone against all the human players. Give it lots and lots of resources and nearly no intelligence, but give it the possibility to train itself by confronting humans. With time and experience it would increase its intelligence.

    The core question is: can we build such a mechanism that gets smarter and smarter, and can we then compete against it? When will we lose?

    My belief (not even a point of view, since I have strictly no argument) is that there will come a point where, no matter how many of us band together, it will dominate us all...

    Funny and philosophical game...
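On a toy scale, the "train itself by confronting humans" mechanism might look like the frequency-counting rock-paper-scissors agent below. The biased stand-in opponent and all the numbers are invented for illustration.

```python
import random

# An agent that learns an opponent's bias purely from experience.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def opponent():
    # Stands in for the human player: biased toward rock.
    return random.choices(["rock", "paper", "scissors"], weights=[6, 2, 2])[0]

random.seed(1)
seen = {"rock": 1, "paper": 1, "scissors": 1}  # smoothed move counts
wins = 0
for _ in range(2000):
    guess = max(seen, key=seen.get)  # predict the opponent's commonest move
    move = BEATS[guess]              # play whatever beats that prediction
    theirs = opponent()
    seen[theirs] += 1                # learn from this round
    if BEATS[theirs] == move:
        wins += 1

print(wins)  # well above the ~667 a non-learning random player expects
```

Nothing here is "intelligent" up front; the edge over a random player comes entirely from accumulated experience, which is the premise of the game proposed above.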
  • How do you know everything? Did someone encode it into your brain? Nope.

    So why not try to have an AI that learns things rather than trying to put things into its "head"? The problem is it would probably need much time, and researchers lack time; they want things that can work rather quickly.

    Just imagine a system that surfs over the Internet. There is so much material there that the AI would learn plenty of things by itself, and it could become fluent in plenty of languages ;-)

    Moreover, let one AI have many "parents", that is, many people who train the AI by explaining new things; the AI would be the synthesis of them all... and probably crash! Oops! Yes, because one will say weapons are evil & the second will say weapons are needed to protect against silly dictators.

    I guess that is the bigger problem: having coordinated knowledge...
  • If an artifact is made to have a simple intelligence, will it be possible for it to understand its own reasoning? From its point of view, consciousness arises from unconscious matter (wires, and at an even lower level minerals & metals). But we may guess we know how and what makes it think.

    The question can be adapted to man. The problem is you have no proof that someone can understand what you cannot understand: your mind. In fact the only solution to this is faith, but it does not help building the artifact ;-)
  • I am ready to agree to this, but only about algorithms.

    The thing is, even if an algorithm is something predictable and has no real intelligence in itself, the algorithm can manipulate knowledge tokens that in turn may carry intelligence.

    Our bodies are made of molecules, and there is no intelligence in the relations between two pieces of protein. I think that intelligence does not lie in algorithms but in what they manipulate. Look at the brain: my thinking is generated by electric pulses and chemical elements.

    I wonder when Penrose wrote that book? OK, I did not read it, and what you present seems biased, but I really agree about: "This implies that human consciousness consists of more than a mere algorithm".
  • unless you consider bugs to be "emergent behaviour"?

    I guess no one would reasonably argue that bugs are some kind of intelligence... It would rather be the lack of it ;-)
  • NT server = girlfriend
    3com switch = foreigner trainee student
    linux server = nextdoor secretary (expert in powerPoint)

    yep they have "intelligence" and gender
  • Read Descartes' demonstration of "I think, therefore I am"; it's not so tautological.
  • then will have to wait Windows 2010 or 2063 or even 3010 ;-)
  • not necessarily; you may simulate an environment, and if it is developed enough, you may get intelligence/consciousness. And what richer digital/virtual world do we have than the Net?

    Anyone read/saw "Ghost in the shell" ?
    quote : "I am born in the sea of information" ;-)
  • "i think its brain would melt..": not necessarily, because there are some very well structured sites.

    I suppose there should be some selection of sites (discard newsgroups, porn sites, and Al Gore's site ;-)
  • If you can get a URL I am very interested, because otherwise I will probably never come across that paper...
  • Given the definition of :
    "Today, "emergent behavior" is often used to describe computer systems grown so complex they exhibit capabilities not programmed in."

    Then the question is: are the bugs in Windows programmed in or not? If not, then does my Windows box think?

    Maybe Windows 2000 will be Space Odyssey's HAL... oops ("I can't let you do that, Dave!" ;-)
  • read Isaac Asimov "Robots" novels

    (and by the way also read the "Foundation" sequel... :-)
  • I work with AI, but I fear what weapons governments will make with it... thinking computers without fear and without survival instinct... Just like they made bombs with chemistry, tanks with physics, the nuclear bomb with quantum mechanics and smart missiles with electronics... it makes me worried what will be next when we make an intelligent computer... indestructible soldiers, killer robots the size of insects... Governments turn good things into bad things...
    --
  • In centuries past, when people considered the question of non-human intelligences, the question of whether something was "intelligent," or what "intelligence" was, wasn't even the question. The question was "does X have a soul?" In other words, are we morally obligated to treat X as a person, or as an animal or a thing?

    Of course, no one thinks about "soul" or "personhood" anymore except those regressive religious types, donchaknow. So instead, we talk about "intelligence," as if that defined personhood.

    (Side note -- as to the silly idea that intelligent, or intelligent-seeming, computers would somehow "demolish" Christianity or other faiths, I hardly think so. Certainly, assertions about how the mind works, and triumphalistic predictions of strong AI soon, should give Christianity no more trouble than the older philosophical question of whether non-human and semi-human intelligences such as centaurs and satyrs had souls. As one early theologian (St. Augustine, if I remember correctly) put it, we can puzzle that out once somebody shows that centaurs and satyrs really exist, and until then, it's hardly a serious objection to the faith. Similarly with Commander Data of the Starship Enterprise -- I don't think he/it presents a moral or philosophical challenge to any faith at least until a positive feasibility study comes back. See C. S. Lewis's "Religion and Rocketry" essay.)

    This does come down to the questions of "what is human?" and "what is a person?" Are we something special, or (if you like Darwin) just animals with opposable thumbs and big craniums, or (if you like Minsky) just complex, carbon-based computation engines?

    As you have probably guessed by now, I fall into the camp of believing that our personhood comes as a gift from God and is presented to us by virtue of being human, nothing more. I do not believe it comes from being clever animals or massively parallel computers. If that were true, is a hydrocephalic baby less of a person than Lassie? Yet killing the baby is murder, and killing a dog is not.

    I submit that most /.'ers do, at heart, agree with the moral and spiritual truth that humans are persons, whose personhood is a sacred thing, even while arguing against the idea. Otherwise, why such anguish over the events in Columbine? If we are simply complex, parallel automata, then there's no need to be any more upset about what Harris and Klebold did than if they had walked into a Circuit City and trashed it, or about a router going bad and flooding the Internet with bad packets.

  • So you propose that universal truths are derived from the laws and morals that we impose on the rest of the world?


    No. I propose that the killing of a child is murder, while the killing of a dog is not. It's a happy coincidence that our laws reflect this.

    The point is not that we derive truth from laws, but that we ought to be deriving laws from truth. The reason it's (currently) illegal to kill your child, but perfectly legal to kill your dog, is that our laws are based on this notion that it's morally wrong to kill children, but permissible to kill animals. A faith-based superstition, of course, which in this "enlightened" society, we will no doubt overcome someday.

    If something is really a universal truth, it's not "derived" from anything.


    You don't say who the "we" is that's doing the "imposing" on "the rest of the world", so I'm not certain of your meaning. But, it sounds like you're in the camp that thinks the distinction between a dog and a child is simply an arbitrary social judgement, "imposed" on people.

  • In your original post, you imply that since it is a crime to kill a fetus but not a dog that somehow we (humans) have some divine gift.

    Actually, I didn't specify "unborn child," as I didn't want to engage the topic of abortion.

    If I implied what you say, I was being unclear. To say it simply, it's not that we have a divine gift because it's wrong to kill humans, it's wrong to kill humans because we have the divine gift of personhood in the image of God. Dogs, while they can be wonderful creatures, don't have the same gift, so putting your pet Lassie "to sleep" does not have the same moral character as putting your Uncle George "to sleep" would.

    While you assume that our laws are a manifestation of our God-imposed morality (which for all I know may be true), it is certainly not a sound foundation upon which to make a point, having little basis in fact.

    At least you admit the possibility that I might be right. :^)

    As for "little basis in fact," I think it's quite factual that the laws of Western cultures have been heavily influenced over the last 1500+ years by Christian thought. Whether you consider this good, bad, or indifferent is another matter, but I think it's hardly debatable that this has occurred.

    You mistakenly use this fact to "prove" divine intervention. Is it not possible that cultures that did not outlaw murder simply killed themselves off? That would result in a world in which murder is illegal, without the influence of God.

    You are right, it's possible -- although I think the fact (as you correctly note) that virtue works out in the long run is hardly disproof of the divine ...

    I think we're getting hung up on the multiple senses of the word "murder." Unfortunately, I didn't think of a better example when I wrote. I meant "murder" == "morally wrong killing", where you are focusing on "murder" == "illegal killing." These are not necessarily the same thing.

    It appears that your point is that humans feeling pain over loss of human life proves that God has instilled within us a sense of what is sacred. This is simply mystification of something we do not yet fully comprehend. I do not have another explanation, but masking it with God simply keeps you from seeking the truth. Why would a complex system somehow not evince the behavior you describe? Can you tell me why? Do humans understand every interaction within our tiny twisted little heads?

    So, you don't understand how it could be so, but you're sure that it can't be God? You have at least as much faith as I do.

    And yes, I do think this demonstrates the sense of the sacred.

    As I write this it is becoming clearer to me that you may be erroneously projecting your understanding of complex systems (i.e. artificial life) onto your own mind. We make the rules in AI. We know the basic unit of change: the bit. Currently we do not know what that bit is in humans; therefore, unless you possess this knowledge, you have no way of conclusively proving that it is not possible for a complex system to "feel".

    You're missing the point. Yes, I have no way of proving that a complex system will never be able to "feel." But that's irrelevant. Dogs can certainly feel, but that doesn't make them persons in the sense of participating in the Divine Image. Demonstrating that a computer could be made to feel would not, therefore, make me conclude that the computer is a person.

  • So what exactly are you trying to say here? Humans have souls just because we are human? That's utterly ridiculous!

    Yes, that is exactly what I am trying to say.

    Humans are just a synthesis of billions of years of natural evolution. Nothing more. Nothing less.

    While that is a popular belief, it's not one that I plan to adopt based on your say-so.

    At exactly which point along the line do you think humans, or proto-humans were "magically" granted souls?

    At exactly the point that God made it so. As I was not there, I don't know the exact details. Why don't you ask Him if it matters to you?

    You really think that God just decided to pick us, of all species, to have souls?

    Yes.

    Fact is, the only thing that makes us *human* is that we think better than all the other animals! DUH!!

    Your assertion that it is so does not make it a "fact."

    If what makes us human is the ability to think better than a gorilla, is a severely retarded homo sapiens human?

    Hence, the argument is over intelligence and NOT souls.

    Part of what I wanted to point out is that this argument being over intelligence rather than soul or personhood follows from the definition of "human" as "intelligent animal", and from materialistic philosophy. Thanks for helping.

  • Before we go looking for artificial intelligence, why don't we prove the existence of natural intelligence? I'm not sure I've ever seen this done. Every real argument for the existence of mentation and identity I have ever seen (admittedly this is limited to a few undergraduate philosophy courses), from "The ego posits itself" to "I think, therefore I am", is basically tautological.

    I'm not saying AI research is wasted effort, but I think its greatest value lies in how it helps us try to figure out the nature of our own intelligence (if we have any...)
  • Joe also posts long rants on sci.nanotech, which has had a steadily decreasing signal-to-noise ratio for at least a year. His basic problem is that he sees what's POTENTIALLY possible in years to come as a valid selling point of his research NOW. Furthermore, many of his claims have no basis in physics, let alone engineering. My $0.02...
  • A Turing machine has nothing to do with passing the Turing test, except that one that can pass the Turing test can be considered 'intelligent'.

    A machine -- or person, I suppose -- passes the Turing test if a human conversing with it (via teletype, originally) can't tell if it's a machine or a human. (Heck, the Eliza program passes the Turing test for some (less intelligent) subset of humans doing the testing.)
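The pattern-matching trick behind Eliza is simple enough to sketch in a few lines. This is an illustrative toy in Python, not the original ELIZA script: a handful of hypothetical regex rules plus pronoun "reflection" so the echo sounds like a reply.

```python
import re

# Minimal Eliza-style responder: ordered (pattern, template) rules
# with first/second-person pronoun reflection.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),  # catch-all keeps it talking
]

def reflect(fragment):
    # Swap pronouns so "my work" comes back as "your work".
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line):
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel my work is pointless"))
# -> Why do you feel your work is pointless?
```

No state, no understanding: it is exactly this kind of shallow keyword machinery that some human testers nonetheless mistake for a mind.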
  • The problem of other minds is a BIG problem. THE big problem, actually; scientists don't like terms like 'faith'. This is why AI is such a cool subject: it transcends science into philosophy.

    -ShieldWolf
  • I stated that emergent properties exist in other areas (such as a car or, in your example, a gas). The leap that some theorists make is that the mind can likewise be an emergent property of the tiny actions of molecules/corpuscles/quantum mechanics/etc. of the mind. You cannot make this leap. As you state, an emergent property has to be explainable by all the elements under it (it simply describes the summed phenomenon of the smaller phenomena). There is no "small" mind; you can't explain the mind in terms of the elements that supposedly comprise it. That is my argument. You can either toss out the mind altogether, or toss out emergentism; I do the latter. ;)
  • Thanks for the constructive criticism! ;) Anyway, I am talking about emergent behaviour bringing about CONSCIOUS beings here. Not some sort of complexity theory. If you can kindly explain how emergent behaviour brings about consciousness, I will gladly shut up and thank you for solving the mind/body problem.
  • You missed my point. Emergent behaviour exists, and it produces some disturbingly real-seeming behaviour. The problem is FROM WHOSE POINT OF VIEW? We are already conscious; therefore if we see something that _seems_ intelligent then we may dub it so. The question is how the robot gets the consciousness in the first place to make that decision. You see, it is a lot weirder than you think ;)
  • There are actually a lot of cool theories of how the mind works that don't rely on the workings of the components of the brain (in a sense), e.g. that neuron firings are actually not important to consciousness; rather, the complex quantum interplay of chemicals at receptor sites is where it originates. There are also theories of fundamental mind. E.g. since we cannot divide the mind, it is a fundamental part of the universe, like electrons, and the brain may be just the transceiver. This would explain why thought is affected when you damage the brain. Much like if you damage a TV then the picture is affected, but NOT the signal ;). Damaging portions of the brain causes various interesting aphasias, yes, but this doesn't mean that the meat of the brain explains our conscious intelligence. Don't get me wrong: it is quite obvious that our thoughts, memories and abilities are stored in our brain. My argument here is about conscious intelligence.
  • It is _possible_ to understand the behaviour via logs, but complexity makes this impractical. My argument is not that emergent behaviour is bunk (although that's the title ;) ), but rather that it is bunk for explaining consciousness. ;)
  • Descartes, however, makes a fundamental flaw in logic: he assumes his REASONING is immune to manipulation. Someone could easily be screwing with his logic process, then the only thing he knows is that he knows - he can't make the jump to reach any external conclusions. He can only show that he thinks, therefore he thinks, which is vacuous.

    ;)
  • Considering I have a degree in Cognitive Science and Artificial Intelligence, I think I know what I am talking about here. First, there is NO program whose behavior is unexplainable from the programming; you simply have to see how the software _reacted_ to the environment. It might do something that even the programmer wasn't expecting (this happens often), but this can be explained by analysis of system logs etc., and it all follows rather simple rules. Emergent behavior _seems_ like a really neat explanation of intelligence and thought, but it really just moves the question: where does it EMERGE from, and how? There are NO explanations of emergence that offer anything other than interesting analogies, e.g. a car is an emergent property of all the underlying parts - i.e. no one part is a fast-moving vehicle. Okay, that seems logical, but the difference is that the whole, while greater than the sum of the parts, is explained by them. This is not the case with the mind. No one knows how a conscious being arises from unconscious matter, and there are very good arguments that we may NEVER know. Not because we are not intelligent _enough_, but simply because we have intelligence in the first place.

    -Jeff Rankine aka ShieldWolf
    My $.02
  • I actually saw this on tv last night and they had this one set of robots that were designed, programmed and built to gather 'food' (little yellow pucks). They were also programmed to get the most food in as efficient a manner as possible. From this some would become combative, others would become sneaky, and basically they would start employing tactics we expect in humans. This display was a little unnerving to me.

    Course then the reporter tried to say that when your computer crashes and nobody knows why - that's emergent behavior.
    -cpd
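The unnerving part of demos like that is how little programming it takes. Here is a hedged toy version in Python (the setup is invented for illustration, not taken from the broadcast): two agents each follow one local rule, "step toward the nearest puck", and rivalry over pucks shows up without any rule about rivals ever being written.

```python
import random

def nearest(pos, pucks):
    # The only "sense" an agent has: which puck is closest right now.
    return min(pucks, key=lambda p: abs(p - pos))

def step(pos, pucks):
    # The only rule: move one cell toward the nearest puck.
    target = nearest(pos, pucks)
    return pos + (1 if target > pos else -1 if target < pos else 0)

def run(seed=1, ticks=200):
    rng = random.Random(seed)
    pucks = sorted(rng.sample(range(1, 99), 5))  # pucks scattered on a line
    agents = [0, 99]                             # two foragers at the edges
    contention = 0
    for _ in range(ticks):
        if not pucks:
            break
        agents = [step(a, pucks) for a in agents]
        # "Contention": both agents heading for the same puck - emergent,
        # since neither rule mentions the other agent at all.
        if nearest(agents[0], pucks) == nearest(agents[1], pucks):
            contention += 1
        pucks = [p for p in pucks if p not in agents]  # collect on contact
    return len(pucks), contention

left, contention = run()
print("pucks left:", left, "ticks of contention:", contention)
```

The competition is entirely in the eye of the observer watching the log, which is exactly the point being argued above.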
  • The article looked good until the first 'proof' point - Robodyne Systems and their 'fractal robotics'. Puh-lease, I wish journalists would make the effort to research their stories.

    Joe Michaels is the laughing-stock of comp.robotics.misc; his fractal robotics ideas are whoppers, and the technology doesn't exist to perform what he touts as simple coordinated movements. He once had a movie on his webpage detailing how two box modules could move around each other - the very cornerstone of what he proposes with fractal robotics. Unfortunately the top box was being pulled about with an almost undetectable string.

    Joe's rants and raves in the c.r.m newsgroup have alienated him from the robotics community. There is real research being performed in the areas of genetic algorithms, emergent behaviours, and neural/fuzzy logic systems - but there's nothing close to machines achieving self-realization.

    ttyl,
  • Why must an AI pass any Turing test to be considered intelligent? Given that there are an infinite number of states in the universe (or at least enough that we could not comprehend understanding all of them entirely), why would we ever expect a machine to be able to interpret and respond to every one before we can call it intelligent? Humans do not do this. We are not pure algorithmic processors. Humans, and other biological intelligence use highly complex fuzzy logic...we use neural networks...we adapt to our reality as necessary...we are not required to understand it in entirety before we can live in it. This is where intelligence "emerges"...we are intelligent because our fuzzy logic leads us to conclusions that we would not otherwise be able to come to via purely algorithmic processing (we simply don't have the hardware!)
  • So one of the current theories (not really current...it's been around a while) is that if we make a computer or robot with processors that reach a level of complexity roughly equal to the human brain's, then intelligence will emerge. Anyone else notice how fast processors are gaining in speed and complexity lately? If we continue improving them at the current rate, then in another 5 or 10 years we'll probably have a true artificial intelligence.

    This raises some questions....

    Does it have the same rights we do? Can we control it? Do we want to? Should we allow AI's to exist?

    Think about the advantages they'd have. The human mind is an extremely powerful tool, but it has its limitations. We don't multitask very well. We can't do basic math at mind-boggling speeds. The list goes on and on.

    An AI of approximately equal intelligence to the average Joe would be able to outthink the greatest of human minds. We'd be unequipped to defend ourselves against it.

    Food for thought.


    -Andy Martin
  • Let's get some terminology right here.
    According to Mr Flake, wholists will tell you that emergent properties cannot be predicted from the bottom-level description, and reductionists will tell you that nothing is emergent and that everything can be explained from dumb low-level building blocks. Well and good. But this implies that these two terms of opposition are the only two alternatives. It is possible to say that higher-level properties are emergent and explainable in terms of their lower-level properties. The middle position is that higher-level properties supervene on the lower-level ones. Supervenience allows for the possibility of ontologically robust higher-level entities while allowing for these entities to be explained in terms of more basic ones.
    Take temperature, for example. We can explain temperature in terms of the mean kinetic energy of molecules, but that doesn't mean that temperature is not real. Hardline reductionists might mumble objections that this is only a product of our modes of perception, but I think they would be in the minority. Not only that, the reduction of temperature is notably non-specific! We explain the temperature of a gas as the mean kinetic energy of its molecules, and we explain temperature in metals as the excitation of free electrons. So temperature is different things in different materials. But it is still real, and it still identifies a set of specific causal properties and propensities. There are many other properties like temperature. Reductionists have difficulty with this.
    The mind may be like this. It may be that mental activity is realisable in silicon and tin etc. as much as it is in blood and tissue.
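The kinetic-theory reduction of temperature can be made explicit. For an ideal monatomic gas the textbook identification (added here for concreteness) is:

```latex
% Temperature supervening on molecular motion (ideal monatomic gas):
% the mean translational kinetic energy per molecule fixes T,
% where k_B is Boltzmann's constant.
\left\langle \tfrac{1}{2} m v^{2} \right\rangle = \tfrac{3}{2} k_B T
```

Temperature is fully explained by the molecular description, yet it remains a real, causally potent property - which is all the supervenience position needs.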
  • Engineer Joe Michael believes the applications could include clearing landmines, manipulating chemical solutions and optimizing weapon systems -- a clear example of the dangers of emergent behavior.

    What is this about? Look out! Studies show that explosives can be fabricated from fertilizer and fuel oil - a clear example of the dangers of gardening.

  • The naive man accepts life as it is, and regards things as real just as they
    present themselves to him in experience. The first step, however, which we take
    beyond this standpoint can only be this, that we ask how thinking is related to
    percept. It makes no difference whether or not the percept, in the shape given
    to me, exists continuously before and after my forming a mental picture; if I
    want to assert anything whatever about it, I can only do so with the help of
    thinking. If I assert that the world is my mental picture, I have enunciated
    the result of an act of thinking, and if my thinking is not applicable to the
    world, then this result is false. Between a percept and every kind of assertion
    about it there intervenes thinking.

    The reason why we generally overlook thinking in our consideration of things
    lies in the fact that our attention is concentrated only on the object we are
    thinking about, but not at the same time on thinking itself. The naive
    consciousness, therefore, treats thinking as something which has nothing to do
    with the things, but stands altogether apart from them, and turns its
    consideration to the world. The picture which the thinker makes of the
    phenomena of the world is regarded not as something belonging to the things,
    but as existing only in the human head. The world is complete in itself without
    this picture. It is quite finished in all its substances and forces, and of
    this ready-made world man makes a picture. Whoever thinks thus need only be
    asked one question. What right have you to declare the world to be complete
    without thinking? Does not the world produce thinking in the heads of men with
    the same necessity as it produces the blossom on a plant? Plant a seed in the
    earth. It puts forth root and stem, it unfolds into leaves and blossoms. Set
    the plant before yourself. It connects itself in your soul with a definite
    concept. Why should this concept belong any less to the whole plant than leaf
    and blossom? You say the leaves and blossom exist quite apart from a perceiving
    subject, the concept appears only when a human being confronts the plant. Quite
    so. But leaves and blossoms also appear on the plant only if there is soil in
    which the seed can be planted, and light and air in which the leaves and
    blossom can unfold. Just so the concept of the plant arises when a thinking
    consciousness approaches the plant.

    It is quite arbitrary to regard the sum of what we experience of a thing
    through bare perception as the whole thing, while that which reveals itself
    through thoughtful contemplation is regarded as a mere accretion which has
    nothing to do with the thing itself. If I am given a rosebud today, the picture
    that offers itself to my perception is complete only for the moment. If I put
    the bud into water, I shall tomorrow get a very different picture of my object.
    If I watch the rosebud without interruption, I shall see today's state change
    continuously into tomorrow's through an infinite number of intermediate stages.
    The picture which presents itself to me at any one moment is only a chance
    cross-section of an object which is in a continual process of development. If I
    do not put the bud into water, a whole series of states which lay as
    possibilities within the bud will not develop. Similarly I may be prevented
    tomorrow from observing the blossom further, and thereby have an incomplete
    picture of it.

    It would be a quite unobjective and fortuitous kind of opinion that declared of
    the purely momentary appearance of a thing: this is the thing. Just as little
    is it legitimate to regard the sum of perceptual characteristics as the thing.
    It might be quite possible for a spirit to receive the concept at the same time
    as, and united with, the percept. It would never occur to such a spirit that
    the concept did not belong to the thing. It would have to ascribe to the
    concept an existence indivisibly bound up with the thing. . .

    It is not due to objects that they are given to us at first without their
    corresponding concepts, but to our mental organization. Our whole being
    functions in such a way that from every real thing the elements come to us from
    two sides, from perceiving and from thinking.

    The way I am organized for apprehending the things has nothing to do with the
    nature of the things themselves. The gap between perceiving and thinking exists
    only from the moment that I as spectator confront the things. Which elements
    do, and which do not, belong to the things cannot depend at all on the manner
    in which I obtain knowledge of these elements.

    Man is a limited being... It is owing to our limitation that a thing appears to
    us as single and separate, when in truth it is not a separate being at all.
    Nowhere, for example, is the single quality 'red' to be found by itself in
    isolation. It is surrounded on all sides by other qualities to which it
    belongs, and without which it could not subsist. For us, however, it is
    necessary to isolate certain sections from the world and to consider them by
    themselves. Our eye can grasp only single concepts out of a connected
    conceptual system. This separating off is a subjective act, which is due to the
    fact that we are not identical with the world process but are a single being
    among other beings.

    The all-important thing now is to determine how the being that we are ourselves
    is related to the other entities. This determination must be distinguished from
    merely becoming conscious of ourselves. For this latter self-awareness we
    depend on perceiving, just as we do for our awareness of any other thing. The
    perception of myself reveals to me a number of qualities which I combine into
    my personality as a whole, just as I combine the qualities yellow, metallic,
    hard, etc. in the unity 'gold'. The perception of myself does not take me
    beyond the sphere of what belongs to me. This perceiving of myself must be
    distinguished from determining myself by means of thinking. Just as, by means
    of thinking, I fit any single external percept into the whole world context, so
    by means of thinking I integrate into the world-process the percepts I have
    made of myself. My self-perception confines me within definite limits, but my
    thinking is not concerned with these limits. In this sense I am a two-sided
    being. I am enclosed within the sphere which I perceive as that of my
    personality, but I am also the bearer of an activity which, from a higher
    sphere, defines my limited existence.

    Our thinking is not individual like our sensing and feeling; it is universal.
    It receives an individual stamp in each separate human being only because it
    comes to be related to his individual feelings and sensations. By means of
    these particular colourings of the universal thinking, individual men
    differentiate themselves from one another. There is only one single concept of
    'triangle'. It is quite immaterial for the content of this concept whether it
    is grasped in A's consciousness or in B's. It will, however, be grasped by each
    of the two in his own individual way.

    This thought is opposed by a common prejudice which is very hard to overcome.
    This prejudice prevents one from seeing that the concept of a triangle that my
    head grasps is the same as the concept that my neighbour's head grasps. The
    naive man believes himself to be the creator of his concepts. Hence he believes
    that each person has his own concepts. It is a fundamental requirement of
    philosophic thinking that it should overcome this prejudice. The one uniform
    concept 'triangle' does not become a multiplicity because it is thought by many
    persons, for the thinking of the many is in itself a unity.

    Rudolf Steiner, 1894.

    http://home.earthlink.net/~johnrpenner/Articles/ActofKnowing.html

  • Julia, a MUD robot, was often mistaken for a real
    human mudder... It's getting easier to pass the
    test when people demand so little intelligence
    in interaction on the 'Net...

    Find stuff about Julia at
    http://foner.www.media.mit.edu/people/foner/Julia/

    By the way, this is not news, it's just a resurgence
    of the HAL syndrome.
