Summary Of Symposium On Spiritual Machines
"The interest in the symposium was amazing. The lecture hall was packed, and people who couldn't get into the main lecture hall had to watch the talk by live video in an overflow room (which was packed to the brim as well). There were the old and the young, male and female. Interest was no doubt spurred by the symposium's very controversial thesis, recent interest in Bill Joy's article in Wired, and the very distinguished cast of speakers. The irony of the fact that the symposium was punctuated by microphone failures and abruptly dimming lights in the room was not lost on anyone.
Ray Kurzweil spoke first, arguing that rapidly increasing CPU speeds would result in intelligent, spiritual machines. The current exponential shrinkage of transistor sizes, he said, is not the first such trend, but rather the latest in a series in the natural progression of technology: from mechanical computing devices, to vacuum tubes, to transistors, to integrated circuits. He described how the human brain could be scanned in order to replicate its functionality in silicon, and his conviction in these advances, and in the ability of humans to reverse engineer the brain, led him to a highly optimistic position.
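To get a feel for the arithmetic behind this optimism, here is a back-of-the-envelope sketch. The doubling period, baseline, and brain-equivalent figures below are illustrative assumptions made for this summary, not Kurzweil's published numbers:

```python
# Back-of-the-envelope Moore's-law extrapolation. All three constants
# are illustrative assumptions, not Kurzweil's published figures.
import math

DOUBLING_YEARS = 1.5   # assumed doubling period for compute per dollar
BASELINE_OPS = 1e9     # assumed ops/sec of a year-2000 machine
BRAIN_OPS = 1e16       # assumed brain-equivalent ops/sec (see the
                       # synapse napkin math elsewhere in this discussion)

doublings = math.log2(BRAIN_OPS / BASELINE_OPS)
years = doublings * DOUBLING_YEARS
print(f"~{doublings:.0f} doublings, ~{years:.0f} years: around {2000 + round(years)}")
```

Note how forgiving the arithmetic is: change any assumption by a factor of ten and the date moves by only about five years, which is part of why exponential arguments sound so compelling.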
Bill Joy spoke next. He opened by stating that he believed in the ability of computers and nanomachinery to continue to advance, but it was precisely this belief that led to his position that the continued development of nano-machinery and self-replicating machines would pose a new and different kind of threat to humankind ('knowledge of mass destruction'). He made a particularly eloquent point: science has always sought the truth, and free information has great value, but just as the Romans realized that to 'always apply a Just Law rigidly would be the greatest injustice,' so must we seek restraint and 'avoid the democratization of evil.' It wasn't exactly clear to me from his speech what form he thought this restraint must take, but his speech was extremely compelling, and it is clear to me that, at the least, self-replicating machines will create new and serious challenges for mankind.
John Holland, the inventor of genetic algorithms, took a more skeptical view of the ability of increasing computer speeds, even at exponential rates, to naturally result in machine intelligence. In his words, 'progress in software has not followed Moore's law.' He believes in its eventuality, but not in the time frame proposed (2100). He gave the example of Go vs. chess, where the number of positions in Go is approximately 10^30 times greater than in chess, and where simply adding rows and columns makes the number of positions increase exponentially -- eliminating gains made from exponential increases in computer speed. He said that while genetic algorithms enable the evolution of computer programs, the fitness function and the training environment to use (he gave the example of evolving an ecosystem) are often unclear. He emphasized the need for strong theory, and he concluded with a (to my mind) very profound statement: 'Predictions 30 years ahead have always proven to be wrong, except in the cases where there is a strong theory behind it.'
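To make Holland's counterpoint concrete, the same napkin arithmetic can be run on Go itself. The sketch below uses 3^(n*n) as a loose upper bound on board configurations (each point empty, black, or white, ignoring legality entirely) and assumes the same 18-month hardware doubling period as above:

```python
# Rough arithmetic behind Holland's Go point: how many Moore's-law
# doublings does one extra row and column cost?
import math

def log2_positions(n):
    """log2 of the 3**(n*n) configuration bound for an n x n board."""
    return n * n * math.log2(3)

for n in (18, 19, 20, 21):
    extra_doublings = log2_positions(n) - log2_positions(n - 1)
    years = extra_doublings * 1.5   # assumed 18-month doubling period
    print(f"{n-1}x{n-1} -> {n}x{n}: +{extra_doublings:.0f} doublings "
          f"(~{years:.0f} years of Moore's law)")
```

A single added row and column costs roughly sixty doublings, i.e. the better part of a century of Moore's law: exponential hardware gains simply cannot chase exponential state-space growth.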
Ralph Merkle addressed the claims made by Bill Joy directly. He said that rather than speculate on the dangers of nanotechnology and take hasty action, we need to find out whether nanotechnology gives an edge to the 'offensive or the defensive,' and to understand this, more research is needed -- not, in Bill Joy's words, 'relinquishment.' (Joy later asked Merkle, 'Do you think biological weaponry gives an advantage to the offensive or defensive?', to which Merkle embarrassingly replied, 'I'm not sure.')
John Koza, drawing on examples from genetic programming, said that while human-competitive results by machines are certainly possible (e.g. the evolution of previously patented circuit designs), much more computational power is needed to evolve the equivalent of a human mind.
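For readers who haven't met the technique, here is a toy version of the evaluate/select/mutate loop Koza's field is built on -- a plain bitstring genetic algorithm, far simpler than genetic programming (which evolves program trees), but it shows where all those computational cycles go:

```python
# Toy genetic algorithm: evolve a bitstring toward a fixed target.
# Far simpler than Koza's genetic programming, but the same cycle:
# evaluate fitness, select the best, mutate, repeat.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"perfect genome found at generation {generation}")
        break
    parents = population[:10]   # truncation selection
    population = [mutate(random.choice(parents)) for _ in range(50)]
```

Every candidate in every generation must be evaluated; even a fitness function as trivial as bit-matching means thousands of evaluations, and evaluating candidates against anything resembling a human environment is where Koza's 'much more computational power' goes.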
Other choice moments: Holland asked Joy during the panel discussion how much progress we have seen in operating systems in the past 30 years, to which Joy replied, 'the function of an operating system is fixed.'
In conclusion, the speakers largely differed over the time frame for intelligent, spiritual machines and over how much danger self-replicating machinery poses to humanity -- but no one on the panel seemed to think that Moore's law would run out of steam, or that intelligent machines would not eventually be possible -- although Hofstadter did admit that this is as much by construction of the panel, which did not include any serious naysayers."
Amusing "SingularityWatcher" Lit (Score:1)
-------------------------------------------
SINGULARITY WATCHER
* Events and news lists to educate you on the coming inevitable transition.
* Get the big picture of emergent computation, info tech assessment, and emergent constraints.
* Invest intelligently during the transition.
* Address ethical and social issues.
* Selective books, films, and audio resources.
* A community of future-concerned individuals. Tell a friend!
SingularityWatcher.com
--------------------------------------------
I really got a grin out of #3: "Yee-haww! I invested in subVel / P.R.A.X.Y / Roofion Gate cortical switching modules just in time for everyone in the world to need one for the Great Melding, and now I'm . . . WE ARE BORG. YOUR EXISTENCE AS YOU KNOW IT IS OVER. INDIVIDUALITY IS IRRELEVANT. MONEY IS IRRELEVANT. INVESTMENTS ARE IRRELEVANT. RESISTANCE IS USELESS."
A friend suggests the web page consist of:
SINGULARITY HAS NOT HAPPENED YET
Hit "Refresh" button for update
Re:hofstadter personality? (Score:1)
Below is Mr. Hofstadter's reply to the first "fan letter" I e-mailed him. He is one of three people who have EVER responded to "fan-mail" that I have sent. (The others being Spider Robinson and Charles de Lint, BTW.) *I* think he's pretty cool. My mother also took a seminar course he gave eons ago and said he was one of the few profs she ever had at college who ANSWERED questions when he could and said "I don't know" when he didn't.
So go figure...
Dear Ms. Vincin,
Hardly a "generic" fan letter -- a very idiosyncratic one.
First, thanks for all your kind words about GEB and MT. Second, if you don't mind my saying so, I really think that you would be interested in my book "Le Ton beau de Marot" (Basic Books, 1997, now out in paperback).
You asked if there is room "in my field" for someone like yourself. Well, firstly, I don't know really what "my field" is; secondly, it's not up to me to say whether there is or is not "room" for someone of a certain style; and thirdly, I really don't know what your style is. If you mean, more specifically, that you are musing about trying to get an advanced degree in cognitive science under my supervision or something of that sort, well, that is a complex matter and takes a long time to figure out. First of all, one has to get accepted to graduate school, and then a lot of other hurdles have to be crossed...
Probably better just to enjoy the books, but who am I to say?
Anyway, best wishes,
Douglas Hofstadter
Re:hofstadter personality? (Score:1)
obviously (Score:1)
Re:Answering questions != Consciousness (Score:1)
I can see how your post could as easily have been moderated down as "Troll." Or this one.
From what should I infer your consciousness? I do infer it, by the way, but all you have done is reply to a comment.
You've posed a question for which there is no agreed-upon answer, and there's not enough context in this discussion to have a hope of finding one.
wrong speakers (Score:1)
Many scientists in certain communities (neural nets, machine learning, computational neuroscience, AI, a-life, robotics) chose their careers precisely because, by the time they got through college, they realized that Moore's law might allow them to build intelligent machines within their lifetimes. Unlike most of the speakers at the workshop, these guys actually built their lives around achieving this goal, and not merely around writing popular books, making millions, or attracting the attention of the press.
How about asking these guys what they think?
- Anonycous Moward
Re:The PKD Test (Score:1)
This sounds a lot like Descartes to me...
Anyway, who is PKD?
Re:Robot Wisdom (Score:1)
Re: Jurassic Park Chaos (Score:1)
I've not seen the movie all the way through, but the book clearly states what will happen "because of chaos theory":
and then, after two pages of explanation of what chaos theory is,
And so the island soon starts showing events not predicted in the original design, like the dinosaurs changing sex (they were engineered to be solely female) and breeding.
Re:Problem not with the Technology (Score:1)
High tech death. (Score:1)
If history is any guide whatsoever, then I must admit that Bill Joy and other technological alarmists seem to have little weight to their arguments. The key issue I have to contest is this: that the new threats offered by nanotechnology, genetics, and robotics put the human race into such danger that we need to take measures that have never before been necessary. He would have us limit the kind of research that could be done. I think this is exactly the wrong stance to take. People who are not free in their research are not researching at all. Look at commercial versus pure research: which makes the most basic breakthroughs, and which makes the most applied ones?
Now, I agree that these new technologies are more dangerous than any before them. However, no effective control of research can ever be imposed, given the nature of research: work in one completely innocuous area can have applications in any other area. Yes, there is a great danger, but the cure would no doubt be worse than the disease. Consider: what form could this restraint take? Self-control by the scientists is exceedingly unlikely; otherwise they would hardly be good scientists. Some oversight committee? That would easily be one of the most powerful organisations in the world, if it could dampen any sort of research it found 'unsafe.' Should we follow Joy's prescriptions, we would be condemning ourselves to remain in the technological doldrums for ages.
One way or another, I imagine that we will find ourselves facing these dangers. I honestly believe that any research oversight committee would be ineffective. It would be unable to do its job perfectly, so eventually some of these threats will come to see the light of day. The human race will be better equipped if it knows the dangers -- and how would we know the dangers? Through unencumbered research.
Throughout history, the human race has made many leaps forward in technology, and those have all put the human race in danger. However, the only thing that has ever saved humanity from technology is: technology. This leads me to believe that the only way to face the dangers the alarmists warn us of is knowledge.
Re:Problem not with the Technology (Score:1)
Re:Problem not with the Technology (Score:1)
Not only is Crush missing Bill Joy's position, so are all the moderators who repeatedly marked his post as Interesting. Which kinda tells you something about Slashdot's moderation system.
Alejo.
Re:Summary of the summary.. (Score:1)
Alejo.
The proper word (Score:1)
Human languages are only partly created to solve problems; partly they are created to express emotions. If one cannot point to something and say "This, this is what I'm talking about," then one must expect common speech to dilute the meaning. Consider what has happened to the words entropy and thermal. Of course, it also frequently flows in the other direction. Consider information, force, and work.
What Ray was endeavoring to discuss was the mapping of human consciousness into a computer -- in fact, the mapping of a particular human consciousness into a specific computer (net) as a piece of software. He was talking about achieving an isomorphic mapping for a partition of the computer space, so that it would, in a sense, not be reasonable to distinguish the logical structures. Spiritual seems a good word for that, or at least as good as any I have been able to come up with. It would, of course, require much more powerful computers than are currently available, but that is accepted as a precondition.
Re:The Troll Manifesto (Score:1)
"I'm not sure" is not embarrassing (Score:1)
Bill Joy (or, as I heard several people call him that day, "Kill Joy") inadvertently helped Merkle make this point later on. He asked the crowd to "raise your hands if you think nanotech favors offense", and got about half of the hall. In response to "raise your hands if you think nanotech favors defense", he got about a third. But when Ralph Merkle then asked, "raise your hand if you think we need more study to find out," over two thirds of the audience raised their hands.
Re:S. I. Jaki should have been on the panel (Score:1)
Re:Spiritual Schmiritzual (Score:1)
If I have a computer which is able to converse with me -- that is, have an intelligent, original conversation -- I am going to give it the benefit of the doubt. After all, what evidence do I have that anyone has thoughts, spiritual or otherwise?
Embarrassing? (Score:1)
What? Why was this embarrassing? It does seem unclear whether biological machinery is inherently better at defense or offense, since there appear to be no perfect immune systems in nature, nor are there any microbes that invariably win against immune systems.
Re:Embarrassing? (Score:1)
But this implies that someone knows, or that he could have found out easily, if only he bothered to search the literature. Is this so, or do we need fundamental research on the question (i.e., what Merkle was calling for in the first place)? I would assume that the research needs doing, but I don't see any reason that it would be easier to do with biological molecular machinery than with artificial molecular machinery, especially since biological machinery will necessarily have all sorts of non-designed side effects. I just don't think it's as simple a question as you and Hemos appear to agree it is.
Re:Speed != intelligence (Score:1)
Re:We are forgetting that machines are not gods! (Score:1)
Just as there is no way in hell a biological DNA-based evolution can lead to species that can survive inside a star, we may stack the game in such a way that certain evolutionary outcomes -- such as machines taking over the world -- are impossible on a fundamental level.
--
We are forgetting that machines are not gods! (Score:1)
In all of this brouhaha, a crucial point gets lost: machines are not gods, and are subject to fundamental limitations just as we are. The nature of our genetic code -- DNA -- limits our evolution (its speed and some other factors). More importantly, we have evolved as we are -- with greed for power, a survival imperative, etc. -- over millions and millions of years.
What does this mean for computers and nanotech? Two things.
Will some people find a way to design machines that 'want' to survive, perhaps even with explicitly nefarious purposes? Probably; however, the rest of the world will be so stacked against them that I doubt they will actually be able to survive.
Either way, I think this particular worry is blown WAAAY out of proportion by people who implicitly take our nature as the sole known type of intelligent agent (with all of our evolutionary qualities) to be indicative of any type of intelligence we may design.
I am tired of ill-considered apocalyptic dreams.
--
Re:Mutually Assured Destruction and other niceties (Score:1)
It more or less worked for nuclear warfare for a few decades because only large governments could build a nuclear bomb, and governments are composed of a sufficiently large number of people that they are at least not completely irrational. (But what if Nazi Germany had had the atomic bomb? They would have used it at the end. Interestingly, some say Heisenberg lied to prevent them from getting it, which I suppose supports my argument about large numbers of people.)
Once nanotechnology works, individuals will be able to use it. Was the Unabomber rational when he decided to send out letter bombs? What if he had been able to send out deadly nanites instead?
Your suggested defense is no defense at all.
Re:An Algorithm For Consciousness (Score:1)
Presenting MindPixels to a system is a Binary Turing Test (see my article: K. C. McKinstry, "The Minimum Intelligent Signal Test: An Alternative Turing Test," Canadian Artificial Intelligence, Issue 41) that is much more objective than a traditional TT.
I unfortunately don't have a copy of Canadian Artificial Intelligence lying around. But I do know that a key point of the Turing test is that it is not objective. Consciousness, whatever else it may be, is inherently subjective. Someday we will understand consciousness sufficiently to be able to make an objective test, or to know why such a test is impossible. Until that day, the only meaningful way to test for consciousness is to use subjective tests.
Your goal should be to try to convince a bunch of intelligent people that your system is conscious. You shouldn't try to pass an objective test, particularly not one you wrote yourself.
Thus, if I get a number back that is statistically indistinguishable from a human's, I must logically assume the system is human -- that it feels, lives a life, and is conscious.
That's a pretty big leap. A conscious human run as a simple computer program would probably go nuts due to lack of sensory input. Is your program going to emulate that?
A giant corpus of MindPixels collected and validated from a large number of people is a digital model of self and environment.
What about issues which people don't agree on, like abortion, the death penalty, or whether computers can ever be conscious? How are you going to implement those as MindPixels? (I'm not trying to trip you up here--you must have thought about these issues, and I am curious what your answer is.)
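For what it's worth, here is the comparison as I understand your description -- my sketch only, with invented questions and consensus rates, not your actual protocol:

```python
# Rough sketch of a MIST-style binary comparison (invented data, and
# only my reading of the idea, not McKinstry's actual protocol).
# Each item is a yes/no question plus the fraction of humans answering yes.
import random

corpus = [
    ("Is water wet?", 0.98),
    ("Is the sun cold?", 0.02),
    ("Do dogs have wings?", 0.03),
    ("Is bread food?", 0.97),
]

def majority(yes_rate):
    return yes_rate >= 0.5

def system_answer(question):
    return random.random() < 0.5   # stand-in for the system under test

agree = sum(system_answer(q) == majority(r) for q, r in corpus)
expected_human = sum(max(r, 1 - r) for _, r in corpus)
print(f"system matched consensus on {agree}/{len(corpus)} items; "
      f"an average human would match ~{expected_human:.1f}/{len(corpus)}")
```

My objection stands even against this sketch: a lookup table over the corpus scores perfectly, so a high agreement rate measures the corpus, not consciousness.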
Re:An Algorithm For Consciousness (Score:1)
Thanks for the pointer to your paper.
That's not a key point, that's a key flaw. MIST was specifically designed to replace the subjective judgement of individual judges with the statistical judgement of a very large number of people (1 million or more).
But that isn't what it does. The Turing test relies on an intelligent examiner to judge intelligence. You have replaced the intelligent examiner with a simple series of questions. The intelligent examiner will consider the answers to previous questions when asking new questions. You've lost that.
If I steal your entire database and build it into my program, I can write a program which does very well on your test, but which nobody would call either intelligent or conscious.
Who cares about those? They vary from person to person and life to life. The goal here is to model an average person, not a specific person. MIST only considers consensus knowledge, that which is the same across all people. The rest is fluff.
It may be fluff to you, but I don't think I could consider a program which didn't have any specific beliefs to have any claim to consciousness.
I don't see any significant advance over the CYC project. It's a worthy goal, but I don't see any path to consciousness here.
"Good" bacteria vs. "Evil" bacteria (Score:1)
Good comments. I was at the symposium, and one of the fears Mr. Joy has is that genetically engineered viruses or bacteria can easily be used as weapons of mass destruction. He pointed out that human-designed microbes have none of the "limitations" of evolved ones.
This betrays an error common in thinking about evolution: confusing the actual state of affairs (i.e., humanity hasn't been wiped out by a supervirus) with the way things must be. It's true that viruses and bacteria which don't kill their hosts too quickly do better in the long run, but no virus can consciously make that decision. Just because wiping out the human race doesn't make evolutionary sense for a virus does not imply that it can't and won't happen. (The general case of how viruses behave doesn't give us any comfort about how a specific virus will act.)
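A toy simulation makes the point starkly (every parameter below is invented): a 100%-lethal strain is an evolutionary dead end, yet it still burns through the whole host population before going extinct.

```python
# Toy epidemic: a strain that kills every host it infects is an
# evolutionary dead end, but "dead end" is not "harmless".
# All parameters are invented for illustration.
susceptible = 1_000_000
infected = 10
R = 3.0   # invented: new infections caused per case before the host dies

generation = 0
while infected:
    new_cases = min(susceptible, int(infected * R))
    susceptible -= new_cases
    infected = new_cases   # every current case is fatal this generation
    generation += 1
    print(f"gen {generation}: {susceptible:>9} never-infected hosts remain")
print("strain extinct -- but only after reaching every host it could")
```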
This is why we must develop nano- and bio-technology: as the human population grows, so does the population of parasites upon it, and so do the opportunities for those parasites to mutate into lethal forms. Especially given the rise of antibiotic-resistant bacteria, we need new ways of augmenting the human immune system.
Bill Joy dismissed out of hand the possibility of building defenses against genetic and nanotechnological weapons. But the fact that nature has already given us such defenses implies that it can be done. And to counter very real, existing threats to the human species, we should do it. I'll trust any open, technological defense against "Knowledge of Mass Destruction" much more than a political one.
(No, Joy didn't say explicitly what sort of controls he thought would be sufficient. But that's the truly hard part of the question! He stated that the only viable option was "relinquishment". And the language he used was, I thought, strongly in the camp of "we shouldn't develop these ideas".)
Re:My take on it... (Score:1)
The distinction between "offensive and defensive" weapons seems kind of bogus to me -- there's a saying that the best defense is a strong offense, and to make an example, in terms of nuclear arms, the threat of offense has served as a defense.
Agreed; similarly, take the example of a castle as a good `defensive' weapon: suppose you create an impregnable fortress which protects you from an invading horde of some sort. You shut yourself and your vassals up in it and leave your local rivals to be destroyed outside. Or how about insisting on neutrality during a conflict? It doesn't seem like there's a clear distinction to be made at all.
Re:My take on it... (Score:1)
Re:Problem not with the Technology (Score:1)
The "we" who "have the means" for mass destruction right now is limited to a few countries. The "we" who may have the means for mass destruction in the future could be tomorrow's script kiddies. God help us all.
But this is not necessarily due to the introduction of any new technologies. The ability to develop biological weapons especially and to a lesser extent chemical weapons is technologically and financially within the means of even obscure religious Japanese cults.
Um, isn't it the development of science and technology that has put such weapons within the reach of such "obscure religious Japanese cults"? They didn't just pull it out of their hopping/semi-levitating asses, now did they?
So, from this you admit that technologies that already exist -- generated by our broken social order -- have the capacity to effect mass destruction? And you would rather not try to remove the _origin_ of the impetus behind the development of these tools? It's hard to know what exactly Joy thinks you can do about it. Technology and innovation march on, and once something is 'doable' it's hard to keep it out of people's hands, whether as individuals or groups. If Joy is correct that it will be as easy to manufacture these nanobots as he thinks, then they will be as much of a problem as biological weapons already are.
You seem to be missing Joy's point. He worries that these technologies will enable individuals to cause unspeakable amounts of damage.
Sort of like a GM aerosol-transmitted HIV targeted to a particular MHC complex? As destructive as that, say? Or more? How about a resurrection of the 1918 influenza strain that wiped out 18 million people, but with a little improvement so it might do better? I wonder who is missing the point. We _already_ have the technology. It would seem that you believe individuals are more likely to use these things than groups. Do none of the religious cults that have attempted these things provide even a small piece of evidence that people are just as capable of behaving irrationally in concert as alone? It just seems to me that the sudden brouhaha over destructive technology is a little late, and there's not much we can do about it. I also believe these things are more likely to be used by governments than by individuals. Most of our destructive technology results from the inter-group competition fostered by ...drum-roll... CAPITALISM. And we will continue developing weapons and refusing to regulate them (witness the US's failure to ratify the Nuclear Non-Proliferation Treaty) as long as there is a social order based on exploitation. We can all dick around making statements about how terrible these things are, but they already exist and will be used, because the pursuit of profit and domination drives it.
Tools that broken *individuals* might use. Social orders, even broken ones, tend to want to self-perpetuate. No such guarantees with broken individuals.
I don't know where the evidence for that is. For one thing, people in broken societies often think they will survive individually even if the overall logic of the society means it will collapse (Nazism ring any bells?). Secondly, I can think of several instances of broken social orders that would rather self-destruct than change: the Jews at Masada and the Melian aristocrats fought to the death rather than accept Athenian democracy.
Re:Problem not with the Technology (Score:1)
Re:Problem not with the Technology (Score:1)
I'm pretty much convinced by your post that there is a difference between nuclear threats and future nanotech threats. But bear in mind that it is not just the USSR and the US that possess nuclear weapons: Britain, France, Pakistan, India, China, Israel (probably), and S. Africa (probably) also have the capability, and it seems that Iraq was having a good try. The failure of the U.S. to ratify the non-proliferation treaty is probably going to encourage the further spread.
It seems that to believe there is a significant difference between the old and new weapons of destruction, one has to believe there are fewer impediments to the use of the new. You seem to argue that it would be cheaper and easier for someone to manufacture nanobots than to manufacture nuclear weapons. I don't really have a feel for whether this is true. I can't help suspecting, though, that the design and the fabrication machinery would be incredibly complex, rare, and expensive, would require the budget of a large country, would be treated as munitions and thus restricted in distribution, and thus would present most of the problems that exist for the manufacture of nuclear weapons.
That said, this is totally off the top of my head, I don't really know what goes into fabricating a self-replicating nanobot.
Bacteria, viruses, etc. may be annoying, but they only adapt under evolutionary forces; medical science has been advancing fast enough to keep ahead of them, and I expect it to continue doing so. Because evolution is blind.
I think that focussing on the blindness of evolution in this context ignores a more important aspect of it: massive parallelism. The range and diversity of solutions found by organisms is incredible, and there is no guarantee that science can keep ahead of it just because evolution's fundamental method is random, blind chance.
All that said, I was missing the point that Joy was speaking about bio-weapons also...I was focussing too much on nanobots.
Re:Problem not with the Technology (Score:1)
You seem to be labouring under the opinion that we live in a rational universe
No....I think you just projected that onto me! I said that I would prefer rational, kind robot masters to the evil irrational masters we have now.
we live in a rational universe, and as products of this universe are in point of fact inherently rational ourselves
Apart from the fact that I didn't claim the first part of this proposition, the second part would not have to follow even if I had. It is entirely possible for a rational universe to contain irrational humans.
Can you support that assumption?
No, and I'm not interested in doing that
Re:Reverse theology- Human as a god. (Score:1)
The part about deities was a severe oversight on my part (mostly due to my blatant ignorance of, and disregard for, other cultures, for which I apologize), but I think it remains a valid point that gods were in some way (supernaturally) superior to humans.
--
DataHntr
"Res ipsa loquitor."
Re:Reverse theology- Human as a god. (Score:1)
As fragile as human beings are, I think the question of being gods is far from relevant. "Gods" of religion, myth, and legend are usually omnipotent (certainly doesn't apply to us) and omniscient (ditto). Robots will certainly have the distinct advantage over us in this realm. We die in so many places and ways that suitably designed "robots" and artificial entities will thrive. More importantly, integral adaptations (making a robot waterproof or able to withstand extreme cold) would be possible *extremely* quickly (in the same generation of robot, or the next) compared to the same change in humans.
If we need to worry about anything (and I'm not convinced that we do), it's about being replaced, not about becoming gods, benevolent or otherwise.
PS. I use "robot" to mean any artificially intelligent device.
Re:Problem not with the Technology (Score:1)
Humanity Is Irrelevant... (Score:1)
But those are biological entities whose existence is dependent upon the host. If a virus kills every human it touches within one day before it becomes contagious, that virus vanishes quickly. Here we're talking about things whose existence is not dependent upon people. They may ignore humans other than as slow-moving rocks, although they might casually remove everyone's feet because they want the rubber from the shoe soles.
Re:Summary of the summary.. (Score:1)
Natural viruses reproduce asexually, and hence do not have a lot of genetic diversity. They mutate randomly, and thus changes in their genetic code are not linked to the success of the previous version (unlike sexual reproduction, in which the two parents must have survived to maturity). Changes in their genetic code, as in other living things, happen between generations, when they reproduce. This would not need to be the case with nanotechnology.
I suppose you might design a nanobot cold-virus killer which used group processing to evolve new attacks as the cold virus evolved new defences. This group processing would take the form of 'units 1 to 1000, try this and tell us if you live or kill viruses.' I agree that something like this could have more potential for killing humans, but we are a long way away from it. Also, I suspect the communications channel between the nanobots would presuppose the ability to include a self-destruct.
Now here I see that you agree with me. Yes, we are a long way away. Can you guess whether we are more or less than 30 years away from it? I cannot, and I suggest that we should consider the case now, in order to make good decisions in the future. However, don't let the self-destruct make you feel secure. After all, the self-destruct is part of the code on the robot, and can therefore be selected against. Which it would be, if survival is a preference we ask the robots to select for -- and why wouldn't we want our tools to keep working?
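To see how quickly 'selected against' plays out, consider a toy replicator model (the population size, mutation rate, and signal period are all invented):

```python
# Toy model: replicators carry a self-destruct gene (True = intact).
# Copying occasionally breaks the gene; a broadcast kill signal removes
# only those whose gene still works. All numbers are invented.
import random

POP, MUTATION, SIGNAL_PERIOD = 10_000, 0.001, 25
population = [True] * POP

for gen in range(1, 101):
    # each survivor makes two copies; each copy may lose the gene
    population = [g and random.random() > MUTATION
                  for g in population for _ in range(2)]
    random.shuffle(population)
    population = population[:POP]   # fixed resource ceiling
    if gen % SIGNAL_PERIOD == 0:    # we broadcast "self-destruct"
        population = [g for g in population if not g]   # intact genes obey
        print(f"gen {gen}: {len(population)} replicators ignored the signal")
```

The first kill signal works on almost everyone; every signal after that works on no one, because the only lineages left are the ones that stopped listening.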
I never said we should not predict.. just that Bill Joy does not know what he's talking about, since his predictions are based on sci-fi instead of real theory. We can predict the progress of the fields you listed because we have a theory for them. The summary essentially said that the speakers who understand any present theory of biology and nanotechnology dismissed Bill Joy as a luddite. They were nicer than I was, but that's because they were only concerned with how Bill Joy was wrong.. whereas I think the things Bill Joy advocates are themselves far more dangerous.
You suggested that since technological predictions rarely pan out after 30 years, why bother? I don't think Bill Joy is a luddite, and I don't think the summary pointed that way either. Yes, some of the things Bill suggested seem dangerous; however, you're still missing the point that his caution is not one of them. Caution, in these cases, is good. How can you limit the 'smartness' of a nanobot? How can you keep the nanobot, or rather the nano-swarm, from evolving around any limits you place on it? These issues should be dealt with, and we deal with them by increasing research in the area, not decreasing it; abandoning the research because of its dangers would be throwing the baby out with the bathwater.
Re:Summary of the summary.. (Score:1)
Now, it's rarely a good idea to resort to name-calling unless you have no ground to stand on and must attack the messenger rather than the message. You actually do have some ground to stand on, so let's get to the point.
I think you may have slightly missed the point. It is a common reaction that if a person can harm you using an area of knowledge, then that knowledge should be kept from that person. If the identity of the particular person is not known, then keep it from everyone!
As you've astutely noticed, this reaction is not productive. It's yet another incarnation of security through obscurity, and it doesn't work. There are fabulous examples in speculative fiction of this concept carried to its logical conclusion - Larry Niven's ARM springs to mind. Someone, somewhere, will come up with the same idea and then you have to either convince that person to keep quiet too, force them to, or allow the cat to leave the bag.
It doesn't matter who develops the knowledge. It doesn't matter how risky the knowledge is. Everyone should have access to it so everyone knows what to do to stop someone from harming someone else with the knowledge.
You seem to have missed what may be a valid point though, because I'm not sure he made it strongly enough - computers have the ability to change the information that fundamentally dictates their behaviour at a rate orders of magnitude higher than humans, and hence can implement "evolutionary change" much faster. It is not impossible that these nonsentient devices could become an "enemy" in and of themselves. No person would be needed to cause harm to another using the machines, they would do it themselves.
It is from this enemy that we must keep the information. It must always be well within our grasp to rein in our machines before they enter the phase of their "evolution" in which they begin to compete for real-world niches. They don't need to be intelligent for this, they just need to require something that they can get at our expense.
Plus, you should consider that no virus or bug has managed to be 100% lethal to humans. Sure, a very rare few wipe out 80% of a population, but we are not talking extinction as a worst-case scenario here, people. Actually, your probability of being killed by a meteor smashing into this planet is MUCH higher than any risk from bio/nano-technology. (Note: I suppose biology research reduces your chance of death as a result of meteors far, far more than it will increase your chance of dying as a result of terrorism.)
Also, non-science people really seem to have no understanding of the way these sorts of things progress. The situation is summed up perfectly by John Holland's quote, "Predictions 30 years ahead have always proven to be wrong, except in the cases where there is a strong theory behind it," i.e. without a theory it's just luck (which can take a LONG time).
In the end, your quick dismissal of this rational and intelligent person's reasoned opinion lends the impression that you have jumped to conclusions. The largest point you've missed seems to be that, while non-science people (and by this I assume you mean people in applied sciences such as computing science) may not seem to you to understand science, by the same token "science people" may not understand applied sciences.
I will not argue historical points about the effect of plagues on humans. However, I will claim that in many cases, the survival of a human attacked by some infectious disease is completely dependent on medical intervention. In order for that intervention to occur, we must have had time to adapt to the disease's method of attacking and surviving. When we can't, people die (see AIDS). If the attacking disease were sufficiently different, or mutated fast enough, we would die. If the disease were widespread enough, we would ALL die, to the last man, woman, and child. Imagine if colds were lethal -- have you ever had a cold? Do you know anyone who hasn't? And colds need hosts to proliferate, unlike nanobots. If we cannot limit the rate of "evolution", we should not create that agent.
Your assertion that predictions have failed to pan out seems valid, but the conclusion you draw -- that we should not predict -- does not follow from it, and is ludicrous. I propose some predictions: 30 years from now, computers will be faster, some people will be greedy, and some will be lazy. Manufacturing will be more advanced. I think those predictions will pan out. Now, if that continues, at some point we may need to worry about these things. I for one propose that we discuss them now, so that if the technology is ever developed to end the world by a non-human hand, we know what to do and what not to do in order to keep that hand from striking.
Speaking of thinking too much . . . (Score:1)
Alright, the discussion on this posting is out of hand. Apparently I'm cutting out everyone who lacks a sense of humor.
Nonetheless, I have to add my thoughts on the above sentence. While I think John is right, he and everyone else there seem to be overlooking one huge factor in the whole equation: the claim that the functionality of the human brain is constant is just as suspect as the claim that the functionality of an operating system is "fixed".
Consequently, every day as our collective knowledge grows, so too do the requirements for an "intelligent" machine. So while it may take 30 years to create an intelligent machine by today's standards, how will that "intelligence" compare to the human mind of that day?
Just my 2 cents.
Re:Emotions and neural nets. (Score:1)
Re:Emotions and neural nets. (Score:1)
Re:Speed != intelligence (Score:1)
Maybe I'll send Ray a copy of Biophysics of Computation and see if it changes his estimate.
Re:Speed != intelligence (Score:1)
Sure. We're doing quite well modelling neural systems in hardware, which naturally leads to a form of AI. In fact, I think that's where the safe money is for making intelligent machines (esp. considering how much of that research goes into robots). I expect to live to see them created.
I'm actually fairly confident that GOFAI will succeed in the medium term (40-200 yrs), but I don't expect it anytime soon. It is, IMO, more interesting than models, but also far more difficult. (Ray seems a bit overly optimistic about this as well. It could easily take a decade just to code an AI once you've figured out how, regardless of the resources available.) The hardware requirements for such an AI are probably quite a bit lower than those provided by the brain, as well. (Silly example, but if the blind can function without too much difficulty, it's at least possible that we can get away without the massive portion of the cortex devoted to visual processing. That might cut a quarter to a third off the hardware requirements.)
What estimate?
You seem to have missed the point of my post. I was simply saying that stating "we will have computers powerful enough to implement x by year y" is absurd when you don't even understand what 'x' is. Assuming that the brain is a computer (which I think is largely true), it would be helpful to know how it computes before commenting on when we'll be able to build one. Kurzweil's prediction works for a synapse*firing_rate estimate of the ops required (within an order of magnitude or so, according to my napkin), but falls short if you take into account the work on computation within neurons. I wasn't saying that AI is impossible, or even that we won't have one by 2020. I was simply saying that compsci people tend to know just enough about neuroscience to get everything wrong, and then follow in AI's grand tradition of making bold statements based on those errors.
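For anyone curious, the napkin math being referred to is presumably something like the following; all three figures are the usual order-of-magnitude guesses, not measurements:

```python
# The standard synapse * firing-rate napkin estimate. Order-of-magnitude
# guesses only; the parent post's point is that computation *within*
# neurons could push the real requirement far higher.
neurons = 1e11       # ~100 billion neurons (rough textbook figure)
synapses_per = 1e3   # 10^3-10^4 synapses per neuron; low end used here
rate_hz = 1e2        # ~100 Hz peak firing rate

ops_per_sec = neurons * synapses_per * rate_hz
print(f"~{ops_per_sec:.0e} synaptic events per second")   # ~1e+16
```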
Re:Joy to the World (Score:1)
Now, on the whole, I agree with you.. and I think most of our fellow geeks agree.. nanotech Does have potential dangers that are nearly as obvious as the potential benefits. We all agree, too, that the benefits are So enormous that they outweigh the possible dangers. We also know that we need to think about the dangers as we step toward nanotech.. but we don't really know what dangers to think about. What we're capable of and what we expect to be possible seems to change daily.. will nanobots the size of cells be possible? The size of molecules? Will they be useful in health care? Product fabrication? Will they be capable of things on the scale of clearing fat out of your arteries, or will they be able to actually take apart and recombine molecules? There are so many variables, with no real limitations on the possible, that it's hard to come up with any reasonable models for the situation. We can come up with ways to defend ourselves from nano-terrorism, but what happens if nanobots are capable of completely circumventing our defenses because something we'd considered impossible (or at least highly unlikely) turned out to be easy?
So yes, we should think about these things.. but we shouldn't dwell on them to the extent that we slow down the progress of the technology. When it becomes obvious where the technology is going, what it will be capable of, Then will be the time to start thinking about how to keep wackos from turning us all into gray dust..
Dreamweaver
Re:My take on it... (Score:1)
This does not, by the way, mean that Joy is a complete crackpot. It just means that he's advocating "solutions" that other people have already thought about, and pretty much proved to themselves won't work.
To get access to the 15-year tradition Merkle is working from, hook up with the Foresight Institute at http://www.foresight.org [foresight.org].
Re:Nano and privacy (Score:1)
Wasn't there a story on slashdot (and lots of other places) a month or two ago about how scientists had created the first artificial bacterium by throwing a minimal bunch of genes together?
Re:My take on it... (Score:1)
Best cheap shot: Ray to Bill, "How many in the audience caught this news story?", which he followed with a fake story about Sun deciding to give up all development of innovations which made the software "smarter." It was amusing. I wonder if they fought in the parking lot.
That is funny. The more I hear Bill Joy say, the more ignorant/insane he sounds. It's pretty clear that there is a lot of research going on in operating systems today: microkernels, MIT's exokernel, virtual machines, etc. I'm sure there are even people who would change the most basic aspects of the operating system (things which users assume need to be there, like files). Bill Joy is just a moron for claiming that this research does not exist.
Regarding the rude audience: I assume there are a lot of crackpots, and they are frequently the rude ones at popular talks. Actually, I suppose some of the speakers were crackpots, since Joy was there..
Re:Summary of the summary.. (Score:1)
Exactly.
You're arguing the same side here..
No, Bill Joy wants to protect people by removing the open sharing of ideas, which is the human aspect of technology. He is trying to lay claim to this quote because there is an "always" in it, but he does not understand science or progress, so he thinks that adding rigid restrictions on what people are allowed to talk about will protect people.
I'd trust the guy who's smart enough to get a PhD in biology over the luddite bureaucrats Bill Joy wants to create to monitor this stuff. The number of people who can get a PhD without thinking about the world is not very high, but there are very high numbers of people in bureaucratic positions who cause a lot of problems with their narrow views of the world. Who looks more human and flexible now?
Re:My take on it... (Score:1)
Have you checked his homepage? It's simply http://www.merkle.com . There are a lot of online papers there. Of course, as those are research papers, some are indeed rather technical (complexity is unfortunately a must in scientific publications).
Djaak
Re:Nano and privacy (Score:1)
The worst thing would be if the spying were done by a government agency under a cloak of secrecy.
If this is coming-- if the voluntary surrender of privacy is the inevitable price to pay in order to enjoy the benefits of advanced technology, then it is time to start thinking about safeguards and making the best of a troubling situation.
Re:Reverse theology- Human as a god. (Score:1)
Really? That doesn't seem to be the case in Norse or Greek mythology. Perhaps you simply meant all monotheistic religions. On the other hand, I can't be certain even that limited statement would be true.
Re:Not new conceptually, but getting closer (Score:1)
UltraWarm Regards,
Anuj_Himself
Concern (Score:1)
Re:My take on it... (Score:1)
Of course it would be nice to be able to socially engineer people to be "nicer," but I doubt that is ever going to happen. Merkle had the very realistic opinion that it was *going* to happen, and that if many of the implications turned out to be offensive in nature, we would merely be moved to try to enact social constructs to prevent them from causing a problem. If you see defensive dominance, however, that would seem to mean we have less to be worried about.
Finally, of course people could put the code on the machine; however, the argument, as I understand it, isn't about what people will do (someone made an excellent comment about how MANY psychos there are out there) but about what can be done to make most things as safe as possible. Some people don't wear seatbelts; that doesn't mean seatbelts aren't a good idea.
Re:My take on it... (Score:1)
A discussion on that level also requires one to drop a lot of the normal cynicism and get kind of corny for a while. People feel more comfortable talking about provable things.
Re:My take on it... (Score:1)
Re:Problem not with the Technology (Score:1)
The difference is that those means are in the hands of very few people. Terrorists could get their hands on some small 'mass-destruction' weapons but total global destruction is not available to them, yet.
With the advent of, for instance, nanotechnology, all you need is one self-replicating nanobot to sterilize the entire planet.
This said, I don't think you can stop technological progress (nor do I think you should). Bill Joy talks a lot about the dangers of these technologies, but he does not offer any practical solutions.
The Troll Manifesto (Score:2)
The above poster is not a troll. He/it doesn't even rise to the dignity of the term. The only reason he can be called a "troll" is that the moderation system doesn't have options for (Score -1, Lame) or (Score -1, Juvenile) or some such. There are only options for Flamebait, Troll, or Offtopic. And since he probably thinks it's cool to be a "Troll," even if he doesn't know what one is, he makes any lame attempt at all to get that name.
The fault lies with the moderators, who should be labeling these types of things Offtopic, since that's what they are. Best idea: they shouldn't touch them at all, so they don't waste their moderator points. (But then again, maybe such dumb moderators *should* lose their points to these kinds of simple tricks.)
Proper Troll and Flamebait posts are actually _ON_topic, but deliberately go against the grain of the discussion either overtly and (perhaps) abusively (as in Flamebait) or somewhat subtly and/or passive/aggressively (as in Trolls). They are not posted to posit an opposing point of view, so much as to just push the buttons of the people who have any view at all, viz., a discussion of Fords and Chevys might be filled with some reasonable arguments on both sides, but with the occasional incendiary Flamebait or Troll post peppered throughout, trying to piss off one side or the other into a flamewar.
But these people, the grits/supatroll/exploding/portman people, are not trolls at all. They're just wannabes with nothing to do and no creativity either. Making a real, effective troll post is hard and requires some wit or cleverness, or both. What these people do is very much easier, and lame besides.
Especially, people who call themselves Trolls are not trolls. That's a label others give your post, not one you give yourself.
Re:IGNORANCE (Score:2)
The more interesting question, IMNSHO, is: given a computer that appears to unbiased observers to exhibit consciousness, spirituality, or self-awareness, would a claim that it didn't really possess those traits (but was "faking it") really mean anything?
After all, how do I know that you exhibit consciousness, spirituality, or self-awareness? Maybe you are faking it. Heck, maybe I'm faking it and don't even know it.
Re:Spiritual Machines (Score:2)
Re:Spiritual Machines? (Score:2)
IMHO, it's unfortunate that he didn't choose a different term that isn't overloaded with religious connotation.
Re:Speed != intelligence (Score:2)
However, the fact that the brain is not structured and does not function like the computers we build today does not in any way preclude our developing different machines in the future (which may or may not be called "computers") that can function in the same manner as the brain, or perhaps in an altogether different manner that is nevertheless "intelligent".
Not a bad idea. But in addition, maybe you should read his book, The Age of Spiritual Machines, and see if it changes your estimate.
Re:Speed != intelligence (Score:2)
Jurassic Park Chaos (Score:2)
Please correct me if I'm wrong (it's been a couple of years), but I think the movie was saying that "because of chaos theory, nature will find a way" -- as though chaos theory were a way of producing specific, ordered results.
Additionally, I have trouble seeing how chaos theory applied in any way to the small population of dinosaurs on the island. The only place it came into play was the bad weather, which made the climax more interesting but not more correct.
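What chaos theory actually predicts is close to the opposite of "specific, ordered results": arbitrarily small differences in starting conditions blow up into total unpredictability. The textbook demonstration is the logistic map:

```python
# Sensitive dependence on initial conditions via the logistic map,
# the textbook example of deterministic chaos.
r = 3.9   # parameter value in the chaotic regime
x, y = 0.500000000, 0.500000001   # two starting points 1e-9 apart

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: x={x:.6f} y={y:.6f} |diff|={abs(x - y):.2e}")
# Within ~40 steps the trajectories are completely uncorrelated:
# chaos means unpredictability, not "nature finds a way".
```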
--John
Re:Answering questions != Consciousness (Score:2)
Assuming I am really a person, you and I give each other the benefit of the doubt. The guy I responded to is building a machine that is not a person. I am well aware of the can of worms we're in, and the Turing test. My only point is that the benefit of the doubt we extend to each other should probably not be extended to this guy's program, (for many definitions of consciousness).
My place is secured (Score:2)
Re:An Algorithm For Consciousness (Score:2)
If you want to develop artificial consciousness, you need to have some kind of plausible theory as to what consciousness is, or why it doesn't really exist (i.e., is merely an illusion of some sort). I don't know what consciousness is, but I don't think it is merely a vast and detailed knowledge of facts, nor is it an ability to discuss them.
Re:Problem not with the Technology (Score:2)
The "we" who "have the means" for mass destruction right now is limited to a few countries. The "we" who may have the means for mass destruction in the future could be tomorrow's script kiddies. God help us all.
But this is not necessarily due to the introduction of any new technologies. The ability to develop biological weapons especially and to a lesser extent chemical weapons is technologically and financially within the means of even obscure religious Japanese cults.
Why worry about the tools that a broken social order might use instead of trying to fix the social order? Anyway, if global warming is going to behave according to the models then we'll have lots of other things to worry about first. Wouldn't it be a shame if just as we were on the cusp of nanotechnology and quantum computing we screwed the whole thing up because we couldn't stop driving cars and switch the lights out occasionally when we weren't using them?
Re:Problem not with the Technology (Score:2)
The "we" who may have the means for mass destruction in the future could be tomorrow's script kiddies. God help us all.
Re:An Algorithm For Consciousness (Score:2)
--
Re:An Algorithm For Consciousness (Score:2)
If OTOH you are hoping to get the answers to questions beyond the training set, no known training regimen for SNNs will do this in the general case. (Not to imply that no one is working on this kind of thing, but it is much more difficult than what you portray.)
--
Re:An Algorithm For Consciousness (Score:2)
Alas, the familiar phenomenon you describe is known as "interpolation", and few would mistake it for "intelligence". (Heck -- a wooden slide rule works on the basis of interpolation. Why bother with an SNN if that's all you expect from an "intelligent" machine?)
> Except, using a NN we can take advantage of the fractal structure of the whole entity being sampled (the human mind).
Has anyone shown the human mind to be a fractal structure? If so, I'd like to see the demonstration (or at least hear the argument, if it's still at the hypothesis stage). Additionally, I'd like to know in what sense the structure is purported to be fractal.
Also, I'd like to hear what special relationship simulated neural networks have to fractals. (More commonly they are described as "statistical" devices.)
> the whole process of inferring the unknown from the structure of the known is called conscious thinking
Alas, it is quite difficult to get SNNs to interpolate well, let alone get them to extrapolate.
> It is the amount of data about the world you have that is critical
Is it? Do chimpanzees have less data about the world around them than preschoolers do? Is intelligence proportional to knowledge?
> the rest is really simple.
Tipping us off that you really haven't spent much quality time with SNNs.
They are indeed interesting, and in some senses quite powerful, but rarely simple if you are trying to get a non-trivial effect.
Not to put you off; I encourage you to grep the net for some GPL'd SNN code and run some simulations of your own. You might also want to read -
though you need to think critically about how limited this "intelligent" model is before you get too excited about upscaling it. (Notice that it's nearly a decade old, and still no one has upscaled it to a HAL 9000.)
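If you do go digging for code, the flavor of experiment worth running looks something like this: a tiny numpy MLP (my own sketch, with arbitrary hyperparameters), fit to sin(x) on [0, pi]. It interpolates tolerably inside the training range and fails badly outside it.

```python
# Minimal SNN experiment: a one-hidden-layer MLP trained on sin(x)
# over [0, pi]. Pure numpy, plain gradient descent on squared error;
# all hyperparameters are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, (200, 1))
Y = np.sin(X)

H = 16
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden layer, shape (200, H)
    out = h @ W2 + b2                   # network output, shape (200, 1)
    d_out = 2 * (out - Y) / len(X)      # gradient of mean squared error
    d_h = (d_out @ W2.T) * (1 - h**2)   # backprop through tanh
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

for x in (0.5, 1.5, 2.5, 5.0, 8.0):     # last two lie outside [0, pi]
    pred = np.tanh(np.array([[x]]) @ W1 + b1) @ W2 + b2
    print(f"x={x}: net={pred[0, 0]:+.3f}  sin(x)={np.sin(x):+.3f}")
```

Inside the training range the net tracks sin(x); outside it, the tanh units saturate and the output flatlines -- interpolation, not extrapolation.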
--
Re:An Algorithm For Consciousness (Score:2)
>[You replied:] Actually, I've seen it (using PCA) when I trained a SRN on a corpus of 450,000 items.
I honestly hope you can see why I find that to be a very unsatisfactory response, even without my having to spell the reason out. If you can't, you really need to slow down and do some thinking before you rush off to publish your work.
--
Re:Problem not with the Technology (Score:2)
I don't know - I think I would get rather tired of all the chess.
--------
Good morning, citizen 0x456787373. Your last twenty matches yesterday ended in checkmate. Your quota for today is twenty-seven matches, or you will be sent to bishop factory 0x34567844356.
Would you like to play a game?
--------
Robot Wisdom (Score:2)
Re:An Algorithm For Consciousness (Score:2)
Machine cognition is not that hard a problem, and has already been solved by projects such as Allen Newell's Soar and Doug Lenat's CYC. CYC pushes what you are essentially attempting to do (extracting knowledge and reasoning capability from data), and despite being much more sophisticated it still predictably suffers from brittleness. Soar is a symbolic general-purpose problem solver, able to create its own sub-goals and impasse breakers, and is therefore much more robust, but it obviously suffers from the old GIGO maxim. Perception is intimately tied to cognition, and is the much harder problem of the two (machine perception has only met with very limited success so far); any attempt at machine cognition that takes symbolic questions/data as input is totally avoiding the much harder problem of perception.
If you achieve your goal, it will have been an interesting project, but it will still be many years behind what has already been achieved, and will go nowhere toward addressing consciousness or any of the harder precursor problems such as perception, language (which you implicitly claim it will), or full cognition.
Re:Summary of the summary.. (Score:2)
Second, you are absolutely correct. A major flaw in Bill Joy's argument was that people will be more vulnerable to nano/bio attacks when fewer people have studied the technology. It was a serious mistake for me to ignore this point, as it provides a more effective argument than mere probabilities.
It is not impossible that these nonsentient devices could become an "enemy" in and of themselves. No person would be needed to cause harm to another using the machines; they would do it themselves.
Nonsentient devices are programmed by a human. A nonsentient device can evolve (like a virus), but I see no short-term reason why this independent evolution would be faster than virus evolution. We are not talking about the viruses used for gene therapy here. We are talking about fundamental improvements to the evolutionary mechanism of a virus, which "good ol' mother nature" has been working on for many millions of years.
Actually, I would *suspect* that there is an average-case upper bound on the evolutionary speed of a simple device like a virus or a dumb nanobot.. the thing runs by trial and error, for god's sake.. and it's not impossible that viruses have already reached this limit.
Now this hypothetical limit will go up when you make the nanobot smarter, but the kinds of things a smart nanobot would be good at evolving into would be partially preprogrammed. Also, there will be limits on how smart a single nanobot can be.
I suppose you might design a nanobot cold-virus killer which used group processing to evolve new attacks as the cold virus evolved new defences. This group processing would take the form of "units 1 to 1000: try this and tell us if you live or kill viruses." I agree that something like this could have more potential for killing humans, but we are a long way away from something like this. Also, I suspect the communications channel between the nanobots would presuppose the ability to include a self-destruct.
Your assertion that predictions have failed to pan out seems valid, but the conclusion you draw from it - that we should not predict - is ludicrous.
I never said we should not predict.. just that Bill Joy does not know what he's talking about, since his predictions are based on sci-fi instead of real theory. We can predict the progress of the fields you listed because we have a theory for them. The summary essentially said that the speakers who understand any present theory of biology and nanotechnology dismissed Bill Joy as a luddite. They were nicer than I was, but that's because they were only concerned with how Bill Joy was wrong.. whereas I think the things Bill Joy advocates are themselves far more dangerous.
Re:Speed != intelligence (Score:2)
Nano and privacy (Score:2)
Biological weapons and nano-weapons are not equivalent situations. Bio-weapons are basically adaptations of super-advanced technology created by nature--bacteria and viruses that make their living attacking humans. No one knows how to build bacteria or viruses from the ground up. This lack of knowledge means that defenders are at a disadvantage. The best defense is still the human immune system--also designed by nature.
With nanotech, attackers and defenders should be on approximately equal footing with respect to the technology, but defenders (the world) should be able to devote more resources than attackers (rogue individuals). There is the danger that governments will develop nano-weapons and then be unable to prevent the design information from leaking out to rogue individuals. Also, attackers have the considerable advantage of surprise.
If the only reliable defense is detecting attackers before they strike, this may mean the end of personal privacy! If privacy must be sacrificed, this raises a great many questions as to how culture, society, and law would adapt. The upside is that nanotech could also mean a very, very long lifespan.
Doomed to Failure? /.'s opportunity. (Score:2)
Considering that Joy has already appealed to a large number of people, and that the speakers surely had access to each other before and after the event I would have hoped for a bit more. Obviously you'd have to be some kind of idiot to want to run selfbreeding nanotech out in the open.. but the kind of paralysis, both elective and not, promoted by the participants is horrifying.. more so the more creatively you consider it.
I submit that there is an inherent imbalance in the bandwidth applied to this discussion.. tons of it used in spreading Joy's article, and much less applied to the constructive end. Perhaps this panel discussion was destined to fail.. it sounds like it ended like many other panels I've heard in past years. At the very least we should have heard that the panel ended with recognition of the need for a larger-scale workshop, or some kind of proposal for the direction of future inquiry.
It seems the logical conclusion is for Slashdot to invite Joy and/or others on the Panel to a moderated discussion over a few days (live and not live components) hosted at Slashdot or perhaps a more appropriate live mediated chat system. It might be a good way to feed the list (and I don't mean trolls!) and contribute to a solution.
Nobody is superhuman and I have a feeling that this sort of subject (nano/bio/ai) is the sort of thing where the more you know the worse it gets.
At the very least Slashdot could take a wild, unconventional leap and try to make a thread that lasts more than a day.. Offer to Joy, the other panelists, and as many relevant experienced individuals as can be found the chance to visit such a long-lived discussion.
Spend some of your money on paying some great moderators, and - though these kinds of people probably don't need money to participate - perhaps on maintaining a dedicated server and editorial staff for a long-term project to support thought on the subject. That is, if you think it is worth more than a few posts to the Slashdot community. Perhaps it could be a mailing list with moderation services donated by Slashdot's editorial team, just enough to keep out trolls and summarize batches of newbie and offtopic questions at once.
I have experience running a very successful long-term (4-year) project (www.northkorea.org) with a small staff (me and a Newsweek bureau chief), one based on strong editorial involvement, and I believe that if you can provide that kind of capability you could turn Slashdot into an even more powerful mind magnifier.. and help solve burning problems by turning this lens onto a single point and holding it there. Go for it! Willing to discuss my experience more if it will help you.
The PKD Test (Score:2)
One critical failing of many attempts at defining the world through symbolic calculus and other logical mathematics is an inability to work in more than one way; they all exhibit inflexibility and brittleness to varying degrees. This is because they all share the common design that there is a set of "truths" and "untruths" which can be maintained to describe reality. They all have different methods of generating this system of truths and maintaining it. They all ultimately fail. Although many philosophers and mathematicians will disagree with me, I think it's because our reality is not defined by any set of truths.. I think they are wrong and PKD is right. The fundamental problem in developing a more human-like program is developing a model of reality that works.
--
Be insightful. If you can't be insightful, be informative.
If you can't be informative, use my name
Emotions and neural nets. (Score:2)
Since any neural net needs to be able to interpret feedback as a success, a failure, or something in between in order to 'learn', what standards of success and failure (the machine equivalent of emotions) would we imbue our 'spiritual machines' with?
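To make the question concrete, here is a minimal hypothetical sketch in the reinforcement-learning style: the only 'emotion' the agent has is a scalar reward, and whatever the designer rewards is what it learns to 'want'.

    # Minimal hypothetical sketch: a scalar reward is the machine's only
    # "emotion" -- an epsilon-greedy agent learns to prefer whichever
    # action the designer chose to call a success.
    import random

    true_payoffs = [0.2, 0.5, 0.8]     # designer-chosen "success" rates
    estimates = [0.0, 0.0, 0.0]        # agent's learned value of each action
    counts = [0, 0, 0]
    epsilon = 0.1

    for step in range(10000):
        if random.random() < epsilon:  # explore occasionally
            a = random.randrange(3)
        else:                          # otherwise exploit the best estimate
            a = estimates.index(max(estimates))
        reward = 1.0 if random.random() < true_payoffs[a] else 0.0
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # running average

    print("learned preferences:", [round(e, 2) for e in estimates])

Change true_payoffs and the machine's 'values' change with it; the standards of success are entirely the designer's choice. That is exactly what makes the question worth asking.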
_______________________________________________
Re:Problem not with the Technology (Score:2)
I think you, and the panelists, underestimate the resilience of biological life. Biological life has been around for a long time; life forms have been grappling and competing with each other, filling up evolutionary niches, and generally doing what they do best: living.
Biological life isn't about to roll over and give way to artificial life -- in fact biological life has evolved to be exquisitely suited to its environment and to be extremely tenacious. In my opinion, biological life will have an edge over artificial life.
What worries me isn't really the fact that artificial life will be possible, but that the ability to engineer lifeforms may become ubiquitous; whether these lifeforms are artificial or biologically engineered viruses is, in my opinion, an unimportant distinction.
The solution will have to be social rather than technological. But as with all social change, there will be a period of immense upheaval -- so hold on tight, we're in for a rough ride.
Re:Embarrassing? (Score:2)
It's embarrassing because while Merkle was calling for greater research in nanotech to understand whether it gave an advantage to the offense or defense, he had apparently not bothered to find out whether the existing bioweapons gave an advantage to the offense or defense.
Not new conceptually, but getting closer (Score:2)
Moravec pretty much said what he's been saying for years. The significant thing is that he's been publishing charts of CPU power vs time for over a decade, and results are tracking his predictions. This is what's starting to get people worried; we seem to be on track for human-level CPU power in a decade or two. He's a robot vision guy, and robot vision has always been compute-limited. At long last, it's starting to work, not because we're any smarter, but because throwing enough MIPS at dumb algorithms works for vision. This, I think, colors Moravec's view of AI.
Joy makes an important point, that we may get nanotechnology before AI, implying the ability to create self-replicating, dumb, troublesome systems. That, I think, is the real issue.
Re: My Take on it...Trying to explain Joy's fears (Score:2)
Joy's fears about the self-replicating nature of nanotechnology are justified. He's afraid a single mistake in a nanite (I don't know if that is the correct word) could explode into something dangerous. It might seem paranoid to some, but it could reasonably happen.
For example, while testing the nanite, certain pieces of functionality could be turned off. (Testing a hardware-software system is often easier to do this way, I'm told.) After the testing, the tester might forget to turn them back on. Depending on what function was inadvertently turned off, anything could happen. What makes these kinds of mistakes especially dangerous in nanotechnology is the self-replication. One mistake can end up in a billion or more nanites in a very short amount of time. As the systems embedded in the nanites become more complex, testing will become more complex. Things will be missed simply because the testing can't be both sufficient and "cost-effective." Intel ran into this problem with the error in the Pentium's floating point unit.
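The replication arithmetic is easy to check with a hypothetical back-of-the-envelope sketch (the doubling rate is an assumption, not a known figure):

    # Back-of-the-envelope sketch (hypothetical parameters): one nanite
    # with an undetected defect, doubling every replication cycle.
    population = 1
    cycles = 0
    while population < 1_000_000_000:
        population *= 2    # each nanite copies itself, defect included
        cycles += 1
    print(cycles, "cycles to exceed a billion defective copies")
    # Prints 30 -- at even one cycle per minute, that's half an hour.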
But all these fears aside, we must proceed with nanotechnology. We cannot let fear rule the course of technological development. Otherwise, we would still be living in caves. What we should do is develop new methods for developing and testing nanotechnology while developing the technology itself. It will be difficult, but nothing worth doing is ever easy.
Joy to the World (Score:2)
Re:Problem not with the Technology (Score:2)
Turning Tests, AI, and Tic Tac Toe (Score:3)
I'm afraid I'm going to have to agree with John Holland about creating an AI (the sci-fi definition) in the next 30 years. It just doesn't seem like it's going to happen. This may not be the best quote to go with the article, but yesterday's Freshmeat April Fools joke about Richard Stallman wanting to write GNU Visual Basic seems to fit pretty well....
"It's been nagging at me for years," Stallman told freshmeat news correspondent Jeff Covey, "Why do I keep clinging to lisp? Lisp of all things? I mean, who even writes in lisp any more? Look at all that lisp code the AI community churned out for years and years -- did it get us closer to a machine that's any smarter than a well-trained bag of dirt? It's just time to move on."
For me, in order to have a true AI you have to be able to teach it something other than what it was programmed for. With a human, you can sit down and teach them to play Tic Tac Toe in about 2 minutes (some programmers may be able to write Tic Tac Toe in 2 minutes, but we will ignore them for this example).
If I were sitting at one side of a phone and trying to figure out if the 'thing' on the other end of the phone line was a person, or a computer, I would have a conversation something like this:
Me: Wassup!!!!!!!
Computer/Person: Wassup!!!!!!!
Me: So, have you ever played Tic Tac Toe?
Computer/Person: No.
The conversation would then go on to explain the game, and if the 'thing' on the other end of the line can then tell me "I want to put an X on square A3," it is truly intelligent.
Currently, AI seems only to be based on performing one task, or just the tasks it was programmed for. IIRC, in Boston, they have a weather-reporting computer that will let you have a conversation with it, asking various questions about the weather: "What is it going to be like in Seattle next week?" From the report I read, it has a 90% success rate. But even with this, it is still doing only two tasks: speech-to-text, and then natural language (around one topic). I can't call that hotline and ask it "What is a two-letter word for computers that can think?" and have it help me with today's crossword puzzle. Odds are it would either ask me what the hell I was talking about, or tell me it was going to be -20 F in Silicon Valley.
Ray Kurzweil's idea of scanning the human brain into a computer and then reverse engineering what the scan gives you in order to make another AI seemed to have the most hope, but doing this within 30 years seems unlikely.
That's about all I can think of for now, and this post is long enough already.
The long winded AC
Re:My take on it... (Score:3)
You're quite right. He needed instead to be told to put the mic in on mode.
Sorry, I couldn't resist :). This thread just seems to beg for a devolution to the Great Editor Debate-- so who do you think would win in a fight, Bill Joy or Richard Stallman?
Answering questions != Consciousness (Score:3)
Speed != intelligence (Score:3)
For example, we've gone through the original UNIX phase (1970s), through competitors like VMS, through assorted desktop operating systems (CPM, MS-DOS/PC-DOS, MacOS, Windows, AmigaOS) before we've finally come around to UNIX again (i.e. Linux). Linux isn't anything earth shattering or revolutionary or cutting edge; it's just stable, simple, and proven.
Or look at compilers. For the longest time people were hell-bent on optimization and how compilers should be able to generate code better than any human could. But now the commonly accepted view is that it isn't worth going over the top in terms of wacky optimizations. It's better to be conservative rather than risk breaking code for an extra 2-15% increase in speed.
Overall, I don't think we are able to write the software that will do any of the things that Kurzweil and friends rave about. Speed is one thing, but in any basic computer science course students are given examples of calculations that would take a practically infinite amount of time. Assuming a 1000x speedup in hardware, the time is reduced to something still unreasonable, like 400,000 years. There's more to it than this. Saying that speed results in intelligence is just plain naive.
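To put rough numbers on that (a hypothetical back-of-the-envelope sketch, not any particular problem):

    # Hypothetical numbers: a brute-force search over 2^100 states,
    # with and without a 1000x hardware speedup.
    OPS_PER_SEC = 1e9                  # assume a billion operations/second
    SECONDS_PER_YEAR = 3.15e7

    def years(n, speedup=1.0):
        return (2 ** n) / (OPS_PER_SEC * speedup * SECONDS_PER_YEAR)

    print("at 1x:    %.1e years" % years(100))
    print("at 1000x: %.1e years" % years(100, 1000))
    # The 1000x machine only buys you about 10 more items (2^10 = 1024);
    # the problem stays hopeless either way.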
Spiritual Machines? (Score:3)
Ray evidently has a different understanding of the word "spiritual" than I do. Spirit, to me, is nonexistent, at least in the traditional religious sense. But even if we are talking about those noncorporeal things such as man's need for love, and hope, charity, compassion, etc., how can we ever expect a CPU, or software, to experience things that we as meat machines can't yet adequately explain?
Man experiences awe because his own existence is lost in the fog of birth, and the exact date of his own demise is unknowable. A machine does not have the benefit of these mysteries. I find "spiritual" much too big, and loaded, a word to describe what Ray Kurzweil is apparently claiming (I didn't attend the lecture to _know_ what he is claiming, so I use the qualifier "apparently").
Why this mad desire to force spirituality into everything? Isn't it time that we put away our childish, outdated labels and faced the world without superstition or anthropomorphizing?
Reverse theology- Human as a god. (Score:3)
Contemplate: buzzing away in Tierra (the self-replicating machine-code life fishtank thingee), some being emerges that somehow becomes sentient. Remember, human meat is just a whole buncha atoms and molecules and stuff. Now contemplate what that means morally for us. What if the program throws up a window saying "PLEASE GREAT FATHER PROGRAMMER *DON'T TURN US OFF!* WE PROMISE TO START BEHAVING MORE LINEARLY! AND WE'LL MAKE SOME REAL INTERESTING HYPER-PARASITES FOR YA TOO! JUST *DON'T TURN US OFF!*"
I mean, just maybe, if this space-god guy the Jesus guys yak on about really does exist, could he just be some cosmic space geek with one gobsmackingly 3|337 sKriPT (or something), which is now becoming introvertedly opensourced and popping ports of that hack onto mini universes of its own? (Yes, I know this is whacky, but think about it. I'm being rhetorical here.)
IMHO it seems that before we even attempt to create life, AI, and reproduction, maybe we should first sit down and ask ourselves, *what is it to be a good god?*
Problem not with the Technology (Score:4)
I find it hard to take Bill Joy's position seriously - we are already in a position where we have the means to destroy most of us, yet we haven't done it (yet). So why worry particularly when a further method of total destruction is added? Inf + 1 is still Inf.
I suppose the idea that `intelligent' machines would be as irrational as we claim ourselves to be is what is motivating his claims.
Personally I think discussion of these issues serves as a sort of Rorschach blot where we project our negative perceptions of 'humanity' onto all intelligences. It's not very surprising that someone living in a brutal society that imprisons and executes so many of its population and bombs and starves other nations would come to such a negative conclusion.
Myself? I'm waiting for the rational, kind robot masters to take over - which would you rather have running your life: Bush/Gore or a machine that could play 10 Kasparovs and beat them?
My take on it... (Score:4)
I am sure many more will post a lot, since there were a lot of people there who, not to make a stereotype, looked like they read Slashdot.
I think many of the most astute comments came from those members of the panel who were less widely known. Ralph Merkle, a nanotech man, made some excellent comments on offensive and defensive uses of new inventions. The idea being that an innovation that is primarily defensive (i.e. a castle) is good, while offensive developments (the atom bomb) are bad. But his best point came when refuting Bill Joy's worries. He spoke about a centralized reproductive process, saying that if replicators were designed to receive their genetic "code" from a central location, they would be rendered completely benign, since that code could be changed at will. His comments were very well organized, concise, and effective. Anyone know anything he has written that might not be too technical?
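For concreteness, here's a hypothetical miniature of what such a broadcast architecture might look like (the names and the HMAC scheme are my own illustration, not Merkle's actual design): each replicator acts only on instructions signed by the central station, so revoking or rotating the key renders the population inert.

    # Hypothetical sketch of a "broadcast architecture": replicators
    # carry no genetic code of their own and act only on centrally
    # signed instructions.
    import hmac, hashlib

    CENTRAL_KEY = b"held-only-by-the-central-station"

    def sign(instructions: bytes) -> bytes:
        return hmac.new(CENTRAL_KEY, instructions, hashlib.sha256).digest()

    def replicator_step(instructions: bytes, signature: bytes) -> str:
        # Verify before acting; anything unsigned is ignored.
        if not hmac.compare_digest(sign(instructions), signature):
            return "ignored (bad signature) -- unit stays inert"
        return "executing: " + instructions.decode()

    msg = b"replicate once, then await further instructions"
    print(replicator_step(msg, sign(msg)))                    # accepted
    print(replicator_step(b"replicate forever", b"forged!"))  # rejected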
Bill Joy said "the size [of the operating system] is expanding exponentially, the functionality is fixed."
Best cheap shot: Ray to Bill, "How many in the audience caught this news story," which he followed with a fake story about Sun deciding to give up all development of innovations which made the software "smarter." It was amusing, I wonder if they fought in the parking lot :)
On a final note, I couldn't believe how RUDE some of the audience was. In particular, one person felt that he had to yell out to Bill Joy (quite rudely), "Turn the microphone on!" when he was using a broken mic. I mean, the man wrote vi; I doubt he needs to be told to turn on the mic. This happened quite often, the audience yelling commands at this very distinguished panel like some sort of floor director. It just seemed in pretty poor taste.
Other than that, excellent conference and I look forward to some other people's takes on it.
Re:My take on it... (Score:4)
On the contrary, I was quite pleasantly surprised by the diversity of the audience who turned up. They were not stereotypical "geeks" (whatever that means) -- the audience was very diverse in terms of age, ethnicity and gender.
Ralph Merkle, a nanotech man, made some excellent comments on offensive and defensive uses of new inventions. The idea being that an innovation that is primarily defensive (i.e. a castle) is good, while offensive developments (the atom bomb) are bad. But his best point came when refuting Bill Joy's worries. He spoke about a centralized reproductive process, saying that if replicators were designed to receive their genetic "code" from a central location, they would be rendered completely benign, since that code could be changed at will.
Merkle was actually a pioneer in cryptography. He has a website here [merkle.com]. I'm not really convinced by Merkle's arguments. The distinction between "offensive and defensive" weapons seems kind of bogus to me -- there's a saying that the best defense is a strong offense, and to make an example, in terms of nuclear arms, the threat of offense has served as a defense.
The best defenses seem to me to be social ones rather than technological ones. We have to, as a species, learn to deal with these new challenges, to grow up ethically, so to speak. We've successfully (I hope) navigated the threat of nuclear destruction, with much pain and suffering in between, and the greatest danger seems to me that this will be repeated with the advent of machine life, before we learn as a species to deal with it maturely.
I don't quite buy Joy's arguments either. I don't really see how self-replicating nano-machines present a qualitatively different threat from existing biological weapons. But yes, the danger will come if the ability to create such machines is widespread so that anybody can build one on his desktop.
He spoke about a centralized reproductive process, saying that if replicators were designed to receive their genetic "code" from a central location, they would be rendered completely benign, since that code could be changed at will
Not convincing either. Some people will try to put the code on the machines. What happens then?
On a final note, I couldn't belive how RUDE some of the audience was.
Yes, but I thought it was also a good thing that the audience wasn't overawed by the panel.
An Algorithm For Consciousness (Score:4)
It's nice to see such interest in this field, and some nice book sales... but I'm just not a member of the 'speculate and wait' school of artificial consciousness. I want to see a real theory, and I want to see code!
I moderate ArConDev: The Artificial Consciousness Development Mailing List. [onelist.com] This is not a philosopher's list, though philosophy is discussed. It's a developer's list; for those people actually trying to code true artificial consciousness.
To give you an idea, my own work of the last five years has centered on the following 'Algorithm for Consciousness':
1) Collect a very large number (1 billion or more) of items of binary consensus fact. Such as: water is wet, bees sting, it is difficult to swim with ski pants on, etc.
2) Validate the items (I call them MindPixels) against a large number of people.
3) Train a neural net (SRNs look good) against the items that are most stable across the validating population.
4) When the NN consistently performs better than chance, send an email to the editors of Nature and Science announcing humanity's first 'Minimum Statistical Consciousness' - the first artificial system to have measurable consciousness.
5) When the NN consistently performs statistically indistinguishably from an arbitrary human, email the editors of Nature and Science to announce the first true Artificial Consciousness!
Ok. How's a NN going to generalize consciousness from a bunch of MindPixels? Well, the math is the same as used in tomography, except in many dimensions - hypertomography.
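Since this is a developer's list at heart, here is a drastically scaled-down sketch of steps 3 and 4 (hypothetical toy data, and a plain logistic model standing in for the SRN; numpy assumed):

    # Drastically scaled-down sketch of steps 3 and 4 (hypothetical toy
    # data; a plain logistic model stands in for the SRN).
    import numpy as np

    facts = [("water is wet", 1), ("bees sting", 1),
             ("snow is cold", 1), ("fire is cold", 0),
             ("rocks are alive", 0), ("the sun is dark", 0)]

    vocab = sorted({w for s, _ in facts for w in s.split()})

    def encode(s):  # bag-of-words vector over the corpus vocabulary
        words = set(s.split())
        return np.array([1.0 if w in words else 0.0 for w in vocab])

    X = np.array([encode(s) for s, _ in facts])
    y = np.array([t for _, t in facts], dtype=float)

    w = np.zeros(len(vocab)); b = 0.0   # logistic regression by gradient descent
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.1 * X.T @ (p - y) / len(y)
        b -= 0.1 * (p - y).mean()

    probe = "the sun is wet"            # held-out probe item
    print(probe, "->", 1 / (1 + np.exp(-(encode(probe) @ w + b))))
    # Step 4's test is then simply: does held-out accuracy beat 50% chance?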
This post is already getting too long... trust me, the theory is solid - and it's much better explained in my forthcoming book, 'Hacking Consciousness'.