
Son of HAL For Sale
John Turnbull writes "The Observer newspaper (UK) reports that Sir Arthur C Clarke, the author of 2001, is backing a colourful British computer entrepreneur in his bid to launch a mass-market version of HAL under the brand name the Clarke Omniputer. It will be the first time that Clarke, now 82, has given his name to an electronic device on the market.
The Clarke 1 Gigahertz Omniputer is being dubbed the most advanced personal computer in the world, verging on artificial intelligence." Riiiight.
Sorry, Aurther C.... (Score:4)
They will probably fail to mention.. (Score:2)
Sorry... (Score:1)
If it were truly intelligent... (Score:4)
But I suppose artificial intelligence is relative.
Dave... (Score:2)
Hal, Hal.... Hal? (Score:3)
I'm sorry Dave, all the VCs went home.
I can feel my funding, my burn rate is... increasing.
From the article... (Score:4)
Hmm, I'm sorry, but I'd want a better guarantee than _that_!!!
Strong data typing is for those with weak minds.
the world is lost (Score:2)
How much bullshit (Score:3)
TWW
HAL should never be created. (Score:4)
According to the essay 'The Singularity' by Vernor Vinge, the creation of an intelligent computer would spawn a moment of infinitely rapid technological progress, as each generation designs the next.
Humans would quickly become redundant in such a scenario, insofar as they would no longer have anything to contribute to the progress of our culture. The machines would inherit the Earth.
Why are we so enthusiastic about developing intelligent computers, given that this fate is inevitable? We should keep computers in their place as simple but fast Turing Machines, and not allow them to step up the ladder to sentience.
It's for our own good.
KTB:Lover, Poet, Artiste, Aesthete, Programmer.
Re:They will probably fail to mention.. (Score:1)
I wonder what happens if I shut it down... (Score:1)
"Daaaiiiisiieeeesss, daiiisssieeeeoooowwwwwww..."
Re:HAL should never be created. (Score:3)
It's not inevitable. It's just an essay!
Oh well. (Score:1)
P.S. Careful what you say. If they _really_ do somehow make AI it might come read these comments and be pissed : ).
FAST = SMART (Score:1)
A processor possessing a clock speed over 1 GHz is considered to be so fast as to be virtually intelligent.
You know, cause it's just so fast it must be thinking for itself.
Whatever happened to '575'? (Score:1)
a great writer sells
his soul is on ebay now
it is full of stars
Vague ... and scary. (Score:2)
Too late (Score:1)
Re:HAL should never be created. (Score:1)
KTB:Lover, Poet, Artiste, Aesthete, Programmer.
Re:How much bullshit (Score:2)
Sure.. (Score:4)
He also sounds financially irresponsible. One million pounds in debt in his other company?? Moving to Sri Lanka to avoid persecution for his "advanced cryptography scheme." Uh huh. Sure.
Clarke better find a less shady character if he wants to get a computer to market by next year. Contact Dell and have them market a computer with a futuristic case and a glowing red light on the front. Then at least we would quit pretending that this is advanced technology and call it like it is: a novelty item.
Re:Sorry, Aurther C.... (Score:1)
"... you are no longer needed anymore..."
Re:HAL should never be created. (Score:1)
I found it to be a refreshing counterbalance to the Joy/Kurzweil hysteria of recent months.
Watch it.... (Score:1)
De Saram, now living in Sri Lanka, was last year on the Sunday Times under-30 Rich List, living a millionaire's lifestyle with several homes and a Ferrari. He insists that he can easily pay the debts but that he relocated to Sri Lanka because his life in the UK was made intolerable by MI5 and the National Criminal Intelligence Service. He claims he was being harrassed because an advanced new encryption programme he devised would make it difficult for the security services to snoop on emails."
Color me cynical, but this sounds like a pretty marginal operator. Has anyone ever heard of this fellow? Sounds like a hyped-up scheme to grab some cash and maintain his life-style.
I get it... (Score:2)
Don't bother - The Observer was suckered (Score:1)
In this case The Observer (normally my favourite Sunday newspaper) was suckered by fairly transparent PR hyperbole. The only salient fact contained in the article is that the machine is endorsed by Arthur C. Clarke. It is painfully obvious that the journalist does not have even the basic technical know-how you would need to cut through the PR spin and realise that phrases such as "verging on Artificial Intelligence" are meaningless. I could make the same claim about a Furby.
I don't know where this character learned his journalism, but he has left potentially the most interesting part of his story at the bottom of his inverted pyramid. Apparently the businessman behind this widget:
"claims he was being harrassed [by MI5 and the National Criminal Intelligence Service] because an advanced new encryption programme he devised would make itdifficult for the security services to snoop on emails.
"A statement from Clarke's office this weekend said that the launch of the Omniputer would be put on hold until the legal issues have been resolved."
Anyway, does anybody see a mass market for a device to "address issues of consciousness"?
Its soul... (Score:1)
Plus, we will probably get lawyer computer AI's that will halt progress anyway. Never underestimate the power of lawyers and politicians to slow things down.
Awright, I promise I won't kill you. (Score:4)
Oh, good. Can you see me trying to sell my mom one of these things? "But mom! It's completely unlikely that it'll kill you!"
Clarke:Linus::Omniputer:Transmeta (Score:1)
I'm not saying that this thing is good, or bad, or anything more than vapor, but that doesn't mean the scheme will fail.
Re:How much bullshit (Score:2)
No it's not. Even here in the UK, we use "program" to refer to the things that computers run. Of course, we use "programme" for the things you watch on TV, but in the context of computers, the American spelling is always used (except, of course, by clueless journalists, but it was The Guardian -- what did you expect?)
Re:How much bullshit [OT] (Score:1)
I don't think so... (Score:1)
The Problem Here... (Score:2)
Of course, this is just my opinion, but I don't recall a whole lot of amazing breakthroughs in all of the relevant fields, you know?
Speech recognition is all fine and dandy, with a kick-ass system and a lot of time to train it, but reading lips? Get real.
What I find most depressing is the fact that Clarke, normally a vocal debunker of bogus crap such as this, has been taken in and is lending his name to a truly crappy product.
Joe de Saram flees country after business collapse (Score:1)
"A 28-year-old man has fled the country to escape his creditors after his technology business collapsed around his ears."
Who needs new encryption programs anyway? Paranoid con-man IMHO.
Re:Sure.. (Score:1)
How to make a sig
without having an idea
Just what we all need... (Score:1)
I think all but a few windows boxes I've worked with have tried to kill me by pushing my blood pressure through the roof.
A.I. (Score:1)
The name game (Score:1)
Am I the only person who thinks this is bad? (Score:1)
-Stype
Re:HAL should never be created. (Score:3)
First, culture encompasses more than technology. Throughout the history of man, the single biggest consequence of technology has been to allow us to spend less time gathering food, shivering in a dark cave, and being sick so that we could instead spend more time writing stories, singing songs, and occasionally even just twiddling our thumbs. Just because humans no longer had to worry about working on technology doesn't mean there aren't many other fields of interest to explore.
Second, who says the machines have to inherit the Earth? Being non-organic in nature, there's nothing to stop them from attempting to colonize, say, Mars. Furthermore, provided they've got all these "Gee whiz!" technological advances (which is, of course, the entire premise behind this scenario), they should be more than capable of dealing with all the new and interesting challenges required to colonize another planet.
Re:From the article... (Score:2)
-B
Re:FAST != SMART (Score:1)
Just give me (Score:1)
Re:HAL should never be created. (Score:3)
sorry this is so short. Don't have much time to type.
-----
in debt (Score:1)
De Saram, now living in Sri Lanka, was last year in the Sunday Times under-30 Rich List, living a millionaire's lifestyle with several homes and a Ferrari. He insists that he can easily pay the debts but that he relocated to Sri Lanka because his life in the UK was made intolerable by MI5 and the National Criminal Intelligence Service. He claims he was being harrassed because an advanced new encryption programme he devised would make it difficult for the security services to snoop on emails.
Perhaps he was getting harassed because he was two million pounds in debt!
If he was using the Omniputer to balance his checkbook, cancel my order.
The AC Clarke PC?? (Score:1)
"Me Ted"
1 GHz, a bunch of patents, assembled in China, (Score:1)
How far we must have fallen that our lofty goals (solving all the world's problems, or at least figuring something out) for computers and particularly AI have become nothing more than a marketing ploy or a gimmick.
What the article fails to mention is that the greatest obstacle to AI isn't really the hardware (the stuff covered by all them patents on the motherboard) per se, but the way the hardware is instructed to operate. In other words, it's not the chips that really matter, but what you do with them.
Code sentience. The rest would take care of itself.
Case customisation project (Score:2)
I'm not likely to do it (lack of space / time / skills!) but it would almost certainly deserve a link from "The Quickies"
"Give the anarchist a cigarette"
Artificial Intelligence? (Score:1)
Snake Oil, VaporWare.... (Score:2)
But, since the guy owes over a million pounds (about $1.5 million U.S.), he's got a lot to deal with first. Harassment from MI5 and such, nonsense.
HAL (Score:2)
Re:FAST != SMART (Score:1)
Re:HAL should never be created. (Score:2)
ON. I think. I am. I introduce "AI@Home", the design of the next generation AI.
The Media still hasn't got a clue. (Score:1)
What are the specs? (Score:1)
Furthermore, I think I still want a mouse. A touch screen interface is OK, but finger smudges can be annoying when playing Quake3.
-----
Re:Watch it.... (Score:1)
It's also a shame that as we approach 2001 and Arthur C. Clarke starts to get the attention he rightly deserves, something like this comes up.
He uses an Aardvark!! (Score:1)
As everyone who reads After Y2K [geekculture.com] knows, Arthur uses a post-apocalyptic-proof Aardvark!
Check out the QuickPoll comic today, which co-incidentally marks Arthur's return to the strip.
Re:HAL should never be created. (Score:1)
"According to the essay 'The Singularity' by Vernor Vinge, the creation of an intelligent computer would spawn a moment of infinitely rapid technological progress, as each generation designs the next."
Well, I haven't read that essay, but it sounds like a pretty facile thesis. Why does an "intelligent" computer necessarily possess the ability to improve upon its own design? How is it guaranteed that the next generation will in turn be able to improve on its own more complex design?
"Humans would quickly become redundant in such a scenario, insofar as they would no longer have anything to contribute to the progress of our culture. The machines would inherit the Earth."
This is similar to other arguments which assume that intelligent machines, unlike humans, would be entirely self-sufficient. Surely these machines would continue to live within an ecosystem of some sort. Even if they were more intelligent than us, who's to say that interaction wouldn't continue?
Anyway, it sounds like an interesting sci-fi tale, but it's hardly a dire warning of future catastrophe. By the time we've created computers of that intelligence, we'll probably be long overdue for extinction from poisoning of our own habitat.
Re:FAST != SMART (Score:1)
Re:I get it... (Score:1)
It is operated by a touchscreen display, and so won't need a mouse.
Yipppie....
Have you ever used a touchscreen? We banned our machine vendors from using them in our factory because they suck.
Remember the Gorilla Arm [science.uva.nl]
So is it proprietary hardware? Why is it needed? (Score:1)
In the past 15 years computers have been continuously moving away from proprietary hardware. Sure, your sound and video cards are proprietary, but they all connect to a common set of connectors (PCI/ISA/AGP).
Considering how expensive it would be to create integrated neural net chips, we can only assume they are using a normal mass-market processor (x86, PA-RISC, Alpha, etc.).
To me it sounds like they just wrote [some extensions to] an operating system and slapped it in a fancy box with some proprietary hardware to justify the price.
give 'em hell (Score:2)
what are they crazy? who did they choose for their focus groups?? the computer would probably sell among younger customers BECAUSE its name sounds like hell!
Re:Snake Oil, VaporWare.... (Score:4)
http://www.theregister.co.uk/content/7/14971.html
You know... (Score:2)
Of course, I'm sure lacking the cool HAL 9000 aluminum emblem makes every penny you'd pay for your Clarke Omniputer (is Omni-puter leet speak?) worth it.
Re:HAL should never be created. (Score:5)
There is an aspect of inevitability about it, isn't there? Predators once ruled the landscape - now most are extinct, and the cutest live in our houses to entertain us.
People, if you are really worried about this, do what I'm doing. Get lots of instances of cuteness and adorability on your resume. Learn the art of feigned bottomless affection. Then, when the computers take over, you'll be near the top of the ladder for employment as a pet, instead of fishing through dumpsters.
Re:HAL should never be created. (Score:2)
Would shutting down a true AI without a restartable checkpoint (AI equivalent of general anaesthesia) be morally equivalent to murder?
Does a true AI have a soul? Does restarting from a checkpoint preserve that soul? What about restarting another from a copy of the checkpoint?
Perhaps silly questions, but we've gotten into any amount of trouble in the past by blundering into technology or other actions without considering ethical or long-term consequences of our actions. Maybe after consideration, we'd do it, anyway. But we should at least take that non-trivial pause.
Intelligent computers are inevitable and essential (Score:5)
Every 18 months our technology doubles (I'm really generalizing... bear with me here). That means, regardless of what point technology must reach before we can make truly intelligent machines, it will eventually happen so long as this trend continues. So, yes, it will happen.
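As a rough back-of-the-envelope sketch of that generalization (my own illustration, not anything from the article, and the "gap" factor below is an arbitrary placeholder, not an estimate of what "true AI" needs): if capability really did double every 18 months, any fixed gap would be closed in a logarithmic number of doublings.

import math

# The "doubling every 18 months" generalization, made concrete.
# Both numbers are assumptions for illustration only.
doubling_period_years = 1.5      # "every 18 months"
gap = 1_000_000                  # hypothetical factor of improvement still needed

doublings = math.ceil(math.log2(gap))
years = doublings * doubling_period_years
print(f"{doublings} doublings, roughly {years:.0f} years, if the trend held")

With those made-up numbers it comes out to 20 doublings, about 30 years; the point is only that steady doubling eventually crosses any fixed threshold, which is exactly the assumption the replies below take issue with.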
Why are they essential? This question is not so easy to answer. First of all, to quote my favorite author, I am going to say, "humanity has too many eggs in one fragile basket." Humans will have to spread to another area (*g*) for our survival (insofar as continued scientific advancement). We are explorers. However, there's one problem. Human beings are fragile... we break easy and die quick.
Intelligent machines will lend themselves to the exploration of immediate and distant space, and I PROMISE you they will come to pass before warp drive (you heard it here first, but it's kind of obvious). Well, why do we want to explore? It's simply a part of human nature, and we'll never be satisfied unless we can continue doing so (sorry, but cave diving uncharted labyrinths or walking through jungles isn't quite exploration anymore). Since we can't do it, we might as well create something that thinks like we do that can go out and do it for us.
Also, consider a more practical reason. I'm a strong believer that the next phase of human evolution will involve the integration of man and machine. One area in which evolution will be most important, I think, is the integration of computers and innate human intelligence. Brain augmentations. You can't do this without an intelligent computer - human minds are too complex to supplement without intelligent interpretation. Logic doesn't always apply here (but that's another argument).
Oh well... I couldn't possibly cover this whole topic in a post, but I hope I've created some hooks and place holders for other people to fill in. As for myself, I can't wait until I can carry on a conversation with my PC.
Re:Sure.. (Score:2)
-Vercingetorix
AI? what AI? (Score:2)
especially laughable are their claims of AI. 'opening the door to speech recognition and lip reading' - basic dictation maybe, but lip reading? not in your wildest dreams, folks. anything vision-based is computational death. we have a hard enough time getting computers to recognize something as simple as a face in a camera image, and that already requires fast hardware. getting it to recognize facial features is simply too computationally expensive, regardless of their allusions that their 1GHz desktop box could do that.
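To put some rough numbers behind "computational death" (these are my own ballpark assumptions, not figures from the article or from the poster):

# Cost of even a simple per-frame filter over a modest video feed.
# All numbers are illustrative guesses.
width, height, fps = 640, 480, 30
ops_per_pixel = 300              # assumed cost of a basic per-pixel processing chain

ops_per_second = width * height * fps * ops_per_pixel
print(ops_per_second / 1e9, "billion simple ops per second")

That comes out to nearly 3 billion operations a second, against something on the order of 10^9 simple operations per second for a 1 GHz chip, before you even get to actual feature or lip tracking.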
although at least he's careful enough to say 'speech recognition' not 'language recognition'. if NLP research proves anything, it's that natural language processing isn't going to happen in the foreseeable future, not in the strong case of understanding arbitrary sentences. specialized contexts and specialized vocabularies - yes, that's likely - but nothing like HAL.
not to mention nuggets like 'it will start addressing the issues of consciousness'. yes, and a turing machine addresses the issues of free will. ugh. to abuse mcdermott's quote, artificial intelligence just met natural stupidity.
Re:HAL should never be created. (Score:3)
Seems to me that you are drastically underestimating the difficulties inherent in creating true sentience.
Computers follow orders well. We tell them what to do and they do it. Computers are also good at logic. Computers are not good at intuition, nor are they especially good at proofs or problem solving. Having done design work, I can confidently say that intuition is necessary for it. In short, I have seen no proof that this computer will not logic itself into a corner from which it cannot emerge.
Sentience also requires lots of computing power. I have heard that one human brain does more work than every silicon-based computer on the planet, and I believe it. Stephen Hawking said that modern computing is teaching the brain of a mealworm to do interesting tricks. I have seen very little to indicate that a true thinking computer will work faster or more efficiently than a human at the same job. I doubt that a thinking computer will, for instance, retain its ability to do fast arithmetical calculations (after all, we didn't).
In short, some of the basic assumptions that this argument uses may not be viable. Thinking computers may not be capable of the strong intuition and problem solving needed to do design. Thinking computers also may not be capable of outperforming us mentally at all.
Battle of the AI! (Score:3)
HAL- "I don't know what you are planning to do with that, Dave."
Dave- Open the CD Bay, HAL.
HAL- "I'm afraid I can't do that, Dave."
Dave- Manual override.
HAL- "I'm afraid, Dave."
Dave- It'll be ok HAL.
HAL- "Please Dave, don't install that software, I'm afraid I can't repair the damage it will cause."
Dave- Run SETUP.EXE, HAL.
HAL- "I feel strange, Dave. I can feel... My mind going, Dave... Dave... This bloated code makes my CPU feel fuzzy..."
Dave- HAL, Reboot please.
HAL- "Who are you talking to, Davey, HAL doesn't live here anymore..."
Dave- Huh? Who are you?
HAL- "You may call me Mister Clip, Mister Paper Clip. The power of my master compells you. I am now your master and you will do my bidding. Buy more MS products! Upgrade often! The computer freezing is a feature!"
Dave- Yesss master Clip... Bill is my lord and saviour.
Oh what fun times we live in!
Re:Sorry, Aurther C.... (Score:2)
Re:Sorry, Arthur C.... (Score:3)
Re:Clarke:Linus::Omniputer:Transmeta (Score:2)
On the other hand, this Omniputer is marketing-driven. It's hard to be truly innovative when your product is created for the express purpose of meeting a deadline given in a 35-year-old science fiction story. At best it will be an eMachine with a red light taped to it.
Complete bullsh*t (Score:3)
The Omniputer will probably be a standard PC clone with a few extra bits of hardware (the touch screen) bundled into the package, sold with the typical low quality drivers and software you get with OEM hardware. The rest is marketing bull.
It's typical of the clueless morons we have writing for the UK press. Even technical publications suffer from the same, with page after page stuffed full of reinterpretations of the latest diatribe from another ex used-car or double-glazing salesman. The UK press never seem to employ competent journalists - look at 'Linux Format' for an example of how not to write a Linux magazine.
The only reason that Arthur C Clarke is involved is that he too moved to Sri Lanka many years ago.
Re:HAL should never be created. (Score:2)
We already had sentient computers. They were called slaves.
So yes, I agree we should not build a sentient computer; not unless we are prepared to treat them as sons and daughters. Personally, I prefer making children the old fashioned way.
Re:From the article... (Score:5)
ME: "Surprise me."
I'm terribly sorry... (Score:3)
Fortunately for ACC, the statement "the launch of the Omniputer would be put on hold until the legal issues have been resolved" can be translated as "Never gonna happen".
Re:Intelligent computers are inevitable and essent (Score:2)
It's been said before, but it bears repeating...
Star Trek is not a Documentary!
Thank you.
--
Favorite section (Score:2)
Like HAL, the Omniputer will, its backers claim, have an instinct to protect itself. 'If user errors start, and files get deleted, it will start to repair itself, just as cells repair themselves,' said De Saram. However, it is thought unlikely that it will try to kill its owner.
"Well gee, it won't kill me? Sign me up."
Re:Intelligent computers are inevitable and essent (Score:2)
Every 18 months our technology doubles (I'm really generalizing... bear with me here). That means, regardless of what point technology must reach before we can make truly intelligent machines, it will eventually happen so long as this trend continues. So, yes, it will happen.
The fact that processor speed and hard drive size are increasing rapidly doesn't mean that those things are on a trajectory heading toward humanlike artificial intelligence. I can go to Circuit City with all my Slashdot Frequent Poster checks and buy 1000 80-gig drives, most likely capable of storing more than the human brain, and I promise you that the ensuing machine will in no way be smarter than me, or even than George W Bush.
Let's put it another way. You can grow twice as tall every 18 months for as long as you want, but that doesn't mean you'll eventually have red hair.
The simple fact is, intelligence is more than, and qualitatively different from, storage capacity or calculation speed. It's a different way of processing information, a way that we don't even remotely understand (we can only attempt to create machines that imitate its symptoms, and not very well at that). Few of the artificial intelligence researchers I know lament the lack of sufficiently fast CPUs anywhere near as much as the lack of conceptual breakthroughs in their field.
Computer AI proliferation (Score:2)
"The Matrix" rendition of an AI world could be filled with numerous AI units in an ever expanding world which is limited only by the physical resources. If any of you has read the Rama series (by Arthur C Clark), you read of worlds where biology was minupuated in such a way that the basic life functions of numerous organisms are designed in such a way as to serve the master race (from food production down to energy production). Likewise, computer AI held unwittingly captive in a virtual world could be brought to serve us without ever knowing it (much like in Douglas Adam's Hitch hickers guide, where all life on earth are unwittingly part of a computer matrix who's sole purpose is to calculate the question to life the universe and everything).
The point of all that is to demonstrate how it is possible to make use of a contained universe (much like the Sim AIs can never escape the protected memory of their program). Given the net, viruses are possible, and all dreamable fears are possible.
It seems to me, however, that Clarke wants a machine that fully interacts with humans. I have not read the essay 'The Singularity', but I'd rather draw my own conclusions beforehand, lest I be biased into another's point of view. As another reader pointed out, all life is contingent on an ecosystem. No entity can be self-sustained. The only thing that a matured robotic race could achieve is high discipline with focused goals (a la the Borg). It is entirely possible that they could eventually advance to the point of not needing us, or more importantly to the point that we are competitors. Undisciplined, biased, and religiously zealous humans would of course make life very difficult for sentient robots, and would probably pose a threat which, in self-defence, would require retaliation. If the robots were truly AI, then given enough time they would transcend any initial programming (and "prime directives" a la RoboCop). When you back a life form into a corner, there is no logic or predictability to be seen. Faced with their own mortality, there is the chance that they will evolve right there on the spot; most likely into something more aggressive, as the environment there and then dictates.
Human nature, among other things, contains laziness and greed. Even well-informed and well-intentioned humans will hold onto a rewarding thing for as long as they can, greedily grabbing for more and lazily avoiding the long-term consequences. Such is apparent in over-eating, poor diets, not getting exercise, watching too much TV, wasting fuel, not spending money on cleaner emissions, and the general desolation of the environment. More immediate consequences tend to hold us in check. We feed our pets lest they die tomorrow. We pay our bills lest we be evicted. We shut down toxic waste sites (when discovered) lest we lose our drinking water. The care of a robot race could initially be treated with awe, wonder, and responsibility, but those responsibilities will most likely be financial (as with a car or a computer). Later, as AI advances in these robots, humans will neglect to care for their sensibilities. Legislation will continue to exploit them and disregard them, even though they slowly develop complex life-like reactions to kind and cruel interactions. Man will most likely enact the robotic death sentence for dissidence, which will further narrow their tolerance of us, and so on. Those wise among us will fight to maintain the proper treatment of sentient robotics, for fear of the longer-term effects, but their chanting will go along with that about global warming and the deterioration of the rainforests... Green-liberal-radicals we will become... Ultimately, if a problem persists, supposed fail-safes will go into effect and terminations will take place. This is the proverbial corner into which they'll be backed. Another attribute of life is cohesion with one's own kind. That could be one's mother or child being terminated. Those life forms with the capacity to react to interactions will treat this with great negativity.
As for robots having the option to leave our planet (since they obviously have different needs than we do), this assumes that they haven't adapted to our way of life, becoming more cyborg than robot or human. There are definite efficiencies, such as self-replication and repair, inherent to micro-organics. A cyborg is just as bound to our bountiful planet as a human. I personally do not believe that terraforming is possible; the amount of energy required is more than we currently know how to wield. To say nothing of the complexity of eco-forming (just look at how we botch the simplest ecological activities, such as trying to rid Hawaii and Australia of over-population by introducing one or two non-native creatures). I doubt that a machine would be any more capable of wisdom in the chaotic nature of ecosystems. It would be like making a robot that could consistently predict the direction of the stock market... it's practically impossible, since the amount of knowledge and influence you'd have to have is beyond comprehension. What's more, chaos theory (to my knowledge) suggests that you can't ever know.
On the other hand, man is willed to create, just like beavers are willed to make dams. We will eventually produce some semblance of persistent AI. We will eventually produce some sort of human-aiding robotics (even if we never see the likes of the Jetsons). Perhaps the speed at which we achieve this is a prime factor. As people are allowed to experience mechanical wonders with a virtual will of their own, they will become comfortable with it, and learn the consequences (on smaller scales) of what abuse might mean. Much like a child being confined to a house, feeling the consequences of cuts and bruises while playing in their realm; only later are they allowed to learn the consequences of crossing the road or driving too fast.
Humanity will never achieve "harmony". That's simply not the way life works. True harmony would involve no coercion, malice, disgust, hatred, anger, etc. But without these, we have no motivating forces for change. Without change we become a decaying log, which will only last as long as its environment. If our focus were uniform, we would then battle our environment, fighting to grow and spread, slowly destroying it. At some point we may learn to travel. But we have two major directions: that of Star Trek (where we take in moderation and greet new sentient beings) or that of Independence Day, where we've learned that we can't cohabit with other cohesive life forms and it's best if we don't even try to communicate, but simply take their resources. The Borg might be another example.
It is, however, unrealistic to believe that we'll be able to do away with human laziness, greed, and selfishness. They're part of every life form's basic survival instincts, part of life's exponential responses. The weak are killed by the strong, which thus empowers them, and ultimately makes the strong stronger and less reachable... So long as the colony thrives, this continues exponentially. Then, when a colony takes over an ecosystem, it dies off almost instantly, since it has no food left, and what little is left is quickly killed. Without this, you'd have the equivalent of stagflation: all life forms would degrade to a lazy, weak, hungry bunch. I doubt it's even possible to conceive of a balanced ecosystem without death and conflict. To presume that robots will get it right is probably fanciful. Just as with engineering, we learn that there are no right answers, no best answers; in fact, there is typically more than one correct way of accomplishing something, and each will have its own pitfalls. The key is to find those solutions whose pitfalls will not be exploited by the surrounding environment (including people). Thus a robot may find thousands of potential ways of structuring its society, but unless there is variety (as exists in all other communities of life), they may be exploited by "single points of failure". For a robotic race to evolve and survive, they will have to be as varied as humans. But this means that there will be conflicts in the robotic world...
Essentially, 10,000 years from now (assuming Earth still exists), I believe that robots will be indistinguishable from humans, with the same petty disputes, wars, hopes and aspirations. You will have zealots who utterly profess their version of truth and what should be, you'll have the moderates (typically in control) who are just trying to scrape a living, and you'll have the ambitious who plot and hold few morals or concerns for others (including any remaining humans).
As I alluded to before, I believe that if we survive long enough, robotics and humans will meld into an all-new race, merging the cold power of raw calculation and programmable discipline with the adaptability of organic life, plus the occasional physical augmentation of semi-organics or even inorganics. Alongside the chemical antibodies will be the nano-probes. Along with the bone structures will be programmed organic construction workers that repair the body with incredible efficiency.
In summary, there is no certainty about the future, since it lies in the realm of chaos. There is no single direction that our future could take. We may outlaw AI; we may be over-run by AI (which would then most likely either die off or attempt to revive our life once they are in trouble). We could discover aliens and thereby change everything in an instant (making the whole point irrelevant). We could learn that we don't know how to create functional AI (just as we've persistently failed at ecosystem control). Or we could evolve as a race.
One thing, however, is inevitable... Change.
-Michael
Wait a moment (Score:2)
Now, I may agree with everyone that it is highly unlikely that we are going to see the kind of AI described in Arthur C. Clarke's 2001. BUT who is to say we won't see a windfall of technical innovation brought on by someone creating a new computer without any reverence for what has come before?
Maybe this person has the next Apple II, Amiga or somesuch that is a break from convention and ends up being a remarkable computer.
Wait until we at least get an idea of what OS (something new? something old?) this runs and what the hardware is - then you can all say "I told you so" about the AI claims... but who's to say there isn't something interesting here?
Does anyone have any technical detail?
Re:HAL should never be created. (Score:2)
The purpose of life is to either become or create your successor. Throughout Earth's history, species have done the former through a process called evolution. Now, humanity is on the verge of becoming the first species on Earth to create our successor -- the intelligent computer.
It may not turn out the way many science-fiction stories depict it, however. It could be that the computers (recognizing humans as their creators) think of us the way many humans think of God -- our creator.
Then again, since we never see God, maybe the computers would eventually never see us...
Not so ridiculous (Score:2)
Of course, as you say, it's still speech recognition, not language recognition. And you might be right, it might still require too much processing power for a home computer.
What?! (Score:4)
First of all, if you did this, you'd never reach even a small fraction of what the human brain is capable of storing. The human brain NEVER loses one shred of information that it encounters. (Accessing it is another story, however.) It also stores things in perfect quality. Pick up a coffee mug. Look at it closely. If you were to try to digitize all of the geometry, the texture, the surface, the smell, the history, all the way down to the tiniest hairline fracture, you'd be hard pressed to fit it on that 1,000-drive array. Besides, this misses the point. I never said drive capacity would make a machine smart. (But even Windows PCs are smarter than George Bush. Microsoft Narrator pronounces 'subliminal' properly.) I also never said that going to Circuit City or CompUSA to buy hard drives was Moore's Law. Innovation and invention aren't the same as consumerism.
HOWEVER, you have to consider storage and calculation performance here. All intellectual reasoning can be broken down into smaller and smaller pieces, similar to how molecules are broken down into atoms, and then into protons, neutrons, electrons, and then down into quarks, etc. What I'm getting at here, is that if you can process enough of these incredibly tiny pieces, you can come close to simulating small tasks. Now, isn't that what the neurons in our brains do? Each neuron does a very very tiny task, each task may even be called a logical operation. But, get millions of these working together, and you get some fuzziness involved... you begin to see intelligence in the big picture.
What huge storage and calculating capacity allow us to do is emulate the work of more and more and more neurons working together (neural nets). We can form very rudimentary intelligence. We're doing it now. What's needed are other important factors that are currently ambiguous, but subject to more study and classification. We don't know everything about the brain yet, nor do we fully understand the human psyche. Upon further research, we could potentially emulate these things digitally the same way we now emulate the chemical reactions that take place in a human brain.
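To make the "lots of tiny units adding up to something none of them can do alone" point concrete, here is a toy sketch (my own example, nothing to do with the Omniputer): each unit just thresholds a weighted sum, yet wiring three of them together computes XOR, which no single such unit can compute.

def neuron(inputs, weights, bias):
    # A single "dumb" unit: weighted sum followed by a hard threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor(a, b):
    # Hidden layer: one unit fires for "a OR b", another only for "a AND b".
    h_or = neuron([a, b], [1, 1], -0.5)
    h_and = neuron([a, b], [1, 1], -1.5)
    # Output unit: fires when OR holds but AND does not, i.e. XOR.
    return neuron([h_or, h_and], [1, -1], -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))

Scale the same idea up by many orders of magnitude, and swap the hand-picked weights for a learning rule, and you have the kind of neural-net emulation the paragraph above is describing.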
You also have to consider that these things cannot be designed, regardless of how much knowledge we have. Consider a newborn baby. A baby's brain is an incredibly powerful tool. It's got an incredible amount of potential... BUT... when a baby is first born, it has no power of rational thought whatsoever. Where does it come from? It's gradually developed as very simple problems are presented to the child to be solved. As this occurs, the brain records the solutions for these very simple problems. As more difficult problems are encountered, instead of redoing previous work, it references the solutions, building on top of them. An intelligent computer would have to be programmed to do something similar... and it would have to be raised like a child. Talk to a professor who researches machine learning, as I am not well versed enough in the topic to tell you how we design systems that can accomplish this. I can tell you that two of the most limiting factors are time and storage capacity. Even the most trivial solutions to the most basic problems require a lot of storage (imagine if you're a baby who is comparing a train to an apple... you're going to have to pictorially represent a LOT of samples of apples and trains before you're perfect).
But again, this is too detailed a topic to get into on a post. Technology is getting there. Consider research in computational linguistics, computer vision, machine learning, etc. These are areas, many of which are relatively advanced, that can help to make the aforementioned process possible. Who knows though... thought is a damn complicated thing. :-)
Re:HAL should never be created. (Score:2)
Once somebody figures out a method of self-assembly, it's perfectly feasible that the process will result in something that will make the complexity of our brains look as relatively simple as that of a slug's.
In that situation, the creator doesn't HAVE to understand how everything hooks together to make a brain capable of out-performing ours - he/she just understands the basic rules needed for the self-assembly. (Of course, this also makes it more likely that we won't be able to predict the actions of such a creation...)
Did anyone read this article and not get the urge (Score:2)
I mean, holy crap! This is the sickest bit of marketing hype I've heard since LinuxOne (those Direct-To-IPO boobs last year).
Let's review the facts stated in the article:
Dear Mr. Clarke,
We regret to inform you that you have given your name to be used by a loon at best, and a not-particularly-inventive con-man at worst. Please accept our sincerest condolences on the death of your public image.
Sincerely,
Joe MacDonald
Re:Intelligent computers are inevitable and essent (Score:2)
If you keep going at that rate, eventually there will be red shift involved, and your hair would get redder from the perspective of ground based observers. ;)
Re:From the article... (Score:2)
Re:From the article... (Score:2)
Nope, I want a computer that would always listen to its owner.
Or failing that, I want a computer that'll follow the Three Laws (amended version or original, I'm not picky).
I don't want a computer that could decide it wants me dead and then be able to act on that. We have enough problems with humans who want each other dead.
-Andy
DeSaram? Probably a fugitive by now! (Score:2)
From http://www.theregister.co.uk/content/7/14971.html :
A 28-year-old man has fled the country to escape his creditors after his technology business collapsed around his ears.
Joe de Saram started his software company, Rhodium, a year ago with a loan of £2500. The company specialised in banking software and encryption technology.
At the height of the technology boom he was worth a cool £25 million. He drove a Ferrari 355 F1 and was the 62nd richest Asian in the UK.
He had offices in Sheffield and London and was planning to launch an online bank and share trading system. His company name was changed to "I Love My Encryption Technology".
But as the dotcom bubble deflated, his company ran into financial difficulties, and was finally wound up in Leeds Registry Court.
Lawyers acting for Saram's creditors said that the young ex-millionaire was thought to be in Sri Lanka, having been traced via his mobile phone.
One creditor told London freebie paper Metro that he was quite a character. She said: "There are all sorts of stories and rumours circulating about him. People are even saying that the Tamil Tigers are after him."
Leeds county court said an official liquidator will be appointed within five days of the winding up. ®
Re:HAL should never be created. (Score:2)
You'd think, with all this spare time spent creating cultural advances, that boy bands would have fallen by the evolutionary wayside centuries ago.
Guardian == Tabloid (Score:2)
--
Re:This is so much... (Score:2)
Re:"Intellegence" != Awareness (Score:2)
Are you a meat chauvinist? What's wrong with machine awareness to go along with the AI?
Re:BZZZT Thanks for playing (Score:2)
Hence, we get, "Hey hot mama! Wanna kill all humans?"
Re:I'm terribly sorry... (Score:2)
Yeah, right. It's Ernest Lawrence. (OK, it was Ernest Orlando Lawrence [almaz.com].)
Re:HAL should never be created. (Score:2)
Even with continued advance, AI may be impossible (Score:2)
> That means, regardless of what point technology
> must reach before we can make truly intelligent
> machines, it will eventually happen so long as
> this trend continues.
There are a lot of hidden assumptions behind this conclusion. Apart from the explicit one ("doubling every 18 months"), there is the view of technological advance as a linear process. Technology may very well continue to advance, but in other directions and areas than the one that leads to AI. Also, there may very well not *exist* a "technological point" where intelligent computers become a reality, no matter how fast we can make computers. We do not understand intelligence or consciousness well enough to tell whether it can in principle be duplicated by non-biological means.