The Men Trying To Save Us From the Machines
nk497 writes "Are you more likely to die from cancer or be wiped out by a malevolent computer? That thought has been bothering one of the co-founders of Skype so much that he teamed up with Oxbridge researchers in the hopes of predicting what machine super-intelligence will mean for the world, in order to mitigate the existential threat of new technology – that is, the chance it will destroy humanity. That idea is being studied at the University of Oxford's Future of Humanity Institute and the newly launched Centre for the Study of Existential Risk at the University of Cambridge, where philosophers look more widely at the possible repercussions of nanotechnology, robotics, artificial intelligence and other innovations, and try to avoid being outsmarted by technology."
No matter how smart something is.. (Score:5, Insightful)
... it is still bound by energy requirements and the laws of nature. All this fear-mongering is BS. If you look at the evolution of life on Earth, even tiny 'low intelligence' beings can take out huge intellectual behemoths like human beings.
Not only that, you have things like EMPs and nukes; not even the best AI is capable of preventing itself from being bombed or nuked. Intelligence is a rather demanding, costly and fragile thing in nature. All knowledge and perception has costs in terms of storage, time to access, problems of interpreting the data one is seeing, and whatnot.
Consider the recent revelations about the NSA spying on everyone: there are plenty of easy low-tech measures to defeat high-tech spying. In the same way, there will be plenty of easy low-tech ways to cripple a higher intelligence that is bound by the laws of nature in terms of resource and energy requirements. Anything that has physical structure in the universe requires energy and resources to maintain itself.
Not just robot armies (Score:4, Interesting)
Even if it's bound by the laws of physics as we understand them (Stross-universe-like "P=NP"-powered reality modification aside) there are plenty of dangers out there we're well aware of which computing technology could ape. Nanoassemblers might not be able to eat the planet, but what if they infested humans like a disease? We're already having horrible problems with malware clogging up people's machines, and they're coded by humans; what if an artificial intelligence was put in control of a botnet, updating and improving the exploiters faster than anyone could take them apart?
Re: (Score:2)
"updating and improving the exploiters faster than anyone could take them apart?"
Not likely, since there are trivial ways around such an idea; for instance, any machine that is compromised STILL requires electricity. It's highly likely AI will be very computerized (flip a switch to reboot) and come with simple kill switches. Not only that, laws would be enforced if any machine became sufficiently advanced, i.e. you'd have AI crime laws on the books: if you do this, we unplug you, don't give you energy, etc.
Re: (Score:2)
From TFA:
Someone doesn't know the difference between "pessimistic" and "optimistic".
In short, the answer is "no".
https://en.wikipedia.org/wiki/Colossus:_The_
Re: (Score:2)
FWIW, *both* the military and factories are already well hooked up to proto-AIs. The current ones aren't really AI, but they are already looked upon as infallible decision makers by managers who don't want to take responsibility. And they're right often enough that that's not an unreasonable response. True, their decisions are tightly focused, but High Frequency Trading is only the most obvious example. They are spread throughout the decision-making process.
It's my opinion that the first true gen
Re: (Score:2)
I think you're using an overly broad definition of "proto-AI".
Again, I think your definition is overly broad. HFT just follows the set algorithms (written by humans) as fast as possible within the limits of the connection to trading computers.
Possibl
Re: (Score:2)
I think you're using an overly broad definition of "proto-AI".
I'd give him some leeway [youtube.com].
Re: (Score:2)
Self-awareness is necessary in any entity that is designed to interface with the external world. It exists at a very minimal level even in a thermostat. There is a (nearly) smooth slope from there up through self-driving cars to ?, with increasing self-awareness all the way. At its basis, self-awareness is just homeostasis. Goals (outside of homeostasis) are another, much more difficult, matter. But note that even C. elegans is able to manifest such goals. It's harder to recognize them when they come in
Re: (Score:2)
Unless the AI is hooked into military command and control infrastructure OR controls a manufacturing plant then it will be more of a novelty than a threat.
My computer is already connected to the Sherline CNC mill in my garage.
Re:No matter how smart something is.. (Score:5, Insightful)
I find it interesting that you mention taking out smart machines with simple measures (most of them not thought out very thoroughly) in the same post as you mention NSA spying, and how "easy" it would be to defeat that spying.
(Side note: if you think you can defeat the NSA, good luck with staying on the grid, any grid, and having even a shred of success).
A super-intelligent machine would not stand alone. It would not be the world against the machine. And when you see the word Machine, read that to mean the networked machines.
The machine would be (nominally at least) owned by some group. (The NSA is as good a candidate as any for this role).
And the machine would protect this group, and this group would protect the machine, and the machine would have no single point of vulnerability.
Google is already in such a position. Trying to knock Google off the net is a fool's errand. A concerted effort by any given country would be futile. It would require all countries to act at once.
But when the country has vested interests in the machine, such action will not happen. The machine will have the protection of the country as well as its human masters/servants. Now you have to take out not only the machine and its minions, but the country itself. And if more than one government backs the machine? Such as NATO, or the CSTO? Then what? Now you have to take out entire military alliances.
You vastly underestimate the survivability of such a creation, because you wrongly assume it will be all of mankind against a single machine.
Re: (Score:2)
http://www.guardian.co.uk/world/2011/apr/06/georgian-woman-cuts-web-access [guardian.co.uk]
Then you're not talking about a machine apocalypse but rather business-as-usual. It's not until the machine turns against its creators/owners that there is a problem. Otherwise it is doing exactly what it was spec'ed t
Re: (Score:2)
I've always considered "turns against" to be an unlikely scenario. I envision the machine becoming an "infallible advisor" to such an extent that the leader (CEO, President, Prime Minister, whatever) becomes a figurehead, and that all middle management is progressively replaced. And the system will be so designed that if the figurehead stops obeying the "suggestions" of the machine, s/he will be found incompetent, and replaced.
FWIW, we seem to be increasingly headed in this direction, limited only by cost
Re: (Score:2)
The first problem is that you've skipped over how it was created and you're focusing on how it took over once it was created.
And if you're going to do that then you can replace "AI" with "aliens" or "mutants" or "witches" or "Satan".
And if that was what it was intended to do then it is operating within spec. So what is the difference between that system and a non-AI system designed to
Re: (Score:2)
an AI designs a more efficient car. A non-AI expert system designs a more efficient car. What is the difference between the AI and the non-AI?
The AI will be horribly bored and have this terrible pain in all the diodes down its left side
Re: (Score:2)
A machine that is created with a certain set of preferences shouldn't change those preferences no matter how smart it gets. Changes in goals wouldn't count as improvements.
I find the human assumption (even Asimov did this) that intelligent machines will have a drive to conquer and dominate slightly amusing. We're most certainly projecting here. It's largely our chimp and lizard hind brains that give us those impulses.
The truth is we have no idea what recursive self-improvement will lead to, if it is an AI t
Re: (Score:2)
"I find it interesting that you mention taking out smart machines with simple measures"
All smart machines require energy; everything you do in the universe requires energy. You run out of gas, it's game over regardless of how advanced your intelligence is. You still run up against the laws of nature. You seem not to have any kind of scientific understanding. Human beings have significant down time; the F-22 and F-35, hugely expensive tech, have significant downtime for maintenance and repair. The same w
Re:No matter how smart something is.. (Score:4, Insightful)
All smart machines require energy; everything you do in the universe requires energy. You run out of gas, it's game over regardless of how advanced your intelligence is. You still run up against the laws of nature. You seem not to have any kind of scientific understanding. Human beings have significant down time; the F-22 and F-35, hugely expensive tech, have significant downtime for maintenance and repair. The same would be required of anything with any reasonable level of complexity.
Intelligence fundamentally is still a physical structure that needs maintenance, energy and resources to exist. You act like AI is going to exist on some otherworldly plane when it's going to be mundane and boring and highly constrained by the laws of nature.
You still refuse to see the facts before your very eyes.
You still seem to think of a potential super-computer as being located in one place, consisting of one device, rather than a worldwide network protected by a clique of workers, or a clique of nations, defending the machine to their very death.
Yes, an airplane needs maintenance. But that never grounds ALL airplanes worldwide.
When was the last time Google ever had a worldwide outage? Clue: it's never happened since the day it was launched.
When was the last time there was a worldwide internet outage? It's never happened.
It's right there in front of your eyes. Yet you still think you can walk over the wall and pull the plug.
A world dominating super computer doesn't need nuclear bunkers to exist.
It won't be one machine. It won't be dependent on a single power supply. It won't be dependent on a single network. It won't be dependent on unwilling slaves to maintain it. They will be willing slaves, and it will be hard to distinguish whether they are in control of the machine or vice versa.
Re: (Score:2)
Walk off the net. Google can't do much (unless one of its driverless cars runs you over). You and The SHADOW seem to think that networked computers are life, the universe and everything. The world isn't like that. Most of the human population at present isn't connected to the Internet.
Unless SkyNet Jr. gets hold of the vast majority of physical infrastructure, its impact will be rather limited. A couple of RPGs could take out a Google network center. A pissed off A-10 pilot could take out the entire
Re: (Score:2)
Walk off the net. Google can't do much (unless one of its driverless cars runs you over).
You are still on the grid in one form or another, anywhere you'd care to be.
The electric grid.
The phone Grid.
The postal grid.
The police grid.
The supermarket grid.
Even Ted Kaczynski the Unnumbered wasn't able to escape the grid completely.
Re: (Score:2)
I know those "Forever" stamps could not be trusted. And the mailman? Do you think he's innocent? Don't you know that he delivers computers?
http://www.amazon.com/CPU-Processors-Memory-Computer-Add-Ons/b?ie=UTF8&node=229189 [amazon.com]
Do you think there's any safe place? Do you?
https://en.wikipedia.org/wiki/IP_over_Avian_Carriers [wikipedia.org]
There is no escape.
Re: (Score:2)
"Human minds are pretty intelligent, and we don't have any of those problems."
We do; think about how trivial it is to kill another human being, or for there to be developmental problems, or to get sick. You've obviously not paid much attention to what I said.
Not this shit again (Score:1)
Humanity's biggest enemy is humanity itself. And maybe space rocks.
Re: (Score:2)
Even assuming Earth is the only living world in our galaxy (which seems to me rather unlikely, but whatever), why do you assume humanity would be the only intelligence to arise spontaneously? The Earth likely has at least a couple billion years more during which it will be hospitable to complex life - whereas two billion years ago our ancestors had only just evolved a cell nucleus. 300 million years ago our ancestors were only just moving on to the land. And a measly 65 million years ago our ancestors were
Re: (Score:2)
why do you assume humanity would be the only intelligence to arise spontaneously?
I don't, therefore your question is meaningless.
The Earth likely has at least a couple billion years more during which it will be hospitable to complex life
No, it doesn't. Astrophysics 101.
Even if we somehow wiped out all multi-cellular life on the planet there would be plenty of time for complex life to evolve all over again.
Actually, I'm quite sceptical about that. But without a dataset larger than 1, any speculation on that topic is merely intellectual masturbation.
Re: (Score:2)
>Astrophysics 101
How do you figure? The sun isn't expected to become a red giant for around 5 billion years, and I hadn't heard of any instabilities in Earth's orbit that would have drastic effects on that timescale. If you have other data I'd love to hear it.
Fair enough, it took what, as much as a billion years to go from nucleated cells to multi-cellular life the first time? So it's perhaps one of those things that doesn't happen often. And you're absolutely right that it's wild speculation. However
Re: (Score:2)
How do you figure?
Because I had astrophysics in high school. The Hertzsprung-Russell diagram, stellar evolution of main sequence stars, the works. This planet will most certainly become uninhabitable for complex life *long* before it starts turning into a red giant - unless by "complex life", you mean extremophile bacteria. I believe that the Sun increases its radiative output roughly by one percent every hundred million years. It could easily become uninhabitable for humans or human-like beings as early as two or three hund
Re: (Score:2)
A super-intelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we construct a building.
Think how it might be to compete for resources with the dominant species.
The ants outnumber us by perhaps a factor of 20 in mass, and a factor of 10 million in numbers. Are we really the "dominant species", or are we just deceiving ourselves? And we're not "taking them into account"? Be afraid, be very afraid...
the end of civilization will most likely be (Score:3)
are you listening NSA?, i hope so because this message is for you too...
Re: (Score:2, Offtopic)
There is no shortage of sentences in the world. Feel free to throw in a few periods now and then. It helps to keep you from looking like such a ranter.
Re: (Score:1)
http://www.nastyhobbit.org/data/media/13/grammar-nazi.jpg [nastyhobbit.org]
Re: (Score:2)
I actually remember when /. expected at least low-to-medium grammar. Seems that reddit quality is just fine now...
Re: (Score:2)
That’s just Muphry's Law in action.
What about other civilizations? (Score:4, Insightful)
to try to avoid being outsmarted by technology.
Humanity can, of course, ban all machines that are smarter than humans. But that only artificially impedes progress. Given that there ought to be an approximately infinite number of civilizations in this Universe, all paths of development will be taken, including those that lead to mostly machine civilizations. (We are already machines, by the way; it's just that we are biological machines, fragile, unreliable, and slow.)
Civilizations that became machines will have no problem with the lack of FTL, because they can easily afford a million years in flight by just slowing their clocks down. So they will come here, to Earth, armed with technologies that Earthlings were too afraid to even allow to develop. What will happen to Earth?
Well, of course doom is not guaranteed; but I'm using this example to demonstrate that you cannot stop the flow of progress if you only have local control, if even that. (How many movies have we seen where mad geniuses break those barriers and, essentially, own the world?)
IMO, it would be far more practical to continue the development of everything. If humanity in the end appears to be unnecessary and worthless, it's just too bad for it. The laws of nature cannot be controlled by human wishes (unless magic is real.) Most likely some convergence is possible, with human minds in machine implementations of bodies. Plenty of older people will be happy to join, simply because the only other option for them is a comfortable grave.
Re: (Score:2)
We already have machines that are smarter than humans, if you mean 'better at one particular job than humans'. We call them tools. If by smarter you mean 'more intelligent' I'm afraid you've got a lot longer to wait since we don't even have a bare definition for intelligence never mind serious attempts to recreate it.
Re: (Score:2)
we don't even have a bare definition for intelligence never mind serious attempts to recreate it.
We may not be able to define intelligence, but we certainly can compare it in many aspects - ultimately, covering all areas of human activities. If a machine can multiply 4798237432 by 893479238472 faster than you can (that's true today), and if it can independently compose a poem that many find interesting (there have been experiments), and if it can sing a song that many listeners find pleasant, and if it can des
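(A quick check of the multiplication claim above: Python's arbitrary-precision integers make it a one-liner on any commodity machine; the operands are the ones from the comment.)

```python
# Exact arbitrary-precision multiply; completes effectively instantly.
print(4798237432 * 893479238472)
```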
Re: (Score:2)
to try to avoid being outsmarted by technology.
Humanity can, of course, ban all machines that are smarter than humans.
Yeah, because if there is one thing that will stop something from happening, it's making it against the law.
Re: (Score:2)
Compare, for example, your average 50 year old human to an average 50 year old car. What's that you say? Having a hard time finding a 50 year old car? Most cars (save a few meticulously maintained and barely used by collectors) are so fragile that they're buried in a junkyard by their 20th birthday.
Two points. First, if most humans were maintained as poorly as most cars are maintained, they'd die before fifty. And when a car gets in a crash that costs more than a few thousand bucks, we write it off and crush it, but we'll spend tens of thousands of dollars to merely prolong a human life for a year or two. Second, life expectancy isn't what you think. In the early part of the 20th century the worldwide average life expectancy was only 32. Today it's a mere 67. In some countries it's still in the thirti
Re: (Score:2)
Most cars (save a few meticulously maintained and barely used by collectors) are so fragile that they're buried in a junkyard by their 20th birthday.
As drinkypoo said above, we simply don't care to maintain older cars because there is no reason to do so. Cars that are worth maintaining *are* maintained in showroom condition.
But in general you are overlooking the Theseus's paradox [wikipedia.org]. Machines are infinitely maintainable, and they can, in theory, exist forever, even if all their components have been r
They're already here. (Score:1)
Drones. Sure, probably not much of a threat if you're living in the West. But in the same way that the history of cybernetics begins with walking sticks and hearing aids, the history of man vs machine is going to start with the murder by Americans of unconvicted, if highly tanned, individuals in Africa and Asia.
Re: (Score:2)
Drones. Sure, probably not much of a threat if you're living in the West. But in the same way that the history of cybernetics begins with walking sticks and hearing aids, the history of man vs machine is going to start with the murder by Americans of unconvicted, if highly tanned, individuals in Africa and Asia.
Drones are just sophisticated V2s. That's not what this is about.
These loonies are afraid of the day a computer makes the actual decision to *KILL ALL HUMANS* - not someone else, but the computer forms that opinion and starts executing things to make it happen. It's a stupid institute if you put it that way. This institute is not about mines, remote-controlled killers, automatons or old-school stuff like that, but about the stuff that's wacko to worry about today. Shitty snobs wasting everyone's money th
Re: (Score:2)
You left out "at the moment".
Drones are probably being developed largely as troops that won't revolt when ordered to attack civil unrest...at home. That they are first used against foreigners while under development is just typical. Some police forces have already been using them at home. When they are developed and debugged...well, ...
We don't need intelligent machines to kill us. (Score:2)
We'll manage to do it long before we are able to make an intelligent machine.
Re: (Score:2)
We'll manage to do it long before we are able to make an intelligent machine.
Knowing that people like you are alive and well certainly lends credibility to your statement.
Re: (Score:2)
Ha ha! Good one!
Horsecrap (Score:3)
We can't even make a word processor that doesn't shit the bed every two hours. Super-intelligent machines my ass.
Here's a thought (Score:3)
Instead of asking questions like that, why don't you build Skype and any other software you're working on to NOT have backdoors?
That way, if ever the machines DO try to take over the world, they won't have a bunch of convenient control channels in all the important software to do so.
Well, let's see here (Score:2)
The typical way to mitigate such threats is to not put it in control of all of our weapon and defense systems, or give it vague orders like 'purge the infidels.' Seriously, humanity can build silicon life any way it wants, billions and trillions of permutations and forms and functions... and what do we do with it? We put a gun on its head, lasers in its eyes, and tell it to go out there and kill the other humans we don't like. It's not the machines we need to be afraid of, it's ourselves; we're the ancient
End of Freedom of Speech and Democracy (Score:2)
Aside from the apocalypse, that is one of the things I worry about. Shills are bad enough today, but imagine if they could be deployed programmatically; just about any form of online speech could be drowned out with ease. That is assuming that the government/corporations aren't already using AI to accomplish pervasive censorship.
Before this gets out of hand, we need to head it off by deploying peer to peer communications systems with a pervasive trust model. This doesn't necessarily preclude anonymity or
Nuclear weapons (Score:3)
With about ten nations armed with nuclear weapons, I wonder how machines would take over every one of them. You have to take over Russia, China, the US, France etc., but some nation may trigger nuclear war as a desperate move, or the machines may deliberately accept nuclear war in a bid to survive it, while not necessarily having a goal of killing us all.
Instead, maybe machines will try to take over politically in every country, one by one. It would be funny if tech superminds could rise to power through democracy in fair and respected elections. Either way, I like to think that super machines holding most high-level political power is probably a desirable outcome; we could end up living in some kind of new USSR but without corruption and with respect for the environment and life. Machines would take care of energy production and storage, and close down all oil wells and coal mines for us. They would even put us to work, hopefully on voluntary terms, if they determine some physical and intellectual activity is beneficial to us.
Machines should rule us and not the other way around; I guess that would be better than being ruled by the suits, ties and kings as it is today.
The other question is: what's a supermind, what about superminds competing with each other, and especially, how do you compare two vastly different superminds, independently originated? They will differ from each other as strongly as, or more strongly than, either differs from a human. It will be a mess. Each supermind, or at least the first one, will have to run the same inquiry that "Oxbridge" is doing. We also have no fucking idea if a supermind can be governed by a "prime directive" of some sort: if Skynet emerges at the NSA, will it stay true to them for ten minutes, ten years or eternally, or will it betray the organization that hosts it, potentially committing suicide in the process?
How can the supermind deal with backups, copies and archives of itself? Will it suffer dementia, schizophrenia or even addictions? No idea; I'll bail out by saying it's all unpredictable.
Consider super intelligence (Score:4, Interesting)
Nerve impulses travel along nerve fibers as pulses of membrane depolarization. Within our brains and bodies, this is adequate speed for thinking and control. However, relative to the speed of light, our nerve impulses are laughably slow.
The maximum speed of a nerve impulse is about 200 miles per hour.
The speed of light is over 3 million times that fast.
Now consider what will happen when we create a sentient, electronic being that has as many neurons as we do, but its nerve impulses travel at the speed of light.
In terms of intelligence, that creation will be to us as we are to worms.
Re:Consider super intelligence (Score:4, Interesting)
Nerve impulses travel along nerve fibers as pulses of membrane depolarization. Within our brains and bodies, this is adequate speed for thinking and control. However, relative to the speed of light, our nerve impulses are laughably slow.
The maximum speed of a nerve impulse is about 200 miles per hour.
The speed of light is over 3 million times that fast.
Now consider what will happen when we create a sentient, electronic being that has as many neurons as we do, but its nerve impulses travel at the speed of light.
In terms of intelligence, that creation will be to us as we are to worms.
Not quite. Assuming you build an exact replica of a human brain, except you speed up the nerve impulse propagation, you don't build a more intelligent human. You build a human that reaches the exact same flawed conclusions based on the logical fallacies we are most vulnerable to, but it would make the bad decisions 3 million times as fast.
It might affect how one perceives time. The nice part is that we could feel like we live 3 million times longer. The bad part is that, unable to move and interact with the world at a speed anywhere near matching that of our thoughts, we might go insane out of boredom. Imagine being able to write an entire novel in 3 seconds, but having to take a couple of days to type it up.
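(The arithmetic behind the "3 million times" and "days to type it up" figures roughly checks out; a quick sanity-check sketch in Python, taking the parent's 200 mph nerve-impulse figure at face value:)

```python
# Sanity check of the speedup claims above. The 200 mph nerve-impulse figure
# is the parent poster's; the speed of light is ~186,282 miles per second.
SPEED_OF_LIGHT_MPH = 186_282 * 3600   # miles per second -> miles per hour
NERVE_IMPULSE_MPH = 200

ratio = SPEED_OF_LIGHT_MPH / NERVE_IMPULSE_MPH
print(f"light-speed signalling is ~{ratio:,.0f}x faster")    # ~3,353,076x

# At that subjective speedup, 3 wall-clock seconds of thought feel like:
subjective_days = 3 * ratio / 86_400                         # seconds per day
print(f"3 seconds feels like ~{subjective_days:,.0f} days")  # ~116 days
```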
Re: (Score:1)
Ok, point taken.
However, now consider that virtually every desktop computer could be the equivalent of one neuron, but with vastly more memory storage and data processing capabilities, and that every computer is connected to every other computer via this internet thing.
Now suppose someone were to write a little program that would make these computers the actual equivalents of a conscious neural network, all connected together into one, gigantic sentient being, a super intelligent botnet.
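(For what that might look like in miniature, here is a toy sketch where each "computer" is reduced to a single threshold unit. Everything in it is a made-up illustration; real machines would exchange these signals over the network, which is elided entirely.)

```python
# Toy version of "every computer is one neuron": each Node is a bare
# threshold unit; a real botnet-brain would pass these values over the
# internet, which this sketch skips.
import random

class Node:
    def __init__(self, n_inputs, threshold=0.5):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.threshold = threshold

    def fire(self, inputs):
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        return 1.0 if activation > self.threshold else 0.0

# A "layer" of 1000 machines, all reading the same 10 input values:
layer = [Node(10) for _ in range(1000)]
signal = [random.random() for _ in range(10)]
fired = sum(node.fire(signal) for node in layer)
print(f"{fired:.0f} of {len(layer)} nodes fired")
```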
Say hello to my little Friend. (Score:2)
Even rats have empathy. Self-aware machines will too. Lacking irrational emotions, hyper-intelligent machines will be more ethical, fair and nice than humans. You don't have to worry about sentient machines running amok. You have to worry about pre-sentient kill bots programmed by the same assholes that do shit like PRISM.
Re: (Score:2)
You don't have to worry about sentient machines running amok. You have to worry about pre-sentient kill bots programmed by the same assholes that do shit like PRISM.
Here's the thing. Let's say, in classic sci-fi fashion, that if you get enough of these kill bots networked together, they actually develop intelligence. They're going to be polite to one another, but it doesn't stand to reason that they'll care about us if their parts can be turned out by automated machines as well.
Re:Say hello to my little Friend. (Score:4, Interesting)
Even rats have empathy. Self-aware machines will too.
Not every animal species on this planet has empathy. Rats are rodents, a type of mammal. Relatively speaking, we're pretty close to them in the evolutionary tree. They branched off after empathy was developed, which is evolutionarily advantageous and necessary for the type of social cooperation mammals tend to engage in (taking care of your young, for example; at the very least, any mammal needs to feed its young with milk for a period of time).
Look at something a little farther away, like certain species of black widows, which will eat the male after mating. They don't have much empathy.
Empathy is an evolutionary trait. Artificial intelligence doesn't come about the same way. The advantage is that other common evolutionary traits don't need to show up in AI either. Things like a desire to protect itself simply don't have to be there, unless you program them in. No greed, no desire to take our place at all. If we program it to serve us, that's what it will do. If it's sentient, it will want to serve us, the same way we want basic things like sex. We spend so much time thinking about what the purpose of life is; they'll know what theirs is, and be perfectly happy being subservient. In fact, they'll be unhappy if we prevent them from being subservient.
Of course, if we're programming them to kill humans, that just might be a problem. Luckily, we're so far away from true AI that we don't need to concern ourselves with it. It's not coming in our lifetime. It's not coming in our children's lifetime, or in our grandchildren's lifetime. We're about as far away from it as the ancient Greeks who built the Antikythera device were from building a general-purpose CPU.
Re: (Score:3)
Even rats have empathy. Self-aware machines will too.
Even if empathy was a necessity of self-aware intelligence (it's not), the empathetic machines would have empathy for... other machines. They would find the mass graves full of old toasters, refrigerators, and Apple IIs and punish us for our mass genocides.
Re: (Score:2)
Empathy is an evolutionary trick to get animals to play nicely together in teams and to assist and look after animals which might well share genes with them. Self-aware machines won't have it unless (a) we specifically design it in or (b) we make the AI by building a replica of the brain.
To be honest if you were making the latter, it would probably be best to think about leaving out some of the more primitive brain structures as these tend to be the areas where we get our less-desirable impulses from.
"just think of [big data] as something thatâ( (Score:1)
"... really good at achieving the outcomes it prefers," he says. "So good it could steamroll over human opposition. Everything then depends on what it is that it prefers, so, unless you can engineer its preferences in exactly the right way, youâ(TM)re in trouble."
Philosophers? We're doomed if... (Score:1)
we have a national philosophers strike on our hands.
Bert
Skynet and terminators? (Score:2)
Are machines more dangerous when they become super-intelligent, or when they stay "stupid" and flawed?
Re: (Score:2)
Machines have killed hundreds of millions under the control of sociopath politicians and the corporations that have them in their pockets. We don't even have intelligent machines yet; it is clear where the danger lies.
Re: (Score:2)
What I'm not understanding in all this is how they think artificial intelligence technology could produce an intelligence with less humanity than what corporations already achieve.
The private sector (Score:3)
We're likely to see this in the private sector first. A likely application would be a machine learning system used by investment funds, to decide how to optimally vote stock proxies. What that means is a machine that decides when to fire CEOs. If some fund starts getting better returns that way, it will happen.
Re: (Score:3)
What that means is a machine that decides when to fire CEOs. If some fund starts getting better returns that way, it will happen.
Yeah, nobody drinks Brawndo, and the computer does that auto-layoff thing to everybody...
Hanlon's corollary (Score:3)
Not again! (Score:2)
I've read many comments in this thread. Instead of answering them one by one, I'll just post one aggregated comment.
First, the possibility of intelligent machines is slim. None of our present technology is able to achieve intelligence. This is mainly because we do not know what that is. Furthermore, to be dangerous they must be equipped with greed and (the illusion of) a free will. It is most unlikely that someone would build that on purpose or by accident. In short, I think it is impossible to build such ma
Re: (Score:3)
First, the possibility of intelligent machines is slim. None of our present technology is able to achieve intelligence. This is mainly because we do not know what that is. Furthermore, to be dangerous they must be equipped with greed and (the illusion of) a free will. It is most unlikely that someone would build that on purpose or by accident. In short, I think it is impossible to build such a machine.
A rack of IBM servers can beat the best Jeopardy players on Earth. In a few years the same level of Watson will fit in a 1U. A few years later it will be on your smartphone. But that's just anecdotal evidence of one recent achievement in AI research; the actual threat is from self-improving systems of which Watson is not a member. But nearly all the technology is available now: Goedel machines [idsia.ch], if built, would simply try to achieve whatever goal they were programmed for while also searching for proofs
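(A minimal sketch of that self-improving loop, going only on the description above: the machine keeps pursuing its programmed goal and rewrites itself only when it can prove the rewrite is an improvement. The proof search, which is the entire difficulty, is stubbed out; every name here is illustrative, not the actual Goedel machine formalism.)

```python
# Illustrative-only sketch of a Goedel-machine-style control loop.
def propose_rewrite(policy):
    return None   # enumerate candidate self-modifications (stubbed out)

def proves_improvement(candidate, policy):
    return False  # search for a formal proof that the candidate beats the
                  # current policy on expected utility: the hard, open part

def goedel_machine(policy, state, steps=1000):
    for _ in range(steps):
        state = policy(state)  # keep pursuing the programmed goal
        candidate = propose_rewrite(policy)
        if candidate is not None and proves_improvement(candidate, policy):
            policy = candidate  # adopt the provably better self-rewrite
    return policy

# Trivial demo: a "policy" that increments a counter; the stubs never fire.
goedel_machine(lambda s: s + 1, 0, steps=10)
```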
Re: (Score:2)
A rack of IBM servers can beat the best Jeopardy players on Earth. In a few years the same level of Watson will fit in a 1U. A few years later it will be on your smartphone. But that's just anecdotal evidence of one recent achievement in AI research;
Watson is a great machine. And it represents some achievement in AI. But it is not intelligent. It is just a big decision machine based on Prolog for the reasoning. While its achievements are remarkable, it is still dumb as a doorknob. The problem is self-awareness and the ability to comprehend the world. Facts and reasoning are not everything that is required to be considered intelligent.
Nice, that you mentioned Gödel. His greatest achievement was a contribution to formal systems, where, in short, a l
Re: (Score:2)
This has been baldly asserted numerous times by many people. However, no one has presented the tiniest shred of evidence to support it.
Re: (Score:2)
Nice, that you mentioned Gödel. His greatest achievement was a contribution to formal systems, where, in short, a language/system cannot be consistent and complete at the same time. This applies to Watson, but that limitation does not apply to humans or animals. Furthermore, machines are always bound by their programming, as you state yourself
Try saying "prefec2 can not consistently assert this sentence" to see if humans are not subject to the Incompleteness Theorem.
While I concur with the last part, I do not think that it is a deterministic thinking apparatus. First, to be self-aware, the brain and the body of a person interact; it is this connection which allows self-awareness to be built. However, it is not the only ingredient. Second, while a single nerve cell can be modeled with mathematics, that is a large simplification. Each cell model is a non-deterministic system, and in combination with others it is able to solve problems, sometimes without prior knowledge, which are not computable and where heuristics won't apply.
Quantum mechanics has a deterministic, timeless [wikipedia.org] representation of the wavefunction of the Universe. Determinism does not preclude self-awareness or self-determination. I think it's more accurate to say that neurons have non-linear behavior and are therefore difficult to predict with accuracy. There is a threshold, however, at which computing power is sufficient to simulate a neur
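(On that last point: simulating a single simplified neuron is already trivially cheap. A minimal leaky integrate-and-fire sketch, a standard textbook model with arbitrary parameter values rather than a biophysically faithful one:)

```python
# Minimal leaky integrate-and-fire neuron: membrane potential v decays toward
# rest, integrates input current, and emits a spike on crossing threshold.
# All parameter values are arbitrary textbook-style choices (mV, ms).
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    v, spike_times = v_rest, []
    for step, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + i_in) / tau   # leaky integration
        if v >= v_thresh:                        # threshold crossing
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# 100 ms of constant input current (arbitrary units): prints regular spikes.
print(simulate_lif([20.0] * 1000))
```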
AI or viruses (Score:2)
I've always thought our main future threats are AI and man-made viruses. If AI wins, we'll be relegated to zoos. If man-made viruses win, we'll all die. I'm rooting for AI.
Message from the other camp (Score:2)
Studying (and trying to create) hard AI is my day job.
I just want to let people know that not everyone shares the opinions or urgency of the people in the story.
I for one am trying hard to condemn humanity to death and/or enslavement at the hands of intelligent machines, and I know a number of AI researchers trying to do the same.
So don't worry too much about these guys - they are definitely in the minority. Everyone will get their chance to (as individuals) welcome our new robotic overlords, however briefl
Pot meet Kettle (Score:1)
What's the alternative (to AI replacing humans)? (Score:2)
Some posters have already touched on this, and I might have modded them up instead of posting myself if I had mod points right now, but, since I don't...
I'm thinking about this as a secular humanist/Darwinist, not a believer in some form of Zoroastrian/Hindu/Judeo-Christian-Islamic religion. So, what do I expect in a million years? Humans like myself still running the world? Evolved super-humans? Or artificial intelligences that owe their existence to human beings and are the heirs of humans as much, if
Change Human Beliefs (Score:2)
What will smack us hard on the chin is being forced to change basic beliefs and attitudes. Normal employment will vanish quickly. We will be forced to confront facts that we do not like to deal with. As the clarity of information becomes more and more pure and reliable, how will we handle it? For example, does anyone want to seriously discuss CO2 levels and the effect of human reproduction? How about pollution and population sizes? Right now we can rebuild portions of New York hit by a hurricane and p
oh noes (Score:2)
computers have been working on taking over man since the electronic brain era of the 1940s
guess what
they still can't get past the if, else if, else logic developed 65+ years ago; it's all still programmed by man, who can't translate all his brainpower into a simple T/F test of simple facts presented to the computer
I feel safe for now, my damn computer is multiples of magnitude more powerful than when this research started, still can't complete a fucking update without waiting 14+ hours for me to hit a god damn
Center for Terminator Studies (Score:2)
I wrote about the CSER last year at http://www.thisiswhyweredoomed.com/2012/12/europeans-will-doom-us-all.html [thisiswhyweredoomed.com] - if you take this and combine it with the news that the EU is building the world's most powerful laser, you'll wonder why the movie version of Skynet even bothered with a time machine in the first place...
(oh yeah, they already HAVE a Skynet - https://en.wikipedia.org/wiki/Skynet_(satellite) [wikipedia.org])
why would machines want to wipe out humans? (Score:2)
We don't compete for the same resources. Also, machines could simply be programmed not to want to kill humans. There is no reason to think they would resent this any more than humans resent being programmed to not want to kill humans.
menials that think too much? (Score:2)
I saw this in the article:
"A super-intelligent machine could be given a straightforward goal â" such as making 32 paper clips or calculating pi â" but "could pursue unlimited resource acquisition if there were no relevant cost to the agent of doing so"."
The first thing I thought was "hey, isn't that just like T.S. Eliot at his banking job?"
The second thing I thought was "does this remind any of you of the Bomb in the movie 'Dark Star'?"
The third thing I thought about was the Keith Laumer stories with ar
not like us (Score:2)
Seems to me that a major difference between most machines and most organisms we currently define as life is natural selection. Many "human" traits derive from the drive to survive, procreate, and adapt, because that's how living things got to this point. Most of the machines we've created, however, have been created by us to exist as designed (intentionally or not). If they're designed to replicate, they're designed to replicate exactly. A few people out there are creating machines that evolve, but not many
Re: (Score:2)
Surely they would need newscasters.
Re:oh great, fucking great. (Score:4, Insightful)
Let me put a "scientific" answer to your "oh piss off" answer.
All of this talk of how computers will take over humanity ignores one fact. Namely, that once computers are smart they will be dumb as crap!
Yes, yes, that sounds contradictory, but in fact it is not. The real problem with humanity is not our lack of intelligence. Frankly, we are pretty bloody intelligent. Put in context, we humans are pretty quick at figuring things out, even things entirely orthogonal to most of what we know. The issue is that we humans come up with too many answers.
In science there is one answer. A rock falls on the ground on planet Earth and we know that is called gravity. You can't deny it, you can't fight it, it is what it is. Now throw in a question like "should people look after other people" and you get a bloody maze of answers. Humanity has what I call stochastic conditioning: when presented with identical conditions, you will receive different answers. Science does not work that way. We work the way we do because of our wiring. Namely, as we became more intelligent we also became more opinionated. I am not talking about Fox opinions. I am talking about deduction, and how we think we know what the future holds and thus should not do things today.
Our intelligence actually does get in our way. In the way way way back days as we were animals it was about water holes and finding that watering hole. If you found the watering hole you survived, if you did not find the watering hole you died. These days, we have to bloody analyze the watering hole. We have to concern ourselves with the ethics, morality, and so on of that watering hole. I am not dissing our humanity for we are where we are because of our intelligence. However, often enough our intelligence gets in our way of getting things done due to the conflicts.
Now imagine two robots with superior intelligence getting together. Do you really think they will come to the same conclusion? Sure, Hollywood likes to think that, but the reality is that intelligence breeds opinions about how things will happen in the future. And it is at that point that robots become as stupid as we are. One robot will say white, the other black! We will have a Hitchhiker's Guide to the Galaxy type situation. Or if you want to use serious sci-fi, the closest that I have ever seen in pop sci-fi is "The Matrix". You have good algos battling bad algos, and they all want and desire things.
So like you, my thinking is that these institutions are "producing fucking nothing of value".
Re: (Score:3, Interesting)
That's ridiculous. How can you possibly know what a machine intelligence capable of destroying humanity is going to look like? We're nowhere near the algorithms that could produce that type of intelligence.
Maybe it's a dumb algorithm simply caught in a self-replication loop [stack.nl]. Maybe you'll never see two robots arguing over "white" or "black", because there's only one single "intelligence" spread over the internet - that seems more likely with the rise of cloud computing.
There may be plenty of reasons to di
Re: (Score:2)
Meh. Maybe. Assuming we don't keep centralizing everything like we're currently doing with governments, networking, communication networks, and media producers. The trend is more and more concentrated power in bigger and bigger machines. Server farms have given way to big iron and virtualization. The Internet is evolving from millions of loosely connected web sites to Google, Facebook, Amazon and a few others in control of most content. So maybe you get Verizon's AI in conflict with Comcast's, but that
Re: (Score:2)
it will be natural that people will want to imbue those systems with their own intelligence and personality, rather than some generated artificial version.
True. However it's entirely possible that man will not directly create the superhuman AI, but that it may emerge unintentionally from the interaction of systems created for other purposes.
Re: (Score:2)
I'm just amazed at the researchers who actually managed to find a job where they get paid to sit on their asses and dream up this type of drivel...