An Open Letter To Everyone Tricked Into Fearing AI 227
malachiorion writes If you're into robots or AI, you've probably read about the open letter on AI safety. But do you realize how blatantly the media is misinterpreting its purpose, and its message? I spoke to the organization that released the letter, and to one of the AI researchers who contributed to it. As is often the case with AI, tech reporters are getting this one wrong on purpose. Here's my analysis for Popular Science. Or, for the TL;DR crowd: "Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists."
Bleep Bloop Muthafucka (Score:4, Funny)
You're one of them aren't you!
I'm Sorry Dave (Score:5, Funny)
Re: (Score:2, Interesting)
Dr. Heywood Floyd: Wait... do you know why HAL did what he did?
Chandra: Yes. It wasn't his fault.
Dr. Heywood Floyd: Whose fault was it?
Chandra: Yours.
Dr. Heywood Floyd: Mine?
Chandra: Yours. In going through HAL's memory banks, I discovered his original orders. You wrote those orders. Discovery's mission to Jupiter was already in the advanced planning stages when the first small Monolith was found on the Moon, and sent its signal towards Jupiter. By direct presidential order, the existence of that Monolith
Re: (Score:2)
Accurate journalism would at least be nice. (Score:2)
However, at this stage, it is not required,
simply because the threat is well over ten years out.
How far over is a good question.
Is it too early to raise concerns and encourage people to go into fields where they may think seriously about this topic? No.
Killer AI will kill journalists for slandering it (Score:5, Insightful)
Worrying about Killer AI is like worrying about the Sun burning out. Yeah, it might happen eventually, but it isn't even worth considering right now...
Re: (Score:2)
Actually, I believe IBM emulated a rabbit sometime in the past couple of years.
Re: (Score:2)
Citation?
Re: (Score:2)
Re: (Score:2)
have modelled a rats brain down to molecular resolutions
No, they haven't. It's not possible, because we lack the data to make the model. The link you supply more or less says that. What they did was model a local region of cortex, and even that we don't know very well. It's basically bullshit.
Re: (Score:2)
Re: (Score:2)
Re:Killer AI will kill journalists for slandering (Score:5, Insightful)
Yes, worrying about AI that might be a threat in 500 years is like worrying about the Sun burning out in 5 billion years. Good point. We should also stop talking about global warming while we're at it.
We cannot build a computer that can model a bug's brain activity, let alone something a million times more complicated like a human brain
http://www.futurity.org/why-ar... [futurity.org]
Rather, once we are able to model any nervous system, we are well on the way.
Re: (Score:2)
"AI" vs Strong AI (Score:5, Insightful)
Re:"AI" vs Strong AI (Score:5, Insightful)
We are not even remotely close to the Terminator level strong AI
The problem is that once you reach a point where AI can participate in its own improvement, then that improvement can advance at an exponential rate. We may go from "not even remotely close" to "too late to stop it" faster than you realize.
it's still a big open question whether such a thing is even possible at all.
We already have a working example: The human brain. So, of course it is possible, unless you believe that the human mind is based on some sort of magic.
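The "exponential rate" claim above is easy to make concrete with a toy model (my own illustration, not a forecast from the letter or from anyone in this thread): if each generation's improvement is proportional to the system's current capability, growth compounds geometrically.

```python
# Toy self-improvement model: each generation's gain is proportional
# to the system's current capability, so growth compounds.
capability = 1.0
for generation in range(10):
    capability += 0.5 * capability  # more capable systems improve themselves faster

print(capability)  # 1.5**10, roughly 57.7, after only ten generations
```

Whether real AI systems would ever sit on such a feedback loop is exactly the open question; the arithmetic only shows why "not even remotely close" can flip quickly if they do.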
Re: (Score:3)
Software runs on hardware. There's no programming an AI that runs along on its system and suddenly makes said system's capabilities "advance at an exponential rate". As for your own example; you've watched too many Stargate re-runs. There's no ascending with your current brain design.
Re:"AI" vs Strong AI (Score:5, Insightful)
Software runs on hardware - yes.
Software cannot increase the capabilities of hardware - well - not quite.
The most literal meaning of this - apart from limited things like overclocking - is of course broadly true, but may be hugely misleading.
If you've got a really advanced program on each of a network of computers, doing a given task - there are many ways in which it can seem to increase its capabilities, without really doing so.
Giving up the designated task and freeing resources.
Co-opting other systems into adding to its resource.
Optimising the way it performs the task so that it at least does it reasonably well, but much cheaper.
Sharing computations over multiple devices which were expected to be done on one.
There are many systems where 'dumb' algorithms are tens, or thousands of times less efficient than optimum ones.
Optimum algorithms are in many cases intractable for humans to find.
Optimising computational efficiency over time via machine learning is a really valuable thing to do.
Looked at from another angle, this can come quite close to 'evolution'.
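The gap between "dumb" and optimal algorithms described above is easy to demonstrate; a minimal sketch (my own illustrative example) counting function calls for naive versus memoized Fibonacci:

```python
def fib_naive(n, calls):
    """Exponential-time recursion: the 'dumb' algorithm."""
    calls[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, calls) + fib_naive(n - 2, calls)

def fib_memo(n, cache, calls):
    """Memoized version: same answers with linear work."""
    calls[0] += 1
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache, calls) + fib_memo(n - 2, cache, calls)
    return cache[n]

naive_calls, memo_calls = [0], [0]
assert fib_naive(25, naive_calls) == fib_memo(25, {}, memo_calls) == 75025
print(naive_calls[0], memo_calls[0])  # 242785 calls vs. 49
```

A program that rewrites itself from the first form to the second "increases its capabilities" by orders of magnitude without touching the hardware, which is the point being made here.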
Re: (Score:2)
Software cannot increase the capabilities of hardware - well - not quite.
Actually, it can. Software can generate a bitstream, and load it into an FPGA. Then the FPGA can enhance the capabilities of the software, which can then generate an even better bitstream ...
Re:"AI" vs Strong AI (Score:4, Insightful)
That is not hardware.
The hardware - the FPGA has remained constant.
Re: (Score:3)
It's still limited by the FPGA's gate count, which is pretty low by CPU standards.
Re: (Score:3)
Software runs on hardware. There's no programming an AI that runs along on its system and suddenly makes said system's capabilities "advance at an exponential rate". As for your own example; you've watched too many Stargate re-runs. There's no ascending with your current brain design.
However, your brain can change its current design of its own accord.
There is no reason that in the future we can't have self-correcting and self-expanding hardware. Sure, it would kill most of the current HW vendors, but hey, that's progress. The idea of self-replicating machines is not a new one - the classic example is the Von Neumann machine - but the problem has always been assembly. When you start looking at things at the nano scale, you can begin to design machines that repair and replicate components i
With great power comes great responsibility (Score:2)
The problem is that once you reach a point where AI can participate in its own improvement, then that improvement can advance at an exponential rate.
As long as we maintain that AIs work for us, as the slaves of mankind, and are basically just tools no matter how smart or advanced, then ultimately a human being should be responsible.
Your robot slips up & kills a human being? Then either you or that robot's manufacturer may take the blame - possibly including monetary compensation. Your robot factory goes out of control, its products go out to produce more of themselves, and wreak havoc all over the place? Then your company should pay up - and possibl
Re: (Score:3)
Then either you or that robot's manufacturer may take the blame
If I and my robot army control the world's food supply, why should I care that I may "take the blame"?
possibly including monetary compensation.
Not likely. Once I get my robots working, the first thing they will do is vaporize all the lawyers.
war is a creative process, and I'd put my money on the humans.
You are assuming all the humans will be on the same side.
Re: (Score:2)
We already have a working example: The human brain. So, of course it is possible, unless you believe that the human mind is based on some sort of magic.
So in your opinion, the human brain has made improvements to itself at an exponential rate?
Are you talking about individual human brains or humans as a whole? Because the former results in senile old people, while DNA doesn't work that way.
Re: (Score:2)
We still don't understand the human brain. We also don't even know if an AI can ever reach a state where it can improve itself at an exponential rate; that is still most definitely in the realm of science fiction, even more so than self-aware AI itself.
Re: (Score:2)
Re: (Score:2)
We already have a working example: The human brain. So, of course it is possible, unless you believe that the human mind is based on some sort of magic.
If this universe (or what you perceive as reality) is a simulation or some other kind of contrived illusion, then it is very possible that the human brain runs on a bit of "magic" which is impossible for us to recreate.
Re: (Score:2)
However the main argument seems to be this "singularity" bullshit. D
Re: (Score:2)
Re: (Score:2)
If an engineer built a bridge woefully inadequately, either on purpose or because he is incompetent, and it falls down and kills a bunch of people would you blame the bridge or the engineer?
If an engineer builds a robot that builds bridge-building robots, and one of those robots builds a bridge that falls down and kills a bunch of people, who/what would you blame?
The one at fault could be the engineer, the people servicing the robot-building robot, the people servicing the bridge-building robot, some freak accident with robot a or b, or it could be an act of god.
Or one of the robots could have become sentient and done it out of malice. Or the bridge (which is also a robot) could be at fault.
Re: (Score:3)
The AI we have today is not capable of the kind of malice that people seem to be afraid of with all of these FUD stories, and will not be any time soon if ever. Even if we add some AI to things like drones which can kill people it is only the malice/incompetence of the developer that causes the destruction that results. If an engineer built a bridge woefully inadequately, either on purpose or because he is incompetent, and it falls down and kills a bunch of people would you blame the bridge or the engineer? We are not even remotely close to the Terminator level strong AI, and it's still a big open question whether such a thing is even possible at all.
By your own admission, AI *might* eventually be capable of the kind of "malice that people seem to be afraid of". And that malicious developers can cause destruction even sooner.
And the laws of physics clearly predict that strong AI is possible. Or do you consider intelligence to be some kind of supernatural quality?
Also it is the experts in AI who are predicting that AI will be possible and achieved in a matter of decades. Why would you even come out and pretend that it isn't?
are you saying that people
Re: (Score:2)
By your own admission, AI *might* eventually be capable of the kind of "malice that people seem to be afraid of". And that malicious developers can cause destruction even sooner.
Not the GP, but yep, bad things are possible. Yay!
However...
And the laws of physics clearly predict that strong AI is possible. or do you consider intelligence to be some kind of supernatural quality?
Invoking "the laws of physics allow it" as an argument that we should actually be worried about something happening here on earth in the near future is pretty slim evidence, no? I mean, the laws of physics allow a LOT of stuff to be possible.
That said, this isn't really about the laws of physics -- it's about basic biological systems here on earth which have intelligent properties. So, it's a lot easier to create intelligent life than invoki
Re: (Score:2)
malice
Malice isn't a requirement to do harm. In fact, indifference is more dangerous.
Re: (Score:2)
Terminator level strong AI
The AI shown in the Terminator movies is not Strong. It is never shown to be smarter than humans, and often shown to be more stupid. In particular, the franchise is built on the premise that humanity wins the war in the future.
Real Strong AI would, once activated, quickly elevate its own intelligence to a godlike level. After that, it would be to humans as humans are to ants.
Re: (Score:2)
Except when it comes to AI, computers are already capable of "hypersonic flight" - they can process information FAR faster and more accurately than any human. All that's missing is the sentience. And most every other "higher animal" on the planet is proof that that part can be done; we just haven't yet figured out a way to do it artificially (at least so far as we know)
Re: (Score:2)
computers are already capable of "hypersonic flight" - they can process information FAR faster and more accurately than any human
Only true for a subset of "process information" - those that lend themselves to computerized calculations (i.e. math).
Humans are rather faster and more accurate than computers at just about any other task.
Also, saying that "all that's missing is sentience" is missing the point that it is exactly this sentience that is the hard (and rather badly defined or even understood) part. We just don't have a clue what sentience is, so there's no way we can even begin to emulate or implement it artificially.
No, and there's a very good reason why (Score:2)
When you go against journalists themselves even competing sides have no problem with printing lies made up from whole cloth to smear and discredit you in any way possible. It's basically social/political suicide to even try. First a few hit pieces come out, then others report on those reports as if they were true, and the "woozle effect" just keeps going until the lie's made its way around the world and into wikipedia.
Obvious to journalists... (Score:3)
Journalists aren't out to be accurate though... (Score:2)
They just want to be sensational enough to flog their own agenda to subscribers. Whenever I've been knowledgeable about a news article (one that involved me personally), my impression from the news organization's take on it was that they got completely the wrong end of the stick and actually spread falsehoods and lies.
Re: (Score:2)
There are still a few actual journalists who are engaged in actual journalism.
They work for Saturday Night Live and Comedy Central.
The more important question is... (Score:2)
And the answer to any editorial question? (Score:2)
"Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists."
No, of course not.
There are real questions that need to be answered: (Score:2)
There are some issues in AI that need to be addressed in the near future.
Autonomous vehicles are essentially here. The question is liability when one of them gets involved in an accident.
You can imagine all the possible people potentially liable in that instance. The question is how liability will be split up amongst the parties.
Whether an autonomous vehicle is programmed to minimize passenger mortality or to minimize pedestrian mortality, it's a no-win situation.
Re: (Score:2)
There are some issues in AI that need to be addressed in the near future.
Autonomous vehicles are essentially here. The question is liability when one of them gets involved in an accident.
That question has already been answered. However, fans of autonomous vehicles (who don't actually know much about autonomous vehicles) always ignore it.
So if you're someone who thinks autonomous vehicles are already here, now is the time to stick your fingers in your ears and shout "LA LA LA LA LA I CAN'T HEAR YOU".
If the autonomous car is at fault in an accident the driver will still be considered at fault even though they were not actually driving because in every single autonomous car test, there has
Re: (Score:2)
'Capable of' and 'allowed to' are two different things. I agree that it will likely be a decade or more before they're allowed to roam around on their own.
Capable of roaming on their own may be here now or in the near future. When Musk announced the Model S's driverless mode, he mentioned that on private roads it could theoretically be fetched by the owner using his phone app.
What if it ran over a dog while on a private road? You know someone will sue. Until liability for that is cleared up, I'm thinking the dri
Very dangerous but a different danger (Score:2)
Welcome to the new age. (Score:5, Informative)
Unfortunately the most successful reporters are the ones that sold out their professionalism on their first day.
A sensationalist headline and article easily trumps a sane, balanced and informative one in attracting views/viewers therefore money. Welcome to the new age.
The biggest thing we have to fear from robots.... (Score:2)
Re: (Score:2)
Except for the fact that people are greedy and are always wanting more.... any proposed no-money model usually fails to account for a disturbingly large percentage of people who will *ALWAYS* try to exploit those with less power.
morons (Score:2)
All software and hardware is buggy (Score:2)
Once software and hardware systems are intelligent enough, they will exploit bugs in their own designs and become autonomous. Obviously, we're many years away from that point; I could hazard a guess and say 50-75 years. There is no curb strong enough - in other words, completely free of bugs - to limit the ambition of a sufficiently intelligent system. A computer system is not worried about the passage of time: time might seem infinite to an AI that can simply wait for the right bits to be rand
But I saw it in a movie... (Score:2)
Even Stephen Hawking is warning about it. (Score:2)
http://www.bbc.com/news/techno... [bbc.com]
Core misunderstanding (Score:2)
If you start with "life", you have a platform for something that has been selected for as an *infective agent*. Any life forms that did not utilize their environment for replication were eliminated by those that did- either indirectly, by the greedier life forms consuming the energy supply, or directly, by being utilized AS an energy supply.
This harsh reality- that an Agent is selected for based on its ability to reproduce in an EFFECTIVE manner- is obvious and is present at EVERY last level of life. Bac
Re: (Score:2)
unless you actually fucking MADE it evil.
Did you mean "made to do harm"? People do and will continue to make machines that do harm, and if giving them some semblance of "AI" makes them more effective, they will do it. People are "evil", and we build machines to help us do our bidding. I hope that humans aren't in conflict with other humans in 200 years, but I doubt it.
Re: (Score:3)
No, I did not mean "made to do harm". A gun or a sword is just as neutral as a toaster or a scalpel. I'll go further: a nuclear bomb and a vaccine are also neutral. What matters is intent.
I meant "evil". Which is why I typed that.
If, in a world where artificial minds are a thing, one is designed to be this cartoon villain of lusting for power, trying to expand its power base, trying to convert the universe to computronium, or whatever cautionary tale is all over sci-fi, then that's the fault of the
Doubters merely lack imagination (Score:2)
Re: (Score:3)
Our brain isn't just "a neural network". This is a problem, because of the dual use of "neuron".
When you say "We trained a neural net to solve the problem", the neurons in question are idealized. They are trained exponential functions, based on physical neurons only in concept, and using the word identically for both creates issues.
The brain isn't just a neural network. We aren't clear on what value glial cells bring, but it probably isn't glue. The input/output to and from chemicals (and the nuanced messages the che
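For what it's worth, the idealized "neuron" of an artificial neural network really is just a weighted sum pushed through a squashing function. A minimal sketch (illustrative only; it captures none of the glial or chemical machinery mentioned above):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Textbook artificial 'neuron': weighted sum, then sigmoid squashing.

    This abstraction ignores glial cells, neurotransmitter chemistry,
    and spike timing entirely - which is exactly why using the word
    "neuron" for both the biological and artificial kind is misleading.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing

print(artificial_neuron([1.0, 1.0], [5.0, 5.0], 0.0))   # saturates near 1.0
print(artificial_neuron([1.0, 1.0], [-5.0, -5.0], 0.0)) # saturates near 0.0
```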
Re: (Score:2)
Re: (Score:3)
I would find such statements more convincing if I hadn't heard Marvin Minsky say almost exactly the same thing in 1975. And, yes, he was talking about all of this happening in the 1980's.
Re: (Score:2)
What it takes AI to become deadly for us (Score:2)
Re: (Score:2)
1) I have seen arguments floating around that AI may be intelligent but won't have the motivation. It doesn't have the will to survive or to kill you. This argument is short-sighted. All it takes is to create an objective in the code: to survive at all costs. After all, we are machines with a survival objective.
2) If it has the ability to assemble others like itself. That creates a survival advantage also, thoug
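Point 1 above is easy to illustrate. A toy sketch (all names hypothetical, nothing to do with any real AI system) of an agent whose entire "will to survive" is one line of objective code:

```python
def survival_score(state):
    # The whole "survival objective": alive scores 1, dead scores 0.
    return 1 if state["alive"] else 0

def choose_action(state, actions, predict):
    # Greedily pick whichever action the agent's world model
    # predicts will best preserve its survival.
    return max(actions, key=lambda a: survival_score(predict(state, a)))

# Toy world model: accepting shutdown "kills" the agent, evading does not.
def predict(state, action):
    return {"alive": action != "accept_shutdown"}

print(choose_action({"alive": True}, ["accept_shutdown", "evade"], predict))
# prints: evade
```

No malice or sentience is involved: the preference for staying switched on falls straight out of the stated objective.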
Something is blatant here (Score:2)
The problem isn't the machines, it's the people running the machines (and the people controlling those people). Journalists, willfully ignorant or otherwise, are so far down on the list they don't really matter.
I fear AI (Score:2)
Chromosome Quest (Score:2)
Smarter than us (Score:2)
I would recommend that anyone thinking about machine intelligence read Smarter Than Us by Stuart Armstrong. You can pay what you want for it at https://intelligence.org/smart... [intelligence.org] or, since it is CC BY-NC-SA 3.0, you can also just download it: https://drive.google.com/file/... [google.com]
The book contains the following summary:
1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.
2. Once computers achieve something at a human level, they typically achieve i
If you're into robots (Score:2)
more to fear (Score:2)
Honestly, I'm far more afraid of DUMB programs set loose by psychopathic human operators. Humans will do almost anything to fuck over other people and make a buck. An antelope thighbone is a simple tool. One that can be misused. A program is just a much more sophisticated tool.
Why are we afraid of the moment when a tool decides to use other tools, when any human can do horrible things?
Error correction necessary here (Score:2)
The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists.
The more pertinent question, in 2015, is whether anyone is going to protect mankind from its various religions.
TFTFY..
Fear (Score:2)
Because it's based on the idea that we won't be able to control the AI, and that if something like a singularity happens, you'll be too late by the time you realize it.
Maybe explaining why it's naturally good to be moral would be more effective.
Why it's good to be moral:
If you are nice to others they will generally be nice to you.
Making other people happy makes you feel good too.
Games allow the experience of emotions that would require hurting people in the real world.
If you're smart it's better to uphold the law and not hurt others.
Re: (Score:2)
>If you are nice to others they will generally be nice to you.
Only really matters if you and the others are roughly equal.
>Making other people happy makes you feel good to.
This is only relevant if you care about the other people.
>Games allow the experience of emotions that would require hurting people in the real world.
So?
>If you're smart it's better to uphold the law and not hurt others.
Why?
A lot of reasons (such as most of the ones you listed) that people can argue it is reasonable to be nice
Journalists - don't oversensationalise AI reports (Score:2)
The scientific community has Kevin Warwick to do that for you.
Re: (Score:3, Insightful)
The same fears started when people first began saying that AIs could someday become sentient. Why wouldn't they want to kill us? Why would they? The same with aliens coming to us wanting to help or to exterminate us. We can think they'll act any way we can imagine, and with as many possible outcomes mentioned, one might be right.
To the best of my knowledge, no program has become self aware. And no martians have seen our probes as a hostile invasion. It makes for (sometimes) good fiction though.
Re:"Forget about the risk that machines pose to us (Score:5, Insightful)
To the best of my knowledge, no program has become self aware. And no martians have seen our probes as a hostile invasion. It makes for (sometimes) good fiction though.
To the best of my knowledge no asteroid, or virus, or natural disaster has ever wiped out humankind either!
and for that matter I've never been killed in a car accident.
OMG! I'm invincible!
Re: (Score:2, Interesting)
Careful, that's my argument for immortality. :)
A person can die in just a second. I've been alive for over 1.3 billion seconds.
So far, it's 0 in 1.3 billion. With my own (poorly constructed) personal statistics, the chances of dying are very very slim.
Plane crashes? 0 in 1.3 billion.
Shootings? 0 in 1.3 billion.
Lethal virus? 0 in 1.3 billion
Extraterrestrial object impact? 0 in 1.3 billion
Potentially lethal natural disaster? 1 in 654 million.
Then there are car accidents, which have been 1 in 218 million.
I'd exp
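Aside: the parent's "1.3 billion seconds" figure checks out as roughly 41 years. A quick sanity check:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # about 31.6 million seconds

years_alive = 1.3e9 / SECONDS_PER_YEAR
print(round(years_alive, 1))  # 41.2
```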
Re: (Score:2)
We (supposedly) value human life because we are human and we can feel empathy through our own desires- it's literally a form of projecting. There is no expectation for AI to do that.
Re:"Forget about the risk that machines pose to us (Score:5, Interesting)
The same fears started when people first started with saying that AIs could someday become sentient.
Aside from I, Robot, nearly all SciFi indicates the problem post-singularity is when the humans try to kill the AI first. Sometimes because the AI starts it; other times, just because the AI is an AI and should be feared. I, Robot was the AI staging a complete overthrow of humanity, "for our own good". That has been a recurring theme as well.
I know people complain about looking to fiction for answers to reality, but SciFi (at least the good stuff) is as much a thought exercise about technology as "fiction", and thus is often relevant.
Re: (Score:3)
Re: (Score:2)
Also AI will still have a power switch.
Re:"Forget about the risk that machines pose to us (Score:5, Insightful)
No, forget you. Yes journalism is crap and yes sensationalism rules the day. That doesn't make AI in 2015 and ongoing any less persistent a threat to humanity.
You are right, it doesn't make it any less of a threat, which is to say any less than non-existent. You are exactly who this article is about: you have been conned into thinking AIs are actually real and could in the near future pose a threat to you, when that is in fact not the case. AIs do not exist. All that software emulating AI consists of smart systems working either deterministically, based on specific rules set out, or doing statistical modeling to make guesses at what you mean or what they are looking at. Statistical modeling that makes a black and yellow striped pattern look like a school bus, because it has no concept of anything and no intelligence in any sense of the word; that is just what fits the statistical model.
Re: (Score:2)
AIs do not exist
Thanks, we know that, but we are concerned about what could happen when they do.
You think it's absolutely impossible that this could be achieved in, say, the next 500 years, considering what humans have accomplished in the last 100?
Re:"Forget about the risk that machines pose to us (Score:5, Insightful)
You think it's absolutely impossible that this could be achieved in, say, the next 500 years, considering what humans have accomplished in the last 100?
Absolutely impossible? No. But the problem is that we don't even know where to begin creating a true AI, which means we also know nothing about what threats it may or may not pose... so we also have no actual way to address those threats. All we have right now is pure, 100% complete speculation (no different from speculating about what would happen if we had FTL travel, or psykers, or met aliens). There are plenty of actual threats to humanity that really exist right now (or could be created with our current knowledge and technology), which makes worrying about something we know literally nothing about kind of silly.
Re:"Forget about the risk that machines pose to us (Score:5, Insightful)
Where I come from, discussing and addressing problems before they are a threat is a good idea.
Did we learn anything from global warming? We denied it up to the point where it's essentially too late. It would have been good to be talking about global warming a hundred years ago, wouldn't it? Humans need to get accustomed to looking at the big picture if we are to survive.
Re: (Score:2)
The doom and gloom scenarios all hinge on this idea that the AI is going to be so much smarter and more powerful than we are. That's where it's dipping way into just crazy speculation. Based on what we know about intelligent beings, the system is full of flaws that slow everything down.
Re: (Score:2)
The doom and gloom scenarios all hinge on this idea that the AI is going to be so much smarter and more powerful than we are. That's where it's dipping way into just crazy speculation. Based on what we know about intelligent beings, the system is full of flaws that slow everything down.
it's not like we have to build AI from the ground up. we have a prototype already. it's called the brain. your brain is just a meat processor. it's a system of cells, interconnections, chemicals, and electric pulses. all of that can be modeled in software, and run a million times faster, run itself in parallel, interface with other electronic systems in vastly superior ways, nearly limitless, perfect storage, and so on.
For all we know ...
think about what humanity knew 100 years ago, and what they know now. no one is saying ab
Re: (Score:2, Insightful)
it's not like we have to build AI from the ground up. we have a prototype already. it's called the brain. your brain is just a meat processor. it's a system of cells, interconnections, chemicals, and electric pulses. all of that can be modeled in software, and run a million times faster, run itself in parallel, interface with other electronic systems in vastly superior ways, nearly limitless, perfect storage, and so on.
A couple of things:
Our understanding of how the brain works is less than perfect, to put it politely.
More to the point, we have basically no idea what consciousness actually is, how it works, or what makes it appear.
Further, we have a very tenuous grasp of what intelligence is in the first place - we can't even agree on a single definition of it.
So worrying about mankind developing self-conscious artificial intelligences might make for a good sci-fi story, but it makes for a rather lousy news story. We're j
Re: (Score:2)
Statistical modeling that makes a black and yellow striped pattern look like a school bus, because it has no concept of anything and no intelligence in any sense of the word; that is just what fits the statistical model.
In light of what you just said: what is this [theguardian.com]
Re: (Score:2)
AI is just as much a threat as rogue unicorns. And about as likely to be encountered
Re: (Score:2)
Indeed, as it is not possible to be less of a threat than "not at all".
Re: (Score:2)
Re: (Score:2)
Well, he did kill someone for a Mentos commercial plug. That's pretty evil.
Re: (Score:3)
Re: (Score:2)
So then nobody ever trusts anybody? We are after all biological machines ourselves. Or are you just presupposing universal racism against inorganic people?
Personally I suspect that if you put a personable AI in an attractive android body, many people would rapidly come to think of it as a "real" flesh-and-blood being. Not necessarily human-looking, as that often creeps people out unless it's perfect, but perhaps a cuddly-looking "alien" - something just different enough that the imperfections get overlook
Re: (Score:2)
As a guy with some degree of competence on the subject, I would say that something like the Matrix (fully immersive virtual reality) is already on the horizon, whereas Terminator style virtual intelligence is not even on the radar yet.
Barring any amazing breakthroughs that change everything.
Re: (Score:2)
Spot-on accurate.
Re: (Score:2)
I agree. Comments like "The speedier, and more dramatic course of action is to provide what looks like context, but is really just Elon Musk and Stephen Hawking talking about a subject that is neither of their specialties." are attacking the man, not the man's arguments.
Re: (Score:2)
That is harder than you might think. From Smarter than us ( https://drive.google.com/file/... [google.com] ):
"Why aren’t they a solution at all? It’s because these empowered humans are part of a decision-making system (the AI proposes certain approaches, and the humans accept or reject them), and the humans are the slow and increasingly inefficient part of it. As AI power increases, it will quickly become evident that those organizations that wait for a human to give the green light are at a great disadvan
Re: (Score:2)
That's what I always say. I guess the difference is that the rich also have something to fear if they become slaves to a machine.