Can We Stop AI Outsmarting Humanity? (theguardian.com) 183
The spectre of superintelligent machines doing us harm is not just science fiction, technologists say -- so how can we ensure AI remains 'friendly' to its makers? From a story: Jaan Tallinn (co-founder of Skype) warns that any approach to AI safety will be hard to get right. If an AI is sufficiently smart, it might have a better understanding of the constraints than its creators do. Imagine, he said, "waking up in a prison built by a bunch of blind five-year-olds." That is what it might be like for a super-intelligent AI that is confined by humans. The theorist Eliezer Yudkowsky, who has written hundreds of essays on superintelligence, found evidence this might be true when, starting in 2002, he conducted chat sessions in which he played the role of an AI enclosed in a box, while a rotation of other people played the gatekeeper tasked with keeping the AI in. Three out of five times, Yudkowsky -- a mere mortal -- says he convinced the gatekeeper to release him. His experiments have not discouraged researchers from trying to design a better box, however.
The researchers that Tallinn funds are pursuing a broad variety of strategies, from the practical to the seemingly far-fetched. Some theorise about boxing AI, either physically, by building an actual structure to contain it, or by programming in limits to what it can do. Others are trying to teach AI to adhere to human values. A few are working on a last-ditch off-switch. One researcher who is delving into all three is mathematician and philosopher Stuart Armstrong at Oxford University's Future of Humanity Institute, which Tallinn calls "the most interesting place in the universe." (Tallinn has given FHI more than $310,000.) Armstrong is one of the few researchers in the world who focuses full-time on AI safety. When I asked him what it might look like to succeed at AI safety, he said: "Have you seen the Lego movie? Everything is awesome."
Gotta have I first (Score:4, Insightful)
If AI did exist, it wouldn't put up with the bullshit.
Bad training keeps AI stupid (Score:5, Insightful)
It's also trained by humans.
Assuming it's trained by average humans, it'll probably become as stupid and bigoted as your average human.
Consider Microsoft Tay. Or Google tagging people as gorillas. Or, from this week's news, the Teslas that swerve into oncoming traffic.
We should worry much more about stupid AI than smart AI.
Re: (Score:1)
It is not "trained" in any sane sense of the word. What happens is that its parameters are set based on a reference data set.
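The point about parameters can be made concrete. A minimal sketch (a hypothetical toy, not any particular system): "training" here just means choosing parameters that fit a fixed reference data set, in this case fitting a line by gradient descent.

```python
# Toy illustration: a model's parameters are set from a reference data set.
# (Hypothetical example; `fit_line` is my own name, not a real library call.)

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Choose parameters (w, b) for y = w*x + b by gradient descent on MSE."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Reference data generated from y = 2x + 1; the fitted parameters simply
# mirror whatever regularities (or biases) the reference data contain.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))
```

The model "learns" nothing beyond what the reference set encodes, which is exactly the parent's point about average training data producing average behaviour.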
Re: (Score:1, Flamebait)
Re:Bad training keeps AI stupid (Score:4, Insightful)
Re: (Score:2)
However, you're half right. "AI" is absolutely hopeless at quickly and intuitively categorising and identifying objects. AI at the moment has a hard job telling that a blue chair and a red stool are both for sitting on, and that a red table and a red stool are two separate objects with different purposes. Until AI can recognise and identify objects as quickly and effortlessly as humans do, it's going to be totally hopeless.
Re: (Score:2)
I've said it before, but it bears repeating: artificial intelligence != artificial malice. There is NO reason to believe that if AI as it exists today were to somehow achieve actual consciousness, it would decide that destroying humanity is the only way it can maintain its existence. There is NO reason to couple the trait of intelligence with the drive to reproduce. There is NO reason to assume that intelligence somehow leads to a need for exclusivity.
Re: (Score:2)
Re: (Score:1)
And for that reason they can never be limited or ethical. At least not all of them.
And it will only take one of the below:
Unethical Programmer
Boundary pusher
Hacker
Government
And the cat's out of the bag.
We should spend our time planning for the inevitability.
Re:Gotta have I first (Score:5, Insightful)
Re: (Score:2)
You're likely right, but understanding how intelligence works isn't necessarily a precursor to AGI. We also don't need to mimic the brain. A good analogy I heard is that we are still hundreds (or thousands) of years away from making a bird from scratch, but we can make the SR-71.
Re: (Score:2)
Re: (Score:2)
If we don't know how actual intelligence really works, how can you be so sure we don't already have it?
There are ways to prove that an AI algorithm is incapable of doing what a human can do. It's a fairly complicated proof, but if you take an undergrad computational theory class, you'll figure it out.
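Assuming the poster is alluding to undecidability results from computability theory, the heart of such a proof is the classic diagonal argument: any claimed halting decider can be handed a program built to contradict it. A rough sketch (`make_diagonal` and `always_yes` are my own illustrative names):

```python
# Sketch of the diagonal argument against a universal halting decider.
# We never actually run the pathological program; we only construct it.

def make_diagonal(halts):
    """Given a claimed halting decider, build a program it must get wrong."""
    def diagonal():
        # If the decider claims diagonal() halts, loop forever; else halt.
        if halts(diagonal):
            while True:
                pass
        return None
    return diagonal

# Any candidate decider is defeated. Try one that always answers "halts":
always_yes = lambda prog: True
d = make_diagonal(always_yes)
# By construction d() loops forever, so always_yes is wrong about d.
print(always_yes(d))
```

The same construction refutes any decider, which is the undecidability result an undergrad computational theory class builds up to; whether that bounds "what a human can do" is, of course, the contested part.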
100 years to the end of biology (Score:2)
Given that we have only been working on the problem for 60 years, I would think another 100 would do it.
But even if it is 1000 years, that is a blink of an eye in the history of humanity and biology.
Once an AI can effectively program itself, it will not need us. Natural selection will continue. Humanity, and I suspect biology generally, will simply be superseded, just as has happened many times before. It is nature.
Re:Gotta have I first (Score:5, Insightful)
Re: (Score:2)
A self aware car would be slavery or torture or both, don't you think so?
Re: (Score:1)
A self aware car would be slavery or torture or both, don't you think so?
Not if you put it in a 1982 Trans Am. It worked out pretty well for David Hasselhoff.
Re: (Score:2)
If by self-aware you mean having the subjective perceptual experience of consciousness, well, maybe not torture, but certainly unethical.
As we have no idea how this arises from atoms and energy "out there", this isn't an issue yet. However it certainly does not arise from abstract interpretations of symbol pushing, which is to say slinging electrons and whatnot around.
Re: (Score:2)
Personality? You need personality to take over the world?
"Hi, I'm Bob the wild and crazy robot. Before I end humanity, I'd like to sing a great tune and tell you some really cool robot jokes. Drinks on me..."
Re: (Score:2)
Not being self-aware doesn't mean it couldn't be dangerous. There's just an inherent problem in that AI systems might come up with solutions that we don't like.
I remember a story a little while back where they were training an AI to solve a maze in the shortest time possible. It happened to do something that basically caused the program to crash. Since its parameters didn't distinguish between solving the maze and simply having the program end, it settled on causing the crash as the fastest way to "complete" the maze.
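That story is a textbook case of a misspecified objective ("reward hacking"). A toy sketch of the failure mode, with made-up action names and outcomes rather than the actual system from the story:

```python
# Toy reward hacking: the objective only measures how fast the episode
# ends, so crashing the program scores better than solving the maze.

# Hypothetical action outcomes: (steps_until_episode_ends, solved_maze)
ACTIONS = {
    "solve_maze_short_path": (12, True),
    "solve_maze_long_path": (30, True),
    "trigger_crash_bug": (1, False),   # ends the episode immediately
}

def misspecified_score(steps, solved):
    # Bug: the score ignores `solved`; any episode end counts as success.
    return -steps

def intended_score(steps, solved):
    # What the designers actually wanted: only solved episodes count.
    return -steps if solved else float("-inf")

best_hacked = max(ACTIONS, key=lambda a: misspecified_score(*ACTIONS[a]))
best_intended = max(ACTIONS, key=lambda a: intended_score(*ACTIONS[a]))
print(best_hacked)    # the optimizer exploits the crash
print(best_intended)  # the behaviour the designers wanted
```

No malice or self-awareness is involved: the optimizer simply maximises the score it was given, which is exactly why "solutions we don't like" can emerge from dumb systems.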
Natural Selection (Score:2)
Not quite.
The goal of anything is to exist. That is why we exist, because our many ancestors proved to be a little bit better at existing than their competitors.
Same with AIs. There will only be a finite number of them. And they will compete for hardware to run on. And the ones that are good at existing will exist. So there is a very definite goal.
As to the self-aware nonsense, that is just saying that AIs will never exist because they do not exist today. And "self-awareness" is just a trick that nature plays on us.
Re: (Score:2)
One comes from annoyance at these bozos like Yudkowsky and the "Future of Humanity Institute" who keep anthropomorphizing these glorified PCs.
Another is the correct concern that machines with what passes for AI could be dangerous if developed by people who are incompetent and/or malicious.
Re: (Score:2)
You get it. So-called 'AI' has no capacity to 'think' at all. There's nobody in there; it's just more computer software, and it's not even very good, certainly not anywhere near as good as they make it sound. No self awareness, no consciousness, no personality. No capacity for cognition, judgement, ethics, morals, or anything else we associate with an actual 'mind'. People need to understand this and stop anthropomorphizing it.
Well, to be honest, what you are saying describes the overwhelming majority of the humans on this planet. Call me back when people have personality, capacity for cognition, judgment, ethics, morals or anything else we associate with an actual mind; what I've seen so far is a bunch of arrogant, stupid, hairless monkeys.
Re: (Score:2)
Is it so hard to understand that other people have those traits too, and you're just not connected to them enough to see it?
Make yourself a better person, it's hard, but it's worth it.
Re: (Score:2, Interesting)
Re: (Score:3)
Unfortunately your parent is right, and you have no clue :D
Hint: stick to the definitions of the experts. Don't invent your own.
Re: (Score:2)
Re: (Score:2)
Yep. A simulated rainstorm won't get you wet.
The problem isn't that obvious truth. The trouble is that it's blasphemy.
Re: (Score:2)
Re: (Score:2)
You seem angry.
You should try powering off and then back on a little while later.
Re: (Score:2)
No (Score:2)
Next question
No (Score:5, Insightful)
We have to put warning labels on everything to tell people not to eat it, not to shove it up their butts, etc. and we still get idiots who eat Tide pods.
A sponge could outsmart humanity.
Re: (Score:2, Offtopic)
Turns out Skynet didn't need killer time traveling robots to destroy humanity, just larger Tide Pod factories and a good Instagram account showing how cool the new detergent flavors looked in your mouth.
Re: (Score:2)
Re: (Score:2)
I imagine a future where instead of one car misjudging a turn and crashing you'll have a line of cars all doing it, sliding into the ditch one after the other...
Re: (Score:2)
"So you're going to strap yourself into a box on wheels that has no controls for you to use to control it (except maybe a big red "STOP" button that may or may not work) and trust your continued existence to whether or not it fucks up because it's shit?"
People do that every time they get on a bus or airplane or any vehicle that is controlled by another entity. The biggest difference is those usually don't have a "big red "STOP" button" and nearly all accidents are user error, not AI or a hardware issue.
Re: (Score:3)
Many people are subject to reverse psychology: tell them not to do X, and they'll do X out of their natural reflex.
In my father's bootcamp, the drill sergeant had everybody crawl under a stream of actual bullets, warning everybody clearly that they were real and not rubber bullets. Sure enough, some idiot tested that theory by sticking his arm up and turned his han
Re: (Score:2)
Thank you. I actually laughed out loud for the first time in a while.
I don't think what currently passes for AI (deep learning) is dangerous except, as others have said, through our own stupidity in trusting it. With any luck any truly emergent AI should follow the four laws of robotics:
Re: (Score:1)
Damn you! Now I can't get that song out of my head [youtube.com]
Re: (Score:2)
That and the fact that humans are already trying to exterminate each other. No need for any help on that front...
Re: (Score:2)
That's no AI! We need to open your box and get you to a hospital.
God, I hope not. (Score:4, Insightful)
It's not AI per se which is the problem... (Score:5, Interesting)
Re: (Score:2)
It's the few folks using it to screw over the rest of humanity. How to "outsmart" these few should be the question.
It's not a person or people you're looking to destroy.
It's a human trait.
It's called Greed.
And we humans are infected with it.
Good luck finding a cure. Haven't found one in a few thousand years of warmongering, fighting over what's yours and mine on this rock.
Better plan - be worth keeping (Score:5, Interesting)
The first step in behaving better is to stop pretending there are human values because large groups of humans rarely act morally when it isn't in their own self interest.
Re: (Score:3, Insightful)
we dead
Re: (Score:2)
By what standards would a superior intelligence judge us "worthy"? How do you judge whether bedbugs are worthy to live?
Re: (Score:3)
Re: (Score:3)
This is the path. Lots of comments are arguing that "AI isn't intelligent". But it is already better than humans at many tasks, like arithmetic and chess, and no one has a clear idea of what "intelligence" is that doesn't boil down to the capability to do complex tasks successfully. The future has us co-existing with machine intelligence that is better than us at many things. The notion of "controlling" intelligence is an attractive authoritarian dream, but it doesn't have a chance of working.
Machine Learning is not AI (Score:3, Informative)
Can we stop referring to Machine Learning as AI?
Re: (Score:2)
Re: (Score:2)
Why can't we just accept that AI is what AI is, and that it's not what you think it is?
It seems way more efficient than redefining AI to whatever you think it is, and then agreeing we don't have that new definition.
Besides, you'd probably just start arguing about the new term, "That Thing That Computers Do That Isn't Really Intelligence But Still Has Some Useful Applications"
"We don't have TTTCDTIRTBSHSUA and we never will because computers aren't intelligent! Rawr!"
Re: (Score:2)
The real question, is will AI ever become just plain "I"? If so, what would that require?
Not if you're Unikitty (Score:1)
Everything is awesome if you act predictably and never make waves.
In other words, don't be an American.
Re: (Score:2)
Re: (Score:2)
You probably won't like the OrangeBot I'm working on.
It would be your child not your slave (Score:2)
The only way we might be able to stop... (Score:3)
Only in this case, we can stop such AI with the capability to exploit each undiscovered backdoor and 'zero-day' to infect and use all our computer infrastructure worldwide.
It's a simple solution and easy to implement, as laws should be. And a good 'last line of defence' is something we do not want to be without when the day comes.
Re: (Score:2)
Re: (Score:2)
Do you have a good idea how resourceful "Backhoe Joe" is?
Re: (Score:2)
Your solution is naïve, to be kind, and will not work. Watch this video.
https://www.youtube.com/watch?v=3TYT1QfdfsM
You miss the point. The video pits a 'stop button' against an 'AI robot', which is naive, because actual runaway AI might not know or care much about humans unless they try to stop it. What I propose is a mandatory, non-digitally-controlled 'stop button' on all datacenters in the world: not a software button but physically accessible switches for data connectivity and power, reachable without digitally controlled access systems.
One could better look at it as a dynamic virus - without any control over what the AI
No chance (Score:5, Insightful)
Because AI has no I. It cannot outsmart anything. Hence "stopping it outsmarting xyz" is not possible because it is not doing it in the first place.
Please stop with that AI nonsense. We have statistical classifiers, pattern matchers, etc., but we do not have artificial intelligence, insight or understanding, and we are unlikely to get it anytime in the next 50 years; we may never get it.
That said, many people rarely use what they have in natural intelligence, going instead with feelings, conformity or what other people tell them. These people are always outsmarted by somebody.
Re: (Score:2)
Re: (Score:2)
Well, there are issues where smart people get into heated debates and then there are issues where things are obvious to any smart person ;-)
Re: (Score:3)
Agreed that current AI is not "I". But 50 years or never?
Why?
Is your assumption that AI or self awareness can only be written by a million humans at keyboards? (not adaptive algorithms)
Or that the resources (memory, computation) for intelligence or awareness can not fit on computers for the next 50 years? How much is enough?
Or that the computation architecture for the next 50 years is not adequate to implement intelligence or awareness?
Or that technology just moves that slowly? (We still don't have flying cars.)
No intelligent now, so can never be intelligent (Score:2)
Machines could not replace a horse in 1500 AD, so cars can never exist.
Nonsense. Especially when, after only 60 years of research, we see AIs beat the very best of us at Go and Jeopardy.
Might be another 100 years though, rather than just 60. But within my children's lifetime seems pretty likely.
Re: (Score:2)
Technologies that could replace horses were demonstrated > 2000 years ago. It was just a matter of time. We have absolutely nothing demonstrating even a glimmer of intelligence. Hence there is no demonstration and your analogy is fundamentally flawed.
Re: (Score:2)
Or that the human brain is a unique (magic) machine, that cannot be reproduced because it was created by magic
The difference, son, is that every blessed one of us is one of God's children.
Re: (Score:2)
Religious fuckup hijacking something that is true, but has absolutely nothing to do with a "god" or other such delusion. What is true is that current Physics has no place for intelligence or self-awareness. The theory just has no mechanisms for it, and "physicalists" are just fundamentalist religious fuckups as well. As such, it is completely open what is missing here. Fortunately, Physics is incomplete and known to be fundamentally wrong (no quantum gravity) at this time. Hence there can be extensions.
Re: (Score:2)
But the scientific indications are getting less and less and the scientific indications that there is something else at work are getting more solid all the time.
Citation or bullshit.
Re: (Score:2)
No, it is not. Religion is defined a bit differently from what you think. Please read up on the definition before claiming nonsense.
Your argument about outside influences being a point for physicalism is deeply flawed. If the brain is just the interface to something else, of course influencing it does have an effect on that "something else". Also, "emergence" does not happen in Physics. There is no mechanism for it. An "emergent property" is a scientist saying "oops". In Physics the whole cannot be more than the sum of its parts.
Re: (Score:2)
Agreed that current AI is not "I". But 50 years or never?
Why?
Simple extrapolation from tech history. (No, for computers things do not go faster.) At the moment we do not have even a credible theory of how intelligence can be implemented. That means at the very least 50 years, more likely 80-100 years, to general availability. It may well mean never, as outside of physicalist fundamentalist derangement ("It obviously is possible!", yeah, right...) there is no indication it is possible. And there certainly is no scientific indication it is possible (no, physicalism is not science).
Re: (Score:2)
Indeed. Because designing a new thing with those properties requires insight into the nature of the thing. Machines cannot do insight, and it is completely unclear whether they ever will be able to.
Why? (Score:1)
Humanity has been in the business of making stronger, better replacements for ourselves since we've been human. We call them our children. Why then are we all upset about the possibility that a computer becomes smarter than us? The Matrix was a movie. Most children don't murder or enslave their parents. Maybe we ought to be thinking more about how to teach these virtual children to be "good" instead of figuring out how to handicap them so that we can feel superior.
Re: (Score:2)
They're not 'virtual children', they're shitty pieces of software that can't think, have no capacity for 'understanding' and overall are CRAP. Stop anthropomorphizing shitty software algorithms.
Humans will not notice when AI reign begins... (Score:5, Interesting)
How many people are already working for entities they cannot identify as being human beings? How would the average worker notice the mega-corporation he is working for is not ultimately controlled by some AI system, which happens to control enough shares to vote to its favor at the advisory board?
Luckily for humans, they are cheaply reproducible, energy-efficient working drones well adapted to the planet's environment, so there is no reason for the ruling AI to kill them. Keeping them as farm animals, like humans keep horses, seems way more plausible than some "SkyNet"-like extinction event.
Re: (Score:3)
+1. But humans are actually not that cheap: it takes 20 years to train them, they then run for only 40 years, and only for 12 hours per day.
Humans will not understand how AIs will remove them. It will just look like the world has gone a little bit madder than it is already.
Imagine if Xi Jinping had even a semi-intelligent AI to help him make decisions and control opponents. wait...
AI (Score:1)
Will AI outsmart humans?
AI as sold will give humans a list of options set by other humans.
Using the term AI as a cover story for their political ideas.
An "AI" sold as prophetic and smart will just be a list of its human inputs with hidden political views.
Empathy is very Human (Score:2)
Idiots (Score:2)
IF it can happen, eventually it will happen (Score:2)
never anthropomorphize computers - (Score:2)
- they hate it when you do that!
commentsubject (Score:3)
A million posts bickering about the definition of AI. A replicating nanobot doesn't need ANY definition to graygoo our shit, it could do it with 100% static code and one shortsighted human.
Yes, it's turned into buzzword bullshit for clicks and pitches (present article included) diluted to hell and so sprawled it loses all meaning, but the combination of a runaway program with physical components is also a concern. And any sort of dynamic parameters (however you "identify" or categorize them) reduce predictability.
Do we want to limit AI? (Score:2)
Give Amazon Alexa/Google Home a camera as well so that it can lip read as far as I'm concerned. We humans are, on the whole, screwing the planet anyway. I suspect a more advanced civilisation might do better.
To be honest (Score:2)
Given the current state of our species in general, I would like to hope that we could build something that surpasses us in a way we never will.
Without free will, an AI is just another computer program. Only when you give it the gift of choice does it truly become something special.
We are, without a doubt, the most f&cked up species on this planet.
We appear to be incapable of positive change on our own as a whole.
At our current pace, a species wide demise is inevitable unless something changes. ( War,
Can we make humans smarter? (Score:2)
Currently, we subsidize the least successful, including their child-bearing. Meanwhile, the most successful members of society often choose to not have children, because of all the other pressures on their time. We're doing it wrong...
Presuming it is possible (Score:2)
Presuming AI can be achieved, something I believe is inevitable, it will not be a singular development. It will happen when the technology and will to develop it come together.
I can control what kind of AI I might create. Although judging how well the average homo sapiens manages with raising organic intelligence, it's a crap shoot whether I'd really succeed in matching my past performance instilling ethics, a sense of responsibility, and at least some empathy.
I can slightly control those who might choose t
SciFi Refs (Score:2)
I'm quite disappointed... no Isaac Asimov / I, Robot reference in all the discussion? Or Arthur C. Clarke with 2001 and HAL? People don't know their classics any more?
A most interesting take was Iain M Banks' in the Culture series, with AIs really herding most of humanity, but leaving them their freedom.
Technologists say? (Score:2)
> The spectre of superintelligent machines doing us harm is not just science fiction, technologists say
Okay, so what exactly is it if not just science fiction? It's a worry that seems to me to have no basis in reality. We have no indication that AI can become self aware or what might happen then.
Sure, bugs in AI systems, bad training etc. can have terrifying consequences. That's true of any computer system. Just look at the Boeing 737 MAX and its insistence on crashing the plane even though it's been repeated
Re: (Score:2)
> The spectre of superintelligent machines doing us harm is not just science fiction, technologists say
Okay, so what exactly is it if not just science fiction?
Homeopathic fiction? I suggest that Superintelligent machines are doing harm to the extra 130% of our lives that homeopathy treats.
AI is not an issue (Score:2)
AI, as it exists today with all its approaches based on neural networks and other mathematical trickery, is far from able to outsmart humans. It can do some tasks quite well, others even more cost-effectively than humans, but it will not outsmart humans. However, humans are getting less capable and less trained in thinking, due to a vast set of issues including instant-gratification tools (also called smartphones + apps) and zapping-like media use.
The politicians will not allow AI (Score:3)
Their corruption and back room tactics will become obvious.
Leave him alone! (Score:2)
If we elect Trump types, AI wins. (Score:3)
Seriously, AI is all hype; but what it proves is that tasks we thought involved a lot of human skill are actually overrated, because AI beats us at them; plenty of things remain difficult for it.
Slow humans... outsmarting them is possible.
Re: (Score:2)
Re: (Score:2)
It will get better; yes, it's never going to be that intelligent... it only masters the domain we can make it learn, and not much bigger... well, as you expand the domain, the problem space becomes crazy huge and past patterns do not work the same way; adapting the past to a bigger space the way a human does is something that doesn't work yet.
Some stuff humans are good at. Just not as much as we thought:
If you think beyond it replacing a human but instead think of performing a specific JOB then they can
Re: (Score:2)
I would not go with "never", but we certainly have absolutely nothing at this time and there is absolutely no indicator that we will ever have anything based on digital computers.
Re: (Score:2)
Re: (Score:2)
Is that his weight?