Tiny Changes Can Cause An AI To Fail (bbc.com) 237
Luthair writes:
According to the BBC, there is growing concern in the machine learning community that, as their algorithms are deployed in the real world, they can be easily confused by knowledgeable attackers. These algorithms don't process information the way humans do; a small sticker placed strategically on a sign could render it invisible to a self-driving car.
The article points out that a sticker on a stop sign "is enough for the car to 'see' the stop sign as something completely different from a stop sign," while researchers have created an online collection of images which currently fool AI systems. "In one project, published in October, researchers at Carnegie Mellon University built a pair of glasses that can subtly mislead a facial recognition system -- making the computer confuse actress Reese Witherspoon for Russell Crowe."
One computer academic says that unlike a spam-blocker, "if you're relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher," adding ominously that "The only way to completely avoid this is to have a perfect model that is right all the time." Although on the plus side, "If you're some political dissident inside a repressive regime and you want to be able to conduct activities without being targeted, being able to avoid automated surveillance techniques based on machine learning would be a positive use."
Mistakes (Score:2)
Re: (Score:2)
But they can only learn if humans teach them a broad range of adulterated traffic signs.
Re: (Score:2)
But they can only learn if humans teach them a broad range of adulterated traffic signs.
Yup. The problem is called over-fitting [wikipedia.org], when the AI does well on the training data but not with real life data, and one way to mitigate that is "denoising", where random noise is injected into the training data. There are many other mitigation techniques such as dropout [wikipedia.org] and ensemble bagging [wikipedia.org].
If your AI is confused by one sticker on a stop sign then you are not a competent developer. If there are a lot of stickers, then that may confuse a human as well ... and, as has been pointed out many times before, s
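To make the "denoising" and dropout ideas above concrete, here is a minimal sketch in PyTorch (my own illustration, not from the parent posts; the network size and noise level are made up):

import torch
import torch.nn as nn

class NoisyClassifier(nn.Module):
    def __init__(self, num_classes=10, noise_std=0.1, p_drop=0.5):
        super().__init__()
        self.noise_std = noise_std
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 3, 256),
            nn.ReLU(),
            nn.Dropout(p_drop),  # randomly zero activations during training
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        if self.training:  # inject noise only while training ("denoising")
            x = x + self.noise_std * torch.randn_like(x)
        return self.net(x)

model = NoisyClassifier()
logits = model(torch.rand(8, 3, 32, 32))  # hypothetical batch of 32x32 RGB images

Both tricks push the network to rely on robust features rather than memorizing the training set, which is the point being made above.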
Re: (Score:2)
... also, even if all of that fails, because the stop sign has been defaced or even REMOVED COMPLETELY, the SDC will STOP ANYWAY. A database of the GPS coordinates of every stop sign in America will fit in $1 worth of flash. So even if the AI fails to recognize a stop sign, the software will know that a stop sign is supposed to be there and stop anyway.
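A toy sketch of that idea (coordinates and radius are made up; a real system would use a proper map database): even if the vision system misses the sign, the planner can check stored stop-sign locations near the car's GPS position.

import math

STOP_SIGNS = [(37.7749, -122.4194), (37.7801, -122.4150)]  # hypothetical lat/lon pairs

def near_stop_sign(lat, lon, radius_m=30.0):
    for s_lat, s_lon in STOP_SIGNS:
        # rough equirectangular distance; accurate enough at intersection scale
        dx = math.radians(s_lon - lon) * math.cos(math.radians(lat)) * 6371000
        dy = math.radians(s_lat - lat) * 6371000
        if math.hypot(dx, dy) <= radius_m:
            return True
    return False

if near_stop_sign(37.77495, -122.41945):
    print("Stop sign expected here: stop even if the camera doesn't see one")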
Re: (Score:2)
"denoising", where random noise is injected into the training data.
But doesn't the "de-" prefix mean "remove from"?
Re: (Score:2)
But doesn't the "de-" prefix mean "remove from"?
Yes. Noise is inserted into the training data. Then the AI takes the data and extracts the relevant features by removing the noise. This is done by having one or more intermediate layers that are much smaller than the input layer. This forces the network to learn only the important features.
Re: (Score:2)
To add to the above, search for "denoising autoencoder" as the classic way to do this. You can then take the learned intermediate representation and use it in a classifier.
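For anyone searching: here is a minimal denoising-autoencoder sketch in PyTorch (my own illustration; the dimensions and noise level are arbitrary). The input is corrupted with noise, the target is the clean input, and the small bottleneck layer forces the network to keep only the important features:

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim=784, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())   # bottleneck
        self.decoder = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(32, 784)                    # stand-in for a batch of images
noisy = clean + 0.3 * torch.randn_like(clean)  # corrupted input
loss = nn.functional.mse_loss(model(noisy), clean)  # reconstruct the *clean* data
loss.backward()
opt.step()

The learned encoder output is the intermediate representation you can then reuse in a classifier.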
Re: (Score:2)
First the AI needs to recognize it made a mistake. Humans can't recognize they made a mistake unless another human tells them or there is a failure they are already trained to recognize. How does an AI-driven car recognize it just blew through a stop sign unless a human on board tells it, or there is some other indication, like crashing into another car?
I think increased processing power will help with bringing in related processing. Not just looking for major things like a recognized stop sign or traffic light,
Re: (Score:2)
Re: (Score:2)
https://seriousaccidents.com/l... [seriousaccidents.com]
Re: (Score:3, Insightful)
If tiny changes cause these "weak AI" algorithms to fail, then they've been trained badly, or else aren't sophisticated enough algorithms at their core. That, or they don't have enough context. For instance, a stop sign should be recognizable almost purely based on the fact that it's a uniquely shaped sign (octagonal, in the US at least), along with its proximity and relative position to an intersection. An AI looking at a photo has none of this contextual information, and so has a severe disadvantage.
Mo
Re: (Score:2)
At first, the data was sparse. Some roads did not have speed limit data, some were incorrect, normally in the placement of a change in speed limit. However it quite quickly became usable, just from crowdsourced data.
Google maps includes turn lane
Re: (Score:2)
No ... as with all market-governed items, HW/SW sensor tech will RACE-TO-THE-BOTTOM vis-à-vis robust operation & cost. Watch the mis-drive lawsuits EXPLODE!
Yes, it's too bad the government insists on being COMPLETELY HANDS-OFF when it comes to the manufacturing and licensing of automobiles, and of course, is paying ABSOLUTELY NO ATTENTION to self-driving cars. /sarcasm
Re: (Score:2)
Yes, it's too bad the government insists on being COMPLETELY HANDS-OFF when it comes to the manufacturing and licensing of automobiles, and of course, is paying ABSOLUTELY NO ATTENTION to self-driving cars. /sarcasm
Man, you got me there! I saw the "end sarcasm" tag and couldn't find the "start sarcasm" tag, so I thought everything from the top of the page was sarcasm...
Re: (Score:2)
So... one might call it an unmatched level of sarcasm?
Re: (Score:2)
First the AI needs to recognize it made a mistake. Humans can't recognize they made a mistake unless another human tells them or there is a failure they are already trained to recognize. How does an AI-driven car recognize it just blew through a stop sign unless a human on board tells it, or there is some other indication, like crashing into another car?
Well, at least here in Norway it'd probably run a constant comparison to NVDB - the national road registry. It's among other things the working tool for route planners but it actually contains every traffic light, sign, restriction in height, weight, direction of traffic, curvature, altitude, number of lanes and other road-related elements. At least for self driving classes 3/4 I expect it'll just bail on you and say there's an inconsistency between the stored info and the observed info, you work it out. If
Re: (Score:2)
This whole "it will work until it doesn't" thing is a major problem.
I'd be worried if the engineers were saying it, but since it is just the internet pundits I'm not really worried.
Get over it: if it doesn't "work" better than the human idiot, it isn't going to be sold. It will still be only under development.
It is fairly easy for most of the sensor systems in use with cars to detect a physical traffic sign. Then it knows there is a sign there, it knows the shape of the sign, and actually now it already knows if it is a stop sign even if the workers forgot to paint it at al
Re: (Score:2)
Re:Mistakes (Score:4, Informative)
"Weak" AI (and that is what we are talking about here) cannot "learn from mistakes". That skill is reserved for actual intelligence and "strong" AI. Strong AI has the little problem that it does not exist as it is currently completely unknown how it could be created, despite about half a century of intense research.
Re: (Score:2)
Re: (Score:2)
despite about half a century of intense research.
For the first 40 years, we didn't have fast enough hardware. So, really, we've just started, and progress is pretty quick these days.
Re: (Score:2)
Progress in strong AI has been exactly zero, whether 50 years ago or today.
Re: (Score:2)
For the first 40 years, we didn't have fast enough hardware.
We also didn't have enough training data. Today, if you want to develop a NN to recognize faces, you can find petabytes of examples online. A decade ago, there was far less. 20 years ago, there was almost nothing.
Re: (Score:2)
Your definition of "Weak" AI is not standard and is not how machine learning works.
Re: (Score:2)
Actually it is, and that is exactly how it works. Sure, it can "learn", but it cannot recognize mistakes. So it cannot "learn from mistakes". "Learning" from mistakes requires supervised learning in statistical classifiers, and there the identification of a "mistake" comes from outside.
Re: (Score:2)
Re: (Score:2)
Speed Bump (Score:2)
One computer academic says that unlike a spam-blocker, "if you're relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher," adding ominously that "The only way to completely avoid this is to have a perfect model that is right all the time."
Fine, but you only need a great model that's right more often than humans.
Re:Speed Bump (Score:5, Funny)
Fine, but you only need a great model that's right more often than humans.
I don't know that I've ever heard of a human driver who ran a stop sign thinking it was a banana.
Re: (Score:2)
I have no idea what type of fruit some drivers think the signs are, but I see human drivers running stop signs every day.
I waited for a yellow light to change (Score:2)
I've waited for a stop sign to change too.
Once, a cop was behind me and the light ahead was yellow. Not wanting to take a chance, I stopped at the light. I waited at the light and after I went through I got pulled over. I was confused. Why did he pull me over? The cop was very confused - why had I gone right through a red light after apparently noticing him behind me, he asked. What? I didn't do that, I said. I stopped and waited for the light to change. Aha! I waited for the light to *change*. It changed
Re:Speed Bump (Score:4, Insightful)
That is nonsense. AIs have never surpassed human performance (of course, you always need to compare to a human expert) and there is no rational reason to expect that they ever will. Incidentally, said "great" model is currently completely out of reach, even for relatively simple things like driving a car (which almost all humans can learn to do, i.e. it does not require much). The best we will get is a model that solves a lot of standard situations from a catalog and appeals to human help in the rest. That is pretty useful and will make things like self-driving cars a reality, but some things that smart human beings can do will likely remain out of reach for a long time and quite possibly forever.
Re: (Score:2)
AIs have never surpassed human performance (of course, you always need to compare to a human expert)
You must not have heard about AlphaGo.
Re:Speed Bump (Score:4, Informative)
I have heard about it, but unlike you I actually understand what it means. It only surpasses humans in its "Big Data" aspects, not in the actual AI parts. These are so bad that the expert "beaten" thought he would have no trouble finding a strategy to beat it, and that after he had seen it play only a few times. AlphaGo had the full history of the expert's playing style; the expert had nothing the other way round before.
In short, this was a stunt. It does not show what most people think it shows. No AI expert got really excited about this either.
Re: (Score:2)
Incidentally, said "great" model is currently completely out of reach, even for relatively simple things like driving a car (which almost all humans can learn to do, i.e. it does not require much).
We have taught the car how to drive. On the race track self-driving cars are going toe-to-toe with human experts, there's no shortage of driving skills. On a desert run I'd go with the computer just for consistency and reaction time. The challenge is that it's not a closed environment and we don't really know what other general improvisation or problem avoidance / solving skills might come in handy; what the computer lacks isn't on any driving test. At least I don't remember anything said about what I should
Re: (Score:2)
I suggest googling what strong AI systems are in production right now, and what they are doing. ...
E.g. IBM's Watson
Re: (Score:2)
I suggest you find out what you are talking about. Watson is weak AI. It has no intelligence whatsoever and to expert audiences IBM does not claim differently. And actually, there are exactly zero "strong AI systems" in existence at this time and zero are expected to be created by actual experts in the next few decades.
AI from A1 (Score:2)
Is Daniel Lowd that naive? (Score:2, Insightful)
Does he really think there won't be 100,000 First World jackasses defacing stop signs for the lulz and religious terrorists hoping that defaced stop signs will cause school buses to crash into synagogues and girls' schools for every 1 political dissident fighting the good fight against repressive regimes?
Remember, this is "weak" AI (Score:5, Insightful)
Weak AI is characterized by not being intelligent. It is merely statistical classification, algorithmic planning and things like that. It has the advantage that (unlike "strong" AI) it is actually available. But it has the disadvantage that it has zero understanding of what it is doing. As strong AI is not even on the distant horizon, in fact it is unclear whether it is possible to create it at all (despite what a lot of morons that have never understood current research in the field or have not even looked at it like to claim), weak AI is all we will have for the foreseeable future. This means that we have to fake a lot of things that even the tiniest bit of actual intelligence could easily do by itself.
Of course, weak AI is still massively useful, but confusing it with actual intelligence is dangerous. It is, however, nothing any actual expert will ever do. They know. It is just the stupid public that does not get it at all. As usual.
Re: (Score:2)
A lot of human image processing is also "weak", just a bit more advanced. When you're driving, and you see a stop sign, you don't really think about it, or "understand" in a deeper sense. You just automatically stop.
Re: (Score:2)
so if you see a stop sign next to a green traffic light, you automatically stop?
Re: (Score:2)
No, the green light automatically overrules the stop sign. It's just a more complicated pattern.
Of course, as the situation becomes more complicated, for instance, a fallen tree is blocking the road, then you become aware of the situation, and start thinking about what happened, and making a plan on how to proceed.
Re: (Score:2)
Well, smart gubbermints might put signage up in case the traffic lights fail.
Where I live, it's very common to see stop signs on the same pole that holds the traffic lights. It is understood that the signage only applies in case the traffic lights are off/broken/obscured. For that matter, if there is no signage, we fall through to 'left yields to the right'.
For that matter, if there's a traffic controlling cop on the intersection, that takes precedence over all signage and light signals.
Glad I could provid
Re: (Score:2)
That's funny. Where I am, everybody knows that if the traffic lights are all off, it becomes an "all-way stop" and the traffic alternates: one row from one side, one row from the next side. It works really well if the people know what to do.
If the people know to use that stop sign when the light is off... they already know they have to do something different when the light fails, and they already know what it is. The sign is redundant.
Re: (Score:2)
That's funny. Where I am, everybody knows that if the traffic lights are all off, it becomes an "all-way stop" and the traffic alternates: one row from one side, one row from the next side.
That sounds awfully vague. How long is a row?
It works really well if the people know what to do.
Do they, in practice? Didn't think so.
If the people know to use that stop sign when the light is off... they already know they have to do something different when the light fails, and they already know what it is. The sign is redundant.
I can't follow your logic on this one. Where's the redundancy? A stop sign next to traffic lights means something different than a yield sign next to traffic lights, which means something different than a right-of-way sign next to traffic lights, which in turn means something different than no sign next to traffic lights.
Re: (Score:2)
That sounds awfully vague. How long is a row?
One car.
You're probably oriented sideways. One car per lane. How wide it is depends on the street, of course. Even if it is 4 lane street, traffic can continue fairly well that way, one row one way, one row the other way. A few idiots get out of sync, but the swarm quickly corrects. It only works if the people are prepared to conspire to maintain correct traffic flow; many places in the world just get traffic jams when the signals go out.
Re: (Score:2)
Indeed. Humans are running quite a few things using mostly automation. Actually thinking about things takes longer, and usually the automation has already started to take action when you, for example, consciously realize a situation is not what your reflexes think it is and then need to stop it. The thing that makes human automation better is not that it is fundamentally better, but that it is using supervised learning with an actual intelligence pre-classifying things and making sure it is not off. Most
Re: (Score:2)
Most people will surely not stop after a few trips and will instead learn that that one specific instruction is false. That part we do not teach our AIs which makes them weak. The AI will fall for the prank every time
Correct, but that's because we haven't put a lot of effort into getting the AI to learn from its mistakes. That would probably require a different design of the network, with feedback, instead of simple layering from image pixels to classification output.
Re: (Score:2)
At the risk of repeating myself: Weak AI cannot learn from mistakes. Not possible. The best it can do is request a classification of a sample from an entity with actual intelligence (only ones available at this time: humans), add that to the training set and run the training again.
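A rough sketch of that loop (my own illustration using scikit-learn and dummy data; the 0.6 confidence threshold and the ask_human stand-in are made up): when the classifier is unsure, a human supplies the label, the sample is added to the training set, and the model is retrained.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
y_train = (X_train[:, 0] > 0).astype(int)
X_stream = rng.normal(size=(20, 5))

def ask_human(x):
    return int(x[0] > 0)  # stand-in for a real human labeller

model = LogisticRegression().fit(X_train, y_train)
for x in X_stream:
    if model.predict_proba(x.reshape(1, -1)).max() < 0.6:  # model is unsure
        X_train = np.vstack([X_train, x])
        y_train = np.append(y_train, ask_human(x))

model = LogisticRegression().fit(X_train, y_train)  # run the training again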
Re: (Score:2)
The problem is that an AI has no concept of a human-made mistake or a "prank". A human actually does a complex risk-analysis and risk-acceptance process here and there is no way that weak AI will ever be able to do that. Best we can hope for is that things like this get added to the map.
Re: (Score:3)
Dunno about you, but this is not how my mind works. I honor or ignore stop signs depending on the present conditions (e.g. empty roads at night, etc). How's that weak?
The recognition of the stop sign is weak. You don't go: "hey, that's a red octagonal sign with the word "STOP" in it, I think that's a stop sign", you just automatically recognize it. In your case, the recognition is then tied to a more complex decision mechanism.
Re: (Score:3)
The recognition of the stop sign is weak. You don't go: "hey, that's a red octagonal sign with the word
I don't consciously do that, no. But somewhere in my brain, various structures which have formed in my brain for recognizing stop signs are going into action. It's not any more or less automatic than what the computer is doing. The difference isn't whether something is happening automatically (by definition, what the computer is doing is automation) but that in my head, it's both analog and parallel. Lots of different parts of my brain are trying to recognize that stop sign at the same time, and whichever o
Re: (Score:2)
The difference is that an AI will have been trained to look for certain important things (like street signs) and ignore everything else - it just does not have the processing capacity to take in the whole scene. A human also looks for the important things and will make similar mistakes, however s/he will then ponder what this strange triangular, red & white object is ... and quickly realise that it is a street sign to which an advert for a disco has been attached ... then act on the street sign. However
Re: (Score:2)
Sure, for poor people using the discount model car with pure optical navigation.
But most people will have the lidar stuff, and the car already knows the shape of the sign, and won't even see the sticker as a triangle, just a color splotch on a stop sign. The whole stop sign will probably be masked to the other processing layers since it has already been identified. Really it should be using the placement and orientation of the sign to determine who has to stop, no need to read the color at all.
Re: (Score:2)
Actually, terminology has shifted. These used to be called "cognitive systems", now they are called "weak AI". Weak AI is not intelligent, but it aims to duplicate things that can be done using intelligence by use of automation. As such, it is a proper area of AI research. I agree that the terminology is not that well chosen and I blame the incompetence of the press that still claims computers are "intelligent" all the time.
Current AI isn't... (Score:5, Interesting)
These are trainable appliances, not intelligences. They don't have the adaptability and recovery from mistakes of humans, nor (in the case of statistical, sub-symbolic, etc. approaches) any explanatory power. To some extent, that's why I liked the ancient expert systems with a why? function, but they were also very brittle. So I think the current hype curve has inflected and this is a good thing, since, apart from this, there are some quite weighty ethical problems as well.
This is not the view of a neo-Luddite, but there's stuff to think about here.
Re: (Score:2)
Re: (Score:2)
The camera moves (Score:2)
"The only way to completely avoid this is to have a perfect model that is right all the time."
Far from true. Many pathological interpretations will resolve themselves as the camera moves.
For instance, a pedestrian could blend into the pole behind. Half a second later, the perspective has changed and the pole is behind something else.
So the "tiny change" must hold true as the camera moves, or it won't cause failure.
"AI" (Score:3, Insightful)
The problem with this kind of "AI" (it's not, but let's not go there) is that there's no understanding of what it's actually doing. We're creating tools, "training" them, and then we have no idea what it's basing decisions on past that point.
As such, outside of toys, they aren't that useful and SHOULDN'T BE used for things like self-driving cars. You can never imagine them passing, say, aviation verification because you have literally no idea what it will do.
And it's because of that very problem that they are also unfixable, and unguaranteeable. You can't just say "Oh, we'll train it some more" because the logical consequence of that is that you have to train it on virtually everything you ever want it to do, which kind of spoils the point. And even then, there's no way you can guarantee that it will work next time.
Interesting for beating humans at board games, recognising what you're saying for ordering online, or spotting porn images in image search. Maybe. Some day. But in terms of reliance, we can't rely on them which kills them for all the useful purposes.
It's actually one of the first steps of humans creating systems to do jobs, that the humans do not and cannot understand. Not just one individual could not understand, but nobody, not even the creator can understand or predict what it will do. That's dangerous ground, even if we aren't talking about AI-taking-over-the-world scenarios.
Re:"AI" (Score:5, Insightful)
Re:"AI" (Score:5, Informative)
This assumes the AI even needs to see the stop sign. A driverless car has many advantages over a human. It can have a database of the locations of all stop signs. It can have telemetry information from other nearby cars. It can have 360 degree sensors that include cameras and lidar. It doesn't get tired or drunk. It can receive updates based on "mistakes" made by other driverless cars.
Even if there are problems with some of the information, the system can still perform an action based on the total information that is safe for the people in the situation. For example, even if it doesn't see a new stop sign, it might still have enough information to see that there is another car entering the intersection.
Of course, it will make mistakes, but it just has to make significantly fewer mistakes than humans. Honestly, given the pace of progress, that doesn't seem too hard.
Re: (Score:2)
Re: (Score:3)
Sigh.
If you have to record every stop sign for the cars, you don't have AI. You have an expert system, at best, running on a database.
Also, you could just make the roads like railways. It would be infinitely safer to have driverless cars on driverless-only, fixed route highways. No pedestrians, no unexpected actions, no junctions to deal with, no need to "interpret" anything. Nobody has serious objection to automation. What we object to is pretending that these things are thinking or learning when they
Re: (Score:2)
Re: (Score:2)
It doesn't get tired or drunk.
That's what the experts are working on next.
Re: (Score:2)
For a self driving car a stop sign is irrelevant.
It will always behave at a road crossing as if in doubt, and if necessary will give the other car the right of way. Even if the other car had no such right.
Re: (Score:2)
Re: (Score:2)
Self driving cars are not trained.
They are hard coded. (facepalm)
SIGH (Score:3)
Re:SIGH (Score:4, Insightful)
I've been saying it before and I'll say it again.
And you'll presumably keep saying it until it suddenly isn't true, when you'll have to stop.
It doesn't matter much if auto-cars do get in accidents as long as they get in fewer accidents than humans do, as a result of the scenarios you've outlined and more. One day they will be smart enough to consider that a child might appear when a ball does, but for now they can just stop or slow down when they see the ball (which is an obstruction in the road).
They used to think computers would never beat humans at chess. Then it was Jeopardy. Then it was Go. One of the few certainties in life is that the "it can't be done!" crowd are invariably proven wrong, sooner or later.
Re: (Score:2)
Re: (Score:2)
Well,
I really wonder.
You do know that we have had self-driving cars for 10-20 years?
We only wait for regulations and legislations to change.
All your claims in this post, and the previous ones regarding AI, self driving cars, and computers ... are: wrong.
And: self driving cars are not driven by an AI.
Re: (Score:2)
Re: (Score:2)
Understanding the real world is not something computers are good at
Go back 20 years, find someone making this same comment, then bring them forward 20 years and show them what's been achieved so far and see if they change their mind at all. At the very least, they'll admit there's been a lot of progress.
That's the thing about progress. It keeps getting better, by definition.
Re: SIGH (Score:2)
Re: (Score:2)
Re: (Score:2)
Humans get blinded by the sun because our eyes can't function while staring at the sun. AI may be 'excused' by getting blinded by the sun once [...]
AI can be outfitted with sensors that we don't have. So while the visible spectrum may become confused, radar and infrared may not be. So an octagonal object positioned near an intersection might be enough to tell the AI that it should stop, even if there are a few stickers on the sign or the sun is shining into the sensor.
Also, it's easier to make sure a light sensor feeding an AI has appropriate filters. It's tougher to remind drivers to wear sunglasses...
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
All the examples you give here are just idiotic.
If a driverless car does not see a pedestrian running into the road because of the sun, neither would a driver!
And: driverless cars have more than just cameras; it is impossible for them to be blinded by the sun.
Your ball and child example is just ridiculous. 'Anticipate a child'? How retarded are you!? The car is not anticipating anything. It is hardwired to brake when an object comes into its path. The same way I'm hardwired to brake.
Do you really think the program
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
These automated cars will be forever getting into accidents because they didn't see a child because of the sun
Funny how in many tests AIs have been better at this than humans. Your "sun" example is especially stupid.
Re: (Score:2)
This has been known for over 30 years (Score:5, Insightful)
You also see this in biological neural nets. As people get older, they tend to get set in their ways, less likely to change their opinions even in the face of contradictory evidence. (As opposed to younger people who are too eager to form an opinion despite weak or the lack of evidence.) I suspect this is also where the aphorism "you can't teach an old dog new tricks" comes from. IMHO this is why trying to lengthen the human lifespan in the pursuit of immortality is a bad idea. Death is nature's way of clearing out neural nets which have become too rigid to respond properly to common variability in situations they encounter. My grandmother hated the Japanese to her dying day (they raped and killed her sister and niece during WWII). If people were immortal, we'd be completely dysfunctional as a society because everyone would be holding grudges and experience-based prejudice for hundreds of years, to the detriment of immediate benefit.
Re: (Score:2)
The first solution was to stop the learning process and freeze the neural net before it reached this stage, then simply use it in production with the learning capability (ability to modify itself) disabled.
Which works until somebody re-enables it... [youtube.com]
Re: (Score:2)
Your example makes no sense.
Neural nets are not used to stop a train at a platform in the correct position.
You use markers, e.g. magnets, and sensors to recognize such markers.
Holding grudges is a character flaw. Plenty of societies have ways to teach: don't hold grudges.
Wrong title, not a real problem (Score:3)
The title should have read "Carefully crafted decoy using massive computation resources can fool not up-to-date AI".
Here's how it works:
1. Get access to the AI model you want to fool (and only this one). Not necessarily the source code, but at least you need to be able to use the model as long as you want.
2. Solve a rather complex optimization problem to generate the decoy
3. Use your decoy in very controlled conditions (as stated in the linked paper)
While the method for fooling the model is fine (and similar work has been buzzing lately), the conclusions are much weaker than you would expect. First, if you don't have the actual model, you cannot do this: you need to be able to run the very model you are trying to fool, which takes out all remote systems with rate-limited access. Second, you rely on tiny variations, which can be more sensitive than real-world variation. Take the sticker on a road sign: if you took the picture on a sunny day, the decoy will very likely not work on a rainy day or at night. Third, if the model evolves, you have to update the decoy. Here's the thing about statistical learning systems: they learn. It's very likely that the model got updated by the time you finished the computation and printed the sticker. Many people believe that future industrial systems will perform online learning, which renders these static methods useless.
So yeah, an actual research model can be fooled in very specific cases. However, it's not as bad as some articles try to make it sound. I'm not saying it won't happen, I'm saying it's not as bad as you think it is. Hey, if you want to impersonate somebody, put on some makeup, and if you want people to crash their cars, cover the road signs with paint. There you have it: humans are easily fooled by some paint.
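For reference, step 2 does not need to be exotic. Here is a sketch using the fast gradient sign method as a simpler stand-in for the optimization in the linked paper (the tiny model, image and epsilon are placeholders; it assumes white-box access to the model, which is exactly the first point above):

import torch
import torch.nn.functional as F

def fgsm_decoy(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), true_label).backward()
    # nudge every pixel slightly in the direction that increases the loss
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
img = torch.rand(1, 3, 32, 32)
adv = fgsm_decoy(model, img, torch.tensor([4]))
print((adv - img).abs().max())  # perturbation stays within epsilon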
Not how it works (Score:3)
Almost every comment posted so far about this story is totally wrong. Adversarial examples are a hot topic in deep learning right now. We've learned a lot about how they work and how to protect against them. They have nothing to do with "weak" versus "strong" AI. Humans are also susceptible to optical illusions, just different ones from neural nets. They don't mean that computers can never be trusted. Computers can be made much more reliable than humans. And they also aren't random failures, or something that's hard to create. In fact, they're trivial to create in a simple, systematic way.
They're actually a consequence of excessive linearity in our models. If you don't know what that means, don't worry about it. It's just a quirk of how models have traditionally been trained. And if you make a small change to encourage them to work in a nonlinear regime, they become much more resistant to adversarial examples. By the time fully autonomous cars hit the roads in a few years, this should be a totally solved problem.
If you build deep learning systems, you need to care about this. If you don't, you can ignore it. It's not a problem you need to care about, any more than you care what activation function or regularization method your car is using.
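For the curious: the comment above doesn't name the specific fix, but one common defence along these lines is adversarial training, i.e. crafting perturbed inputs against the current model and including them in each training batch. A minimal sketch (the model, data and epsilon are placeholders, not anyone's production setup):

import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(images, labels, epsilon=0.03):
    # craft FGSM perturbations against the current model
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # then train on the clean and perturbed batches together
    opt.zero_grad()
    loss = F.cross_entropy(model(images.detach()), labels) + F.cross_entropy(model(adv), labels)
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))))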
Re: (Score:2, Funny)
else killAllHumans();
Re: Up, up, and away! (Score:2)
Re: (Score:2)
The thing is that many people will actually become obsolete, if not quite so soon. The problem is that while technically they are intelligent people, they do not really use their intelligence, and that makes their jobs accessible to automation. Of course, those that actually do use their intelligence will not get replaced successfully anytime soon and quite possibly not ever. The thing the public does not understand is that at this time we have absolutely no idea how intelligence is created. There is not ev
Re: (Score:2)
And we have even less of an idea what consciousness is. According to the current scientific state-of-the-art, there is no physical mechanism for consciousness, yet it clearly exists. Of course, said AI fanatics will say nonsense like ...
If you have no idea what it is, how would you know that it is nonsense ?
there is no physical mechanism for consciousness
If there's no physical mechanism, how/why did it evolve ?
Re: (Score:2)
And we have even less of an idea what consciousness is. According to the current scientific state-of-the-art, there is no physical mechanism for consciousness, yet it clearly exists. Of course, said AI fanatics will say nonsense like ...
If you have no idea what it is, how would you know that it is nonsense ?
Simple logic. If consciousness does not exist but is just an illusion, yet illusions require consciousness, then the claim leads to a contradiction ("reductio ad absurdum"), and hence the claim is false.
there is no physical mechanism for consciousness
If there's no physical mechanism, how/why did it evolve ?
Do you know that it evolved? Claiming that everything must have evolved is nonsense. Science does not make such a claim. It claims that our bodies have evolved, and that is a very well founded claim given genetics. It is not a 100% thing though, more like 95%. (Not predicting a "God" or such nonsense he
Re: (Score:2)
I don't agree. There are very few things we call intelligent. I'm sure they have lots of incidental correlations between them.
This is a good point. We have no scientif
Re: (Score:2)
Re: (Score:2)
They obviously fooled the slashdot AI into thinking it was news.
Re: (Score:2)
You say that as if you think that actual intelligent thought isn't just a complex set of rules under the hood.