AI Transportation

Tiny Changes Can Cause An AI To Fail (bbc.com) 237

Luthair writes: According to the BBC, there is growing concern in the machine learning community that, as their algorithms are deployed in the real world, they can be easily confused by knowledgeable attackers. These algorithms don't process information the way humans do: a small sticker placed strategically on a sign could render it invisible to a self-driving car.
The article points out that a sticker on a stop sign "is enough for the car to 'see' the stop sign as something completely different from a stop sign," while researchers have created an online collection of images which currently fool AI systems. "In one project, published in October, researchers at Carnegie Mellon University built a pair of glasses that can subtly mislead a facial recognition system -- making the computer confuse actress Reese Witherspoon for Russell Crowe."

One computer academic says that unlike a spam-blocker, "if you're relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher," adding ominously that "The only way to completely avoid this is to have a perfect model that is right all the time." Although on the plus side, "If you're some political dissident inside a repressive regime and you want to be able to conduct activities without being targeted, being able to avoid automated surveillance techniques based on machine learning would be a positive use."
This discussion has been archived. No new comments can be posted.

  • Humans learn from mistakes, and so will AIs. Intelligence really is determined by how fast something or someone learns from its mistakes.
    • by Nutria ( 679911 )

      But they can only learn if humans teach them a broad range of adulterated traffic signs.

      • But they can only learn if humans teach them a broad range of adulterated traffic signs.

        Yup. The problem is called over-fitting [wikipedia.org]: the AI does well on the training data but not on real-world data. One way to mitigate that is "denoising", where random noise is injected into the training data (see the sketch below). There are many other mitigation techniques, such as dropout [wikipedia.org] and ensemble bagging [wikipedia.org].

        If your AI is confused by one sticker on a stop sign, then you are not a competent developer. If there are a lot of stickers, then that may confuse a human as well ... and, as has been pointed out many times before, s
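        A minimal sketch of the noise-injection ("denoising") and dropout ideas mentioned above, assuming a generic PyTorch image classifier; the layer sizes, noise level, and data are placeholders rather than anything from the article:

        import torch
        import torch.nn as nn

        # Illustrative classifier with dropout as a regularizer (sizes are arbitrary).
        model = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 3, 256),
            nn.ReLU(),
            nn.Dropout(p=0.5),               # randomly zero activations during training
            nn.Linear(256, 10),
        )

        def add_noise(images, sigma=0.1):
            """Inject Gaussian noise into the training images ("denoising"-style augmentation)."""
            return (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)

        # One illustrative training step on a fake batch.
        images = torch.rand(8, 3, 32, 32)    # stand-in for real training images
        labels = torch.randint(0, 10, (8,))
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        optimizer.zero_grad()
        loss = loss_fn(model(add_noise(images)), labels)   # train on noisy inputs
        loss.backward()
        optimizer.step()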

        • ... also, even if all of that fails, because the stop sign has been defaced or even REMOVED COMPLETELY, the SDC will STOP ANYWAY. A database of the GPS coordinates of every stop sign in America will fit in $1 worth of flash. So even if the AI fails to recognize a stop sign, the software will know that a stop sign is supposed to be there and stop anyway.
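          A back-of-envelope check of that storage claim; the count of US stop signs below is an assumed round figure, not something from the article:

          # Rough storage estimate for a nationwide stop-sign database.
          # ASSUMPTION: on the order of a few million stop signs in the US (illustrative figure).
          num_stop_signs = 5_000_000
          bytes_per_sign = 2 * 8                      # latitude + longitude as 64-bit floats

          total_mb = num_stop_signs * bytes_per_sign / 1e6
          print(f"~{total_mb:.0f} MB")                # tens of megabytes -- trivially cheap flash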

        • by Nutria ( 679911 )

          "denoising", where random noise is injected into the training data.

          But doesn't the "de-" prefix mean "remove from"?

          • But doesn't the "de-" prefix mean "remove from"?

            Yes. Noise is inserted into the training data. Then the AI takes the data and extracts the relevant features by removing the noise. This is done by having one or more intermediate layers that are much smaller than the input layer. This forces the network to learn only the important features.

            • by SnowZero ( 92219 )

              To add to the above, search for "denoising autoencoder" as the classic way to do this. You can then take the learned intermediate representation and use it in a classifier.
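              For reference, a minimal denoising autoencoder along those lines: a bottleneck layer much smaller than the input forces the network to keep only the important features while discarding the injected noise. PyTorch is assumed and the dimensions are illustrative:

              import torch
              import torch.nn as nn

              class DenoisingAutoencoder(nn.Module):
                  def __init__(self, n_in=784, n_hidden=64):
                      super().__init__()
                      # The bottleneck (n_hidden << n_in) forces a compact representation.
                      self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
                      self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

                  def forward(self, x):
                      return self.decoder(self.encoder(x))

              model = DenoisingAutoencoder()
              optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
              loss_fn = nn.MSELoss()

              clean = torch.rand(16, 784)                     # stand-in for real inputs
              noisy = clean + 0.2 * torch.randn_like(clean)   # corrupt the input ...

              optimizer.zero_grad()
              loss = loss_fn(model(noisy), clean)             # ... but reconstruct the clean version
              loss.backward()
              optimizer.step()

              features = model.encoder(clean)   # learned representation, reusable in a classifier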

    • First the AI needs to recognize it made a mistake. Humans can't recognize they made a mistake unless another human tells them or there is a failure they are already trained to recognize. How does an AI-driven car recognize it just blew through a stop sign unless a human on board tells it, or there is some other indication, like crashing into another car?

      I think increased processing power will help with bringing in related processing. Not just looking for major things like a recognized stop sign or traffic light,

      • That's not true. A person can realize they have turned down a one-way road the wrong way and turn around without causing an accident. An AI just shuts down and parks.
      • Re: (Score:3, Insightful)

        by Dutch Gun ( 899105 )

        If tiny changes cause these "weak AI" algorithms to fail, then they've been trained badly, or else aren't sophisticated enough algorithms at their core. That, or they don't have enough context. For instance, a stop sign should be recognizable almost purely based on the fact that it's a uniquely shaped sign (octagonal, in the US at least), along with its proximity and relative position to an intersection. An AI looking at a photo has none of this contextual information, and so has a severe disadvantage.

        Mo

        • by wbr1 ( 2538558 )
          This. To expand on some personal experience: I use Waze. Sometime last year they added a speed-limit feature. You can show your speed and the posted speed limit on the road. You can even have it alert you when you are a set amount over the posted speed limit.

          At first, the data was sparse. Some roads did not have speed-limit data, and some were incorrect, normally in the placement of a change in speed limit. However, it quite quickly became usable, just from crowdsourced data.

          Google maps includes turn lane

      • by Kjella ( 173770 )

        First the AI needs to recognize it made a mistake. Humans can't recognize they made a mistake unless another human tells them or there is a failure they are already trained to recognize. How does an AI-driven car recognize it just blew through a stop sign unless a human on board tells it, or there is some other indication, like crashing into another car?

        Well, at least here in Norway it'd probably run a constant comparison against NVDB, the national road registry. It is, among other things, the working tool for route planners, but it actually contains every traffic light, sign, restriction in height, weight, direction of traffic, curvature, altitude, number of lanes and other road-related elements. At least for self-driving classes 3/4 I expect it'll just bail on you and say there's an inconsistency between the stored info and the observed info: you work it out. If
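        A sketch of that consistency check, assuming a hypothetical local extract of a registry like NVDB keyed by position; the data structures, coordinates and threshold here are made up for illustration:

        import math

        # Hypothetical local extract of a road registry: sign type keyed by (lat, lon).
        registry = {(59.9139, 10.7522): "stop_sign"}

        def distance_m(a, b):
            """Very rough flat-earth distance in metres, fine at intersection scale."""
            dlat = (a[0] - b[0]) * 111_000
            dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
            return math.hypot(dlat, dlon)

        def check_intersection(position, perceived_signs, radius_m=30):
            """Compare what the registry says should be here with what the cameras saw."""
            expected = {sign for loc, sign in registry.items()
                        if distance_m(position, loc) < radius_m}
            if expected - set(perceived_signs):
                # Stored and observed info disagree: hand control back to the driver,
                # the "bail out" behaviour expected of class 3/4 systems above.
                return "handover_to_driver"
            return "proceed"

        print(check_intersection((59.9139, 10.7523), perceived_signs=[]))   # handover_to_driver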

    • And potentially all AIs will learn from a mistake made by just one of them. Similar to how Google replays all the millions of recorded self driving mileage through their self driving software whenever they have a new release, so they can assess how the new version holds up.
    • Re:Mistakes (Score:4, Informative)

      by gweihir ( 88907 ) on Saturday April 15, 2017 @12:21PM (#54240243)

      "Weak" AI (and that is what we are talking about here) cannot "learn from mistakes". That skill is reserved for actual intelligence and "strong" AI. Strong AI has the little problem that it does not exist as it is currently completely unknown how it could be created, despite about half a century of intense research.

      • despite about half a century of intense research.

        For the first 40 years, we didn't have fast enough hardware. So, really, we've just started, and progress is pretty quick these days.

        • by gweihir ( 88907 )

          Progress in strong AI has been exactly zero, whether 50 years ago or today.

        • For the first 40 years, we didn't have fast enough hardware.

          We also didn't have enough training data. Today, if you want to develop an NN to recognize faces, you can find petabytes of examples online. A decade ago, there was far less. 20 years ago, there was almost nothing.

      • "Weak" AI (and that is what we are talking about here) cannot "learn from mistakes".

        Your definition of "Weak" AI is not standard and is not how machine learning works.

        • by gweihir ( 88907 )

          Actually, it is, and that is exactly how it works. Sure, it can "learn", but it cannot recognize mistakes, so it cannot "learn from mistakes". "Learning" from mistakes requires supervised learning in statistical classifiers, and there the identification of a "mistake" comes from outside.

          • There are many areas of machine learning. Try looking up co-training or multi-view learning. In essence, these techniques can label mistakes and improve performance without supervised labels coming from the outside.
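            A rough sketch of the co-training idea (two classifiers trained on different feature "views", each pseudo-labelling the unlabelled examples it is confident about), using scikit-learn on synthetic data; the views, seed-label count and confidence threshold are all arbitrary:

            import numpy as np
            from sklearn.linear_model import LogisticRegression

            rng = np.random.default_rng(0)

            # Synthetic data: a hidden class shifts two feature "views" (cols 0-1 and 2-3).
            true_class = rng.integers(0, 2, size=200)
            X = rng.normal(size=(200, 4)) + true_class[:, None] * 2.0
            view_a, view_b = X[:, :2], X[:, 2:]

            # Only a handful of labels to start with; -1 marks "unlabelled".
            labels = np.full(200, -1)
            for c in (0, 1):
                labels[np.where(true_class == c)[0][:10]] = c

            clf_a, clf_b = LogisticRegression(), LogisticRegression()

            for _ in range(5):                              # a few co-training rounds
                known = labels != -1
                clf_a.fit(view_a[known], labels[known])
                clf_b.fit(view_b[known], labels[known])

                unknown = np.where(labels == -1)[0]
                if len(unknown) == 0:
                    break

                # Each classifier labels the points it is confident about; those
                # pseudo-labels go into the shared pool for the next round.
                for clf, view in ((clf_a, view_a), (clf_b, view_b)):
                    proba = clf.predict_proba(view[unknown])
                    confident = proba.max(axis=1) > 0.95
                    labels[unknown[confident]] = proba.argmax(axis=1)[confident]

            agree = labels[labels != -1] == true_class[labels != -1]
            print(f"pseudo-label accuracy: {agree.mean():.2f}")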
          • I guess the only way for me to interpret what you are saying is that you think strong AI is implied by something being able to learn from its own mistakes. This is an interesting claim. What do you mean by being able to learn from its own mistakes?
  • AIs are vulnerable to these attacks because they try to come to a conclusion using as little information as possible, while humans robustly "overthink" things by inefficiently considering too many otherwise irrelevant factors. These attacks sound like something AIs will quickly adapt to, surpassing human performance once again.

    One computer academic says that unlike a spam-blocker, "if you're relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher," adding ominously that "The only way to completely avoid this is to have a perfect model that is right all the time."

    Fine, but you only need a great model that's right more often than humans.

    • by SlaveToTheGrind ( 546262 ) on Saturday April 15, 2017 @11:53AM (#54240115)

      Fine, but you only need a great model that's right more often than humans.

      I don't know that I've ever heard of a human driver who ran a stop sign thinking it was a banana.

      • I have no idea what type of fruit some drivers think the signs are, but I see human drivers running stop signs every day.

    • Re:Speed Bump (Score:4, Insightful)

      by gweihir ( 88907 ) on Saturday April 15, 2017 @12:19PM (#54240237)

      That is nonsense. AIs have never surpassed human performance (of course, you always need to compare to a human expert) and there is no rational reason to expect that they ever will. Incidentally, said "great" model is currently completely out of reach, even for relatively simple things like driving a car (which almost all humans can learn to do, i.e. it does not require much). The best we will get is a model that solves a lot of standard situations from a catalog and appeals for human help in the rest. That is pretty useful and will make things like self-driving cars a reality, but some things that smart human beings can do will likely remain out of reach for a long time and quite possibly forever.

      • AIs have never surpassed human performance (of course, you always need to compare to a human expert)

        You must not have heard about AlphaGo.

        • Re:Speed Bump (Score:4, Informative)

          by gweihir ( 88907 ) on Saturday April 15, 2017 @04:50PM (#54241245)

          I have heard about it, but unlike you I actually understand what it means. It only surpasses humans in its "Big Data" aspects, not in the actual AI parts. These are so bad that the expert who was "beaten" thought he would have no trouble finding a strategy to beat it, and that after he had seen it play only a few times. AlphaGo had the full history of the expert's playing style; the expert had nothing the other way round beforehand.

          In short, this was a stunt. It does not show what most people think it shows. No AI expert got really excited about this either.

      • by Kjella ( 173770 )

        Incidentally, said "great" model is currently completely out of reach, even for relatively simple things like driving a car (which almost all humans can learn to do, i.e. it does not require much).

        We have taught the car how to drive. On the race track, self-driving cars are going toe-to-toe with human experts; there's no shortage of driving skill. On a desert run I'd go with the computer just for consistency and reaction time. The challenge is that it's not a closed environment, and we don't really know what other general improvisation or problem-avoidance/solving skills might come in handy; what the computer lacks isn't on any driving test. At least I don't remember anything said about what I should

      • I suggest googling what strong AI systems are in production right now, and what they are doing.
        E.g. IBM's Watson ...

        • by gweihir ( 88907 )

          I suggest you find out what you are talking about. Watson is weak AI. It has no intelligence whatsoever, and to expert audiences IBM does not claim otherwise. And actually, there are exactly zero "strong AI systems" in existence at this time, and actual experts expect zero to be created in the next few decades.

  • now i'm hungry
  • Does he really think there won't be 100,000 First World jackasses defacing stop signs for the lulz and religious terrorists hoping that defaced stop signs will cause school buses to crash into synagogues and girls' schools for every 1 political dissident fighting the good fight against repressive regimes?

  • by gweihir ( 88907 ) on Saturday April 15, 2017 @12:00PM (#54240145)

    Weak AI is characterized by not being intelligent. It is merely statistical classification, algorithmic planning and things like that. It has the advantage that (unlike "strong" AI) it is actually available. But it has the disadvantage that it has zero understanding of what it is doing. Strong AI is not even on the distant horizon; in fact, it is unclear whether it is possible to create it at all (despite what a lot of morons who have never understood current research in the field, or have not even looked at it, like to claim), so weak AI is all we will have for the foreseeable future. This means that we have to fake a lot of things that even the tiniest bit of actual intelligence could easily do by itself.

    Of course, weak AI is still massively useful, but confusing it with actual intelligence is dangerous. It is, however, nothing any actual expert will ever do. They know. It is just the stupid public that does not get it at all. As usual.

    • A lot of human image processing is also "weak", just a bit more advanced. When you're driving, and you see a stop sign, you don't really think about it, or "understand" in a deeper sense. You just automatically stop.

      • by fisted ( 2295862 )

        so if you see a stop sign next to a green traffic light, you automatically stop?

        • No, the green light automatically overrules the stop sign. It's just a more complicated pattern.

          Of course, as the situation becomes more complicated, for instance, a fallen tree is blocking the road, then you become aware of the situation, and start thinking about what happened, and making a plan on how to proceed.

      • by gweihir ( 88907 )

        Indeed. Humans are running quite a few things using mostly automation. Actually thinking about things takes longer, and usually the automation has already started to take action when you, for example, consciously realize a situation is not what your reflexes think it is and then need to stop it. The thing that makes human automation better is not that it is fundamentally better, but that it is using supervised learning, with an actual intelligence pre-classifying things and making sure it is not off. Most

    • The difference is that an AI will have been trained to look for certain important things (like street signs) and ignore everything else; it just does not have the processing capacity to take in the whole scene. A human also looks for the important things and will make similar mistakes; however, s/he will then ponder what this strange triangular, red & white object is ... and quickly realise that it is a street sign to which an advert for a disco has been attached ... then act on the street sign. However

      • Sure, for poor people using the discount model car with pure optical navigation.

        But most people will have the lidar stuff, and the car already knows the shape of the sign, and won't even see the sticker as a triangle, just a color splotch on a stop sign. The whole stop sign will probably be masked to the other processing layers since it has already been identified. Really it should be using the placement and orientation of the sign to determine who has to stop; there is no need to read the color at all.
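        A toy illustration of that shape-over-colour point, assuming a flattened 2D slice of lidar returns from the sign face; the point cloud is synthetic and scipy is used purely for the convex hull:

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(0)

        # Synthetic returns from a sign face: the 8 corners of a regular octagon
        # plus scattered hits from the interior (stickers included).
        angles = np.pi / 8 + np.arange(8) * np.pi / 4
        corners = np.column_stack([np.cos(angles), np.sin(angles)])
        interior = rng.uniform(-0.5, 0.5, size=(200, 2))
        points = np.vstack([corners, interior])

        hull = ConvexHull(points)
        n_sides = len(hull.vertices)         # a sticker changes colour, not geometry
        print("octagon" if n_sides == 8 else f"{n_sides}-sided shape")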

  • Current AI isn't... (Score:5, Interesting)

    by hughbar ( 579555 ) on Saturday April 15, 2017 @12:11PM (#54240189) Homepage
    We've been going through this since the 1980s, when we started to make rule-based expert systems and put them into production. We called that AI too. Now we're doing the same with statistical machine 'intelligence' (optimisation, often), various configurations of trainable neural networks and some hybrids.

    These are trainable appliances, not intelligences. They don't have the adaptability and recovery from mistakes of humans, or (in the case of statistical, sub-symbolic approaches etc.) any explanatory power. To some extent, that's why I liked the ancient expert systems with a why? function, but they were also very brittle. So I think the current hype curve has inflected, and this is a good thing, since, apart from this, there are some quite weighty ethical problems as well.

    This is not the view of a neo-Luddite, but there's stuff to think about here.
    • I think the current stuff will ultimately become part of the foundational toolset that higher level functions can begin to make use of. Just like the way our brain performs object recognition in the visual cortex prior to higher level processing about those objects.
      • by hughbar ( 579555 )
        Thanks, so do I. Some of our current troubles are a) expecting too much, too soon, a traditional industry vice; b) not dealing with, or reflecting on, ethical issues; c) lay folk and politicians taking current AI literally as 'intelligence' (we've explained it badly, too). There are probably more, but those are the first to come to (my) mind.
  • "The only way to completely avoid this is to have a perfect model that is right all the time."

    Far from true. Many pathological interpretations will resolve themselves as the camera moves.

    For instance, a pedestrian could blend into the pole behind. Half a second later, the perspective has changed and the pole is behind something else.

    So the "tiny change" must hold true as the camera moves, or it won't cause failure.

  • "AI" (Score:3, Insightful)

    by ledow ( 319597 ) on Saturday April 15, 2017 @12:36PM (#54240297) Homepage

    The problem with this kind of "AI" (it's not, but let's not go there) is that there's no understanding of what it's actually doing. We're creating tools, "training" them, and then we have no idea what they're basing decisions on past that point.

    As such, outside of toys, they aren't that useful and SHOULDN'T BE used for things like self-driving cars. You can never imagine them passing, say, aviation verification, because you have literally no idea what they will do.

    And it's because of that very problem that they are also unfixable, and unguaranteeable. You can't just say "Oh, we'll train it some more" because the logical consequence of that is that you have to train it on virtually everything you ever want it to do, which kind of spoils the point. And even then, there's no way you can guarantee that it will work next time.

    Interesting for beating humans at board games, recognising what you're saying for ordering online, or spotting porn images in image search. Maybe. Some day. But in terms of reliance, we can't rely on them, which kills them for all the useful purposes.

    It's actually one of the first steps of humans creating systems to do jobs that the humans themselves do not and cannot understand. Not just one individual could not understand, but nobody, not even the creator, can understand or predict what it will do. That's dangerous ground, even if we aren't talking about AI-taking-over-the-world scenarios.

    • Re:"AI" (Score:5, Insightful)

      by fluffernutter ( 1411889 ) on Saturday April 15, 2017 @12:51PM (#54240369)
      I think most people putting a lot of money into AI don't really understand the difference between programming a response to a stop sign and understanding what a stop sign is. If a stop sign is bent over from a previous accident and covered in snow, you will still stop if you truly understand what that object is. If you have merely programmed a stop sign, the vehicle is liable to sail on through, because stop signs aren't white objects on a pole close to the ground. How do you program for every physical condition a stop sign may find itself in?
      • Re:"AI" (Score:5, Informative)

        by mesterha ( 110796 ) <chris.mesterharm ... com minus author> on Saturday April 15, 2017 @01:32PM (#54240513) Homepage

        How do you program for every physical condition a stop sign may find itself in?

        This assumes the AI even needs to see the stop sign. A driverless car has many advantages over a human. It can have a database of the locations of all stop signs. It can have telemetry information from other nearby cars. It can have 360-degree sensors that include cameras and lidar. It doesn't get tired or drunk. It can receive updates based on "mistakes" made by other driverless cars.

        Even if there are problems with some of the information, the system can still perform an action, based on the total information, that is safe for the people in the situation. For example, even if it doesn't see a new stop sign, it might still have enough information to see that there is another car entering the intersection.

        Of course, it will make mistakes, but it just has to make significantly fewer mistakes than humans. Honestly, given the pace of progress, that doesn't seem too hard.
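        A minimal sketch of that "total information" point: combine several imperfect sources conservatively, so that any single one indicating danger is enough to stop. The sources below are invented for illustration:

        def should_stop(camera_sees_stop_sign: bool,
                        map_expects_stop_sign: bool,
                        other_car_entering: bool) -> bool:
            """Conservative fusion: the camera may be fooled by a sticker, the map may
            be stale, and telemetry may be missing, but all three rarely fail at once."""
            return camera_sees_stop_sign or map_expects_stop_sign or other_car_entering

        # A sticker fools the camera, but the map still expects a stop sign here.
        print(should_stop(camera_sees_stop_sign=False,
                          map_expects_stop_sign=True,
                          other_car_entering=False))    # True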

        • How does the database get built then? Is every city, county, municipality in the world going to have some way and be legally bound to report immediately any time a stop sign is moved? We might as well be talking about how safe flying cars are. Making an educated guess at what to do at an intersection isn't good enough. When you say AI only needs to be as good as people, you are not correct. They need to be good enough to avoid all accidents a human would have avoided. Otherwise the technology has fail
        • by ledow ( 319597 )

          Sigh.

          If you have to record every stop sign for the cars, you don't have AI. You have an expert system, at best, running on a database.

          Also, you could just make the roads like railways. It would be infinitely safer to have driverless cars on driverless-only, fixed-route highways. No pedestrians, no unexpected actions, no junctions to deal with, no need to "interpret" anything. Nobody has a serious objection to automation. What we object to is pretending that these things are thinking or learning when they

          • Your problem seems to be based on what people call this technology. Most people in machine learning don't think these programs are intelligent in the human sense. Roughly speaking, machine learning is often good at solving problems that are difficult to code but are solvable by humans. As for artificial intelligence, a whale shark is not both a mammal and a fish.
        • It doesn't get tired or drunk.

          That's what the experts are working on next.

      • For a self-driving car a stop sign is irrelevant.
        At a road crossing it will always behave as if in doubt, and if necessary will give the other car the right of way, even if the other car has no such right.

        • Well, they're really going to suck to drive in then. They're going to basically be paralyzed at some intersections. Admittedly, that's better than T-boning someone, but not commercially viable. If they get confused at an intersection with a bent stop sign, how will they ever drive around snow-clearing equipment in the process of clearing the road?
    • Self-driving cars are not trained.
      They are hard-coded. (facepalm)

  • by fluffernutter ( 1411889 ) on Saturday April 15, 2017 @12:39PM (#54240317)
    I've said it before and I'll say it again. These automated cars will be forever getting into accidents because they didn't see a child because of the sun, or because they didn't know a cat would run into the road, or because they saw a ball go into the road but did not anticipate a child running after it. There are too many things to code for.
    • Re:SIGH (Score:4, Insightful)

      by wonkey_monkey ( 2592601 ) on Saturday April 15, 2017 @12:47PM (#54240357) Homepage

      I've said it before and I'll say it again.

      And you'll presumably keep saying it until it suddenly isn't true, when you'll have to stop.

      It doesn't matter much if auto-cars do get in accidents as long as they get in fewer accidents than humans do, as a result of the scenarios you've outlined and more. One day they will be smart enough to consider that a child might appear when a ball does, but for now they can just stop or slow down when they see the ball (which is an obstruction in the road).

      They used to think computers would never beat humans at chess. Then it was Jeopardy. Then it was Go. One of the few certainties in life is that the "it can't be done!" crowd are invariably proven wrong, sooner or later.

      • First of all, board games are a totally different problem from driving. Board games are at one end of the spectrum, where there are really no decisions being made. AlphaGo has to pick which road to go down because it can't make all the calculations, and that is a critical part of what it does, but ultimately its success comes from the fact that a game is just a calculation, which computers are good at. Understanding the real world is not something computers are good at, so they can't deal with basic changes in
        • Well,
          I really wonder.
          You do know that we have had self-driving cars for 10 to 20 years now?

          We are only waiting for regulations and legislation to change.

          All your claims in this post, and the previous ones regarding AI, self-driving cars, and computers ... are: wrong.

          And: self-driving cars are not driven by an AI.

          • Well, I'm not sure why Uber and Google are doing all this research then, if self-driving cars have already been here for 20 years.
        • Understanding the real world is not something computers are good at

          Go back 20 years, find someone making this same comment, then bring them forward 20 years and show them what's been achieved so far and see if they change their mind at all. At the very least, they'll admit there's been a lot of progress.

          That's the thing about progress. It keeps getting better, by definition.

    • But they'll be getting in fewer accidents than human-piloted vehicles. Humans get blinded by the sun too.
      • Not if AI doesn't get better than it is. Humans get blinded by the sun because our eyes can't function while staring at the sun. AI may be 'excused' by getting blinded by the sun once, but while it is easy to excuse a human for being fallible, it is less easy to excuse a machine that is driving a car and can't read the world's patterns properly in all situations. There is a reason why they are only being used under the strictest of conditions right now: because they don't want the true pattern to be s
        • Humans get blinded by the sun because our eyes can't function while staring at the sun. AI may be 'excused' by getting blinded by the sun once [...]

          AI can be outfitted with sensors that we don't have. So while the visible spectrum may become confused, radar and infrared may not be. So an octagonal object positioned near an intersection might be enough to tell the AI that it should stop, even if there are a few stickers on the sign or the sun is shining into the sensor.

          Also, it's easier to make sure a light sensor feeding an AI has appropriate filters. It's tougher to remind drivers to wear sunglasses...

          • In a sensor, a stop sign is just a blob of red-spectrum light. People have to tell it that if the blob of red-spectrum light is standing 10 feet from the ground by an intersection, then it is a stop sign. Except a balloon may be red and on a 10-foot string. So they need to tell the car everything that isn't a stop sign. Also, a stop sign may not be standing straight up, but that doesn't make it any less a stop sign, so they have to teach the car to recognize the stop sign in every way that it can be bent
          • Let me add that I understand that there is a lot of complex code in vehicles today, but the risk that AI introduces is that you are now putting it in full control of heavy machinery in public, and any mistake in any of these billions of lines of code could kill a person that may not even be driving a car at all.
    • All the examples you give here are just idiotic.
      If a driverless car does not see a pedestrian running into the road because of the sun, neither would a driver!
      And: driverless cars have more than just cameras; it is impossible for them to be blinded by the sun.
      Your ball and child example is just ridiculous. 'Anticipate a child'? How retarded are you!? The car is not anticipating anything. It is hardwired to brake when an object comes into its path. The same way I'm hardwired to brake.
      Do you really think the program

      • Well, Autopilot already mistook a truck trailer for a bridge, which seems like a fairly obvious mistake, so... absolutely. It's all about the tricks that shadows and light can play on a sensor. It won't get blinded the same way as a person, but it will make a mistake just the same. It's not that I think these programmers are idiots; I just don't think it is possible for any human to think of all the possibilities.
      • Tesla even had a WARNING months beforehand that the tech had an issue with trailers. A person activated the self-parking feature with their phone and the car drove into the back of a trailer because the low-mounted sensors couldn't detect it. I would think that if they were sharp people, they would have looked at every flaw related to trailers and fixed them, yet in Florida a car ran into a trailer months later.
    • These automated cars will be forever getting into accidents because they didn't see a child because of the sun

      Funny how in many tests AIs have been better at this than humans. Your "sun" example is especially stupid.

      • Well, I think Autopilot running into a trailer because of the way the sun was shining on it was stupid. Maybe Tesla has fixed that, but you're stupid if you think something like that won't happen again.
  • by Solandri ( 704621 ) on Saturday April 15, 2017 @12:54PM (#54240381)
    AI researchers first ran across it when developing neural nets. The longer you allowed a neural net to learn, the more rigid its definition of boundary conditions became. Sometimes so rigid that the net became useless for its intended task. e.g. You could develop a neural net which would stop a train in the correct position at the platform 80% of the time. Further training would increase this to 90%, then 95%, then 99% of the time, but resulted in the net completely flipping out the remaining 1% of the time when it calculated it was going to overshoot by 1 mm outside the trained parameters. The first solution was to stop the learning process and freeze the neural net before it reached this stage, then simply use it in production with the learning capability (ability to modify itself) disabled. The next solution was to use simulated annealing to occasionally reset the specific things the neural net had learned, while retaining the general things it had learned.

    You also see this in biological neural nets. As people get older, they tend to get set in their ways, less likely to change their opinions even in the face of contradictory evidence. (As opposed to younger people, who are too eager to form an opinion despite weak evidence or none at all.) I suspect this is also where the aphorism "you can't teach an old dog new tricks" comes from. IMHO this is why trying to lengthen the human lifespan in the pursuit of immortality is a bad idea. Death is nature's way of clearing out neural nets which have become too rigid to respond properly to common variability in situations they encounter. My grandmother hated the Japanese to her dying day (they raped and killed her sister and niece during WWII). If people were immortal, we'd be completely dysfunctional as a society because everyone would be holding grudges and experience-based prejudice for hundreds of years, to the detriment of immediate benefit.
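    The "stop the learning process and freeze the neural net" fix described above is essentially what is now called early stopping: keep training while held-out performance improves, then keep the weights that generalised best. A minimal, self-contained sketch on a synthetic regression task (the model, data and patience value are placeholders):

    import copy
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Tiny synthetic task: noisy samples of a sine curve, split into train/validation.
    x = torch.linspace(-3, 3, 120).unsqueeze(1)
    y = torch.sin(x) + 0.3 * torch.randn_like(x)
    x_train, y_train = x[::2], y[::2]
    x_val, y_val = x[1::2], y[1::2]

    model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    best_val, best_state, patience, bad = float("inf"), None, 20, 0
    for epoch in range(2000):
        optimizer.zero_grad()
        loss_fn(model(x_train), y_train).backward()
        optimizer.step()

        with torch.no_grad():
            val = loss_fn(model(x_val), y_val).item()
        if val < best_val:
            best_val, best_state, bad = val, copy.deepcopy(model.state_dict()), 0
        else:
            bad += 1
            if bad >= patience:
                break                        # validation error rising: further training only over-fits

    model.load_state_dict(best_state)        # "freeze" the net at its best point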
    • The first solution was to stop the learning process and freeze the neural net before it reached this stage, then simply use it in production with the learning capability (ability to modify itself) disabled.

      Which works until somebody re-enables it... [youtube.com]

    • Your example makes no sense.
      Neural nets are not used to stop a train at a platform in the correct position.
      You use markers, e.g. magnets, and sensors to recognize such markers.

      Holding grudges is a character flaw. Plenty of societies have ways to teach: don't hold grudges.

  • by lorinc ( 2470890 ) on Saturday April 15, 2017 @02:23PM (#54240713) Homepage Journal

    The title should have read "Carefully crafted decoy using massive computational resources can fool an out-of-date AI".

    Here's how it works:
    1. Get access to the AI model you want to fool (and only this one). Not necessarily the source code, but at least you need to be able to use the model for as long as you want.
    2. Solve a rather complex optimization problem to generate the decoy.
    3. Use your decoy in very controlled conditions (as stated in the linked paper).

    While the method for fooling the model is fine (and similar work has been buzzing lately), the conclusions are much weaker than you might expect. First, if you don't have the actual model, you cannot do this: you need to be able to run the actual model you are trying to fool. That takes out all remote systems with rate-limited access. Second, you rely on tiny variations, which can be more sensitive than real-world variation. Take, for example, the sticker on a road sign: if you took the picture on a sunny day, the decoy will very likely not work on a rainy day or at night. Third, if the model evolves, you have to update the decoy. Here's the problem with statistical learning systems: they learn. It's very likely that the model got updated by the time you finished the computation and printed the sticker. Many people believe that future industrial systems will perform online learning, which renders those static methods useless. (A rough sketch of step 2 follows below.)

    So yeah, actual research models can be fooled in very specific cases. However, it's not as bad as some articles try to make it sound. I'm not saying it won't happen, I'm saying it's not as bad as you think it is. Hey, if you want to impersonate somebody, put on some makeup, and if you want people to crash their cars, cover the road signs with paint. There you have it: humans are easily fooled by some paint.
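    For concreteness, a minimal sketch in the spirit of step 2. The single-step fast gradient sign method below is a much simpler attack than the optimization the poster describes, and the model here is an untrained stand-in rather than any deployed system; it only illustrates the idea of perturbing an input along the loss gradient:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in classifier; a real attack would target the deployed model (step 1).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    loss_fn = nn.CrossEntropyLoss()

    image = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in input image
    true_label = torch.tensor([3])

    # Fast gradient sign method: nudge every pixel a tiny step in the direction
    # that increases the loss for the true label.
    loss = loss_fn(model(image), true_label)
    loss.backward()
    epsilon = 0.05
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    print("clean prediction:      ", model(image).argmax().item())
    print("adversarial prediction:", model(adversarial).argmax().item())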

  • by SoftwareArtist ( 1472499 ) on Saturday April 15, 2017 @03:03PM (#54240889)

    Almost every comment posted so far about this story is totally wrong. Adversarial examples are a hot topic in deep learning right now. We've learned a lot about how they work and how to protect against them. They have nothing to do with "weak" versus "strong" AI. Humans are also susceptible to optical illusions, just different ones from neural nets. They don't mean that computers can never be trusted. Computers can be made much more reliable than humans. And they also aren't random failures, or something that's hard to create. In fact, they're trivial to create in a simple, systematic way.

    They're actually a consequence of excessive linearity in our models. If you don't know what that means, don't worry about it. It's just a quirk of how models have traditionally been trained. And if you make a small change to encourage them to work in a nonlinear regime, they become much more resistant to adversarial examples. By the time fully autonomous cars hit the roads in a few years, this should be a totally solved problem.

    If you build deep learning systems, you need to care about this. If you don't, you can ignore it. It's not a problem you need to care about, any more than you care what activation function or regularization method your car is using.
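    As one concrete illustration of hardening a model against such examples, a sketch of adversarial training (learning from perturbed copies of each batch); this is a common technique, not necessarily the specific "nonlinearity" fix the poster has in mind, and the model and data are stand-ins:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def perturb(images, labels, epsilon=0.05):
        """Single-step perturbation of a batch along the loss gradient."""
        images = images.clone().requires_grad_(True)
        loss_fn(model(images), labels).backward()
        return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    images = torch.rand(32, 1, 28, 28)       # stand-in training batch
    labels = torch.randint(0, 10, (32,))

    # One adversarial-training step: learn from clean *and* perturbed examples.
    adv_images = perturb(images, labels)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels) + loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()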
