Transportation Security

Split-Second 'Phantom' Images Can Fool Tesla's Autopilot (wired.com)

An anonymous reader quotes a report from Wired: Researchers at Israel's Ben Gurion University of the Negev have spent the last two years experimenting with "phantom" images to trick semi-autonomous driving systems. They previously revealed that they could use split-second light projections on roads to successfully trick Tesla's driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they've found they can pull off the same trick with just a few frames of a road sign injected on a billboard's video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla's most recent version of Autopilot, known as HW3. They found that they could again trick a Tesla, or cause the same Mobileye 630 Pro device from their earlier work to give the driver mistaken alerts, with just a few frames of altered video. The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the "uninteresting" portions. And while they tested their technique on a TV-sized billboard screen on a small road, they say it could easily be adapted to a digital highway billboard, where it could cause much more widespread mayhem.
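The block-selection step is easy to illustrate. Below is a minimal toy sketch of the general idea only, not the researchers' actual algorithm: score fixed-size blocks of a grayscale frame by local contrast and treat the lowest-scoring blocks as candidate spots for embedding a phantom. The function name, block size, and the synthetic example frame are all invented for illustration.

```python
import numpy as np

def low_attention_blocks(frame: np.ndarray, block: int = 32, keep: int = 5):
    """Toy stand-in for the paper's block-selection idea: rank fixed-size
    blocks of a grayscale frame by local contrast (std. dev. of intensity)
    and return the lowest-contrast block positions as candidate embedding
    spots. The researchers' real algorithm is more sophisticated; this
    only illustrates the 'find visually uninteresting regions' step."""
    h, w = frame.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = frame[y:y + block, x:x + block]
            scores.append((float(patch.std()), (x, y)))  # low std = visually boring
    scores.sort(key=lambda s: s[0])
    return [pos for _, pos in scores[:keep]]

# Synthetic 'billboard frame': a noisy left half and a flat right half.
rng = np.random.default_rng(0)
frame = np.hstack([rng.integers(0, 255, (128, 128)),
                   np.full((128, 128), 180)]).astype(np.uint8)
print(low_attention_blocks(frame))  # positions cluster in the flat right half
```

A real attention model is far subtler than a contrast ranking; the point is only that "visually boring" regions of a frame are cheap to find automatically.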
"Autopilot is a driver assistance feature that is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time," reads Tesla's response. The Ben Gurion researchers counter that Autopilot is used very differently in practice. "As we know, people use this feature as an autopilot and do not keep 100 percent attention on the road while using it," writes Mirsky in an email. "Therefore, we must try to mitigate this threat to keep people safe, regardless of [Tesla's] warnings."
  • Look at the number of accidents on and off autopilot.

    • by bws111 ( 1216812 ) on Wednesday October 14, 2020 @04:25PM (#60607838)

      Meaningless. An accident on autopilot means there were TWO failures: the autopilot failed AND the driver failed to correct it.

      • Big assumption that the driver is paying attention. I don't care what the laws say; people get bored when they aren't doing anything. Even a split-second delay getting back into driving mode could be disastrous.
    • by geekmux ( 1040042 ) on Wednesday October 14, 2020 @04:32PM (#60607866)

      Look at the number of accidents on and off autopilot.

      Uh, why look at outdated statistics? As autopilot (and similar systems) become standard in cars, this problem will grow about as fast as wireless router adoption did 20 years ago.

      And to be quite honest, this is scary. Here I was worried about the car communications getting hacked and manipulating them directly. Now all it takes is a picture of a stop sign? Seriously? This is like a company announcing facial recognition multi-factor security in their latest device, and finding you can bypass it with a picture of Kermit the Frog.

      • It's not like that at all. A stop sign is a very well defined shape, easy to recognize... if a system sees it, the chance of a false positive is vanishingly small. The current autopilots have been tuned accordingly -- if they see a stop sign, they assume it is a real stop sign. And when it is projected on a billboard? It is a sign that has the right shape and symbol. There's nothing syntactically to say "this is not a stop sign". It isn't confusing one face for another, as in your Kermit example. It is only the semantic meaning that says "this is not a valid stop sign."

        • It's not like that at all. A stop sign is a very well defined shape, easy to recognize... if a system sees it, the chance of a false positive is vanishingly small. The current autopilots have been tuned accordingly -- if they see a stop sign, they assume it is a real stop sign. And when it is projected on a billboard? It is a sign that has the right shape and symbol. There's nothing syntactically to say "this is not a stop sign". It isn't confusing one face for another, as in your Kermit example. It is only the semantic meaning that says "this is not a valid stop sign."

          My example was identical to yours in the sense that neither system can tell the difference between a completely fake representation and the real thing. You're right. It's not confusing one face for another. It doesn't even know the difference between a human face and a muppet. And that's not just a little broken. It's completely broken.

          "If they see a stop sign, they assume..."

          Yes. We've proven that both AI and humans do that.

          • Now that the flaw is discovered, it is easy to fix.

            Tesla should offer rewards for spoofing. Better that the white hats find them than the black hats.

            • What flaw? When faced with contradictory inputs, the car chose to stop. Which is exactly what it should do.
              • What flaw? When faced with contradictory inputs, the car chose to stop. Which is exactly what it should do.

                So, using a picture of your face to unlock your Face ID-protected iPhone is doing "exactly what it should"?

                Seems Apple was smart enough to understand there's a flaw here...

            • Now that the flaw is discovered, it is easy to fix.

              Tesla should offer rewards for spoofing. Better that the white hats find them than the black hats.

              How exactly does discovery define ease of correction here? I kind of doubt it's easy: it would mean implementing the equivalent of TrueDepth (what Apple Face ID uses to differentiate between a real authenticated face and a picture of that face) and then doing that additional analysis in real time (your face isn't moving at 50 MPH when authenticating to your iPhone) to ensure that a stop sign isn't merely a ghost or mirage of a stop sign.

              And that's before the hackers really get a hold of this.

              Tesla should have started

        • No, it isn't vanishingly small. I see stop and slow signs on the back of work trucks and construction vehicles all the time. If the truck is moving at 50 mph I don't stop. But what they are saying is Teslas might.

          It would be worth testing.

        • A LIDAR-based system wouldn't have that issue, as the "stop sign on the billboard" would come back as a big flat panel. This is why you need more sensors than just cameras.
          • And how would you have a level 4+ autonomous vehicle equipped with LIDAR alone detect and stop at a stop sign?

            • These cars clearly need lidar, radar, and cameras all working in tandem. In theory the camera could see the stop sign and the lidar could check that it is stop-sign-shaped and an appropriate distance from a curb. But then what do you do in fog?
              • I don't disagree with you, but in theory you can create a depth map with a stereo pair of cameras. Very easy to do [arducam.com], even with low compute power. This will reveal the shape of the stop sign.

                I suspect this will be how Tesla will "fix" this stop-sign-spoofing-bug.
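For the curious, the parent's depth-map idea fits in a few lines of OpenCV. A minimal sketch, assuming an already calibrated and rectified stereo pair; the file names, focal length, and baseline are placeholders, not anything from Tesla or the article.

```python
import cv2

# Placeholder file names: assumes an already calibrated and rectified
# stereo pair; real systems must do that calibration first.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching yields a disparity map. numDisparities must be a
# multiple of 16 and blockSize must be odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)  # int16, scaled by 16

# Depth is inversely proportional to disparity: Z = f * B / d.
# Focal length (pixels) and baseline (metres) are illustrative values.
f_px, baseline_m = 700.0, 0.12
depth_m = (f_px * baseline_m) / (disparity.astype("float32") / 16.0 + 1e-6)

# A sign shown on a billboard sits at the billboard's depth, flush with
# its surroundings; a real free-standing sign pops out of the depth map.
print(depth_m.min(), depth_m.max())
```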

                • It seems an awfully obvious thing to do before releasing this tech to the public, if they knew how to do it. I still wonder how they will keep frost off the camera lenses at 0°C.
            • Who said LIDAR alone? You use them all together. Multiple sensors. Using LIDAR alone is as stupid as trying to use cameras alone.
          • by cusco ( 717999 )

            So your body is not equipped with lidar; do you automatically assume that every image of a stop sign means you have to stop immediately? No. The Tesla cameras just need depth perception, a problem many organizations are working on for a variety of reasons.

            • It's not the "depth perception" that is needed; I've seen stop signs in various parts of the world mounted on posts, telephone poles, even painted on the corner of a building. What's needed is understanding of what the sign means. But LIDAR would at least tell you if it was an actual free-standing sign (which is normal in the US).
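To make the free-standing-sign distinction concrete, here is an entirely invented illustration of a test one could run on the LIDAR returns around a camera-detected sign: fit a plane and flag detections that sit on one large, continuous flat surface, i.e. a billboard face rather than a small sign on a post. The function name, tolerances, and synthetic example are assumptions, not anyone's shipping code.

```python
import numpy as np

def looks_like_billboard(points: np.ndarray,
                         flatness_tol_m: float = 0.05,
                         min_extent_m: float = 2.0) -> bool:
    """Illustrative check, not a production pipeline: given the LIDAR
    points (N x 3, metres) around where the camera saw a 'sign', fit a
    plane via SVD and flag the detection if the neighbourhood is one
    large, very flat surface -- a billboard face rather than a small
    free-standing sign."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    residual = np.abs(centered @ vt[2])           # distance from fitted plane
    extent = np.ptp(centered @ vt[:2].T, axis=0)  # in-plane bounding box (m)
    return residual.max() < flatness_tol_m and extent.min() > min_extent_m

# Example: a perfectly flat 4 m x 3 m patch of returns reads as billboard-like.
xs, ys = np.meshgrid(np.linspace(0, 4, 40), np.linspace(0, 3, 30))
patch = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
print(looks_like_billboard(patch))  # True
```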
      • by tragedy ( 27079 )

        The question I think we need to consider here is: should you obey a stop sign displayed on a video billboard? Generally speaking, the answer is no. However, as a driver, you are supposed to obey all posted signs. This gets tricky, because there's no reason that the department of public works couldn't rent space on a commercial billboard and post signs there. In theory, that could include stop signs. Legally speaking, there don't actually seem to be any real constraints on how the various entities in charge

    • Exactly. This story is interesting, but shouldn't scare anyone off of autopilot. Autopilot would have to be pretty horrible to be worse than human drivers.
      • Re:statistics (Score:4, Interesting)

        by bws111 ( 1216812 ) on Wednesday October 14, 2020 @05:21PM (#60608008)

        No. Autopilot MAY be better than the worst human drivers (beginners, drunks, texters). Is there real evidence that it is better than (or even as good as) drivers not in those categories?

        • by cusco ( 717999 )

          Well, add the elderly and you've just described about a quarter of the people on the road today.

          • by bws111 ( 1216812 )

            Ah, some good old ageism, backed by nothing. Quick, which age group has the fewest accidents per mile? The answer: 60-69. Second lowest? 70-79. How about the 80+ crowd? Lower than 25-29, 20-24, 18-19, and 16-17. https://aaafoundation.org/rate... [aaafoundation.org]

            • by cusco ( 717999 )

              It's an age group I'm quickly approaching, and I know my driving skills are deteriorating. IMO an absurd percentage of accidents are caused by people hurrying, which retired people don't need to do. (Tesla's Autopilot doesn't hurry either, BTW.)

              I've already discovered why old guys drive so slow; it's because they have their wives in the car.

              • by vivian ( 156520 )

                As I approach 50, my driving skills are worse than they were when I was 24. However, I no longer race motorcycles down the highway with the throttle wide open, passing cars like they were standing still. I also no longer hit mountain roads at the fastest speed I can like I used to back then, so overall I'm definitely a lower accident risk than I used to be.

                I am also a lot more patient, don't get pissed off and aggressive if someone cuts me off (pretty much a requisite for getting anywhere if

          • While we're sharing our biases, I've noticed that the two groups of people who drive the worst are white men (any age) and young white women. Pretty much everyone else is being careful not to get deported, I guess. Guy won't yield? Speeds up when you pass? Doesn't care enough to hold his lane? Probably white guy. Someone who chokes on every turn, speeds up for no reason, slows down for no reason, pulls up alongside you and rides in the passing lane next to you for miles if you're on cruise control? Probably

      • Nor is there any evidence they are better than a human in a modern car with proximity sensors, speed-adjusting cruise control, etc. You have to compare to a new car at the same price.
  • I liked the one where they projected images of a sign on a wall from a drone-mounted projector.

    • The next step is to remotely cover up existing signs and road markings. After that, to change them remotely. What if a drone could make a stop sign and road markings invisible to the car? Or if a drone could change a speed limit sign on the highway, from the car's perspective?

    • Since the car doesn't have LIDAR you could probably pull a Wile E. Coyote on it and direct it into a brick wall using a mural and some re-marking of the lines. You could almost certainly do it using a video projector, at least at night, to provide false depth cues as the vehicle approached.

  • Tricking the current version of Autopilot seems pretty trivial after driving a Tesla for a while; there are a bunch of things it reacts to unpredictably and late. Even speed limit changes are handled as a sudden, unanticipated jump rather than easing up or down to the new speed. By changing the logic to include the vertical and time domains, I would expect they can eliminate much of the risk from stealth images.
    • by tlhIngan ( 30335 )

      Tricking the current version of Autopilot seems pretty trivial after driving a Tesla for a while; there are a bunch of things it reacts to unpredictably and late. Even speed limit changes are handled as a sudden, unanticipated jump rather than easing up or down to the new speed. By changing the logic to include the vertical and time domains, I would expect they can eliminate much of the risk from stealth images.

      Tricking a Tesla is actually pretty easy. You have to realize how it works - it's a purely opt

      • Re: (Score:2, Informative)

        by aaarrrgggh ( 9205 )
        It has radar with about a 100 m range, plus ultrasonic sensors with about a 1 m range. The optical resolution at 200 m is likely on par with LIDAR. The major problem I see is that they appear to need optical resolution at 400 m that is ~2x better, and really better side and rear image resolution, but that is more a situational-awareness issue than a direct automation concern.
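As a toy illustration of the sensor-fusion theme in this sub-thread, one could gate safety-critical camera detections on radar corroboration inside that roughly 100 m envelope. The logic below is invented for illustration and is nothing like a production stack; note that thin metal signs are poor radar targets, which is why only radar-visible object classes are gated here.

```python
from dataclasses import dataclass

RADAR_RANGE_M = 100.0   # approximate figure from the parent comment

@dataclass
class Detection:
    kind: str        # e.g. "pedestrian", "stop_sign"
    range_m: float   # camera's estimated distance to the object
    radar_hit: bool  # did radar return anything near that range?

def corroborated(d: Detection) -> bool:
    """Toy fusion gate: inside radar coverage, a physical obstacle should
    produce a radar return, while an image projected on a wall or shown
    on a billboard will not. Signs are thin and radar-poor, so only
    radar-visible classes are gated. Illustrative logic only."""
    if d.kind == "pedestrian" and d.range_m <= RADAR_RANGE_M:
        return d.radar_hit
    return True  # out of radar range, or a class radar can't confirm

print(corroborated(Detection("pedestrian", 40.0, radar_hit=False)))  # False: likely a phantom
print(corroborated(Detection("stop_sign", 40.0, radar_hit=False)))   # True: camera must decide alone
```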
      • by cusco ( 717999 )

        An installation of LIDAR at the time would have pretty much doubled the price of the car (or more). Recently Velodyne has dropped the price of a vehicle-quality LIDAR to under $200; I wouldn't be surprised to see them incorporated into Teslas in the next year or two.

  • by Anonymous Coward
    I can also make cars crash by throwing a brick off an overpass. We already have laws against murder.
    • But you can't throw a brick from a place thousands of miles away that doesn't have an extradition treaty with the US. You might be able to hack a billboard though.

      The question with Teslas is never "are they worse on average?" The question is always: how much worse are they in edge cases, and how much more or less frequent are the edge cases than in normal cars?

      Battery fires are a lot worse than gasoline fires. Are they sufficiently more rare than gasoline fires for it to not matter?

      Automatic retracting door han
      • But you can't throw a brick from a place thousands of miles away that doesn't have an extradition treaty with the US.

        Sure you can. Drones.

        You might be able to hack a billboard though.

        Your car being made to stop. A fate worse than death for an American, I suppose.

        • If my car puts on the brakes for no reason on the highway, I might be OK enough to walk away, but the poor bastard who plows into me and gets sandwiched by the guy behind him might not be.
  • ... or better: very, very different from human visual perception, currently. As impressively as some artificial neural networks match or synthesize patterns, they still seem to work in a very different way than human neural networks do.
    • Humans also react to an image displayed for 0.42 seconds. The difference is that a human convinces themselves that it's probably nothing and keeps driving.
      • by tlhIngan ( 30335 )

        Humans also react to an image displayed for 0.42 seconds. The difference is that a human convinces themselves that it's probably nothing and keeps driving.

        Humans go "what was that" and then blink and try to recognize agian.

        Human image processing is extremely slow. Image recognition takes a few seconds to occur, which is why in many emergency situations around 10 seconds or more are budgeted for the human to detect the emergency and then identify it, because that's around the longest time it would take (most
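The "time domain" mitigation several comments gesture at is straightforward to sketch: refuse to act on a detection until it has persisted longer than any plausible flicker. The 0.42-second figure comes from the article; the window size and class below are illustrative assumptions. The obvious cost is added latency on real signs, which is why this sketches a tradeoff rather than a recommendation.

```python
from collections import deque

class PersistenceFilter:
    """Toy debounce for perception outputs: a detection only becomes
    'actionable' once it has been present in every one of the last
    `window` frames. At 30 fps, window=30 demands about a second of
    persistence, which would reject the article's 0.42-second phantom.
    A real system would also check positional consistency; this is
    just the time gate."""

    def __init__(self, window: int = 30):
        self.history = deque(maxlen=window)

    def update(self, detected: bool) -> bool:
        self.history.append(detected)
        return len(self.history) == self.history.maxlen and all(self.history)

f = PersistenceFilter(window=30)
# A phantom visible for 13 of 60 frames (~0.43 s at 30 fps) never fires:
acted = False
for frame in range(60):
    acted = f.update(frame < 13) or acted
print(acted)  # False: the flicker never persisted long enough
```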

  • by clawsoon ( 748629 ) on Wednesday October 14, 2020 @04:02PM (#60607776)
    So could you keep a Tesla from tailgating you by putting a stop sign in the back window of your car?
  • Obligatory XKCD (Score:4, Informative)

    by dgatwood ( 11270 ) on Wednesday October 14, 2020 @04:05PM (#60607782) Homepage Journal
    • Re:Obligatory XKCD (Score:5, Insightful)

      by battingly ( 5065477 ) on Wednesday October 14, 2020 @04:12PM (#60607802)
      I'll take the technological shortcomings of self-driving cars over the foibles of human drivers any day.
      • I'll take the technological shortcomings of self-driving cars over the foibles of human drivers any day.

        Well, let's hope that electronic billboards are outlawed soon, and that no one in government is stupid enough to pursue digital license plates, which could be weaponized to harm and kill en masse. It takes a whole year for 40,000 Americans to die behind the wheel. Are you trying to see if we can "take" enough shortcomings to meet or beat that number in a single day?

        The tech isn't ready, and bad drivers need to be punished off the road. THAT is where we are actually at. Greed is the one driving, not C

    • by ffkom ( 3519199 )
      People might generally not be inclined to kill their neighbors, but some ill-tempered hacker from some far-away corner of the world might want to manipulate your electronic street-side billboards nationwide just for the LOLs of sparking chaos on the streets.
    • Yeah, I'm guessing the study where someone throws a cinder block off an overpass wasn't ready for publication.

  • I have to say I'm a lot less worried about triggering the auto-braking than I am about the previous things people were doing, like using a sticker to make an autonomous car think a stop sign was a speed limit sign. If very specific stimuli can cause unnecessary braking, so be it. Uber has shown us what happens if you try turning the sensitivity down; I'd rather have the cars brake unnecessarily than have a false negative.

    I also imagine there are plenty of images that you could project on those billboards that

  • We've heard plenty of stories about how messing up a sign, using bright flashing lights, etc. would screw with a human driver just as much as with autonomous driving (not limited to just Tesla, although it's the article's subject). What I find interesting here is that this is an attack that uniquely affects autonomous vehicles - in this case a human driver would be unlikely to notice or be affected by the momentary images. That's a big differentiator - it allows for a much more specific attack or hack. There'

  • by DrXym ( 126579 ) on Wednesday October 14, 2020 @04:34PM (#60607876)
    The computer can make fast decisions and react quickly, while the human is able to solve complex problems. E.g., a car may brake fast when it detects an obstacle in the way, while a human can decide if the vehicle alongside is acting weird and erratic. Any system that doesn't take into account the strengths of both is going to fail in one way or another.
    • Actually, if you drive a Tesla you will find it also evaluates other cars' behavior. If a car in front of the Tesla in an adjacent lane weaves too closely to the edge of that lane, the Tesla won't immediately move beside it. Instead it will slow down and hang back, even if its own lane is clear, seemingly observing that car for a while before deciding to move beside it if it behaves itself.

      The notion that my car is observing and actively considering other vehicles is rather uncanny.

      • by DrXym ( 126579 )
        You assume that's the car's reasoning, as opposed to it thinking that in this instance some random obstacle is too close, and therefore not doing the thing it would have done otherwise. A human's thinking would be: hmm, I've seen that specific red Chrysler weaving oddly ahead of me, which is not in response to traffic or obstructions and is therefore not normal, and it is continuing to act weird, so I'd better avoid it.

        Given that the human clearly has a better handle on the situation than t

    • by AmiMoJo ( 196126 )

      Humans have a two tier system. Instinctive reactions like recoiling from pain or the sight of something coming at you fast, and the more complex understanding of the scene.

      The latter is really, really hard for computers. Humans are so good at interpreting their stereo vision and understanding what they are seeing in all lighting conditions, even with odd shadows or dirt or paint, because they understand the world on a subconscious level and what makes sense in it.

      That's one reason most companies use lidar -
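The two-tier split described above can be sketched as a simple arbiter in which a fast, conservative reflex layer pre-empts a slow, deliberative planner. This is a toy illustration of the architecture, not anyone's actual stack; the threshold and sensor dictionary are invented.

```python
from typing import Optional
import time

def reflex_layer(sensors: dict) -> Optional[str]:
    """Fast, conservative tier: hard safety constraints checked every
    cycle, analogous to instinctive recoil."""
    if sensors["obstacle_distance_m"] < 5.0:
        return "emergency_brake"
    return None

def planning_layer(sensors: dict) -> str:
    """Slow, deliberative tier: full scene understanding (stubbed here)."""
    time.sleep(0.05)  # stand-in for heavyweight perception/planning
    return "cruise"

def control_step(sensors: dict) -> str:
    # Reflex verdicts pre-empt the planner, mirroring the split between
    # instinct and comprehension the parent comment describes.
    return reflex_layer(sensors) or planning_layer(sensors)

print(control_step({"obstacle_distance_m": 3.0}))   # emergency_brake
print(control_step({"obstacle_distance_m": 50.0}))  # cruise
```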

    • Nope, false. You're assuming that there's enough time to analyse all situations and hand over to the smarter of the two systems. The reality is that the fallibility of one and the reaction speed of the other mean that you will never be in a situation where human + computer is better than just computer (that is, once computers have finished getting the basics right).

      In order for a human to solve a problem on the fly he needs to be in control. You can't be in a situation where a computer is in control, and hands over to

      • by DrXym ( 126579 )
        That's patently and obviously false. Many vehicles have computers that perform duties like emergency braking, lane-keeping warnings, etc. while allowing the human to drive. It would be an absurd argument to say the computer is safer by itself, or the human is safer by itself.

        The problem with Tesla is that their marketing BS pretends the computer knows what it is doing, which means humans become distracted and inattentive. They are not contributing sufficiently to the driving and therefore situational aware

        • That's patently and obviously false. Many vehicles have computers that perform duties like emergency braking, lane-keeping warnings, etc. while allowing the human to drive.

          Yes, and that doesn't invalidate what I said. Human + computer > just human. But that's not the end game here, and we still have an endless string of stupid deaths as a result of stupid humans. Human + computer won't be as good as just computer when all is said and done.

          Computers don't sleep at the wheel, don't text, don't argue, don't take late exits, don't race for fun, don't act like toxic fucknuggets because their life isn't going well, don't generally just suck, don't fail to learn, don't drink and drive,

  • Such a thing would likely constitute attempted murder, or a lesser but still serious crime. And really, this is all computerized. I'd be surprised if Tesla didn't start logging sudden appearances of traffic signs and similar elements, just for such an occasion.

    Some clever guy will no doubt try this, get the book thrown at them, and then it should subside.

    After all, nothing stops people from doing crap like dumping oil or obstacles on a road. It's not a new thing that you can easily screw with society. It just
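Logging sudden sign appearances, as suggested above, might look something like the toy sketch below: record any detection that appears and vanishes within a sub-second window, with timestamps, so post-crash forensics wouldn't have to scrub raw video for a flicker. The log format and threshold are invented for illustration.

```python
import json, time

SUSPICIOUS_MAX_S = 1.0  # flag detections that vanish faster than this

class PhantomLog:
    """Toy forensic log: record when each detected object class appears
    and disappears, and keep a dump of short-lived 'signs' so an
    investigator doesn't have to scrub raw video for a flicker.
    Entirely illustrative; real systems would log far richer context."""
    def __init__(self):
        self.first_seen = {}  # object class -> timestamp it appeared
        self.events = []

    def observe(self, kind: str, present: bool, now: float) -> None:
        if present:
            self.first_seen.setdefault(kind, now)
        elif kind in self.first_seen:
            duration = now - self.first_seen.pop(kind)
            if duration < SUSPICIOUS_MAX_S:
                self.events.append({"kind": kind,
                                    "appeared_at": iso(now - duration),
                                    "duration_s": round(duration, 3)})

def iso(t: float) -> str:
    return time.strftime("%Y-%m-%dT%H:%M:%S", time.localtime(t))

log = PhantomLog()
t0 = time.time()
log.observe("stop_sign", True, t0)
log.observe("stop_sign", False, t0 + 0.42)  # the article's phantom duration
print(json.dumps(log.events, indent=2))
```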

    • It's the hard-to-detect, hard-to-prove nature of a half second of video causing the crash that makes this more worrisome than the oil on the road.

      • by EmoryM ( 2726097 )
        In the event of somebody attacking a self-driving car like this in what way would it be hard to prove? You've got the sensor data.
        • How many regular car wrecks are going to pull the video and look for a half second flicker? No human at the scene is going to be able to testify that there was anything unusual. It'll be stuck in court for months at least. After that, if the footage gets looked at, someone will start the computer forensics to investigate who hacked the billboard... long after the criminal is gone.

          • How many regular car wrecks are going to pull the video and look for a half second flicker?

            All of them. People get paid to do this stuff, because large sums are involved. Insurance companies go above and beyond to try to find reasons not to pay claims at the best of times.

  • If you shined an image of a stop sign or a human on the road in front of me, or directly into my eyes, I'd probably hit the brakes as well. If a billboard briefly went from displaying an ad to proclaiming "STOP" I'd be extremely confused and at the least I'd probably slow down. I just realized I don't see roadside signs which resemble stop or speed limit signs - are there already laws to prevent this? What is it we want these cars to do when we purposefully fuck with them while they're trying to drive?
    • by ledow ( 319597 )

      Unintended behaviour?

      If you can demonstrate that something in charge of a car at 70 mph can produce unintended behaviour through external malicious actions which may go undetected or unrecorded, that is evidence against relying on it further, especially as we move towards entirely self-driving cars.

      Kids can prank cars into coming to an emergency halt on a highway, effectively, without having to be anywhere near, and without other cars seeing/noticing/reacting to any visible hazard. To them,

  • Clear answer (Score:5, Insightful)

    by dohzer ( 867770 ) on Wednesday October 14, 2020 @10:21PM (#60608736)

    The path to safety is clear: ban all advertisements. It's a sacrifice I'm willing to make to gain road safety.

    • I'll agree with this idea. It's not just road safety; it's much more important than that. Advertising affects consumer choices and thus makes markets biased.

      Economists tell us about the magic power of free markets, which can regulate production better than any supercomputer or bureaucratic superpower.
      But a market polluted by advertisements is by no means free.

  • The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla

    Dammit, Elon! You and your obsession with the number 420!

  • I must admit, it took me a bit of time to understand how a billboard with 'any' traffic sign would be in my expected range of vision. Then I remembered that where I live there are quite strict rules about billboards near highways (and regional secondary roads) that force the signs to be hundreds of metres from the highway, which would allow stereoscopic vision to discount them.

    I'm also curious whether reproduction of traffic signage is allowed on those, or on the backs of transport trailers as mentioned above.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday October 15, 2020 @09:20AM (#60609812) Homepage Journal

    "Autopilot is a driver assistance feature that is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time," reads Tesla's response. The Ben Gurion researchers counter that Autopilot is used very differently in practice. "As we know, people use this feature as an autopilot and do not keep 100 percent attention on the road while using it," writes Mirsky in an email. "Therefore, we must try to mitigate this threat to keep people safe, regardless of [Tesla's] warnings."

    If only Mirsky knew what autopilot was or how it was used, that would be helpful. Either he does know and is being disingenuous, or no one should be listening to him.

    On an airliner, the pilot or co-pilot must be vigilant at all times in case the autopilot does something unexpected. On a boat, same story. You don't just set the autopilot and then go down to the galley to whip up a toasted cheese.

    Using "Tesla Autopilot" as "an autopilot" means paying attention, period.
