Researchers Discover How To Fool Tesla's Autopilot System (cnet.com) 177
An anonymous reader writes from a report via CNET: Researchers from the University of South Carolina, Zhejiang University, and Qihoo 360 have discovered how to fool Tesla's Autopilot sensors, according to a report from Wired. The researchers were able to trick the system into thinking an object didn't exist when it did, and that an object existed when in fact it did not. This raises security concerns: a spoofed Autopilot could drive incorrectly, potentially putting passengers and others in danger. CNET reports: "Two pieces of radio equipment were used to convince Tesla's radar sensor that a cart was not placed directly in front of it. One of those pieces, a signal generator from Keysight Technologies, costs about $90,000. The group also tricked the car's short-range parking sensors into malfunctioning using about $40 worth of equipment. Wired points out that this was, thankfully, a rather difficult feat. Most of the technological tomfoolery was done on a stationary car. Some of the required equipment was expensive, and it didn't always work. But it brings up an important point -- even though Autopilot is quite capable, there's still no substitute for an attentive human driver, ready to take control at a moment's notice."
Illusions (Score:5, Funny)
Because it's so hard to make humans see or not see things.
Re:Illusions (Score:5, Insightful)
This actually shows how great autopilot can be, especially if it combines a camera, radar, and other sensors, compared to humans who can be sooo easily defeated.
But in any case, this is just about attacking a car, which is illegal. There are many other (cheap) ways to cause an accident: blow a tire, use light, fumes, oil, or ice, or use a missile. If someone wants to attack a car, there are plenty of choices.
Maybe the only difference here is that it may be hard to understand afterwards what happened. The secret services may like that.
Re: (Score:1)
I think this study is useful only if it tells us something about how Autopilot might fail in a real world situation. E.g. if the weather service doppler radars are possibly going to confuse Tesla sensors, it might be nice to know about that sooner rather than later. I can't see that the study does that. But maybe there's some useful substance there if one digs a bit.
"there's still no substitute for an attentive human driver, ready to take control at a moment's notice." That seems to me a very silly noti
Re: (Score:2)
Drive one, then you'll understand. All my car has is adaptive cruise control. It reduces stress while driving significantly. Even just being able to rest your foot helps. I'll never buy another car without it.
Re:Illusions (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
You can cut the brake lines or plant a bomb in any car; either will do the job quite well. You can also use a bigger car to ram the target off the road, which should also work.
However, messing with the car using radio waves (be it confusing the radar, hacking via Bluetooth or Wi-Fi, or hacking the car over the internet) leaves no evidence. Or, at least, none that could be used to identify you.
So, standing on a bridge and dropping bowling balls or bricks on cars, that are driving under the brid
Re: (Score:3)
It will also look really suspicious to any bystanders. It may also not work as intended: the driver could have seen the obstacle before you turned on the light, or could shield his eyes from it with his hand.
It will also be difficult to achieve this during the day with clear skies.
Re: (Score:2)
This actually shows how great autopilot can be, especially if it combines a camera, radar, and other sensors, compared to humans who can be sooo easily defeated.
It's only great if it actually works in every circumstance, or if it largely works but the driver's attentiveness is enforced to act as a backup just in case. Then and only then can it be said to be safer than a driver alone in every scenario.
But in any case, this is just about attacking a car : it is illegal. There are many other (cheap) ways to cause an accident : blow a tire, use light, fumes, oil, ice, or use a missile. If someone wants to attack a car, there are plenty of choices.
And now there will be plenty more. It's emerging technology and techniques to grief / attack it will emerge too. A sharpie pen could cripple a car. Strong light or radio interference might blind the car. Coated / reflective glass on buildings, or carried by pas
Re: (Score:2)
Re: (Score:2)
Maybe the only difference here is that it may be hard to understand afterwards what happened.
I don't know about that. When a serious accident occurs and you see a large group of nerds in lab coats hurriedly packing up strange equipment into briefcases, piling into a van, and speeding away, we can safely assume there was outside interference.
Re: (Score:2)
Indeed it is illegal. Saying this is a flaw with their system is like saying a Russian or Chinese guy who runs out and leaps on your hood for an insurance scam is a flaw in the driver.
Re:Illusions (Score:5, Insightful)
Fortunately, a radar sensor won't be fooled... (Score:3)
... by a painted tunnel.
Re: (Score:2)
I think the point of this exercise is that one input should be weighted against the others to verify threats and risks. Stereoscopic vision would also see that as a flat wall. It's likely that the Tesla has that, but people definitely do.
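A toy sketch of what that weighting might look like: combine each sensor's detection confidence so that a single spoofed sensor can be outvoted by the others. The weights, threshold, and confidence values here are invented for illustration; this is not Tesla's actual fusion logic.

```python
# Toy confidence-weighted sensor fusion (hypothetical weights/threshold,
# NOT Tesla's real algorithm). Each *_conf is a detection confidence in [0, 1].

def fuse(radar_conf, camera_conf, ultrasonic_conf,
         weights=(0.4, 0.4, 0.2), threshold=0.5):
    """Return True if the weighted evidence says an obstacle is present."""
    confs = (radar_conf, camera_conf, ultrasonic_conf)
    score = sum(w * c for w, c in zip(weights, confs))
    return score >= threshold

# Radar spoofed into silence, but the camera clearly sees the cart:
print(fuse(radar_conf=0.0, camera_conf=0.95, ultrasonic_conf=0.7))  # True

# All sensors quiet -- no obstacle reported:
print(fuse(radar_conf=0.1, camera_conf=0.1, ultrasonic_conf=0.1))  # False
```

The point of the sketch is only that an attacker would then have to fool several independent sensing modes at once, which is much harder than jamming one radar.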
Re: (Score:1)
Heck, you don't even need stereoscopic vision to detect that... A single camera is enough (MonoSLAM or similar).
Two examples:
https://www.youtube.com/watch?... [youtube.com]
https://www.youtube.com/watch?... [youtube.com]
If you want to see some really cool things, just search for MonoSLAM on YouTube... Some of the things they do are really cool.
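The core trick behind single-camera depth is motion parallax: track a feature across two frames while the camera moves, and triangulate. A minimal sketch (the numbers are made up for the demo; real MonoSLAM systems estimate camera motion and track hundreds of features):

```python
# Depth from motion with one camera: as the car advances a known
# distance, nearby features shift more in the image than distant ones.

def depth_from_motion(baseline_m, focal_px, disparity_px):
    """Triangulate distance to a feature tracked across two frames.

    baseline_m   -- how far the camera moved between frames (metres)
    focal_px     -- camera focal length, expressed in pixels
    disparity_px -- how far the feature shifted in the image (pixels)
    """
    return baseline_m * focal_px / disparity_px

# A feature that shifts 40 px while the car moves 1 m is about 20 m away:
print(depth_from_motion(baseline_m=1.0, focal_px=800.0, disparity_px=40.0))  # 20.0
```

A painted tunnel fails this test immediately: every point on the wall reports the same depth, so it reconstructs as a flat surface, not an opening.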
Re: (Score:1)
My daughter seems to have a high fail rate in (not)seeing. Equipment cost (pre-failure, discounting cost of the actual car): $0. I don't want to discuss equipment costs post-failure.
If she rode in an auto-piloted car, she would have someone else to blame, and my lawyer would have someone else to sue.
I'm all for the auto-pilot solution.
Re: (Score:2)
Malicious acts can potentially cause Bad Things. News at 10.
Or go one further....a $10 laser pointer can temporarily blind pilots flying large planes.
A driver swerving around can easily cause a crash.
It's the paranoid 'what if' mentality that's resulted in so many pointless laws and regulations around new technology.
blind spots (Score:2)
How about humans? (Score:3)
If you spent the same resources to fool a human driver, how hard would that be?
Re:How about humans? (Score:4, Insightful)
I don't think you need to get very fancy... I would think a laser or extremely bright light bought from eBay would similarly blind a human "sensor".
Re: (Score:2)
I don't think you need to get very fancy... I would think a laser or extremely bright light bought from eBay would similarly blind a human "sensor".
It is in fact very capable of doing so, and for that reason it is illegal to point one at aircraft.
Re: (Score:2)
Re: (Score:2)
That's high-tech! It's got a low-pass filter for a penis.
Re: (Score:2)
You've got one built into every single car...high beams are notorious for blinding drivers on dark roads. (not to mention being extremely annoying even when you've got other cars/lights around)
Re:How about humans? (Score:5, Interesting)
If you spent the same resources to fool a human driver, how hard would that be?
Exactly, for far less than $90,000 you can set up a water curtain projection system that would fool any unsuspecting driver. Put one on a highway and show a film of an approaching wrong-way driving semi and let the hilarity commence.
They are already used as hard-to-miss warning signs on some roads: https://youtu.be/Dk9DjO-_rT8 [youtu.be]
Re: (Score:3)
A system designed to fool radar or sonar or lidar can be invisible to people, and go unnoticed and undetected even after it's caused an accident. If it's a portable system (like mounted in a van), the perp can simply drive off. meanwhile it takes the cops, NTSB
Re: (Score:2)
I almost flattened a pedestrian the other night who was wearing clothing that blended into the background. His pants were the same color as the road, and his shirt the same color as the foliage on the other side of the road, and he happened to be lined up so that his shirt boundary matched the road/foliage boundary of the other side of the street. The stealth was probably not intentional, but effective nevertheless.
Reminds me of a prank where actors wore stripes that blended into the Abbey Road crossing, ma
Re: (Score:2)
For WAAAY less than 90K I can easily cause a human to not see something is there... because the $300 high powered laser would have blinded them....
This article is dumb. There are as many attacks against human drivers that are just as cheap or cheaper that would work better. Want to crash a car at highway speed? Get a slingshot and a piece of spark plug ceramic and take out the poor bastard's windshield on a tricky corner at night. Or shoot a tire out. Or hit them with a 20000 lumen spotlight.
Re: (Score:2)
Re: (Score:2)
Spend?! Hell, I can jump in front of a human controlled car at a blind intersection and earn money while I sleep*!
*in traction of course.
Re: (Score:1)
So it was YOU! [slashdot.org]
This is pseudo research/news (Score:2)
Of course you can trick any sensor invented by man some way or other. That's nothing new. We even know tons of ways to trick the sensors made by god/nature aka our eyes as well. Shine a bright light into them for $10 or maybe $100 and the driver will be forced to drive blind. Or you can have a $0 natural snow storm and the driver will also be on literally very dangerous ground: zero visibility and icy roads.
The point is not that either can be fooled, the point is, is the mechanical sensor better or at least
Re: (Score:1)
1) In a busy street, you could fuck with a lot of autopilots undetected, but you can't do the same thing with bright lights. What is more, a bright light doesn't trick your brain - you react to it accordingly. Since car autopilots are just (relatively) stupid computers that do no more and no less than what their developers programmed them to do, they are MUCH easier to game with a bit of human cunning.
2) Your snowstorm is irrelevant, as this is something one prepares for, or chooses to avoid.
3) There has been no ind
Human navigation system fooled by researchers (Score:1)
Researchers have discovered a way to disable the human "autopilot" system using just $10 worth of equipment. By shining a flashlight in their eyes, they are able to totally disable the primary optical sensors, so preventing the navigation system from avoiding objects in the car's path. While this was only tested in a lab environment, it is feared that Russian agents and ISIS terrorists could use a similar technique to cause mass casualties.
Re: (Score:2)
Re: (Score:2)
Just you wait... there will soon be a flashlight and laser-pointer licence.....
And the moral of this story is... (Score:1)
Don't drive behind a truck full of signal generators on a bumpy road.
Fake security risk (Score:2)
Re: (Score:1)
Tesla is one of the more popular cars in places like Seattle, Vancouver, and SF, that's what.
Re: (Score:1)
Social engineering and target opportunity increase as the number of targets increases. The high value of the vehicles makes takeover devices worth building. Modern thieves, for example, use electronic remote key signal generators for the most popular high-value vehicles in their region, and just "wander" into parking garages or external parking lots and take the cars there.
Sensors are pretty much the same vulnerability pathway. It doesn't require brains to take over vehicles, just persistence
Re: (Score:2)
Of course it is silly since the autopilot has been fooled just by nature.
Re: (Score:2)
Guns are expensive. Why not just throw a brick through a speeding car's windshield?
Re: (Score:2)
It's reasonable research. The equipment may cost $90k now, but that doesn't mean that the techniques developed couldn't be refined, specialized and miniaturized for cheap in the future.
Guns and bricks and homemade spike strips work, but they have limitations that make them limited-use techniques. Jamming autopilot in some way may be something extremely hard to detect, possible to do at a distance, more selectively, or more en masse without much risk of detection.
I think it's reasonable to think abo
Re: (Score:2)
It's reasonable research. The equipment may cost $90k now, but that doesn't mean that the techniques developed couldn't be refined, specialized and miniaturized for cheap in the future.
How about when the gigafactory for autosensing disablement equipment gets going?
Re: (Score:2)
Isn't that the kind of thing the Chinese would turn out? AFAIK that's where people get cell jammers.
Re: (Score:2)
Using a gun is cheaper, but leaves evidence and if somebody sees you using the gun, they might tell the police.
On the other hand, if the radio equipment can be placed inside a backpack (or two), then there is no evidence that I caused the accident. I am just standing there minding my own business and then walk (or drive) away. If the place usually is full of people, nobody would notice or remember me and would not be able to link me to the accident.
Also, the researchers most likely used general purpose equi
All your small children are belong to road kill (Score:1)
Seriously, who autoenables autonomous driving but a fool.
Now, if you'll excuse me, I'll go back to using my 60 mpg stick vehicle that doesn't bleed security info as it was made in 1989.
Re: (Score:2)
Re: (Score:2)
Now, if you'll excuse me, I'll go back to using my 60 mpg stick vehicle that doesn't bleed security info as it was made in 1989.
The bleeding comes after even a minor freeway crash in that coffin, which has to be a Geo Metro or a CRX HF; both complete deathtraps.
This is stupid (Score:4, Insightful)
Re: (Score:2)
Yes, someone going through great effort can cause a crash. I've known of cases where people stood on overpasses and threw down bricks to cause crashes. Nobody published papers on the "brick loophole" in car security. In most of the examples, it'd have been easier to just cut the brake lines. But we have to target the sensors to get media attention, for a non-story.
We're really good at spotting things that distract humans, not so much at spotting things that distract AIs. Even though this was a deliberate attack there could be other things unintentionally causing interference and leading to crashes, or other classes of attack we haven't discovered yet and may be a lot easier to execute.
Re: (Score:2)
We need an AI smart enough to identify its own misoperation before it has affected operation. That's how humans generally work. You recognize when you get tired before it causes errors, but will still generate errors if you choose to operate machinery while impaired. You recognize when oncoming light
Re: (Score:2)
They most likely needed to figure out how to trick the autopilot. For that you need general-purpose equipment that can be configured to produce various signals. When you find out what signal causes the intended outcome, you can build much cheaper equipment that is only capable of producing that one signal.
Bikini terrorism [Re:This is stupid] (Score:1)
Don't need bricks; get a scantily clad attractive female* to walk down the side of a busy road. I can think of several times when such distractions almost got me into a tangle. And it's legal, unlike bricks.
I'd be happy to assist in such research by inspecting the applicants for free.
* Or scantily clad males might work also, who knows. Didn't mean to be discriminatory. (In
Re: (Score:2)
No no, that's been shown to actually improve [liveleak.com] road safety.
Re: (Score:1)
They are measuring speed, not accidents.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
When the logs are analyzed after the crash, the equipment will be long gone. And the car logs won't have the fingerprints of the perp the way a brick would.
Re: (Score:1)
Re: (Score:2)
And the vehicles logs would just show the autopilot operating as intended.
Like the autopilot operating as intended hitting the truck? The logs revealed what was seen, when, and how it was interpreted. The same would happen with most (if not all) of these tricks.
Is this a big deal? (Score:2)
There are plenty of optical illusions for people, too.
Unless this was done on purpose/malevolently (and that could be prosecuted regardless), this seems to me far different from things like hacking into the car's computer itself.
So it's easy to do a DoS attack on auto-drive cars (Score:2)
So it's easy to do a DoS attack on auto-drive cars: just wait for the day when you can drop a $40 box and shut down a major road.
For $90k I could fool a human (Score:1)
The researchers confirmed what Elon Musk has said (Score:2)
The last sentence sums up what Elon Musk has been saying about AutoPilot:
"even though Autopilot is quite capable, there's still no substitute for an attentive human driver, ready to take control at a moment's notice."
The technology is not called "self driving" - it is called Autopilot. Similar to a plane, where course and speed are maintained. Tesla reminds users to keep their hands on the wheel and remain attentive.
No news here. Couple that with the cost of the hack, and there is not much to report. I could fool a
Re: (Score:2)
0.o "Autopilot" literally means "self piloting".
Seriously Tesla/Musk apologists words mean things - and autopilot does not mean what you keep claiming it does. It doesn't mean "an assistant which still requires constant human monitoring and supervision", it means "an automatic system that replaces human operators". From the first line of the Wikipedia entry on autopilots [wikipedia.org] e
Not something that can be encountered in normal co (Score:2)
So not something you would ever encounter.
We aren't worrying about people using high end equipment to make a car crash.
We are worrying about a passive object that is shiny and reflective in just the right way to make it invisible.
Or an active object the size of a small house that because of the light colored paint appears invisible on the horizon.
Those are legitimate concerns at this point.
ZOMG system crashes when exposed to light at 572nm flashing at 13.37Hz while the left rear window control is pressed in
Perspective (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
That's because only 60% of the drivers are "attentive".
Trying too hard .... (Score:2)
Obviously, you want a self-driving car to have the best possible auto-pilot technology you can put into it. But purposeful attempts to trick it into not detecting objects, or into thinking objects are there that aren't really there? That means nothing, IMO. What matters is that it does a reliable job of these things in real-world situations where nobody is TRYING to fool the system.
Human drivers see things all the time and misinterpret them. (There's that popular photo going around social media where someon
Looney Tunes did it first (Score:2)
Humans are probably easier to fool (Score:2)
And that is the reference here. Automatic drive systems do not need to be perfect to be a good replacement. They just need to be better than the average driver. They will start to save lives, though, once they are better than bad drivers.
Of course, everybody believes themselves to be good drivers, but the simple statistical reality is that most are in the range from somewhat above average to really bad.
Can tape kill? (Score:2)
Couldn't you just use black tape to cover the road markings and white tape to make new ones, to veer the car off the road and over a cliff?
For $90,000 (Score:2)
Reaching. (Score:1)
I disagree; this is equivalent to optical illusion (Score:2)
People are susceptible to optical illusions. So are machines, when you understand the assumptions made. People and machines don't have to make the same assumptions, but each is fallible in its own way. As they say, nothing is perfect. No one and no thing is perfect. But is it good enough? Or which is better?
This is a form of radar jamming, and so what? (Score:2)
You can also blind a human driver by shining a sufficiently bright light in the eyes. This fact has not caused anyone to say, "even though human drivers are quite capable, there's still no substitute for a good horse."
moments??? (Score:2)
>"there's still no substitute for an attentive human driver, ready to take control at a moment's notice."
That would be utterly useless. Try "at a fraction of a second's notice". And that simply won't happen once one is in any type of autopilot mode.
Re: (Score:2)
Empirically false. I remain attentive while the cruise control is engaged, and that's a type of autopilot.
Re: (Score:2)
I would hardly call cruise control a type of auto-pilot. Auto-pilot requires the car to steer. Controlling speed is nothing in comparison.
Once the car is doing the two things that require 99% of the driver's attention (steering and speed), almost no "driver" will be able to keep their attention available "at a fraction of a second's notice". Just human nature. It would be like watching paint dry.
BS (Score:2)
Oh, and rain is wet.
fooling a human driver (Score:2)
You can always DoS a human driver too. Laser pointers, or even perfectly natural feminine breasts could distract a human driver.
Re: (Score:1)
Low tech (Score:2)
I'm sure there are much better ways to "fool" it with much more low-tech items.
How does it distinguish the density of an object it detects? Does a paper bag blowing across the road get marked as something it can drive over or will it take evasive action? Where does it draw the line between running over the paper bag and steering into the car in the next lane to have what it might think is a "softer" crash? And how does it tell the difference between a bag and a child, or even a pigeon?
Isn't most of its a
human pilot vulnerable too (Score:2)
Very stupid and worthless (Score:2)
OTOH, a hack in which you modify a single car to interfere with another single car's ability to drive is absolutely WORTHLESS. Why?
Assume that somebody is going to go after another person because they are rich or something. Then said person is likely driving themselves OR has a chauffeur.
Let's assume that this is used to go after a kid or a neighbor, etc. IOW, you just want to punish somebody. Now, you have to
Re:It'd probably slam into a stealth fighter jet t (Score:5, Funny)
Researchers discover that for $100 they can dig a hole, cover it with a thin layer of asphalt, and potentially kill a driver.
In other news, researchers also discover that $3 hedge clippers can cut a brake line endangering drivers.
Researchers discover that $10 high powered flash light carefully timed at a blind corner can confuse a human driver's imaging sensors.
Re: (Score:2)
As a driver would never notice their foot going straight to the floorboard when they had to apply the brakes to shift the car out of park.
Re: (Score:2)
Actually, to play devil's advocate here... a smaller cut would give you a similar feeling on the brake pedal for light braking and then fail catastrophically (or at least not provide adequate braking) under emergency braking... or when you finally emptied all the brake fluid.
Point being, there are malicious failure methods available that work just fine on plain old humans. Simple, cheap, easy ones. To say autopilot is less safe than a human drive because other, complicated and technical, things can fool it is
Re:It'd probably slam into a stealth fighter jet t (Score:4, Funny)
Basically.
I was thinking more of painting the road lines into a rock wall, then painting a picture of a tunnel on said wall.
(Then waiting for a roadrunner to come by)
After all, Coyote v. Acme was this country's longest running product liability suit. Though the plaintiff, a partially disabled male Canis latrans, meticulously documented his problems with Acme's mail order line of bird-trapping hardware in a long series of filmstrips and videos, his evidence was leaked to the public, exposing Coyote to generations of ridicule. Most recently the Ninth Circuit sent the case back to lower courts, ruling that Coyote had no standing to invoke the Americans With Disabilities Act, since the ADA applies only to humans.
Re: (Score:3)
> After all, Coyote v. Acme was this country's longest running product liability suit.
I don't see where he'd have standing to sue under the ADA in any case, since Wile. E. Coyote _won_ his lawsuit for manufacturing defects in 1990.
http://www.newyorker.com/magaz... [newyorker.com]
Re: (Score:2)
Paywalled, but whatever pittance Coyote may have won in the lower courts after being rejected by the Ninth Circuit was Pyrrhic. When Sanyo Portable Hole K. K. of Akihabara, Tokyo, became Roadrunner's exclusive supplier of these highly effective countermeasures to Coyote's use of Acme's flawed devices, Coyote's medical bills from running into unexpected canyon walls made it impossible to continue work. He now resides in an assisted care home in Tusayan, AZ, and is permanently barred by the Parks Service from
Re: (Score:3)
It seems to me that the best choice is if both the human driver is paying attention and the autopilot is on.
And how in the HELL do you figure that when the autopilot is going to make split-second sub-human decisions to react and manipulate a vehicle to dodge an imaginary object that the human can clearly see does not exist? By the time the human reacts, the autopilot may have already put them and others around them in considerable danger (like SLAMMING on the brakes to avoid an imaginary object on a freeway).
Second best is the autopilot by itself (at least it can pay attention all of the time)
And as these sensor hacks get fine-tuned, they will be able to fool the autopilot all of the time. No, t
Re: (Score:2)
The Tesla autopilot feature does NOT (we're randomly using all caps here, right?) randomly jerk you around the road in split second maneuvers.
Then tell me something; if the entire purpose of a car controlled by a machine is to be able to react better and faster than a human, why in the hell would it be designed to literally react slower than one?
Paying "attention" better than a human becomes rather fucking worthless if it reacts worse than an 80-year-old drunk behind the wheel. Split-second decisions are exactly what is often needed to save lives, as humans have demonstrated for decades now. Autonomous solutions are justified for the specific
Re: (Score:2)
Sure. Accidents happen because of distracted driving all the time. Accidents happen because the driver didn't see something in the road all the time. Accidents happen when people swerve to avoid potholes and instead hit something else. Simply put, accidents happen. And it happens to adults, not 5 year olds. Get your head out of your ass.
The test demonstrated that it was possible to fool the car into thinking an imaginary object was in front of it. Now take a 5-year old and stand them in a room and try and convince them that a large imaginary wooden crate is right next to them. Needless to say you would be hard-pressed to convince that toddler, much less an experienced driver. And yet a highly complex autonomous system was fooled, and clearly demonstrates a weakness that humans would likely not suffer from.
My example was in context and r
Re: (Score:2)
Sure, hell, I can fool you into thinking a deck of cards changed color. have a look. [youtube.com]
I hallucinate things on the road all the time when the sun is in my eyes. Peripheral vision, fast-moving scan (because I don't stare locked straight ahead), and dark spots caused by the sun being right-fucking-there will give enough data for my brain to imagine that smudge is that car skidding in front of me, and so render the car there, when in fact it hasn't moved. I have compensation systems to IGNORE THE EVIDENCE I
Re: (Score:2)
Split-second decisions are exactly what is often needed to save lives, as humans have demonstrated for decades now.
Demonstrated the ability to kill tens of thousands of people every year because of the human inability to consistently make split-second decisions? I think that is what you mean. We have a well-known and well-studied problem that is almost completely unsolvable without autonomous driving. Finding issues is fine, but there are a hundred people dying every day whose families can't wait for widespread adoption of autonomous driving.
This technology will save lives and we need it yesterday.
Yes, I'm well aware of the problem we're trying to solve here, but an obituary that reads "killed by autonomous bug #285A" isn't going to make any family member feel better vs one that reads "killed by a drunk driver". In fact, it's likely going to make them even more angry because their loved one that was sober, alert, and attentive behind the wheel was killed by their own damn car through no fault of their own.
Yes, the thousands of people dying every year is something that we do need to address yesterda