Researchers Trick Tesla Autopilot Into Steering Into Oncoming Traffic (arstechnica.com) 186
An anonymous reader quotes a report from Ars Technica: Researchers have devised a simple attack that might cause a Tesla to automatically steer into oncoming traffic under certain conditions. The proof-of-concept exploit works not by hacking into the car's onboard computing system. Instead, it works by using small, inconspicuous stickers that trick the Enhanced Autopilot of a Model S 75 into detecting a change in the current lane and then following it. Researchers from Tencent's Keen Security Lab recently reverse-engineered several of Tesla's automated processes to see how they reacted when environmental variables changed. One of the most striking discoveries was a way to cause Autopilot to steer into oncoming traffic. The attack worked by carefully affixing three stickers to the road. The stickers were nearly invisible to drivers, but machine-learning algorithms used by Autopilot detected them as a line that indicated the lane was shifting to the left. As a result, Autopilot steered in that direction.
The researchers noted that Autopilot uses a variety of measures to prevent incorrect detections. The measures include the position of road shoulders, lane histories, and the size and distance of various objects. [A section of the researchers' 37-page report] showed how researchers could tamper with a Tesla's autowiper system to activate the wipers when rain wasn't falling. Unlike traditional autowiper systems -- which use optical sensors to detect moisture -- Tesla's system uses a suite of cameras that feeds data into an artificial intelligence network to determine when wipers should be turned on. The researchers found that -- in much the way it's easy for small changes in an image to throw off artificial intelligence-based image recognition (for instance, changes that cause an AI system to mistake a panda for a gibbon) -- it wasn't hard to trick Tesla's autowiper feature into thinking rain was falling even when it was not. So far, the researchers have only been able to fool autowiper when they feed images directly into the system. Eventually, they said, it may be possible for attackers to display an "adversarial image" on road signs or other cars that does the same thing. In a statement, Tesla officials said that the vulnerabilities addressed in the report have been fixed via a security update in 2017, "followed by another comprehensive security update in 2018, both of which we released before this group reported this research to us." They added: "The rest of the findings are all based on scenarios in which the physical environment around the vehicle is artificially altered to make the automatic windshield wipers or Autopilot system behave differently, which is not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so and can manually operate the windshield wiper settings at all times."
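For readers unfamiliar with the panda-to-gibbon attacks mentioned above, here is a minimal sketch of how such adversarial perturbations are typically generated against an image classifier. It assumes a generic PyTorch model with illustrative names; it is not based on Tesla's pipeline or the Keen Lab researchers' code.

```python
# Minimal FGSM-style sketch: nudge each pixel slightly in the direction that
# most increases the classifier's loss, producing a near-invisible change
# that can flip the predicted class. Generic PyTorch; names are illustrative.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.

    image: tensor of shape (N, C, H, W) with values in [0, 1]
    true_label: tensor of shape (N,) holding the correct class indices
    epsilon: maximum per-pixel change; small values stay near-invisible
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the sign of the gradient of the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The road-sticker attack is the physical-world cousin of this idea: instead of perturbing pixels directly, the attacker perturbs the scene so the camera sees a pattern the network misreads as a lane line.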
Film at 11 (Score:5, Insightful)
So, optical illusions fool a driver. They just found a kind that fools a digital driver. Film at 11.
Because machines "think" very differently from people, the optical illusions will be very different. No surprise there.
Next we'll get a headline that if you put a number sticker over speed limit signs, human drivers can be tricked into driving at the wrong speed - even though the stickers very clearly have the wrong UV patterns and react to LIDAR in an altered way.
Re:Film at 11 (Score:5, Insightful)
The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.
It points to a big gap in machine learning strategies in general: training generally happens focused on positive correlations, with not a lot of injection of maliciously designed data. So a well-trained model is dumb and just says 'training says always follow lines' and follows them right head-on into traffic.
This is also a sign of likely problems in road construction, where markings are frequently very messed up.
This is not 'a machine can be fooled like a human', it's a reminder that the machine is still a *lot* dumber than a human.
Re: (Score:3)
I've seen lots of drivers do exactly this. That was in Montreal though, so it may be significantly less common elsewhere.
There's usually lots of negative data in training sets, but you're right.
Re: (Score:2)
The goal is "better than humans" as tens of thousands die every year, and not "perfect".
So occasionally someone dies and some kid goes to jail, just like bowling balls to the head.
Re: (Score:2)
Re: (Score:2)
The current poor excuse for 'AI' will never be 'better than humans' because it is fundamentally incapable of anything like 'thinking'; it's just following complicated 'decision trees'. We have no idea how 'thinking' actually works, therefore we cannot build machines that 'think', which is why it fails like this so badly.
I don't think 'you' understand what it 'means' to quote something. Tends to make folks put less stock in your ruminations about AI and the nature of human thought.
Re: (Score:2)
Re: (Score:2)
I think humans give themselves way too much credit for their "thinking" ability. The research suggests what we do is nothing like the logical reasoning most of us assume. Mysterious processes tell us the answer and then, if pressed, we justify it to ourselves.
Re: (Score:2)
I agree, and I think people who use stories like this as "oh look, proof self-driving cars will never work" are wrong. But adversarial examples are an issue that should be solved, and I don't think it's terribly difficult to solve. One of the great things about adversarial examples is that you don't even need more training data to get started, just the output of your own adversarial generator.
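A rough sketch of what that could look like in practice, assuming a generic PyTorch classifier (all names here are illustrative, not any vendor's actual training code): generate adversarial examples from the model itself and mix them back into each training batch.

```python
# Sketch of adversarial training: craft adversarial copies of the current
# batch from the model's own gradients, then train on clean and adversarial
# inputs together. No extra labelled data is needed to get started.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # 1. Craft adversarial versions of this batch from the model itself.
    images_adv = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images_adv), labels).backward()
    images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

    # 2. Train on clean and adversarial inputs together.
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(images_adv), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()
```

This hardens the model against the specific perturbations its own generator can find; it doesn't by itself guarantee robustness against every physical-world trick like the road stickers.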
Re: (Score:2, Insightful)
The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.
If there's active traffic, people will avoid it. But there are plenty of times that human drivers get confused by road markings, especially during construction or with worn paint, poor lighting, rain, and blinding headlights from oncoming traffic.
Re: (Score:2)
If there's active traffic, people will avoid it. But there are plenty of times that human drivers get confused by road markings
Yes.... But machines are supposed to be BETTER. Before self-driving cars are ready, they must be able to avoid jumping into the same lane as active oncoming traffic while traveling down a road or highway, even if the road markings are confusing or in error.
Human drivers aren't great (Score:2)
Yes.... But machines are supposed to be BETTER.
They may be someday. They already are better than some human drivers. There are some people who really should not be allowed to drive and many people drive impaired/distracted with some regularity. Currently your hypothetical average human driver is probably still better than even the best machine driver, but machines are getting better and human drivers are not. Eventually it seems probable that machine drivers will be safer than most (or all) human drivers. Exactly when that happens is unclear bu
Re: (Score:2)
Re: (Score:2)
It has to be able to actually 'think' in order to do that, and this type of software is completely incapable of 'thinking',
No... It doesn't have to "think" to do that, nor would I expect a computer to "think" in the same sense as human thinking.
I would simply expect the vehicle to detect the path of oncoming traffic, and take any necessary correction to not enter that path in spite of confusing road markings.
It just means a more-nuanced "decision tree" as you call it.
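As a sketch of the kind of cross-check being suggested here (purely hypothetical structures and thresholds, not Tesla's actual logic), a lane-geometry proposal from the vision system could simply be vetoed whenever it would steer the car into space already occupied by detected oncoming traffic:

```python
# Hypothetical plausibility gate: reject a vision-derived lane change if it
# moves the car toward an object the perception stack reports as oncoming.
# All field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class LaneProposal:
    lateral_shift_m: float   # proposed lateral move; positive = toward the left
    confidence: float        # vision confidence in the new lane geometry

@dataclass
class TrackedObject:
    lateral_offset_m: float  # object's offset from the current lane centre
    closing_speed_mps: float # positive means the object is approaching us

def accept_lane_change(proposal: LaneProposal,
                       objects: list[TrackedObject],
                       min_confidence: float = 0.8) -> bool:
    if proposal.confidence < min_confidence:
        return False
    for obj in objects:
        same_side = (obj.lateral_offset_m * proposal.lateral_shift_m) > 0
        if same_side and obj.closing_speed_mps > 5.0:
            # An oncoming vehicle occupies the space we'd move into: veto.
            return False
    return True
```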
People are easy to fool (Score:3, Insightful)
The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.
I guarantee you I can find examples of humans who would be fooled. There are a LOT of humans that are quite easy to mislead, and all humans can be misled sometimes. The only difference is that the tactics that fool a human will usually be different from those that fool a machine, but make no mistake that both can be fooled. There are plenty of examples [theweek.com] of people very dutifully following the instructions from their GPS into trouble despite it being painfully obvious that the GPS instructions were faulty
Re: (Score:2)
There are plenty of examples of people very dutifully following the instructions from their GPS into trouble despite it being painfully obvious that the GPS instructions were faulty in some way.
Really? That could never happen! [google.com]
But seriously, I completely agree. It takes very little to confuse, fool, or distract a human. There is nothing surprising about being able to do that, nor should there be something surprising about being able to do the same to a machine with sensory input. The difference is that you can reprogram the machine to not do that next time. Humans are surprisingly resistant to learning not to do dumb shit, and all it takes is a night with no sleep or a bit of trauma in their lives,
Re: (Score:2)
The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.
I guarantee you I can find examples of humans who would be fooled. There are a LOT of humans that are quite easy to mislead and all humans can be misled sometimes.
There are a lot of coyotes who get tricked by misleading road markers, too, and run right into a mountain side.
Re: (Score:3)
Yes, I'm sure you can, idiots will always exist, but I bet you can't find anything that will systemically cause human drivers to
Accidents caused by idiot drivers are still accidents, and harder to fix.
A bad street sign is all it takes (Score:2)
Yes, I'm sure you can, idiots will always exist, but I bet you can't find anything that will systemically cause human drivers to.
Want to bet? All it takes is a badly labeled street sign in the right place and most humans will be fooled at least once. Naturally the failure modes for humans are different from machines but we have 40,000 deaths from auto accidents each year that prove that humans aren't especially safe or reliable drivers.
Furthermore you are aware that software can be upgraded, right? Every machine can learn from the errors of every other machine. You can make a machine less idiotic - humans not so much. When Tesla
Re: (Score:2)
Re: (Score:2)
I'd rather deal with accidents caused by humans than the horror and tragedy of humans being killed because some machine fucked up and there was nothing anyone could do to stop it from happening -- then there's nobody to even blame because a machine did it, you can't even point at a person and say "it's their fault".
The only thing that matters is the number of lives saved. If self driving cars don't achieve significant gains in that area it'll be a non-starter and you will never have to experience that "horror".
As for the state of the art, there's no question. It's the driver's responsibility, whether the car was on autopilot or not. Autopilot is an assist only. The driver is expected to remain alert and ready to take over at any moment. So do not worry, you'll have someone to blame, and that'll make the death of a lov
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
If the Tesla has an issue, ALL Teslas have an issue.
And ... one software update can fix them ALL. You can't do that with humans.
Re: (Score:3)
Or make them all go berserk. You can't do that with humans, but you surely can with things like OTA updatable Siemens centrifuges or vehicles.
Re: (Score:3)
The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.
Complete bollocks. Care to set up a situation like that and see how many drivers follow the dots blindly?
Re:Film at 11 (Score:5, Informative)
The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.
Complete bollocks. Care to set up a situation like that and see how many drivers follow the dots blindly?
Unfortunately this situation occurs quite frequently at road construction sites where new lanes are overlaid over existing lanes. The old and new sets of lane markings make lane localization difficult at times even for humans to know where the true lane lies. Often in these cases, the human will follow the preceding and surrounding traffic in an attempt to avoid collisions, even if the true lane appears to be otherwise.
Re: (Score:2)
Humans are very good at spotting things that are not really markings, such as spilt paint or ribbons blown into the road. White tape is fairly common in construction and often falls off vehicles.
A human can spot a long tyre print made from spilt paint and not follow it. A machine... It can, but it needs to be trained and tested.
What's most interesting here is that Tesla started out claiming Autopilot was amazing, and setting the driver attention detection system to be extremely lax. You could go for many min
Re: (Score:2)
There's a coyote and roadrunner joke [youtube.com] here I can't quite pin down.
Re: (Score:2)
Re: (Score:3)
While it is little consolation to the deceased, I fail to see how placing markers designed to deliberately cause a car wreck is not premeditated murder. Someone could easily be lying in wait in the woods with a rifle and shoot drivers, or waiting on an overpass to drop bricks. That latter one, sadly, happens with some regularity. Murdering people is pretty easy.
I am more interested in cases where it gets confused by routine bad situations. Construction is one, although my experience is that the car is telling
Re: (Score:2)
While it is little consolation to the deceased, I fail to see how placing markers designed to deliberately cause a car wreck is not pre-meditated murder. Someone could easily be lying in wait in the woods with a rifle and shoot drivers, or waiting on overpass to drop bricks.
While there are fortunately very few people who are demented enough to shoot at people on the highway, there are many cases of mischievous teenagers who toss rocks from overpasses onto unsuspecting drivers. I have no doubt that these teenagers would try the "fool the self-driving car camera" trick after reading about it.
Re: (Score:2)
it's a reminder that the machine is still a *lot* dumber than a human
Depends on the classification of dumb. We've all seen massively paradoxical things being done by drivers who were confused by the lane markings. Hell, rubbernecking leads to more crashes as attention is drawn off the road and onto the accident, meaning the driver may not see the car in front of them slamming on the brakes. So really, depending on how one defines "fooling the driver", one could easily say humans are just as easily fooled by things. The massive difference here is that while evolution of our b
Re: (Score:2)
And it always will be, until it stops being a machine.
It's the self-preservation instinct that most modern people forget is there that keeps you alive in, among hundreds of other situations, this one too.
Some of it is genetic
Re: (Score:2)
I get your point. But because we live in the age of sensationalist headlines the authors never bothered to tell you that, although it would steer into on-coming traffic, if there WERE actually on-coming traffic it would start blaring at you (loud enough to wake the dead), automatically braking, shaking the wheel, etc. Ask me how I know...
My personal feeling about autopilot / partial self-driving (owning a car that has it) is that I am not a fan. EVs are awesome, but I think autonomous operation of a vehi
Re: (Score:2)
Exactly. Humans can be fooled - sure. But if the paint has peeled off the roadway, or is covered with snow, or somebody shot holes in the speed limit sign, or something else "out of the ordinary" our brains immediately detect "situation not normal" and we quickly come up with plan B. Computers though apparently stay on plan A until they hit a wall.
Situation Normal, Situation Normal, Situation Normal .... Deploy Airbags!
The story was put forward as a security event. I think in the end though that "machine is
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Machines don't think as you probably understand, which is why this is a problem - the machine can be fooled but doesn't detect that what fools it isn't logical. A human generally detects those cases and adapts to the situation. The problem is the reliance on pseudo-AI pattern matching without the actual AI, the part that would make the machine "think".
Another problem is the sensitivity of current systems, in that small patterns that would just make a human somewhat confused instead are detected as a highly acc
Re: (Score:2)
Because machines "think" very differently from people, the optical illusions will be very different. No surprise there.
I'd like to preface this comment with a disclaimer: I am not attacking or insulting you in this comment; you just happen to be an exemplar of a point I've made in the past.
So-called 'machine learning', 'deep learning algorithms', 'neural networks', and everything else they're erroneously calling 'Artificial Intelligence' these days, is completely and totally incapable of 'thinking', 'cognition', 'consciousness', 'sentience', or any other major feature or phenomenon that we associate with actual Intelligence.
Re: (Score:3)
Tell that to the fella who was denogginized by an 18 wheeler! Oh wait, you can't, LOL
Ask this guy if he was glad to be in a Tesla when he fell asleep at the wheel:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Ask this guy if he was glad to be in a Tesla when he fell asleep at the wheel:
Another way to look at it: he fell asleep *because* he was in a Tesla. The problem with Teslas is that they are essentially self driving cars, minus the certification of being so (because such a thing doesn't exist yet). Of course people are going to fall asleep, pull out their laptop, watch a movie, and so on.
That's the realistic future of self driving cars. There never will be a stamp of "fully autonomous", and the driver will always have ultimate responsibility for the operation of the vehicle. They will
Re: (Score:2)
denogginized
Love it
Re: (Score:2)
Love it
Decapitation is hilarious! Let's all joke about their gruesome death, and as a bonus, think about their grieving family and chuckle.
Misleading headline (Score:5, Informative)
Re:Misleading headline (Score:5, Insightful)
They even said, if there had been cars there, the Tesla likely would have noticed them and not blithely crashed head on.
And if the AoA sensor was reading wrong, the pilot likely would have taken control and not let the plane crash. Those "likely" sure are dangerous.
Re: (Score:2)
They even said, if there had been cars there, the Tesla likely would have noticed them and not blithely crashed head on.
And if the AoA sensor was reading wrong, the pilot likely would have taken control and not let the plane crash. Those "likely" sure are dangerous.
You do know that this issue was a bit more complex than you seem to indicate. Part of the problem with the MAX was the pilots DID intervene, they just didn't understand what was happening and let the aircraft trim itself nose down instead of countering with nose-up trim. In short, they *didn't* take control, control of the right thing at least. Both aircraft were 100% flyable, the pilots just had to figure out what was happening and deal with the issue in the time they had. These guys didn't have enough
Re: (Score:2)
So because technology fails one time we should give up on it?
Both your example and the one in the article have been addressed at this point. And due to only one or two issues, the technology has been fixed on every model of that vehicle.
That's the nice thing about technology.
The second, significant failure in your reasoning is not considering the lives saved with the technology working correctly. While it's not an easy calculation, ignoring the benefit and focusing on the harm can easily lead you to cause m
Re: (Score:2)
Re: (Score:2)
Then there were all those cases where the AoA sensor was right and the pilot let the plane drop out of the sky yanking back on their stick with the stall alarm blaring at high volumes as they died.
Thanks but no thanks. I'll take predictable, programmable, and above all fixable computers over fallible squashy blobs of barely thinking water sacks behind the wheel any day.
There's a reason the generally accepted error rate for humans is 10% on demand and for a well designed machine it's several orders of magnit
Re: (Score:2)
One does get the impression that people are trying very hard to get us riled up but are running out of ideas to manage it.
Re: (Score:2, Flamebait)
They even said, if there had been cars there, the Tesla likely would have noticed them and not blithely crashed head on.
Oh, that's nice. So my Tesla will just get duped into crossing over to the wrong side of the road, but will swerve back (in which direction?) when it eventually encounters traffic barreling toward it in what it wrongfully considers to be its lane, while the other cars will be in the process of taking evasive action (in which direction?) due to a car barreling toward them in THEIR lane.
Yeah, no problem whatsoever.
Re: (Score:2)
Well.. I'm not sure that swerving into the oncoming lane is the best option when somebody crosses over into your lane..
My druthers would be to hit the binders and head for the shoulder and hopefully get the horn sounding... That seems like a better option in general. It may not be the right call all the time, but it seems like the best option in a bad situation.
Get out of the other car's way, staying on your side of the road, get on the brakes and scrub as much energy off and increase the time before the
Re: (Score:2)
Completely agree that's a great approach when we have time to calmly and rationally think it through from the comfort of our armchairs, but in the trenches I'd expect that to break down in somewhat inverse proportion to the amount of reaction time available, experience of the driver, familiarity with the road, etc.
And that's for human drivers -- when we're debating whether it's ok to have this sort of behavior out of self-driving cars, we need to consider what can be expected to happen when the oncoming tra
Re: (Score:2)
You do what you practice, in advance, or it's anybody's guess what you will do. I suggest that if you don't have the presence of mind in panic situations that you go out and practice; actually, even if you don't tend to panic, practice. Have somebody randomly declare an emergency and time how long you take to respond correctly. Even mentally walking through these exercises will help prepare you for when it really happens.
Far too often we run headlong into situations without a plan for when things go wro
Re: (Score:2)
You mean like it wouldn't blithely crash head-on into a concrete divider?
Ooops.
True But..... (Score:2)
"The rest of the findings are all based on scenarios in which the physical environment around the vehicle is artificially altered to make the automatic windshield wipers or Autopilot system behave differently, which is not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so and can manually operate the windshield wiper settings at all times."
While I agree that it shouldn't be a realistic concern, peop
Re: (Score:3, Insightful)
Real airline pilots have tons of training, understand the limits of the systems, and are literally PAID over six figures to do the terribly boring job of monitoring the system. Tesla owners could have zero training, are certainly not privy to the actual system limitations, and are shown tons of marketing indicating the main benefit of autopilot is the ability to NOT pay attention.
But I can see how they are "basically the same thing".
Lidar hacking would be worse (Score:2)
Human drivers too would be affected if someone adds fake lane marking. I remember a prankster was arrested for rearranging the traffic cones in a construction zone to create two colliding lanes. T
Re:Lidar hacking would be worse (Score:4, Funny)
http://www.humoar.com/fake-roa... [humoar.com]
Re: (Score:2)
Lidar hacking would actually be more difficult because it's a scanning laser system. Lidars aren't used for identifying lane markers but 3D physical objects. You'd almost need something like a holographic plate to trick a Lidar system, which isn't going to be that easy to stick to the road, and even then it'd at best be tricking the Lidar into believing there's a hole or lump in the road that isn't there.
My research indicates... (Score:5, Insightful)
I found that it's super easy to make human drivers crash with a simple $5 laser.
It's amazing how many of our systems only work with the underlying assumption that we're not actively trying to murder each other at any given moment.
Re: (Score:3)
Dammit! This is why we can't have nice things.
Re: (Score:2)
Re: (Score:2)
How about dropping a couple of big chunks of concrete on the road, in the dead of night? When you throw them from a pick-up truck, that's easier than getting out and carefully applying stickers to the road.
Re: (Score:2)
How about dropping a couple of big chunks of concrete on the road, in the dead of night ? When you throw them from a pick-up truck
Now you're talking -- that sort of thing would be just as easy for any random person to pull off (because everyone has a pickup truck and is capable of slinging around chunks of concrete big enough to cause cars to swerve around them), would have exactly the same effect on cars and traffic (in that people would of course maintain speed while swerving around them and not return to their own lane afterward), and is just as hard for the average driver to detect and for the average good samaritan to remedy.
Re: (Score:3)
Unless you drop them immediately in front of or on a car, people can see them. In Seattle, dodging potholes and concrete chunks in the roadway is de rigueur.
Re: (Score:2)
Redundant? Really? If that's the best fit you reflexive Tesla apologist mods can find, maybe it's a sign you're trying too hard.
Engineers (Score:2)
Were these engineers contracted out to Boeing to design their MCAS system for the 737max?
Seriously, the design pattern of a life-critical system that makes decisions based on one set of or type of sensor is asinine. Boeing should have had the MCAS's AoA indicator cross-checked with velocity, GPS, and engine data. Tesla should have the wiper control's visual sensor cross-checked with a humidity sensor, and the lane sensor cross-checked with a LIDAR. Isn't this just basic stuff here? I don't consider myself a gen
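A toy illustration of the cross-checking idea above, using invented sensor names rather than Tesla's or Boeing's actual interfaces: act only when independent signals agree, so a single fooled or failed sensor can't drive the decision by itself.

```python
# Hedged sketch of multi-sensor voting for the autowiper case: require at
# least two independent signals before acting. Sensor names and thresholds
# are assumptions made for the example.
def should_activate_wipers(camera_rain_score: float,
                           relative_humidity: float,
                           wheel_spray_detected: bool) -> bool:
    votes = 0
    votes += camera_rain_score > 0.7   # the vision network thinks it's raining
    votes += relative_humidity > 0.9   # ambient humidity consistent with rain
    votes += wheel_spray_detected      # road spray reported by other sensors
    return votes >= 2
```

The same pattern generalizes to the lane case: a camera-derived lane hypothesis would only be acted on if it's consistent with an independent source such as LIDAR geometry or map data.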
Self driving cars will always be dangerous (Score:2)
Computers will never be people. I don't think we WANT them to become that smart. Imagine the moral questions on that.
Google Car system better (Score:2)
These and other funky glitches are reasons why I wouldn't really want to fully depend on the Tesla system. Google Car on the other hand uses a much larger complement of sensors and a 3D space mapping LIDAR to avoid these issues, unless you're going to go as far as placing a styrofoam lifesized car or panel onto the road which would almost fool real-life drivers as well. Google believes in the concept of making sure the system fully works instead of taking dangerous compromises.
Like sitting beside a first-time teen driver (Score:5, Insightful)
You're not in control, but you have to be constantly ready to take control. You don't have insight into its mental processes so you never know what it's about to do, but you have to be constantly ready to react to what it just did.
And people find driving with Autopilot to be less stressful than driving without it? I guess I'm different from most people.
Re: (Score:2)
And people find driving with Autopilot to be less stressful than driving without it? I guess I'm different from most people.
Because in most cases the system works fine, and people get complacent.
Re: (Score:3, Informative)
While "autopilot" is engaged, you do have visibility to "what the car sees" on the screen. That tells you what obstacles it sees as well as where it thinks the vehicle lanes are. If they don't seem to make sense to what you see, then it's time to take over.
Like the "autopilot" in planes, when the cruise control takes over, it reduces cognitive load because the driver doesn't need to pay attention to as many things. That translates into less stress and the ability to pay attention for longer.
If the driver d
Re: (Score:3)
While "autopilot" is engaged, you do have visibility to "what the car sees" on the screen. That tells you what obstacles it sees as well as where it thinks the vehicle lanes are. If they don't seem to make sense to what you see, then it's time to take over.
Like the "autopilot" in planes, when the cruise control takes over, it reduces cognitive load because the driver doesn't need to pay attention to as many things. That translates into less stress and the ability to pay attention for longer.
If the driver does other things instead, that's really the driver's fault. Though Tesla's marketing isn't really helping on that front, either.
Bumping this up for visibility, because the AC is spot on.
Everything I've heard from Tesla owners driving moderate to long distances is that it's far less stressful with autopilot. Much less mental fatigue, because there's a lot less you need to do. It's not nothing, but when your car largely stays in its lane, slows for traffic ahead of you, auto-brakes if there's an obstruction, monitors your blind spots, turns on the wipers when it rains, and figures out how far you can go before recharging and suggests
Re: (Score:2)
I guess I'm different from most people.
Probably because you have senseless distrust in electronics. If you're nervous about a computer steering your car, I can't imagine what a wreck you must be on the road with all those other variables you can't control.
Workzones with lines all over the place may trigge (Score:2)
Work zones with lines all over the place may trigger this??
Re: (Score:2)
In my state we have a "no cell phone use in work zones" law, and giant signs before all work zones notifying drivers. I'd bet that the same will happen with self-driving cars as they become more popular. And like the no-phones law, most people will obey it, some won't and will get fined, some won't and will get into accidents and fined, and some will get into accidents and injure someone else and get fined and potentially also get jail time.
self-driving with no controls? or taxis? will do w (Score:2)
self-driving with no controls? or taxis? will do what in work zones then?
Sure (Score:2)
A plastic bag over a stop-sign should work too and it would get the non-Tesla drivers as well.
Also continuing the middle line into the abyss and hiding the original line that goes around.
Putting a fake stop-sign on the middle of the highway should be fun too.
No need to be a 'researcher' for stuff like that.
Re: (Score:2)
Putting a fake stop-sign on the middle of the highway should be fun too.
Or painting a fake tunnel on a rock.
Re: (Score:2)
Putting a fake stop-sign on the middle of the highway should be fun too.
Or painting a fake tunnel on a rock.
The great thing about painted tunnels is that birds can go through them but predators cannot.
Re: (Score:2)
The great thing about painted tunnels is that birds can go through them but predators cannot
Provided you use ACME paint, of course.
Re: (Score:2)
A plastic bag over a stop-sign should work too and it would get the non-Tesla drivers as well.
Would it? I suspect it could trip up a bad driver, but any decent driver should be able to handle the situation safely.
When I approach an uncontrolled intersection (one without a traffic light or a stop sign / other signage) I look for cross traffic and am prepared to stop. Part of my evaluation as to whether an intersection is uncontrolled or not is to look at both the signage intended for me AND the signage intended for the cross traffic. If I don't have a stop sign and the cross traffic doesn't have a stop sig
Too much faith there, buddy (Score:2, Insightful)
Oh yeah, I'm sure the majority of people can be trusted to remain ready at all times to take over the system called "Autopilot"... Most would assume they can just fall asleep and the thing will magically drive itself. People are dumb, crash reports at 11.
If Tesla was run by someone other than Musk? (Score:2)
Would stuff like autopilot be considered less controversial?
I'd guess it would get promoted by the company as something other than quite such an autonomous self-driving platform.
Volvo (and I'm sure others, I've only been exposed to Volvo's system personally) has what amounts to a nearly self-driving system -- distance sensing cruise, lane centering, you very nearly don't need to "drive" to drive, yet there's not nearly the constant promotion/hostility to their system and other similar ones.
Even my lowly Sub
Shouldn't be surprised (Score:2)
Re: (Score:2)
Re: (Score:3)
The military finally admitted to the UFOs. Do you get any news at all?
LOL.. Technically maybe, as in "We don't (at this point) know what that was," but not in the sense that aliens have landed. Could have been swamp gas, weather balloons, optical illusions or even a bad acid trip; we don't know, but nobody has any evidence that aliens landed.
Re: (Score:3)
You'd have to be able to predict the future to know exactly where to place the stickers where the assassination victim would head-on the truck coming the other way
After observing someone for a little while, you can start to predict their movements. Most people operate on a general routine and schedule. You can know their route to work, and approximately when they will get there. Your confederate could be driving the truck. I'm sure there are less elaborate ways to do someone in, however.
Re: (Score:2)
https://www.nytimes.com/1994/0... [nytimes.com]
My mom's car was hit by a D battery thrown off an overpass once. She and her passenger saw the kids toss it. I don't think it was assassins.
I was once driving hwy 45 throu
Re: (Score:2)
Re: (Score:2)
Just don't place the stickers on both sides or they will still miss each other.
Re: (Score:2)
Or you just place them in an area where almost everyone drives a Tesla like around Google or Facebook HQ.
As long as it's the people driving them, it's fine.
Re: would never work in real life (Score:2)
This wouldn't work on a freeway since they are generally divided by K-rail, which is too heavy to move without a crane.
Lane markings are standard and set in a way for a very specific reason. There are a ton of regulations around how they work.
Plus, even if you got the Tesla onto oncoming traffic, who's to say it wouldn't see the approaching car and ABS itself to a dead stop, which it is supposed to do. Hopefully, the other driver does the same!
Re: (Score:2, Informative)
I'm somewhat amazed that you're almost 100% wrong. Both this and the Boeing have a human override, the Tesla far more so than the plane, for good reason. The plane that crashed had the same issue resolved the day before by following the proper steps to address it. Yes, there was a problem with the plane, but it was solvable by the pilots. The failure the next day was threefold, and not just a technical issue. The malfunctioning instrument wasn't fixed, the pilots weren't notified of the issue or the actions
Re: (Score:2)
Re: (Score:2)
The fact that one group of pilots did it right and the other didn't shows that they had the time. The question is if they had the training. That's looking like a part of the issue. Having a single point of failure and selling instrument failure lights as an upgrade seem to be the largest issue.
But you're still ignoring the fact that the MCAS likely kept a bunch of planes in the sky before an instrument failure brought a couple down. Pointing to it as some horrible idea that humans should never have done is
Re: (Score:2)
"A separate section of the report showed how the researchersâ"exploiting a now-patched root-privileged access vulnerability in Autopilot ECU (or APE)â"were able to use a game pad to remotely control a car. That vulnerability was fixed in Tesla's 2018.24 firmware release."
Fact:
By hacking the web browser in your car, a random third-party can cause your car to steer in any direction the attacker likes using a toy joystick.
No matter WHAT the theoretical example, that shouldn't even be possible.
This is