Dashcam Video Shows Tesla Steering Toward Lane Divider - Again (arstechnica.com)
AmiMoJo shares a report from Ars Technica: The afternoon commute of Reddit user Beastpilot takes him past a stretch of Seattle-area freeway with a carpool lane exit on the left. Last year, in early April, the Tesla driver noticed that Autopilot on his Model X would sometimes pull to the left as the car approached the lane divider -- seemingly treating the space between the diverging lanes as a lane of its own. This was particularly alarming, because just days earlier, Tesla owner Walter Huang had died in a fiery crash after Autopilot steered his Model X into a concrete lane divider in a very similar junction in Mountain View, California.
Beastpilot made several attempts to notify Tesla of the problem but says he never got a response. Weeks later, Tesla pushed out an update that seemed to fix the problem. Then in October, it happened again. Weeks later, the problem resolved itself. This week, he posted dashcam footage showing the same thing happening a third time -- this time with a recently acquired Model 3. "The behavior of the system changes dramatically between software updates," Beastpilot told Ars. "Human nature is, 'if something's worked 100 times before, it's gonna work the 101st time.'" That can lull people into a false sense of security, with potentially deadly consequences.
More likely (Score:2)
He isn't replicating the situation consistently and it's never been fixed.
Re: (Score:2)
1) Better and more plentiful on-the-road sensors that self-driving vehicles can make efficient use of.
2) AI traffic control that monitors and, at some level, literally drives the vehicles on the road.
3) A quick and easy way for the driver to take control.
4) A system that makes sure the driver stays alert. Maybe follow Volvo's driver monitoring system, which will, as necessary, slow down, pull over, stop, and even cut off the vehicle (a rough sketch of such an escalation policy follows this list).
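Something like that Volvo-style escalation could be structured as a simple state machine. A minimal sketch, with the stage names and thresholds entirely invented for illustration:

```python
from enum import Enum, auto

class Stage(Enum):
    ALERT = auto()      # driver responsive, no action
    WARN = auto()       # audible/visual warning
    SLOW = auto()       # reduce speed
    PULL_OVER = auto()  # steer to the shoulder and stop
    LOCKOUT = auto()    # vehicle disabled until reset

# Hypothetical thresholds: seconds of continuous driver inattention
ESCALATION = [(5, Stage.WARN), (15, Stage.SLOW),
              (30, Stage.PULL_OVER), (60, Stage.LOCKOUT)]

def stage_for(inattentive_seconds: float) -> Stage:
    """Map continuous inattention time to the highest stage reached."""
    stage = Stage.ALERT
    for threshold, s in ESCALATION:
        if inattentive_seconds >= threshold:
            stage = s
    return stage

assert stage_for(2) is Stage.ALERT
assert stage_for(20) is Stage.SLOW
assert stage_for(90) is Stage.LOCKOUT
```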
Re: (Score:2)
Indeed, it clearly saw the barrier in the left lane. But there was no barrier in the right lane. It just didn't understand that the arrow sign on the barriers in the left lane means the right lane is closed and you're supposed to detour to the side (which it saw as an exit).
Every place has its own edge cases. There's a truly daunting volume of them. That's why I laugh about companies that are making geofenced "robot taxi services", which can only drive on specific roads in specific cities. I mean... great, but that doesn't generalize to driving everywhere.
Re: (Score:2)
It didn't "understand it" because it doesn't "understand" anything. It is a joke. How many permutations of "edge cases" are there on the planet? Billions. Trillions. Quadrillions. NNs were invented in the 1960s. They aren't new. They are good for some things, but will ultimately fail.
Re: (Score:1, Insightful)
He never reported the bug because he's apparently unaware of the in-vehicle bug reporting system [reddit.com], yet seems surprised that it's never been fixed.
Neural net vision systems train to their dataset. If your edge case is not in the dataset, it's not going to be learned. Self-driving vehicles without a driver at the wheel (Level 5) are not going to be viable for years, because there's such a vast multitude of edge cases, and the only way to learn them is to collect an edge-case dataset. Until then, you're not getting Level 5.
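A toy illustration of the point (nothing like a real driving stack): a classifier trained on one distribution will happily emit a confident-looking label for an input far outside it, rather than "I don't know":

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two well-separated classes, "lane" vs "barrier"
lane = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
barrier = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
centroids = {"lane": lane.mean(axis=0), "barrier": barrier.mean(axis=0)}

def classify(x):
    """Nearest-centroid classifier: always returns *some* label,
    even for inputs unlike anything in the training data."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

# An out-of-distribution "edge case" far from both classes still
# gets a label instead of a refusal.
print(classify(np.array([-40.0, 3.0])))  # -> "lane"
```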
Re: (Score:2)
You just answered why self-driving will never work with the current approach: you can only train based on a dataset, and you cannot create a dataset large enough to cover all permutations. We realized this in the 1970s with NNs, but now a new generation is learning it all over again. It works better, because processing speeds and data storage have increased, but it is still the same faulty crap underneath. Tesla is a joke, and autonomous driving is a joke too. "Enhanced Summon" is the best you are going to get.
Re: (Score:2)
When my car tries to kill me I don't fuck around with the dashboard to try and report it, I go to the dealer and tell them to fix the worthless piece of shit.
This is not an app on your fucking phone. This is a car. It should be road-worthy. It is not.
Also, as for this driver's particular case, I don't think it's hard to see what's going on.
Very true. It's broken. It can't be trusted. It needs to be switched off and any money paid for that feature refunded.
Tesla and Boeing (Score:2, Funny)
Are they sharing the same autopilot dev team?
Tesla's Autopilot automatically takes aim at anything the camera doesn't recognize, and the Boeing 737 MAX autopilot automatically takes a 90-degree plunge toward the ground the moment something abnormal happens.
There are parallels here...
Re: (Score:2)
The Boeing MCAS system is used when autopilot is OFF. Boeing's MCAS system is an aid to manual flying, to make the plane behave like the older generation of 737s. Boeing's design flaw was to use a single input to provide the angle-of-attack value, which created a garbage-in, garbage-out scenario. Worse still, the MCAS system triggered repeatedly, eventually running the horizontal stabilizer trim to the end of its travel.
Boeing's design flaw could easily be predicted as having the potential to crash the plane.
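The single AoA input is the crux. A hedged sketch (obviously not Boeing's code; the sensor interface and limits here are invented) of the cross-check that was missing:

```python
DISAGREE_LIMIT_DEG = 5.5  # hypothetical allowable AoA sensor disagreement

def mcas_trim_command(aoa_left_deg: float, aoa_right_deg: float) -> float:
    """Return a nose-down trim command, or 0.0 when the sensors can't be
    trusted. With a single AoA input, one failed vane drives repeated
    nose-down trim (garbage in, garbage out); cross-checking two sensors
    lets the system inhibit itself on disagreement."""
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREE_LIMIT_DEG:
        return 0.0  # disagreement: inhibit MCAS and annunciate to the crew
    aoa_deg = (aoa_left_deg + aoa_right_deg) / 2
    return 1.0 if aoa_deg > 14.0 else 0.0  # hypothetical trigger threshold
```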
Not driving towards "lane divider" (Score:5, Interesting)
It's clear in the video that the Tesla is trying to take the left lane, which has that strange signage showing it is closed. When the driver steers back to the right at that point, it is heading towards the divider, but the car is trying to take the lane that goes to the left of the barrier. That's different from "the car is trying to steer into the lane divider".
In my 30+ years of driving I have never seen that kind of signage or markers that are apparently used to dynamically close lanes at certain times. I would wonder what I was seeing myself the first time I encountered that.
It looks like two things are going on:
1) The visual system of the Tesla does not understand the signage indicating that a lane/offramp has been closed.
2) The GPS routing shows it as a viable route, when it is in fact only intermittently open (a sketch of the missing cross-check follows).
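If that's right, the failure is two subsystems disagreeing with nobody arbitrating. A hedged sketch of what such an arbitration rule might look like; the function, names, and threshold are all invented:

```python
def choose_route(map_says_open: bool, vision_confidence: float) -> str:
    """Arbitrate between the routing layer and the vision layer.

    Hypothetical rule: only take a branch when the map says it exists
    AND vision is confident the lane is actually open; otherwise stay
    in the current lane and flag the disagreement.
    """
    CONFIDENCE_FLOOR = 0.9  # invented threshold
    if map_says_open and vision_confidence >= CONFIDENCE_FLOOR:
        return "take_branch"
    return "stay_in_lane"   # and log the disagreement for review

# The reported behaviour looks like "map says open" winning outright:
print(choose_route(map_says_open=True, vision_confidence=0.3))  # stay_in_lane
```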
Re: (Score:2, Insightful)
In my 30+ years of driving I have never seen that kind of signage or markers that are apparently used to dynamically close lanes at certain times
YES. That's what's hard about automated driving! Will we expect all construction companies everywhere to adopt universal signage and clean it and maintain it accurately? Not bloody likely!
Re: (Score:1)
Exactly. That is why autonomous cars won't work until we build a road system FOR autonomous cars. Billions of dollars are being wasted on this effort.
Re: (Score:1)
Um, sorry, but how do you get "many extra seats"? A relatively modest 6-carriage train can seat over 400, and 5-seater cars, travelling at 70mph with 2 seconds headway would have to occupy over 3 lane-miles to have that many seats. I'll admit I don't know precisely how much headway a train needs, but I'm fairly sure 3 miles is more than enough headway even at 70mph+
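The arithmetic holds up, for anyone who wants to check it:

```python
MPH_TO_FT_PER_S = 5280 / 3600  # 1 mph = ~1.467 ft/s

seats_needed = 400
seats_per_car = 5
speed_mph = 70
headway_s = 2

cars = seats_needed / seats_per_car                    # 80 cars
spacing_ft = speed_mph * MPH_TO_FT_PER_S * headway_s   # ~205 ft per car
lane_miles = cars * spacing_ft / 5280
print(f"{lane_miles:.2f} lane-miles")                  # ~3.11
```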
Re: (Score:1)
Exactly. And that's why the Boring Company and Hyperloop are even more idiotic ideas. He wants to put CARS on sleds in tunnels. Pretty amazing.
Re: (Score:1)
What "research"? They are doing the same crap that was done in the 1970s. It still doesn't work.
Re:Not driving towards "lane divider" (Score:5, Insightful)
Will we expect all construction companies everywhere to adopt universal signage and clean it and maintain it accurately? Not bloody likely!
Huh? You Americans have a problem with standardising road and construction signage? To answer your question: yes, it is perfectly reasonable to expect a construction company to put in place the correct procedures and equipment in order to maintain safety. That is literally a good chunk of the job of construction management.
Re: (Score:2)
Hmm, no, it does not. Shit, even the 'no overtaking' sign is different and there are only two variants on that one worldwide*.
*ish
Re: (Score:2)
This is the same trouble we have in the US with states that allow non-English speaking people to take their driver's exams in their native language. I was blown away when my ex-wife was allowed to take it in Korean. I had a Vietnamese coworker who argued that it wasn't necessary until I was able to show him about a dozen examples of English signage that would have required it.
Re: (Score:2, Informative)
It's not clear if the car would have avoided the lane divider. It doesn't look like it but it's possible.
Either way, this is a known weakness of the Tesla system. It doesn't prompt you to take over, and there have been multiple crashes.
If I were writing that software, suddenly finding that the lane was very wide, or that a major correction was required, would set off all kinds of warning bells.
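Sketching the kind of sanity check the parent describes; every number here is invented for illustration:

```python
MAX_PLAUSIBLE_WIDTH_M = 5.5   # hypothetical: US lanes are ~3.7 m wide
MAX_STEER_RATE_DEG_S = 8.0    # hypothetical steering-correction limit

def should_demand_takeover(lane_width_m: float, steer_rate_deg_s: float) -> bool:
    """Flag cases where the perceived 'lane' is implausibly wide (e.g. the
    gore area between diverging lanes) or the planner wants an abrupt
    correction -- both reasons to alert the driver instead of silently
    steering."""
    return (lane_width_m > MAX_PLAUSIBLE_WIDTH_M
            or steer_rate_deg_s > MAX_STEER_RATE_DEG_S)

assert should_demand_takeover(7.2, 1.0)      # gore area read as a lane
assert should_demand_takeover(3.7, 12.0)     # sudden major correction
assert not should_demand_takeover(3.7, 1.0)  # normal driving
```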
Re: (Score:1)
3) The car has absolutely no concept of what's actually on the road in front of it, and yet people try to pretend it's capable of driving.
You don't drive cars on public roads by GPS data alone. Weird shit happens on roads - temporary cones, closed lanes, a policeman waving you away from a burning truck... none of those are GPS'd, properly signposted or probably even listed explicitly in the highway code. But you still have to drive knowing what you need to do or... get this... slow the fuck down if you don't know what's going on.
Re: (Score:2)
Not fight the driver who's trying to steer away, causing you to then aim at a solid barrier at some significant speed.
There is zero evidence that the car was fighting the driver.
Re: (Score:3)
"In my 30+ years of driving I have never seen that kind of signage or markers that are apparently used to dynamically close lanes at certain times. I would wonder what I was seeing myself the first time I encountered that."
In my nearly 30 years of driving I've seen this type of signage lots of times. I know of a bridge that for years had an alternating-direction center lane (westbound in the morning, eastbound in the evening), and access to that lane was controlled by signage like this; that bridge has since been rebuilt.
Re: (Score:2)
The autopilot of the Boeing 737 MAX 8 is fine, but the MCAS system used as an aid in manual flight is flawed.
Tesla: You pay to be a guinea pig. (Score:3)
Not funny. Not in my garage.
Re: (Score:2)
I know. I for one will only buy cars where I have a lower chance of survival and less safety features. None of this guinea pig stuff.
Re: (Score:2)
Japan actually banned them from doing tests on customers. Tesla cars in Japan have old versions of the software because the regulator realized it was incredibly dumb to do constant over-the-air updates that alter the behaviour of the car and which have not been certified or properly tested.
Why do people think... (Score:1, Insightful)
....that autonomous driving is going to work? I mean, have you actually used software? Anything moderately complex has tons of bugs in it. And autonomous driving is extremely complex.
Re:Why do people think... (Score:5, Insightful)
The only thing more hubristic than assuming something will definitely work is assuming something will never work.
Of course autonomous driving software will have bugs in it, and those bugs will lead to accidents. The status-quo alternative (biology-based driving software) also has bugs in it, which regularly leads to accidents.
The difference is that bugs in the autonomous driving software will eventually be diagnosed and fixed. Bugs in biological driving software, OTOH, will never be fixed, because every new person has to learn to drive from scratch; even if someone eventually becomes a flawless driver, sooner or later that person will die and be replaced by another newbie, who will repeat the same newbie mistakes as everyone else. Lessons "learned" by software (and software designers), OTOH, can stay "learned" indefinitely, as long as they don't lose the source code.
Re: (Score:1)
Autonomous driving will never approach what humans can do. It ain't going to happen.
Re: (Score:1)
Yep, man will never fly... And walk on the moon? Heretic!
Re: (Score:2)
Yeah yeah yeah... because one thing is possible, all things must be possible. You guys keep repeating the same mantra, while wondering why you aren't living on Mars yet.
Re: (Score:1)
You're just impatient.
And what reason is there to go to Mars? Wouldn't you rather go to Rio?
Of course all things are possible! It is patently absurd to believe otherwise. We make all things possible, or more correctly, we uncover possibilities we didn't know existed.
Re: (Score:1)
All things are possible? No, that is sci-fi. Reality says otherwise. So does science. But your type don't know science or physics, so you just assume everything will happen.
Re: (Score:1)
Everything HAS happened!
Re: (Score:3, Insightful)
All things are possible?
They don't. Well, some nitwits do, but you're falling into the trap of thinking that because some things are physically impossible, other things must be too. But when it comes to self-driving cars you're pretty wide of the mark.
First, self driving cars aren't limited by physics like space travel is.
Secondly, you're ignoring the advances in computer vision. Whether you believe deep learning is the key to strong AI or not (it isn't), or whether you believe it's 100% novel and never seen before (it isn't), it has moved machine vision a long way past where it was in the 1970s.
Here’s where you’re wrong (Score:1)
2. It doesn’t have to be perfectly safe, it just has to be demonstrably safer than humans.
3. Every time any Tesla encounters an exceptional situation, the SW gets altered to deal with that, and then *every* Tesla gets better. That’s exponential improvement.
Re: (Score:2)
Both tasks performed by humans. Autopilot isn't having a great track record lately.
Re: (Score:2)
Compared to humans it's having a much better track record lately.
Re: (Score:1)
Not special. We just don't understand it. Maybe we will someday, but it doesn't mean we can replicate it.
Re: (Score:2)
I'm in agreement here. I predict that autonomous driving will lead to fewer automotive deaths and injuries, by SEVERAL orders of magnitude, than "biological driving software" as you put it. It's not if, but when.
Humans are too easily distracted, or unfit to drive (DUI, etc.), or just stuck with too many dumb, aggressive habits.
Will autonomous driving still lead to some accidents and deaths? Sure. The circumstances in which autonomous driving fails are different than when humans fail. But software will continue to improve.
Re: (Score:2)
Autonomous driving depends on clear lane markings. Around here most of them are barely visible and don't get repainted often. No thanks.
Re: (Score:2)
There are large sections of 101 that Waymo and Tesla vehicles have no problems with, but that GM, Honda, Toyota, and Subaru's latest all fail miserably on.
I bought Autopilot to reduce the risk of being in an accident in a parking lot (the autopark feature is bundled with Autopilot). The difference between Autopilot and the other lane-assist software is that Autopilot does what the other systems claim to do, but can't.
All the ads the auto industry runs for autobraking, adaptive cruise control, and lane keeping promise more than those systems actually deliver.
Re: (Score:2)
The only thing more hubristic than assuming something will definitely work is assuming something will never work.
Yeah, and then you go and assume that it will definitely work eventually.
Re: (Score:2)
You're implying that bugs eventually go away. I started using computers more than 30 years ago, and I'm still bitching about many of the same things I was back then.
I keep watching mechanic videos on YouTube about a car not shutting off because the keyfob has a bug in its firmware, and even after several years of it being a known problem, the manufacturer can't fix it. That's not even a complicated thing to correct, yo.
Re: (Score:2)
The only thing more hubristic than assuming something will definitely work is assuming something will never work.
It depends how much you're spending on the latter.
Re: (Score:2)
Nonsense. The world changes every day, cars change regularly, the weather changes every second... no way is a program going to account for all of that. I'm sorry but "The Matrix" isn't real... no coder could cover all those details.
Let's compare.
Human: Hmm, I've never seen a white painted death wall with spikes in the middle of the highway before, I think I'll slow down and avoid it.
Computer: If Unknown Visual Stimulus, Kill Passengers.
Re: (Score:2)
....that autonomous driving is going to work? I mean, have you actually used software?
Why do people think autonomous driving won't work? Have you seen humans behind the wheel of a car? Truly terrifying.
Re: (Score:2)
Yes, please stop getting on airplanes now.
Re: (Score:1)
Except those idiots are also putting OUR lives in the hands of Tesla engineers. NHTSA needs to put a stop to this already, before more people are killed.
Re: (Score:2)
HAL, Joshua, D.a.r.r.y.l
MCAS
Re:I can't believe it (Score:5, Insightful)
Duh. That's why anyone with a brain knows these things are deathtraps.
You can't debug a NN, not in any reasonable manner, and certainly not one that you're constantly retraining and tweaking. Heuristics like "Hey, there's a bridge near this GPS location, so don't think it's a wall" are literally what Tesla is putting into its software in some places, because they can't train the behaviour out of the NN.
This has always been the concern of anyone that deals with such stuff since Tesla said they were using that technology.
You're basically training a black box on unknown criteria from limited test data, and then acting shocked when people say they don't understand how the black box works, can't predict what it will do, and can't easily retrain or untrain it, and when even a million miles of road data aren't enough to let it drive safely across the entire world in perpetuity.
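The kind of hand-written patch being described, sketched out; the coordinates, radius, and labels are all made up:

```python
from math import hypot

# Hypothetical map of structures the vision stack is known to misread
KNOWN_OVERPASSES = [(47.6205, -122.3493)]  # (lat, lon) pairs
NEARBY_DEG = 0.001  # crude "close enough" radius, in degrees

def correct_label(label: str, lat: float, lon: float) -> str:
    """Post-hoc override: if the NN says 'wall' where the map says there
    is an overpass, trust the map. This is exactly the heuristic you
    bolt on when you can't train the behaviour out of the net."""
    if label == "wall" and any(
        hypot(lat - a, lon - b) < NEARBY_DEG for a, b in KNOWN_OVERPASSES
    ):
        return "overpass"
    return label
```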
Re: (Score:2)
Also we don't know how our brains work but we still use our brains.
If you go back in history, many technologies have been deployed without the knowledge of how the technology works.
Just look at how medicines are created, if a compound was shown to improve a patient's outcome and had an acceptable level of side-effects then doctors can use the medicine. The doctors don't need to know how the medicine works to deploy treatments that scientifically are known to work.
Therefore, black box Neural Nets can be treated the same way.
Not really AI at all (Score:1)
Tesla and just about everyone else in the "autonomous" driving game is using an Expert System. This isn't AI, it is something dug up from the 1970s that sort of modeled what AI could do. Someday.
Well, someday isn't quite here yet. There is no underlying intelligence to these things. It is all based on rules and if you get to the bottom of the list of rules, the car has no idea what to do. This is freaking dangerous.
A true "AI" would have some default precepts, like "don't crash" and "don't hit people".
Re:Not really AI at all (Score:4, Insightful)
Tesla and just about everyone else in the "autonomous" driving game is using an Expert System.
Sorry but expert systems are not what does the image analysis. Go back to start. Do not collect $200.
Re: (Score:1)
Tesla and just about everyone else in the "autonomous" driving game is using an Expert System.
Sorry but expert systems are not what does the image analysis. Go back to start. Do not collect $200.
The image analysis is a NN. The decisions taken based on the analysis results are an expert system. He's perfectly correct.
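The split being argued about, in schematic form: learned perception feeding hand-written decision rules. Entirely illustrative; none of this is Tesla's actual interface:

```python
def perceive(frame) -> dict:
    """Stand-in for the learned component: a neural net mapping camera
    pixels to a symbolic scene description."""
    return {"lane_width_m": 7.1, "obstacle_ahead": False, "lane_open": True}

def decide(scene: dict) -> str:
    """Stand-in for the rule-based component: an if/then driving policy.
    When the scene falls outside the rules there is no 'understanding'
    to fall back on, just a default."""
    if scene["obstacle_ahead"]:
        return "brake"
    if not scene["lane_open"]:
        return "change_lane"
    if scene["lane_width_m"] > 5.5:
        return "alert_driver"  # implausible lane: punt to the human
    return "keep_lane"

print(decide(perceive(None)))  # -> "alert_driver"
```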
Re: (Score:2)
Driving on the road with one of these presents an unlimited capacity for chaos, because if something unexpected (or unprogrammed) happens, the car will do something unexpected. And that could be dangerous to everyone around.
Good point! This is why self-driving cars will never work: because they do unexpected things and humans never do. It must have been a self-driving car I saw over a decade ago which suddenly hauled over 3 lanes to the middle, pulled a U-turn, and then floored it back in the opposite direction.
The problem is non-right hand freeway exits (Score:2)
About the time he said "shit!", I said "shit!" as the freeway split in 2: 2-3 lanes going left, 2-3 lanes going right, and I was on an offramp straight down the middle.
You guys don't understand (Score:4, Funny)
The video clearly shows that the Tesla was in the Ravenna section of Seattle, which is reasonably nice. It was simply trying to avoid heading further south into the lower-class area known as the University District.
Re: (Score:2)
If you know that, you'll also understand why I had to use Ravenna for the joke.
It would only work with Northgate if the Tesla had steered AWAY from the exit...
I do not understand why people use this feature. (Score:1)
The Tesla seems to be a relatively impressive electric car.
The Tesla is not even close to a self-driving car in any capacity. Look at the number of sensors and the amount of software on the Waymo vehicles, and they're still not finished.
The Tesla is a 'toy' automated vehicle. Using this feature is dangerous and foolish. Leave it as an electric vehicle, not an autonomous vehicle in any capacity. I'm shocked more people aren't dead due to this.
Re: (Score:1)
Because we want it to be better than two human eyes. Two human eyes can't be watching 360 degrees at all times, but a bunch of sensors can.
Re: (Score:2)
In the UK it is legal to drive a car if you only have 1 good working eye. Therefore, from a legal point of view, you don't need 2 eyes to drive. Of course, there may be a performance impact to using only 1 eye, but the law allows those affected people to still drive.
beast has it right. (Score:2)
"Human nature is, 'if something's worked 100 times before, it's gonna work the 101st time.'" That can lull people into a false sense of security, with potentially deadly consequences.
You got that right.
When you are dealing with AI, and it gets retrained, it MUST be retested fully.
And it appears that this edge case is not being tested.
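A regression harness over a fixed scenario suite is the obvious shape of "retest fully". A sketch; the scenario IDs and the replay API are hypothetical:

```python
# Each entry: (scenario_id, expected_action)
REGRESSION_SUITE = [
    ("seattle_carpool_gore", "keep_lane"),
    ("mountain_view_101_gore", "keep_lane"),
    ("closed_left_exit_signage", "alert_driver"),
]

def run_regression(model) -> list:
    """Replay every previously seen edge case against the retrained
    model; any regression should block the release."""
    failures = []
    for scenario_id, expected in REGRESSION_SUITE:
        actual = model.drive(scenario_id)  # hypothetical replay API
        if actual != expected:
            failures.append((scenario_id, expected, actual))
    return failures
```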
re-training and re-testing (Score:1)
When you are dealing with AI, and it gets retrained, it MUST be retested fully.
Not quite right. You are assuming that the machine learning technique involved suffers from Catastrophic Forgetting [wikipedia.org] upon re-training. This was a problem back in the early days of machine learning, but any modern AI engineer and researcher knows of this problem and is or will be implementing solutions. [papers.nips.cc]
When a human learns to fly a Cessna, we get a pilot's license. When we get a type certificate to fly an Airbus after learning to fly the Cessna, we don't forget how to fly the Cessna and need re-training in it.
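Rehearsal (mixing a sample of old data into every batch of new training data) is the simplest of the mitigations linked above. A minimal sketch, assuming plain list-of-examples datasets:

```python
import random

def rehearsal_batches(old_data, new_data, batch_size=32, old_fraction=0.25):
    """Yield training batches for the new edge case that always include a
    sample of previously learned examples, so gradient updates on the new
    data don't silently overwrite what the net already knows."""
    n_old = int(batch_size * old_fraction)
    n_new = batch_size - n_old
    while new_data:
        batch = list(new_data[:n_new])
        new_data = new_data[n_new:]
        batch += random.sample(old_data, min(n_old, len(old_data)))
        random.shuffle(batch)
        yield batch
```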
Ralph Nader Unsafe at Any Speed 2 auto ride of death (Score:2)
Time for Ralph Nader to write an Unsafe at Any Speed 2 auto ride of death.
a pilot's take on Tesla's autopilot (Score:4, Interesting)
I'm a pilot. I fly a plane with an autopilot. I also drive a Tesla with their "autopilot".
The very expensive aircraft autopilot flies great. I can be hands-off the controls for extended periods of time, read a book, browse Facebook (hurrah for GoGo :), etc. Do I? Hell no! An aircraft autopilot has no clue what other aircraft are doing. TCAS might see another nearby aircraft, maybe it won't. I keep my hands on or near the controls, I look out the window, and I scan the instruments - all the time. Which is pretty much what I do in the Tesla.
The big difference is that the Tesla actually does a pretty decent job of reacting to other cars. Odd lane markings and construction zones do freak it out from time to time. I have had the Tesla alert me to an unsafe traffic or road condition and tell me to take over - in a flurry of beeps and on-screen alerts. Freaks me out every time. I wish the autopilot in the airplane would do that - instead it just shuts off, throws a warning light if I'm lucky, and the plane wanders off somewhere in the sky until I pull my head out of my ass. I probably hand-fly the airplane more than I hand-drive the Tesla - on cross-country trips. Taxiing around on the ground is a bit like driving a Tesla to the grocery store - an annoying fact of life to tolerate only until I get where I belong - out on the road, or up in the air, where the massively automated systems not only make my life easier, they make it safer as well.
You people bitching about how dangerous the Tesla autopilot is are just spoiled, bitchy little meat bags of self-loading cargo. You have no concept of automation, risk, and capability; you see the autopilot and cry that it's not perfect. You all need to fly from LA to NYC in a Ford Trimotor, or drive between them in a Model T. Keep a spare set of points and a condenser in the glovebox. The magneto on the Trimotor's radial engines probably uses the same points as the Model T. Make sure you can change the points and gap them in the middle of nowhere, because that's where they'll fail. You'll be flying for about 20 hours, and you'll make about 8 stops for fuel and maintenance. The Model T will take a wee bit longer, at least 60 hours, with modern roads, unless you have to stop and fix the engine [youtube.com]. A Model 3 can make that drive in 50 hours [theverge.com], and you won't have to change the points once.
I can not understand the purpose of autopilot (Score:1)
What is the purpose of automatically staying in the lane? The driver is still obliged to pay attention. There doesn't seem to be any extra cognitive load in actually turning the steering wheel. All this does is remove the warning you would otherwise get that your attention has lapsed.
Re: (Score:1)
Lane keeping, though - the tiring part of that would surely be the need to pay attention to the road. Something that you have to do anyway, and are less likely to do if the car is steering. The linked dashcam video illustrates this. A properly attentive driver would not have let the car get anywhere near that divider.
Re: (Score:1)
I know. This YouTube video explains everything about Musk and how he affects people: https://www.youtube.com/watch?... [youtube.com]