Transportation Software Technology

Dashcam Video Shows Tesla Steering Toward Lane Divider - Again (arstechnica.com) 146

AmiMoJo shares a report from Ars Technica: The afternoon commute of Reddit user Beastpilot takes him past a stretch of Seattle-area freeway with a carpool lane exit on the left. Last year, in early April, the Tesla driver noticed that Autopilot on his Model X would sometimes pull to the left as the car approached the lane divider -- seemingly treating the space between the diverging lanes as a lane of its own. This was particularly alarming, because just days earlier, Tesla owner Walter Huang had died in a fiery crash after Autopilot steered his Model X into a concrete lane divider in a very similar junction in Mountain View, California.

Beastpilot made several attempts to notify Tesla of the problem but says he never got a response. Weeks later, Tesla pushed out an update that seemed to fix the problem. Then in October, it happened again. Weeks later, the problem resolved itself. This week, he posted dashcam footage showing the same thing happening a third time -- this time with a recently acquired Model 3. "The behavior of the system changes dramatically between software updates," Beastpilot told Ars. "Human nature is, 'if something's worked 100 times before, it's gonna work the 101st time.'" That can lull people into a false sense of security, with potentially deadly consequences.

  • He isn't replicating the situation consistently and it's never been fixed.

    • Re: (Score:1, Insightful)

      by Rei ( 128717 )

      He never reported the bug because he's apparently unaware of the in-vehicle bug reporting system [reddit.com], yet seems surprised that it's never been fixed.

      Neural net vision systems train to their dataset. If your edge case is not in the dataset, it's not going to be learned. Self-driving vehicles without a driver at the wheel (Level 5) are not going to be viable for years because there's such a vast multitude of edge cases, and the only way to learn them is to collect an edge-case dataset. Until then, you're not get
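      A toy sketch of the "trains to its dataset" point (made-up data and names, nothing from Tesla's actual stack): a classifier fit on a fixed set of classes has no notion of "I haven't seen this"; it forces every input into the nearest known class.

```python
# Toy illustration (not Tesla code): a model trained on a fixed dataset
# has no concept of "unknown"; it just picks the nearest trained class.
import numpy as np

rng = np.random.default_rng(0)
# Training data: two classes of 2-D "features", e.g. normal lane vs. exit lane.
lane    = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))
exit_ln = rng.normal(loc=[3.0, 0.0], scale=0.3, size=(100, 2))
centroids = {"lane": lane.mean(axis=0), "exit": exit_ln.mean(axis=0)}

def classify(x):
    # Always returns one of the trained classes, however strange the input is.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(classify(np.array([0.1, -0.2])))   # "lane" -- in distribution
print(classify(np.array([1.6, 40.0])))   # still "lane" or "exit" -- an unseen
                                         # gore-point case is forced into a
                                         # known category instead of "unknown"
```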

      • You just answered why self driving will never work with the current approach: you can only train based on a dataset. You cannot create a dataset large enough to cover all permutations. We realized this in the 1970s with NNs, but now a new generation is learning it all over again. It works better, because processing speeds and data storage have increased, but it is still the same faulty crap underneath. Tesla is a joke, and autonomous driving is a joke too. "Enhanced Summon" is the best you are going to ge

      • by Cederic ( 9623 )

        When my car tries to kill me I don't fuck around with the dashboard to try and report it, I go to the dealer and tell them to fix the worthless piece of shit.

        This is not an app on your fucking phone. This is a car. It should be road-worthy. It is not.

        Also, as for this driver's particular case, I don't think it's hard to see what's going on.

        Very true. It's broken. It can't be trusted. It needs to be switched off and any money paid for that feature refunded.

  • by Anonymous Coward

    Are they sharing the same autopilot dev team?

    Tesla's autopilot automatically takes aim at anything the camera doesn't recognize, and the Boeing 737 MAX autopilot automatically takes a 90-degree plunge toward the ground the moment something abnormal happens.

    There are parallels here...

    • The Boeing MCAS system is used when autopilot is OFF. Boeing's MCAS system is an aid to manual flying, intended to make the plane behave similarly to the older generation of 737s. Boeing's design flaw was to use a single input for the angle-of-attack value, which caused a garbage-in, garbage-out scenario. Worse still, the MCAS system repeatedly triggered, eventually driving the horizontal stabilizer trim to its maximum end of travel.

      Boeing's design flaw could easily be predicted as having a potential for crashing the plan
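      A toy sketch of the single-input problem (invented thresholds and function names, not Boeing's actual logic): a trim command that trusts one angle-of-attack reading fires on a faulty vane, while a median over redundant sensors outvotes it.

```python
# Hypothetical sketch, not MCAS source code: why a single sensor input is fragile.
from statistics import median

AOA_LIMIT = 15.0  # degrees; made-up threshold for this example

def trim_command_single(aoa_sensor: float) -> bool:
    # Trusts one reading: a stuck or faulty vane can keep triggering nose-down trim.
    return aoa_sensor > AOA_LIMIT

def trim_command_voted(aoa_sensors: list[float]) -> bool:
    # Uses the median of redundant sensors, so one bad value is outvoted.
    return median(aoa_sensors) > AOA_LIMIT

faulty, good_1, good_2 = 74.5, 4.2, 4.4   # one vane stuck at a nonsense value
print(trim_command_single(faulty))                    # True  -> trims on garbage
print(trim_command_voted([faulty, good_1, good_2]))   # False -> fault outvoted
```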

  • by Dan East ( 318230 ) on Friday March 22, 2019 @05:25PM (#58317982) Journal

    It's clear in the video that the Tesla is trying to take the left lane that has that strange signage showing it is closed. When the driver steers back to the right at that point it is heading towards the divider, but the car is trying to take the lane that goes to the left of the barrier. That's different from "the car is trying to steer into the lane divider".

    In my 30+ years of driving I have never seen that kind of signage or markers that are apparently used to dynamically close lanes at certain times. I would wonder what I was seeing myself the first time I encountered that.

    It looks like two things are going on:
    1) The visual system of the Tesla does not understand that signage, which means a lane / offramp has been closed.
    2) The GPS routing shows that as a viable route even though it is only intermittently open.

    • When the driver steers away, it turns off autopilot. Driver did not steer away.
    • Re: (Score:2, Insightful)

      In my 30+ years of driving I have never seen that kind of signage or markers that are apparently used to dynamically close lanes at certain times

      YES. That's what's hard about automated driving! Will we expect all construction companies everywhere to adopt universal signage and clean it and maintain it accurately? Not bloody likely!

      • Exactly. That is why autonomous cars won't work until we build a road system FOR autonomous cars. Billions of dollars are being wasted on this effort.

      • by thegarbz ( 1787294 ) on Friday March 22, 2019 @07:48PM (#58318558)

        Will we expect all construction companies everywhere to adopt universal signage and clean it and maintain it accurately? Not bloody likely!

        Huh? You Americans have a problem with standardising road and construction signage? To answer your question: yes, it is perfectly reasonable for a construction company to put in place the correct procedures and equipment in order to maintain safety. That is literally a good chunk of the job of construction management.

        • Re: (Score:2, Insightful)

          It doesn't work that way here. The construction unions are too powerful. They won't even take down 'reduce speed' signs at the end of the day. The city has to come and put up the official traffic markers which get left up for the entire duration, while the construction company uses their markers to route traffic around where they are digging or reconstructing. This ridiculousness is decades in the making, and a requirement for fancy new self-driving cars isn't about to change it.
      • by dcw3 ( 649211 )

        This is the same trouble we have in the US with states that allow non-English speaking people to take their driver's exams in their native language. I was blown away when my ex-wife was allowed to take it in Korean. I had a Vietnamese coworker who argued that it wasn't necessary until I was able to show him about a dozen examples of English signage that would have required it.

    • Re: (Score:2, Informative)

      by AmiMoJo ( 196126 )

      It's not clear if the car would have avoided the lane divider. It doesn't look like it but it's possible.

      Either way, this is a known weakness of the Tesla system. It doesn't prompt you to take over, and there have been multiple crashes.

      If I were writing that software, then suddenly finding that the lane was very wide, or that a major correction was required, would sound all kinds of warning bells.
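      A rough sketch of that kind of sanity check (hypothetical names and thresholds, not Tesla's code): if the perceived lane is implausibly wide or the demanded correction is large, warn the driver and disengage instead of steering.

```python
# Hypothetical plausibility gate for a lane-keeping loop; thresholds are
# invented for illustration only.
MAX_LANE_WIDTH_M   = 4.5   # wider than this probably means a gore area, not a lane
MAX_CORRECTION_DEG = 6.0   # a sudden large steering demand is suspicious

def lane_keeping_step(lane_width_m, correction_deg, steer, alert_driver):
    if lane_width_m > MAX_LANE_WIDTH_M or abs(correction_deg) > MAX_CORRECTION_DEG:
        alert_driver("Lane geometry implausible - take over now")
        return False          # disengage instead of steering into the unknown
    steer(correction_deg)
    return True

# Example: a 7 m "lane" at a gore point triggers the alert instead of a steer.
lane_keeping_step(7.0, 2.0, steer=lambda deg: None, alert_driver=print)
```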

    • by ledow ( 319597 )

      3) The car has absolutely no concept of what's actually on the road in front of it, and yet people try to pretend it's capable of driving.

      You don't drive cars on public road by GPS data alone. Weird shit happens on roads - temporary cones, closed lanes, a policeman waving you away from a burning truck... none of those are GPS'd, properly signposted or probably even listed explicitly in the highway code. But you still have to drive knowing what you need to do or... get this... slow the fuck down if you don

      • Not fight the driver who's trying to steer away causing you to then aim at a solid barrier at some significant speed.

        There is zero evidence that the car was fighting the driver.

    • by vux984 ( 928602 )

      "In my 30+ years of driving I have never seen that kind of signage or markers that are apparently used to dynamically close lanes at certain times. I would wonder what I was seeing myself the first time I encountered that."

      In my nearly 30 years of driving I've seen this type of signage lots of times. I know of a bridge that for years had an alternating-direction center lane (westbound in the morning, eastbound during the evening), and access to that lane was controlled by signage like this; that bridge has since bee

  • Wow! (Score:2, Funny)

    This must be the autopilot in the Boeing 737 Max 8!
  • by BoRegardless ( 721219 ) on Friday March 22, 2019 @05:30PM (#58318008)

    Not funny. Not in my garage.

    • Comment removed based on user account deletion
    • I know. I, for one, will only buy cars where I have a lower chance of survival and fewer safety features. None of this guinea pig stuff.

    • by AmiMoJo ( 196126 )

      Japan actually banned them from doing tests on customers. Tesla cars in Japan have old versions of the software because the regulator realized it was incredibly dumb to do constant over-the-air updates that alter the behaviour of the car and which have not been certified or properly tested.

  • ....that autonomous driving is going to work? I mean, have you actually used software? Anything moderately complex has tons of bugs in it. And autonomous driving is extremely complex.

    • by Jeremi ( 14640 ) on Friday March 22, 2019 @05:55PM (#58318130) Homepage

      The only thing more hubristic than assuming something will definitely work is assuming something will never work.

      Of course autonomous driving software will have bugs in it, and those bugs will lead to accidents. The status-quo alternative (biology-based driving software) also has bugs in it, which regularly leads to accidents.

      The difference is that bugs in the autonomous driving software will eventually be diagnosed and fixed. Bugs in biological driving software, OTOH, will never be fixed, because every new person has to learn to drive from scratch; even if someone eventually becomes a flawless driver, sooner or later that person will die and be replaced by another newbie, who will repeat the same newbie mistakes as everyone else. Lessons "learned" by software (and software designers), OTOH, can stay "learned" indefinitely, as long as they don't lose the source code.

      • Autonomous driving will never approach what humans can do. It ain't going to happen.

        • Yep, man will never fly... And walk on the moon? Heretic!

          • Yeah yeah yeah...because one thing is possible all things must be possible. You guys keep repeating the same mantra, while wondering why you aren't living on Mars yet.

            • You're just impatient.

              And what reason is there to go to Mars? Wouldn't you rather go to Rio?

              Of course all things are possible! It is patently absurd to believe otherwise. We make all things possible, or more correctly, we uncover the possibilities we didn't know.

              • All things are possible? No, that is sci-fi. Reality says otherwise. So does science. But your type doesn't know science or physics, so you just assume everything will happen.

                • Everything HAS happened!

                • Re: (Score:3, Insightful)

                  All things are possible?

                  They don't. Well, some nitwits do, but you're falling into the trap of thinking that because some things are physically impossible, other things must be too. But when it comes to self-driving cars you're pretty wide of the mark.

                  First, self driving cars aren't limited by physics like space travel is.

                  Secondly, you're ignoring the advances in computer vision. Whether you believe deep learning is the key to strong AI or not (it isn't), or whether you believe it's 100% novel, never seen before (it isn't),

            • 1. It doesn’t have to handle every situation, it just has to know when to give up and ask for help.

              2. It doesn’t have to be perfectly safe, it just has to be demonstrably safer than humans.

              3. Every time any Tesla encounters an exceptional situation, the SW gets altered to deal with that, and then *every* Tesla gets better. That’s exponential improvement.
          • Both tasks performed by humans. Autopilot isn't having a great track record lately.

      • by gtwrek ( 208688 )

        I'm in agreement here. I predict that autonomous driving will lead to fewer automotive deaths and injuries by SEVERAL orders of magnitude over "biological driving software", as you put it. It's not if, but when.

        Humans are too easily distracted, or unfit to drive (DUI, etc.), or just stuck with too many dumb, aggressive habits.

        Will autonomous driving still lead to some accidents and deaths? Sure. The circumstances in which autonomous driving fails are different than when humans fail. But software will conti

      • Autonomous driving depends on clear lane markings. Around here most of them are barely visible and don't get repainted often. No thanks.

        • by micheas ( 231635 )
          That's actually not true.

          There are large sections of 101 that Waymos and Teslas have no problems with but that GM's, Honda's, Toyota's, and Subaru's latest all fail miserably on.

          I bought Autopilot to reduce the risk of being in an accident in a parking lot (the autopark feature is bundled with Autopilot). The difference between Autopilot and the other lane-assist software is that Autopilot does what the other systems claim to do, but can't.

          All the ads the Auto industry has for autobraking, adaptive cruise co

      • The only thing more hubristic than assuming something will definitely work is assuming something will never work.

        Yeah, and then you go and assume that it will definitely work eventually.

      • You're implying that bugs eventually go away. I started using computers more than 30 years ago, and I'm still bitching about many of the same things I was back then.

        I keep watching mechanic videos on YouTube about a car not shutting off because the keyfob has a bug in its firmware, and even after several years of it being a known problem, the manufacturer can't fix it. That's not even a complicated thing to correct, yo.

      • The only thing more hubristic than assuming something will definitely work is assuming something will never work.

        It depends how much you're spending on the latter.

      • Nonsense. The world changes every day, cars change regularly, the weather changes every second... no way is a program going to account for all of that. I'm sorry but "The Matrix" isn't real... no coder could cover all those details.

        Let's compare.

        Human: Hmm, I've never seen a white painted death wall with spikes in the middle of the highway before, I think I'll slow down and avoid it.

        Computer: If Unknown Visual Stimulus, Kill Passengers.

    • ....that autonomous driving is going to work? I mean, have you actually used software?

      Why do people think autonomous driving won't work? Have you seen humans behind the wheel of a car? Truly terrifying.

    • by dcw3 ( 649211 )

      Yes, please stop getting on airplanes now.

  • Comment removed based on user account deletion
  • Tesla and just about everyone else in the "autonomous" driving game is using an Expert System. This isn't AI, it is something dug up from the 1970s that sort of modeled what AI could do. Someday.
    Well, someday isn't quite here yet. There is no underlying intelligence to these things. It is all based on rules and if you get to the bottom of the list of rules, the car has no idea what to do. This is freaking dangerous.
    A true "AI" would have some default precepts, like "don't crash" and "don't hit people".

    • by thegarbz ( 1787294 ) on Friday March 22, 2019 @07:52PM (#58318574)

      Tesla and just about everyone else in the "autonomous" driving game is using an Expert System.

      Sorry but expert systems are not what does the image analysis. Go back to start. Do not collect $200.

      • Tesla and just about everyone else in the "autonomous" driving game is using an Expert System.

        Sorry but expert systems are not what does the image analysis. Go back to start. Do not collect $200.

        The image analysis is NN. The decision to take based on the analysis results is expert systems. He's perfectly correct.
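        A compressed sketch of that split (all names hypothetical, not any vendor's actual pipeline): a learned perception stage emits detections, and a hand-written rule layer decides what to do with them.

```python
# Hypothetical two-stage pipeline: learned perception, rule-based decisions.
def perceive(frame):
    # Stand-in for a neural net: returns detections with a confidence score.
    return {"lane_center_offset_m": 0.2, "obstacle_ahead": False, "confidence": 0.91}

def decide(perception):
    # Expert-system style rules operating on the net's output.
    if perception["confidence"] < 0.5:
        return "hand_over_to_driver"
    if perception["obstacle_ahead"]:
        return "brake"
    if abs(perception["lane_center_offset_m"]) > 1.0:
        return "hand_over_to_driver"
    return "steer_to_center"

print(decide(perceive(frame=None)))   # "steer_to_center"
```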

        • by Jzanu ( 668651 )
          I am glad to see this distinction has been discussed. Given the difficulty of fixing this problem, it is probably due to a combination of the control loop in the inflexible decision logic and errors in the camera-based image recognition system. Musk needs to admit error in relying on fantasy technology and add an array of $100 LIDAR sensors to crack these problems. Hell, why not use a few hundred of them for redundancy? Nothing beats detecting the actual real world as a backup, and it should simplify a lot o
    • Driving on the road with one of these present will present an unlimited capacity for chaos, because if something unexpected (or unprogrammed) happens, the car will do something unexpected. And that could be dangerous to everyone around.

      Good point! This is why self driving cars will never work because they do unexpected things and humans never do. It must have been a self driving car I saw over a decade ago which suddenly hauled it over 3 lanes to the middle, pulled a u turn and then floored it back in the o

  • Back in 94/95 a friend and I went to a Progfest in LA. My navigator was poring over his paper maps trying to figure out where I should get off. I was in the middle lane of a 5-6 lane freeway, ready to go in either direction at a moment's notice. Keep in mind this is at 70 MPH, surrounded by other cars doing 70. And Ken was a pretty good navigator.

    About the time he said "shit!", I said "shit!" as the freeway split into 2. 2-3 lanes going left, 2-3 lanes going right, and I was on an offramp straight d
  • by maxrate ( 886773 ) on Friday March 22, 2019 @07:10PM (#58318442)
    AI will be the end of us.
  • They also can't drive into sunlight, in snow, in rain, in fog, in construction areas, in places with potholes, in places with faded reflective paint, in places where other drivers are idiots, can't discern basic optical illusions, can't figure out what road heat mirages are, can't read text on some road signs...basically self-driving cars are one giant lie and only work under extremely controlled conditions and the technology to drive in even 90% of worldwide driving conditions won't be available for 50 yea
  • by 93 Escort Wagon ( 326346 ) on Friday March 22, 2019 @07:40PM (#58318534)

    The video clearly shows that the Tesla was in the Ravenna section of Seattle, which is reasonably nice. It was simply trying to avoid heading further south into the lower-class area known as the University District.

    • <pedantry>This is the northern terminus of the I-5 Express Lanes at Northgate.</pedantry>
      • If you know that, you'll also understand why I had to use Ravenna for the joke.

        It would only work with Northgate if the Tesla had steered AWAY from the exit...

  • The Tesla seems to be a relatively impressive electric car.

    The Tesla is not even close to a self-driving car in any capacity. Look at the amount of sensors and software on the Waymo vehicles, and they're still not finished.

    The Tesla is a 'toy' automated vehicle. Using this feature is dangerous and foolish. Leave it as an electric vehicle, not an autonomous vehicle in any capacity. I'm shocked more people aren't dead due to this.

    • There are hundreds of millions of cars on the road with no sensors at all other than two human eyes. I’m not sure why a biological neural net can drive on two human eyes but a digital neural net needs 75 sensors' worth of radar.
      • Maybe the human RNN had 500 million years of tinkering with both software and hardware?
      • Because we want it to be better than two human eyes. Two human eyes can't be watching 360 degrees at all times, but a bunch of sensors can.

      • In the UK it is legal to drive a car if you only have 1 good working eye. Therefore, from a legal point of view, you don't need 2 eyes to drive. Of course, there may be a performance impact to using only 1 eye, but the law allows those affected people to still drive.

  • While substantial rewards are fathomable if iffy autonomous transport can be pulled off, it is a monumental undertaking. There are simpler, incremental safety and efficiency tech solutions that could help in the near term, such as drive recorders, smart roads that share road/traffic conditions, monitoring of dangerous drivers, etc. Smarter roads can help autonomous driving. But since the financial risks and rewards are dispersed, there is less investment. Still, transportation is getting better. The ride hailing app investors are subsidizing a trans
  • "Human nature is, 'if something's worked 100 times before, it's gonna work the 101st time.'" That can lull people into a false sense of security, with potentially deadly consequences.

    You got that right.
    When you are dealing with AI, and it gets retrained, it MUST be retested fully.
    And it appears that this edge-case is not being tested.
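    One way to make "retest fully after retraining" concrete (a sketch with invented scenario names, not anyone's actual test suite): keep every reported edge case as a recorded scenario and replay the whole library against each new model build.

```python
# Hypothetical regression harness: recorded edge cases replayed against every
# new model version before release.
EDGE_CASES = [
    # (scenario_id, expected_behaviour)
    ("seattle_i5_gore_point", "stay_right_of_divider"),
    ("mountain_view_101_gore", "stay_right_of_divider"),
]

def replay(scenario_id, model):
    # Stand-in for running the recorded sensor log through the planner.
    return model(scenario_id)

def regression_suite(model):
    failures = [sid for sid, expected in EDGE_CASES
                if replay(sid, model) != expected]
    assert not failures, f"new build regressed on: {failures}"

# A build that has forgotten the gore-point cases fails the suite immediately.
try:
    regression_suite(lambda sid: "follow_widest_lane")
except AssertionError as err:
    print(err)   # new build regressed on: ['seattle_i5_gore_point', ...]
```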

    • by Anonymous Coward

      When you are dealing with AI, and it gets retrained, it MUST be retested fully.

      Not quite right. You are assuming that the machine learning technique involved suffers from Catastrophic Forgetting [wikipedia.org] upon re-training. This was a problem back in the early days of machine learning, but any modern AI engineer and researcher knows of this problem and is or will be implementing solutions. [papers.nips.cc]

      When a human learns to fly a Cessna, we get a pilot's license. When we get a type certificate to fly an Airbus after learning to fly the Cessna, we don't forget how to fly the Cessna and need re-training in
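      A bare-bones sketch of one standard mitigation (rehearsal/replay; details invented for illustration): when fine-tuning on new data, mix in samples from the old dataset so earlier cases aren't simply overwritten.

```python
# Toy rehearsal scheme to limit catastrophic forgetting: fine-tuning batches
# are a mix of new data and replayed old data. Purely illustrative.
import random

old_dataset = [("old_case_%d" % i, "label_a") for i in range(1000)]
new_dataset = [("new_case_%d" % i, "label_b") for i in range(200)]

def rehearsal_batches(old, new, batch_size=32, replay_fraction=0.5, steps=10):
    for _ in range(steps):
        n_old = int(batch_size * replay_fraction)
        batch = random.sample(old, n_old) + random.sample(new, batch_size - n_old)
        random.shuffle(batch)
        yield batch   # would be fed to the usual training step

for batch in rehearsal_batches(old_dataset, new_dataset):
    pass  # train_step(model, batch) in a real training loop
```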

  • Time for
    Ralph Nader to write Unsafe at Any Speed 2: Auto Ride of Death.

  • by Anonymous Coward on Saturday March 23, 2019 @02:06AM (#58319324)

    I'm a pilot. I fly a plane with an autopilot. I also drive a Tesla with their "autopilot".

    The very expensive aircraft autopilot flies great. I can be hands-off the controls for extended periods of time, read a book, browse Facebook (hurrah for GoGo :), etc. Do I? Hell no! An aircraft autopilot has no clue what other aircraft are doing. TCAS might see another nearby aircraft, maybe it won't. I keep my hands on or near the controls, I look out the window, and I scan the instruments - all the time. Which is pretty much what I do in the Tesla. The big difference is that the Tesla actually does a pretty decent job of reacting to other cars. Odd lane markings and construction zones do freak it out from time to time. I have had the Tesla alert me to an unsafe traffic or road condition and tell me to take over - in a flurry of beeps and on-screen alerts. Freaks me out every time. I wish the autopilot in the airplane would do that - instead it just shuts off, throws a warning light if I'm lucky, and the plane wanders off somewhere in the sky until I pull my head out of my ass. I probably hand-fly the airplane more than I hand-drive the Tesla - on cross country trips. Taxiing around on the ground is a bit like driving a Tesla to the grocery store - an annoying fact of life to tolerate only until I get where I belong - out on the road, or up in the air, where the massively automated systems not only make my life easier, they make it safer as well.

    You people bitching about how dangerous the Tesla autopilot is are just spoiled, bitchy little meat bags of self-loading cargo. You have no concept of automation, risk, and capability, you see the autopilot and cry that it's not perfect. You all need to fly from LA to NYC in a Ford Trimotor, or drive between them in a model T. Keep a spare set of points and a condenser in the glovebox. The magneto on the Trimotor's radial engines probably uses the same points as the Model T. Make sure you can change the points and gap them in the middle of nowhere, because that's where they'll fail. You'll be flying for about 20 hours, and you'll make about 8 stops for fuel and maintenance. The Model T will take a wee bit longer, at least 60 hours, with modern roads, unless you have to stop and fix the engine [youtube.com]. A model 3 can make that drive in 50 hours [theverge.com], and you won't have to change the points once.

  • ....pay me to buy these kind of cars.
  • A system that produces an audible warning if the driver drifts away from the middle of the lane makes some degree of sense. I think if you need that, the correct response is to find an exit and take a break; so I guess these have a purpose as a tired driver alert system.

    What is the purpose of automatically staying in the lane? The driver is still obliged to pay attention. There doesn't seem to be any more cognitive load to actually turning the steering wheel. All this does is remove that warning that you
