Transportation

Toyota Raises Concerns About California Self-Driving Oversight, Calls It 'Preposterous' (reuters.com) 230

A Toyota official on Tuesday raised concerns about California's plans to require compliance with a planned U.S. autonomous vehicle safety checklist, calling it "preposterous." Reuters reports: Hilary Cain, director of technology and innovation policy at Toyota Motor North America, criticized California's proposal to require automakers to submit the U.S. National Highway Traffic Safety Administration's (NHTSA) 15-point safety checklist before testing vehicles. "If we don't do what's being asked of us voluntarily by NHTSA, we cannot test an automated system in the state of California. That is preposterous and that means testing that is happening today could be halted and that means testing that is about to be started could be delayed," she said at a Capitol Hill forum. On September 30, California unveiled revised rules under which carmakers will have to certify that they have complied with the 15-point NHTSA assessment, replacing the original proposal's requirement that self-driving cars be tested by a third party.
  • by Anonymous Coward on Tuesday October 11, 2016 @06:18PM (#53058649)

    "If we don't do what's being asked of us voluntarily by NHTSA, we cannot test an automated system in the state of California. That is preposterous and that means testing that is happening today could be halted and that means testing that is about to be started could be delayed"

    Well sorry to shit on your parade, lady, but maybe it's not such a bad idea to slow all of this down and get it right. NHTSA isn't the devil. If you want to get angry at someone, go after IIHS. NHTSA is trying to actually keep the rest of us, who may someday interact with your automated system, safe from it.

    • Re: (Score:2, Flamebait)

      Meanwhile, about 32,000 people die annually in vehicle accidents in America, about 88 per day. How many of those could be prevented if we didn't have bureaucrats trying to slow progress because it isn't perfectly safe?

      • by wasted ( 94866 )

        Perfectly safe, as in the "Zaphod Plays It Safe" sense?

      • by AAWood ( 918613 )

        bureaucrats trying to slow progress because it isn't perfectly safe

        Do we know that the test would force them to be "perfectly safe"?

        I genuinely want to know; I've no idea what those 15 points are, or whether or not they're reasonable. The summary just makes it sound like Toyota is upset at the test being there at all, rather than the contents of the test; I could check TFA, but that isn't the Slashdot way. If Toyota are just objecting to the test on principle, I'm with the AC's post; oversight isn't an inherently bad thing. On the other hand, if it is the contents of the test itself, that's a different conversation.

        • I've no idea what those 15 points are...

          Here you are: the 15-point check [nhtsa.gov]. I googled it for you. ;)

            Thanks for the link. But that doesn't look like a checklist so much as a PowerPoint slide. Not that it's evil or stupid, but how does one check off items like "Human Machine Interface: Approaches for communicating information to the driver, occupant and other road users"?

            I should think that there must be more detail somewhere.

      • by dywolf ( 2673597 )

        it doesn't have to be perfectly safe, and you damn well know it.
        it only needs to be as safe as, or safer than, we expect people to be.

        the statistic you cite is meaningless in terms of whether or not we should require some minimum level of reliability from self-driving cars.
        any idiot should be able to see that we do not improve those statistics by letting any other idiot put out a self-driving car without regard to its reliability and safety.

        • It has to be perfectly safe. Autonomous cars remove a person's individual choices about how they drive safely, so they have to be. There is no way around it. There are plenty of drivers out there who have figured out how to drive safely, and you can't add to their risk because they chose your particular driving product.
          • While there may be many drivers who have achieved a certain level of safety, people have certain weaknesses that computers don't. They don't have perfect attention. Their reaction time is significant. Most people can't look in even two directions at once and their multitasking capability is pitiful.

            So at some point in the future we will see that computers achieve a higher safety level than any sample of human drivers, while remaining imperfect. At that point, it will probably become necessary to ban manual driving.

            • Sure but computers have a long way to go before their weaknesses don't overshadow their strengths in a way that amounts to being safer than a human.
              • Sure but computers have a long way to go before their weaknesses don't overshadow their strengths in a way that amounts to being safer than a human.

                I am wondering. People are good at inferring data from context. A ball bouncing into the street is liable to be followed by a child. A wobbly tire might be about to blow out and cause another car to veer suddenly. That sound might indicate a train coming.

                Are these inferences not trainable? For certain image classification tasks, computers are already better than humans.

                • To my knowledge, Google cars have only been driven in near-perfect conditions, i.e. no blizzards, blowing snow, or ice of any kind. Also, the technology on Google cars is far more expensive than a Tesla's.
                  • So, California conditions other than the mountains. Not a problem for me, and an obvious good place to start.

                    Regarding cost, they're prototypes. If the system adds $30,000 to the cost of the vehicle, it would be cost-effective for a lot of people here. I doubt it has to add that much.

                    • But would it be cost-effective in a 98%-adoption kind of way? Are almost all people going to stop buying a $5,000 used car and drop $40K on a Google car that might be dangerous in bad weather? Total adoption is required for safety gains.
            • Except there are numerous cases of even simple automated braking systems failing and slamming on the brakes for no reason. So when someone smashes into the car that erroneously hit the brakes full force, who is at fault? Who is at fault when the inevitable hack comes in and causes the automated car(s) to do dangerous stuff? We are simply not ready for this. I submit voice recognition as the example. I remember in the '80s that it was just around the corner. Well, it is 2016 (36 years later) and it is just now becoming usable.

              • Does vehicle maneuvering include a similar amount of ambiguity to the problem of recognizing natural language? It might not.

      • Nothing in the 15 point checklist requires perfect safety. In fact, most of the items are just "it should include something that tries to do X" where X is "obey local traffic laws", "refuse to go into automatic mode if sensors are damaged", "save data if there's a crash" and "switch safely from autopilot to manual control."

        The actual document can be found here [transportation.gov] and a simple summary that leaves out a lot can be found here [nytimes.com].

    • I think what they're complaining about is that the NHTSA's checklist is voluntary, while California is trying to make it mandatory. They've probably also spent a lot of money complying with California's old rules, and don't want to have to start over.
    • by zlives ( 2009072 ) on Tuesday October 11, 2016 @06:47PM (#53058815)

      their attempt at self-accelerating cars was the first warning ;)

    • by Anonymous Coward on Tuesday October 11, 2016 @06:54PM (#53058861)

      After the unintended acceleration fiasco (for which some engineers and management really should have been put to death instead of settling out of court), no one at all should be driving a Toyota, self-driving or otherwise.
      Source:
      http://www.safetyresearch.net/Library/Bookout_v_Toyota_Barr_REDACTED.pdf

      tl;dr:
      Here is a list of ways Toyota fucked up:
      -Not following appropriate coding style (i.e., 'spaghetti'/unmaintainable code, acknowledged by Toyota engineers in internal emails)
      -Not following appropriate coding standards (i.e., MISRA-C)
      -No memory error detection and correction (which they told NASA they had, but "Toyota redacted or suggested redactions that were made in the NASA report almost everywhere the word EDAC appears it's redacted. So someone at Toyota knew that NASA thought that enough to redact from the public that false information.")
      -Not mirroring all critical variables (which they initially claimed they did); in particular, the critical kernel data structures had no protection, nor did the global throttle variables (see the mirroring sketch after this list)
      -Task X responsible for an absurd amount of work: pedal angle reading, cruise control, throttle position, writing diagnostic trouble codes, and failsafes
      -Buffer overflows (at least one confirmed)
      -Invalid pointers (pointers not checked for validity before being used)
      -Existence of race conditions
      -Using nested/recursive locks
      -Unsafe type casting
      -Insufficient parameter checking
      -Stack overflows
      -Excessive code complexity: 67 functions have cyclomatic complexity (MCC) over 50, aka 'untestable' (30 is a typical max), and 12 functions have MCC over 100, aka 'unmaintainable'
      -The function that calculates throttle position is MCC 146 and is 1,300 lines of code (executed by Task X)
      -Uses recursive functions, which must not be used in critical applications according to MISRA-C
      -Incorrect worst case stack size analysis - Toyota claims worst case usage was 41%, expert found worst case stack usage was 94% *NOT INCLUDING RECURSIVE FUNCTIONS!!!*
      -Critical, unprotected kernel structures located directly after the stack, i.e., if the stack overflows, critical kernel data is guaranteed to be lost
      -No runtime stack monitoring to ensure it doesn't overflow
      -RTOS (named RX OSEK 850, after the OSEK API/standards used by many automotive RTOSes) was not actually certified as compliant with the OSEK standard, but was used by Toyota anyway
      -MISRA-C rule violations (over 100 rules in total). NASA looked at 35 rules and found over 7,000 violations. Expert looked at all rules and found over 80,000 violations.
      -Toyota claims their internal coding standards overlap ~50% with MISRA-C, but in reality, only 11 rules overlap. 5 of those rules were violated. In total, at least a third of their own internal standards were violated.
      -Toyota cannot produce any records of bugs or bug fixing from testing, no bug tracking system was used
      -Inadequate/rare/no peer code review
      -Over 11,000 global variables
      -Totally incorrect ("abysmal") watchdog usage: run off a hardware timer so it keeps operating even if other parts of the CPU are failing, doesn't check that critical tasks are running, throws away error codes sent to it by the OS from other tasks, and allows the CPU to overload for 1.5 seconds before reset (a football field at 60 mph); see the watchdog sketch after this list
      -Toyota didn't look at or review the monitor CPU code, though they claimed that there could be no software cause for UA
      -Monitor CPU had all the requirements (electrical signals coming in and going out, adequate memory, CPU) to monitor the brake pedal and throttle and to do something useful if there was a malfunction, but it just wasn't implemented due to laziness or incompetence
      -Many single points of failure
      -Their failure mode analysis missed obvious things because they didn't follow any formal safety processes like MISRA
      -Mix of Toyota code and Denso code
      -"It cost them less to water down the watchdog then to upgrade the CPU to a fast enough CPU"
      -If a fault occurs while there is already pressure on the brake pedal, applying further pressure will not engage the failsafe; the pedal must be fully released and pressed again
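      For readers unfamiliar with the mirroring technique mentioned above, here is a minimal sketch in C of the general idea; the type and function names are invented for illustration, not taken from Toyota's or Barr's material. Each critical value is stored alongside its bitwise complement, and every read verifies the pair, so a single memory corruption is detected instead of silently trusted:

          #include <stdbool.h>
          #include <stdint.h>

          typedef struct {
              uint16_t value;   /* e.g. a commanded throttle angle */
              uint16_t mirror;  /* always kept as ~value */
          } mirrored_u16;

          static void mirrored_write(mirrored_u16 *m, uint16_t v)
          {
              m->value  = v;
              m->mirror = (uint16_t)~v;
          }

          /* Returns true and stores the value if the pair is consistent;
             returns false so the caller can enter a fail-safe instead. */
          static bool mirrored_read(const mirrored_u16 *m, uint16_t *out)
          {
              if (m->value != (uint16_t)~m->mirror) {
                  return false;  /* bit flip or stray write detected */
              }
              *out = m->value;
              return true;
          }

      The cost is doubled storage and a check on every access, which is why the technique is normally reserved for the handful of variables, like throttle targets and kernel task structures, whose corruption is dangerous.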
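      Similarly, here is a rough sketch of the task-aware watchdog discipline the write-up says was missing; every name, including the kick_hardware_watchdog() hook, is a hypothetical placeholder for whatever a real MCU provides. The watchdog is serviced only when every critical task has demonstrably run, so a hung task leads to a reset rather than to a timer blindly keeping the dog fed:

          #include <stdbool.h>

          enum { TASK_THROTTLE, TASK_BRAKES, TASK_SENSORS, NUM_TASKS };

          static volatile bool task_alive[NUM_TASKS];

          /* Each critical task calls this from its main loop. */
          void task_checkin(int task_id)
          {
              task_alive[task_id] = true;
          }

          extern void kick_hardware_watchdog(void);  /* platform-specific */

          /* Called from a periodic timer interrupt. */
          void watchdog_service(void)
          {
              for (int i = 0; i < NUM_TASKS; i++) {
                  if (!task_alive[i]) {
                      return;  /* a task is hung: let the watchdog bite */
                  }
              }
              kick_hardware_watchdog();
              for (int i = 0; i < NUM_TASKS; i++) {
                  task_alive[i] = false;  /* require fresh check-ins */
              }
          }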

      • Brilliant and informative post. I do have a few questions myself:

        -Invalid pointers (pointers not checked for validity before being used)

        How do you check a pointer for validity? You can only check for NULLness, right?
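        On a microcontroller you can usually do a little better than a NULL check, though it is still only a plausibility test, not proof of validity. A sketch of the idea, with invented region bounds standing in for whatever the linker script actually defines:

            #include <stdbool.h>
            #include <stddef.h>
            #include <stdint.h>

            /* Hypothetical RAM region; real bounds come from the linker script. */
            #define RAM_START 0x20000000u
            #define RAM_END   0x2000FFFFu

            /* Rejects NULL, misaligned, and out-of-range pointers before use. */
            static bool plausible_ram_ptr(const void *p, size_t size, size_t align)
            {
                uintptr_t a = (uintptr_t)p;
                if (p == NULL || size == 0)      return false;
                if (align != 0 && a % align)     return false;  /* misaligned */
                if (a < RAM_START)               return false;
                if (a > RAM_END - (size - 1u))   return false;  /* would overrun */
                return true;
            }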

        -No runtime stack monitoring to ensure it doesn't overflow

        How do you check this? Most uCs don't have a valgrind (although the newer Atmels have MPEs) and I'm not sure how hard it is to add stack canaries to the compiler.
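        One low-tech answer that needs no compiler support is stack "painting": fill the unused stack with a known marker at boot, then periodically measure how much of the marker survives. A sketch assuming a downward-growing stack, with the __stack_start symbol and the fail-safe hook invented for illustration:

            #include <stddef.h>
            #include <stdint.h>

            #define STACK_MARKER 0xDEADBEEFu

            extern uint32_t __stack_start[];   /* lowest stack address, from the linker */
            extern void enter_failsafe(void);  /* platform-specific reaction */

            /* Call very early at boot: paints everything from the bottom of the
               stack region up to just below the current frame, approximated by
               the address of a local, minus headroom for this function itself. */
            void stack_paint_at_boot(void)
            {
                uint32_t top_marker;
                uint32_t *limit = &top_marker - 16;  /* headroom for this frame */
                for (uint32_t *p = __stack_start; p < limit; p++) {
                    *p = STACK_MARKER;
                }
            }

            /* Call from a periodic task: the stack grows downward, so untouched
               marker words sit at the low end of the region. If the surviving
               margin gets too thin, react before an actual overflow happens. */
            void stack_check_periodic(size_t min_free_words)
            {
                size_t free_words = 0;
                for (uint32_t *p = __stack_start;
                     p < (uint32_t *)&free_words && *p == STACK_MARKER;
                     p++) {
                    free_words++;
                }
                if (free_words < min_free_words) {
                    enter_failsafe();
                }
            }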

      • Over 11,000 global variables

        Whoa, wait... was the entire entertainment system controlled by a single process or what? If that's just globals, how can EMU software possibly be this large?

        Fantastic write-up, BTW. People like you are why I still visit Slashdot.

    • Spoilers: Toyota

      And it seems like a states' rights issue to me. If Toyota doesn't want to sell their cars in Callyforniay, they are completely free to do just that.

      Problem solved.

  • by fluffernutter ( 1411889 ) on Tuesday October 11, 2016 @06:38PM (#53058745)
    "Laws are making us less profitable, that can't be right! Laws are only supposed to help us profit!"
  • If you don't like it, then don't test in California.
  • Seriously, it is interesting that they object to completing a safety assessment, but then want the right to test all over our roads.
    Tesla passed it. Why can't Toyota, Volvo, Mercedes, etc.?
  • https://www.transportation.gov... [transportation.gov]

    The Safety Assessment would cover the following areas:
    Data Recording and Sharing
    Privacy
    System Safety
    Vehicle Cybersecurity
    Human Machine Interface
    Crashworthiness
    Consumer Education and Training
    Registration and Certification
    Post-Crash Behavior
    Federal, State and Local Laws
    Ethical Considerations
    Operational Design Domain
    Object and Event Detection and Response
    Fall Back (Minimal Risk Condition)
    Validation Methods

  • Are you angry because it's probably aimed at your company?

    http://www.forbes.com/sites/ji... [forbes.com]

    Automakers With The Lowest (And Highest) Recall Rates ...Toyota/Lexus/Scion led the pack for the second year in a row with nearly 5.3 million cars and trucks recalled, followed by the Chrysler Group at around 4.7 million and Honda/Acura with nearly 2.8 million models recalled. While these would seem to be staggering numbers, as NHTSA points out they're not weighed against sales, and as such aren't n

"If value corrupts then absolute value corrupts absolutely."

Working...