Toyota's Killer Firmware

New submitter Smerta writes "On Thursday, a jury verdict found Toyota's ECU firmware defective, holding it responsible for a crash in which a passenger was killed and the driver injured. What's significant about this is that it's the first time a jury heard about software defects uncovered by a plaintiff's expert witnesses. A summary of the defects discussed at trial is interesting reading, as well as the transcript of court testimony. 'Although Toyota had performed a stack analysis, Barr concluded the automaker had completely botched it. Toyota missed some of the calls made via pointer, missed stack usage by library and assembly functions (about 350 in total), and missed RTOS use during task switching. They also failed to perform run-time stack monitoring.' Anyone wonder what the impact will be on self-driving cars?"
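
For readers unfamiliar with the last point, run-time stack monitoring is commonly done by pre-filling the stack with a known pattern and periodically checking how much of it has been overwritten. A minimal sketch with hypothetical names and sizes (illustrative only, not Toyota's code or the code discussed at trial):

    #include <stddef.h>
    #include <stdint.h>

    #define STACK_WORDS  1024u          /* e.g. a 4 KB stack of 32-bit words */
    #define FILL_PATTERN 0xDEADBEEFu    /* known value painted onto the unused stack */

    static uint32_t task_stack[STACK_WORDS];

    /* Called once before the task starts: paint the entire stack. */
    void stack_paint(void)
    {
        for (size_t i = 0; i < STACK_WORDS; i++)
            task_stack[i] = FILL_PATTERN;
    }

    /* Called periodically by a monitor task: count untouched words from the far end
     * of a descending stack to estimate the worst-case usage seen so far. */
    size_t stack_words_free(void)
    {
        size_t free_words = 0;
        while (free_words < STACK_WORDS && task_stack[free_words] == FILL_PATTERN)
            free_words++;
        return free_words;   /* 0 means the stack has been completely consumed */
    }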
  • by i kan reed ( 749298 ) on Tuesday October 29, 2013 @03:28PM (#45272859) Homepage Journal

    I'm convinced. I'll give up my career as a computer programmer now and go use my bare hands for subsistence farming. Sorry, I was wrong.

    • by neoritter ( 3021561 ) on Tuesday October 29, 2013 @03:49PM (#45273037)
      Or we could present this as the new Therac-25 and learn from it. :)
    • by jythie ( 914043 )
      "Let's give up now and form an agrarian society!"

      bad stuff happens

      "That's is, we're all farmers......"
  • by Anonymous Coward on Tuesday October 29, 2013 @03:29PM (#45272869)

    Those working on self-driving cars and those that are watching the technology already know that any such car would need an absolutely 100% rock solid OS.

    This changes nothing.

    • by neoritter ( 3021561 ) on Tuesday October 29, 2013 @03:34PM (#45272897)
      It might change the programming language they decide to use, though. Pick a language that's more robust at run-time, like Ada (used in missile programming).
      • by jythie ( 914043 )
        Not sure why this was modded flamebait... this is one of the areas where Ada generally shines; it is a language built for auditing.
        • by erikkemperman ( 252014 ) on Tuesday October 29, 2013 @04:07PM (#45273257)

          Not sure why this was modded flamebait... this is one of the areas where Ada generally shines; it is a language built for auditing.

          That might turn out to be an important point. Suppose some day two cars from different manufacturers crash into each other. Will comparative code audits find their way to court?

    • by NatasRevol ( 731260 ) on Tuesday October 29, 2013 @03:50PM (#45273047) Journal

      I'd be happy with a car OS that kills less than 30,000 people per year.

      http://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_in_U.S._by_year [wikipedia.org]

      Or even less than 10 million accidents a year.

      http://www.census.gov/compendia/statab/cats/transportation/motor_vehicle_accidents_and_fatalities.html [census.gov]

    • by jythie ( 914043 )
      Eh, it does not need to be 100% rock solid, just better than humans. If humans managed to drive around without killing each other, such a metric would be necessary, but as it is, robotic cars just have to kill fewer people than we already do to be a net gain.
    • This changes nothing.

      Oh it does -- they've been renamed self-blaming cars. 3 Laws of Robotics never saw this coming.

    • by icebike ( 68054 )

      Those working on self-driving cars and those that are watching the technology already know that any such car would need an absolutely 100% rock solid OS.

      This changes nothing.

      But then its principal advocate is Google, where "good enough" gets pushed to production, left to languish, and spring cleaned [blogspot.com] out of existence in a couple of years.
      So in spite of the engineers knowing this, the trend is worrying.
      Especially when some of these cars are starting to be drive-by-wire [wikipedia.org] and the trend is that there will exist no physical linkage between the human interface and the car's brakes, engine, and steering.

      Somehow the assurance from an AC that "all is well" and to trust them, they are Scientists, just

    • Re: (Score:3, Insightful)

      Not necessarily. If said cars kill fewer people than humans, it's still an improvement that should be done.

      The problem is lawsuits. A drug that saves 90% of cancer patients but independently kills 1 in 10 will have its ass handed to it in civil court -- assuming it makes it past the FDA.

      Would that outcomes analysis were applied to government activities and to civil lawsuit lawyers' claims of bettering the system as they fatten their wallets.

  • by dclozier ( 1002772 ) on Tuesday October 29, 2013 @03:31PM (#45272879)
    The owner of a self-driving car will have had to accept the EULA [wikipedia.org] and agree not to hold the manufacturer liable for software defects. (Half joking, but I wouldn't rule it out.)
    • by Anonymous Coward on Tuesday October 29, 2013 @03:49PM (#45273043)

      Won't do any good. I can agree to a hold-harmless provision (and, despite the language of the EULA, such clauses are not actually universal). What I cannot do is agree to it for someone else. You'd better believe a pedestrian hit by my self-driving car can sue the living daylights out of them. Heck, as previously mentioned, depending on what the particular problem is, *I* can still sue them.

    • by epyT-R ( 613989 )

      Never mind that; I'd never own (or ride in as the 'driver'/trip planner, whatever) a self-driving car unless it was blatantly legally clear that I am not to be held accountable for its behavior.

    • I am sure they will, and they always would have.

      But just because you sign that does not mean that the manufacturer/programmer will not be held responsible for the busload of kids who drove off a cliff.

  • by freakingme ( 1244996 ) on Tuesday October 29, 2013 @03:35PM (#45272913)
    Sure, they will be safer. Just like in the aviation industry, where each incident or crash is investigated meticulously, and flying has become safer ever since 1903. With non-self-driving cars, 99% of incidents were caused by human error. Now no more, so we can fix it!
    • by Skiron ( 735617 )
      But you need a few more crashes and 'incidents' to get the data to improve the code. More crashes please!
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      As an old mechanic, if you believe for one second that autonomous cars are going to be maintained and inspected the way planes are, then I've got a bridge to sell you.

      To me the question is not whether we can build these things; the question is whether we can reliably maintain them in any capacity. As a mechanic I would take on liability for the parts I repaired; can you imagine the legal infrastructure required to allow someone other than the manufacturer to maintain and build these things? How do you compensate for a wheel bea

  • Relevant paragraph (Score:5, Informative)

    by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Tuesday October 29, 2013 @03:37PM (#45272927) Homepage

    2nd link, 5th paragraph:

    In a nutshell, the team led by Barr Group found what the NASA team sought but couldn’t find: “a systematic software malfunction in the Main CPU that opens the throttle without operator action and continues to properly control fuel injection and ignition” that is not reliably detected by any fail-safe. To be clear, NASA never concluded software wasn’t at least one of the causes of Toyota’s high complaint rate for unintended acceleration; they just said they weren’t able to find the specific software defect(s) that caused unintended acceleration. We did.

    • It's interesting to me that NASA was looking at it - though I can certainly understand why they would be interested and why they might have some useful insight.

  • by wjcofkc ( 964165 ) on Tuesday October 29, 2013 @03:37PM (#45272929)
    Anyone wonder what the impact will be on self-driving cars?

    A longer chapter on debugging in the first edition of "Programming Self-Driving Cars: The Missing Manual."
    • by geekoid ( 135745 )

      Clearly it will completely stop the auto industry, just like cars that exploded when rear ended stopped the auto industry.

  • by neoritter ( 3021561 ) on Tuesday October 29, 2013 @03:38PM (#45272941)
    If there's no human fall back or ability to overthrow the computer's control of the car I'll never drive it. I don't think this will change anything, except maybe give the people who are rushing for self-driving cars some pause. Every developer knows the risks of self-driving, computer-controlled cars (if they don't, well, they're naive). Between human error in programming and human maliciousness, there are two camps: people who think they can overcome the possibilities of putting a semicolon in the wrong place and prevent hackers from compromising the software's integrity, and people who realize the first people are fooling themselves.
    • "If there's no human fall back or ability to overthrow the computer's control of the car I'll never drive it."
      by definition you wouldn't be driving it.

    • by Stormy Dragon ( 800799 ) on Tuesday October 29, 2013 @04:52PM (#45273689)

      There was a time after automated elevators first came out when people refused to use them because they didn't trust them without a "human fall back or ability to overthrow the computer's control". Today, when nearly all the elevators we've ever seen are automated, this seems crazy.

      In 50 years, when most people have never seen a manually operated car, we'll seem just as crazy for not trusting them.

  • Still happy that my car (not a Toyota) has a stick and thus a mechanical clutch pedal :)

    On the other hand, don't automatic gearboxes have a neutral setting? Wouldn't shifting into it be roughly the same as depressing the clutch on a manual gearbox? Of course, the reaction times are longer (since you have to do something unusual when driving an automatic, i.e. touching the shifter while in motion), but for the cases you hear of where they managed to call 911 while fighting to control the vehicle...

  • wtf (Score:4, Interesting)

    by schlachter ( 862210 ) on Tuesday October 29, 2013 @03:43PM (#45272995)

    'Although Toyota had performed a stack analysis, Barr concluded the automaker had completely botched it. Toyota missed some of the calls made via pointer, missed stack usage by library and assembly functions (about 350 in total), and missed RTOS use during task switching. They also failed to perform run-time stack monitoring.'

    Huh? I'm a software engineer and don't understand the relevance of this statement; how can a jury? How does it confirm that there was a defect?

    • Re:wtf (Score:5, Informative)

      by ZombieBraintrust ( 1685608 ) on Tuesday October 29, 2013 @03:54PM (#45273099)

      Vehicle tests confirmed that one particular dead task would result in loss of throttle control, and that the driver might have to fully remove their foot from the brake during an unintended acceleration event before being able to end the unwanted acceleration.

      The jury could confirm there was a defect because they were able to reproduce it with a physical car. They could confirm the code quality was poor because 1) it didn't follow the required coding standard, MISRA C, 2) the cyclomatic complexity was too high, and 3) Toyota didn't track bugs.

      • by AmiMoJo ( 196126 ) *

        Where in TFA does it state that they reproduced the problem on a physical car? The testimony says that they did an analysis of the source code in a room, with comments translated from Japanese to English by software. They eventually discovered some potential ways in which it could fail and cause unwanted acceleration, but it does not appear to have been tested or even determined to be a likely cause of the failure that happened.

    • Re:wtf (Score:5, Funny)

      by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Tuesday October 29, 2013 @03:55PM (#45273119) Homepage Journal

      Are you sure you are a software engineer, and not some programmer with delusions of grandeur?
       

    • A good attorney and expert witness will make it clear to the jury that there are several standard and well-known processes that need to be followed to test software, and that Toyota did not follow them.

    • Re: (Score:3, Interesting)

      by m00sh ( 2538182 )

      'Although Toyota had performed a stack analysis, Barr concluded the automaker had completely botched it. Toyota missed some of the calls made via pointer, missed stack usage by library and assembly functions (about 350 in total), and missed RTOS use during task switching. They also failed to perform run-time stack monitoring.'

      Huh? I'm a software engineer and don't understand the relevance of this statement; how can a jury? How does it confirm that there was a defect?

      Hate to say this, but I think any foreign company on trial in the US is going to get reamed. Americans are very hostile toward foreign companies. If the company were Chinese, it would probably be found guilty on all counts.

      Improper stack analysis does not prove a defect. However, it gives a jury enough rope to hang them with.

  • Car makers can and have been sued for defective mechanical designs many times. Now they're getting sued for defective and dangerous software and computer hardware designs. I don't think there's much of a difference between the two when it comes down to it. You were either negligent or not, and whether it's software, hardware, or mechanical stuff doesn't really matter.

  • by gallondr00nk ( 868673 ) on Tuesday October 29, 2013 @03:59PM (#45273165)

    Good lord, they have got to be kidding? If Toyota (or their parts suppliers) are making those kinds of errors, you can bet your ass that other manufacturers will be making them as well.

    There need to be very strict standards for car control systems. We have standards for OBD, so why not strict, over-engineered, and thoroughly coded critical-systems standards? Even better, why not make them open standards, including the hardware?

    Standardising would make parts cheaper as well as stopping manufacturers from building closed black box units that may be of dubious quality. It would also make it easier to maintain and repair modern cars as they get older, and allow third parties to provide new hardware long after the manufacturer loses interest.

    As an aside, I do wonder what we're going to do in ten years time when the failure rate for most of the control hardware starts creeping up. Would they fail safely? Would the repair cost be prohibitive?

    It would be a sad irony if these environmentally conscious efficiency improving measures resulted in cars being scrapped en masse because the ECU that superseded a $10 throttle cable costs a grand.

  • Awesome transcript (Score:5, Informative)

    by ljw1004 ( 764174 ) on Tuesday October 29, 2013 @06:25PM (#45274575)

    I've been reading the transcript. It's fantastic. The expert explains clearly and lucidly in terms that (I imagine are) understandable by non-techies.

    The transcriber made some funny mistakes... Let me tell you about "parody bits" and "pointer D references" :)

  • More Details (Score:5, Insightful)

    by rabtech ( 223758 ) on Tuesday October 29, 2013 @10:54PM (#45276607) Homepage

    Couple of details here:

    Toyota had no software testing procedures, no peer review, etc. The secondary backup CPU code was provided by a third party in compiled form, Toyota never examined it.

    Their coding standards were ad hoc and they failed to follow them. Simple static analysis tools found massive numbers of errors.

    They used over ten thousand global variables, with numerous confirmed race conditions, nested locks, etc.

    Their watchdog merely checked that the system was running; it did not respond to task failures or CPU overload conditions, so it would not reset the ECU even if most of the tasks crashed. Since this is the basic function of a watchdog, they may as well not have had one. (A minimal sketch of a task-aware watchdog appears at the end of this comment.)

    They claimed to be using ECC memory but did not, so anything from single-bit errors to whole-page corruption went undetected and uncorrected.

    A bunch of logic was jammed into one spaghetti task that was responsible for calculating the throttle position, running various failsafes, and recording diagnostic error codes. Any failure of this task was undetected by the watchdog and disabled most of the failsafes. Due to the lack of ECC and the stack issue below, a single bit error could turn off the runnable flag for this task and cause it to stop being scheduled for CPU time. No error codes would be recorded.

    They did not do any logging (e.g. of OS task scheduler state, number of ECU resets, etc.), not even in the event of a crash or ECU reset.

    The code contained various recursive paths and no effort was made to prevent stack overflows. Worse, the RTOS kernel data structures were located immediately after the 4K stack, so stack overflows could smash these structures, including disabling tasks from running.

    They were supposed to be using mirroring of variables to detect memory smashing/corruption (write A and XOR A to separate locations, then compare them on read to make sure they match). For some inexplicable reason they were not doing this for some critical variables, including the throttle position, so any memory corruption could write a max throttle value and go undetected. (A minimal sketch of this mirroring technique also appears at the end of this comment.)

    Instead of using the certified, audited version of the RTOS like most auto makers, they used an unverified version.

    Thanks to not bothering to review the OS code, they had no idea the OS data structures were not mirrored. A single bit flip can start or stop a task, even a life-safety critical one.

    These are just some of the massive glaring failures at every level of specifying, coding, and testing a safety-critical embedded system.

    I am now confident in saying at least some of the unintended acceleration events with Toyota vehicles were caused by software failures due to gross incompetence and negligence on the part of Toyota. They stumbled into writing software, piling hack on top of hack, never bothering to implement any testing, peer review, documentation, specifications, or even the slightest hint that they even considered the software something worth noticing.
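
    For readers wondering what the watchdog should have done, here is a minimal sketch of a task-aware watchdog, with hypothetical names (this is not Toyota's code or anything from the trial): each monitored task must check in every period, and the hardware watchdog is refreshed only when all of them have.

      #include <stdint.h>

      #define NUM_MONITORED_TASKS 3u

      static volatile uint32_t task_alive_bits;   /* one bit per monitored task */

      /* Each task calls this from its main loop to prove it is still being scheduled.
       * (In real firmware this update would need to be atomic or interrupt-protected.) */
      void task_checkin(unsigned task_id)
      {
          task_alive_bits |= (1u << task_id);
      }

      /* Hypothetical hardware refresh; on a real MCU this writes a watchdog register. */
      extern void hw_watchdog_kick(void);

      /* Called from a periodic timer. The hardware watchdog is refreshed only if every
       * monitored task has checked in since the last period, so a dead throttle task
       * leads to an ECU reset instead of going unnoticed. */
      void watchdog_service(void)
      {
          const uint32_t all_alive = (1u << NUM_MONITORED_TASKS) - 1u;
          if (task_alive_bits == all_alive) {
              hw_watchdog_kick();
              task_alive_bits = 0;   /* tasks must check in again before the next period */
          }
          /* Otherwise: don't kick; the hardware watchdog times out and resets the ECU. */
      }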
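
    And a minimal sketch of the variable mirroring mentioned above, shown here with a bitwise complement as the second copy (again hypothetical, illustrative code rather than anything from the trial):

      #include <stdbool.h>
      #include <stdint.h>

      /* The value and its bitwise complement are kept in separate locations. */
      typedef struct {
          uint16_t value;
          uint16_t value_inv;   /* always ~value while memory is intact */
      } mirrored_u16;

      void mirrored_write(mirrored_u16 *m, uint16_t v)
      {
          m->value     = v;
          m->value_inv = (uint16_t)~v;
      }

      /* Returns false if the copies disagree, i.e. the variable was corrupted;
       * the caller should then enter a fail-safe state rather than trust the value. */
      bool mirrored_read(const mirrored_u16 *m, uint16_t *out)
      {
          if (m->value != (uint16_t)~m->value_inv)
              return false;
          *out = m->value;
          return true;
      }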
