Transportation Software Technology

Uber 'Neglected' Simulation Testing For Its Autonomous Vehicles, Says Report (engadget.com)

According to a report from The Information, Uber allegedly "neglected" simulation testing for its autonomous vehicles. "The publication's sources claim that there was a dearth of investment in the simulation software, and lots of incompatible code between the autonomous vehicle software and simulation software Uber is developing internally," reports Engadget. "However, the sources said there isn't a direct link between the lack of investment and the fatal accident involving one of Uber's autonomous taxis and a pedestrian." From the report: It's worth noting that the Unreal Engine-powered simulation software is still relatively new. The Information writes that the suite wasn't developed until after self-driving project lead Anthony Levandowski was fired in mid-2017. To add insult to injury, there were also initially differences in pay between simulation engineers and other engineers in the department. The end goal was to release a self-driving car in Arizona this year, codenamed "Roadrunner," to compete with Waymo's offering just outside of Phoenix.

Comments Filter:
  • Uber non.
  • by Jeremi ( 14640 ) on Tuesday June 19, 2018 @05:55PM (#56812562) Homepage

    It's worth noting that the Unreal Engine-powered simulation software was relatively new

    Hmm, turns out the real world is different than the Unreal world. :/

    • KSP's engine is Unity.

    • The hardest part is taking input data and understanding what it means. Game engines actually do a pretty good job of creating that data. It is no substitute for real testing, but it is an essential part, because you can test much more, and you can test crash scenarios that would be too dangerous otherwise.

      If generating an image from a model is like squeezing toothpaste out of a tube, understanding the generated image is like squeezing it back in.

      And a good simulator will introduce some noise, etc.
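
      As a rough illustration of that point, here is a minimal sketch (hypothetical names, assuming NumPy) of how a simulator might corrupt ideal lidar ranges before handing them to the perception stack:

        import numpy as np

        def noisy_lidar_frame(true_ranges, sigma_m=0.02, dropout_rate=0.01, seed=None):
            """Corrupt ideal simulated lidar ranges with Gaussian noise and random
            dropouts, so the perception stack never sees impossibly clean data."""
            rng = np.random.default_rng(seed)
            noisy = true_ranges + rng.normal(0.0, sigma_m, np.shape(true_ranges))
            dropouts = rng.random(np.shape(true_ranges)) < dropout_rate
            return np.where(dropouts, np.nan, noisy)  # NaN = missing return, like a real sensor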

  • Backseat Engineering (Score:3, Interesting)

    by Anonymous Coward on Tuesday June 19, 2018 @05:56PM (#56812572)

    Who is to say that they "neglected" simulation? I'm no Uber fanboy. Engineers working on the main show make more than simulation and test engineers at a great many companies. In addition, in my experience, simulation, especially of something complex and cutting-edge like this (rather than, say, FEA), is often of little value, because it is so hard to really model the world. Right or wrong, paying less to simulation engineers was not negligence. These wild conclusions seem to stem not from the facts, but from "The Information's" desire to create a sensational headline. Maybe Uber WAS negligent. This article does not convincingly make this case.

    • by Luthair ( 847766 ) on Tuesday June 19, 2018 @06:05PM (#56812604)

      You don't need to model the real world. You drive the car around with sensors on and record the input; then, in simulation, you feed the recorded input from all the weird scenarios back into your system (a sketch of the idea follows below).

      The behaviour of Uber, Tesla and others is quite frankly reprehensible. The testing of these systems should be held to a higher bar than anything at NASA or the aeronautical firms, given the widespread deployment, the reduced control, and the much wider variety of possible harm.
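
      A minimal sketch of that record-and-replay idea (hypothetical names; stack stands in for whatever driving software is under test, assumed to expose a step() method):

        import json

        def replay_log(log_path, stack):
            """Feed recorded sensor frames back through the driving stack offline
            and collect its decisions for comparison against the recorded drive."""
            decisions = []
            with open(log_path) as f:
                for line in f:                           # one JSON sensor frame per line
                    frame = json.loads(line)
                    decisions.append(stack.step(frame))  # pure replay: nothing is actuated
            return decisions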

      • by Anonymous Coward

        I'm GP, and I work in the auto industry. Admittedly, I don't know that much about simulation, but I know a bit about how rigorously the software that runs the vital control systems is tested. The answer is: not very. Much less than a jumbo jet or a NASA spacecraft. MUCH less. And cars, for the most part, don't have many catastrophic software bugs. There is an opportunity cost of delaying automated vehicles until they are close to perfect (a large one). I am in favor of the wild, wild west style with which we are proceeding; otherwise we'll set automated driving back a decade or two, costing countless lives in the process.

        • There is no assurance that they will ever get to a point where they will save a significant number of lives. Until we have hard evidence of that, I would prefer they stop causing deaths of their own making. In the case of Autopilot, people are lulled into a false sense of safety until they make a mistake that kills or injures themselves or others.
        • by Luthair ( 847766 )

          There is an opportunity cost of delaying automated vehicles until they are close to perfect (a large one).

          What exactly? Your short-term stock options?

          we'll set automated driving back a decade or two, costing countless lives in the process.

          Ah, this specious argument again. Self-driving cars have yet to show this to be accurate. Waymo is apparently the best of the bunch, yet requires human intervention once every 5,000 miles on average; given that the average American drives 13,000 miles a year, the best automation would crash a little under three times a year. While Tesla touts its Autopilot, notice how it never discloses how frequently it disengages, how often the driver must proactively prevent an accident, nor
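
          (Spelling out the arithmetic behind that figure: 13,000 miles per year ÷ 5,000 miles per intervention ≈ 2.6 interventions per year.)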

      • In the case of Tesla I also lay blame with the regulators. At least for Uber and Google, the regulators thought they had professional test drivers (which in Uber's case wasn't true; they had a professional diagnostic-screen watcher ... who did her job, which was not paying attention to the road as the car killed someone).

        In Tesla's case we know the system as designed ensures there will often be no one paying attention behind the wheel, yet the system as designed requires the driver to pay attention. Tesla'

      • Weird scenarios are very uncommon.

        Unless you captured a _buttload_ of sensor data, you'd overtrain on the few you caught.

        A major car manufacturer will need to capture years' worth of 'entire production line' levels of autonomous-car-grade data, then curate the accidents and near-accidents into a trainable set (a toy version of that curation filter is sketched below).

        Then you're still left with an AI that can't infer intent; at best, immediate course.
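
        A toy version of that curation filter, assuming each telemetry frame is a dict with a hypothetical decel_g field:

          def near_accident_frames(frames, decel_threshold_g=0.6):
              """Flag candidate near-accidents in a telemetry log by picking out
              hard-braking events (longitudinal deceleration, in g)."""
              return [f for f in frames if f.get("decel_g", 0.0) >= decel_threshold_g]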

        • by murdocj ( 543661 )

          I don't know about this. I see weird stuff every day... people opening car doors into traffic, people wandering out into traffic, cars wandering into my lane. I'd have to think that if you recorded a month's worth of data you'd have seen a LOT of stuff. Not to mention you could even generate your own weird scenarios. The fact that the Uber car didn't respond to a major obstacle in front of it seems like there wasn't a lot of testing of any form done.

          • They have likely had an opportunity to record the types of occurrences you are talking about already, and it hasn't gotten them very far. I'm not pretending I know what percentage of possible world scenarios they need in order to be 'safe enough'. I just know that they need a lot more than they are getting, and that it is a problem that they don't seem to know what that level is.
          • > The fact that the Uber car didn't respond to a major obstacle in front of it seems like there wasn't a lot of testing of any form done.

            It just shows a lack of regression testing. I guarantee it stopped for people walking in front of it before the car was placed into street testing. They then changed object identification, almost certainly without repeating those tests. This is why you want simulated testing: when you change any software, you can re-run hundreds of hours of tests in minutes, without wasting any fuel.
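
            A minimal sketch of what such a regression suite could look like (all names hypothetical; stack_factory builds a fresh copy of the software under test, and each scenario carries its own pass/fail check):

              def run_regression(scenarios, stack_factory):
                  """Re-run every archived scenario against a freshly built stack
                  and report the ones where the expected outcome no longer holds."""
                  failures = []
                  for scenario in scenarios:
                      stack = stack_factory()          # clean stack per scenario
                      for frame in scenario["frames"]: # recorded or simulated sensor input
                          stack.step(frame)
                      if not scenario["check"](stack): # e.g. "did it brake for the pedestrian?"
                          failures.append(scenario["name"])
                  return failures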

            • Uber _started_ setting up virtual world testing of its AI _after_ this accident. They are clowns.

              Tesla appears to be a little better. But how do you debug a neural-net 'AI'? All you can do is change the training dataset and test the results to death. Your test and training sets (hopefully different, and built independently) will be incomplete, and you will kill more people; you're just hoping it will be fewer than would have died on their own.

        • Weird scenarios are very uncommon.

          Here's a good start... [google.com]

      • by MobyDisk ( 75490 )

        That approach works but is limited in several ways: you can only play back scenarios that have actually happened, and it doesn't work if the hardware and sensor arrangement are still changing. It is also slow, since it is more of an integration test than a unit test. I've worked with this approach before, and it is nice once everything is locked down, but it is not great during the R&D phase.

        • by Luthair ( 847766 )
          If you change your sensor arrangement you start from scratch.
          • by MobyDisk ( 75490 )

            3/4 of the way through the project: "If we move the front-left imager 6 inches to the left and adjust the angle 5% up we should fix scenario thusandsuch..." That means you need all new test data, which in this case might require running over several old ladies a few times to get the test data you need. ;-)

      • Comment removed based on user account deletion
      • The testing of these systems should be held to a higher bar than anything at NASA or the aeronautical firms, given the widespread deployment

        Why? NASA considers loss of life unacceptable, yet we're quite happy to accept over 5 million accidents causing over 30,000 fatalities every year from cars. Your demands are completely unreasonable in the face of the risk that has already been accepted by the population as well as the policy makers.

    • Maybe Uber WAS negligent. This article does not convincingly make this case.

      Perhaps. But we do have the evidence that their car mowed down a pedestrian it should have easily detected.

      "A person walking in front of a moving car" isn't exactly some incredibly rare edge case that no one could have thought to test for.

  • by CaptainDork ( 3678879 ) on Tuesday June 19, 2018 @06:09PM (#56812626)

    ... model.

  • is 'skipped', not 'neglected'.
  • I am glad Uber's involuntary manslaughter is getting fully investigated and all the complexities involved discussed. However, there is one much simpler question that I can never find the answer to:

    Was Uber performing automated monitoring of the safety driver?

    There has been discussion of the AI ignoring what the sensors detected in order to avoid over-reacting to false-positive events. It makes sense that the AI would sometimes be tuned incorrectly, provided there was a safety driver *paying attention* to address

  • by 0100010001010011 ( 652467 ) on Tuesday June 19, 2018 @07:41PM (#56812958)

    How in the world did this make it to production without SIL/MIL/HIL testing?

    dSPACE sells HIL benches specifically to test ADAS [engadget.com]
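
    For the curious, a bare-bones software-in-the-loop harness might look like the sketch below (the duck-typed sim and stack objects are assumptions; this is not dSPACE's actual API):

      def closed_loop_sil_test(sim, stack, steps=1000, min_clearance_m=2.0):
          """Run the control software against a simulated pedestrian-crossing
          scenario; fail if the car ever gets closer than min_clearance_m."""
          for _ in range(steps):                        # e.g. ~10 s of sim time at 100 Hz
              commands = stack.step(sim.sensor_frame()) # software under test
              sim.apply(commands)                       # world/plant model advances
              if sim.distance_to_pedestrian() < min_clearance_m:
                  return False
          return True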

  • I sense there is a catch-22 here. If they were able to write a simulator capable of using some sort of chaos theory to produce any possible situation that might occur in the real world, wouldn't they already know how to write an AI engine that could anticipate any possible situation in a self-driving car?
  • I think Uber has adopted the "Move fast and break things" development model...

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...