Transportation Software

Uber Vehicle Saw But Ignored Woman It Struck, Report Says (engadget.com) 323

gollum123 writes: Uber has reportedly discovered that the fatal crash involving one of its prototype self-driving cars was probably caused by software faultily tuned to ignore objects in the road, sources told The Information. Specifically, the system was set up to disregard objects it should have reacted to; Herzberg appears to have been detected, but the detection was dismissed as a false positive.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday May 07, 2018 @03:22PM (#56568906)
    So sorry for any inconvenience.
  • So who is to blame? (Score:4, Interesting)

    by LynnwoodRooster ( 966895 ) on Monday May 07, 2018 @03:22PM (#56568912) Journal
    Who is guilty of vehicular manslaughter, here?
    • by mea_culpa ( 145339 ) on Monday May 07, 2018 @03:28PM (#56568956)

      Probably the person playing on their phone as it was their job to override decisions made by buggy software.

      Also the video Uber released is highly altered. I drive on that street frequently and it is very well lit.

      • Nope, this just proves that the person sitting behind the wheel was correct in not reacting. By the time the operator knew the car had not detected the obstacle, it was too late. Previously when this was discussed, it was pointed out that this is actually a reason that the entire concept of a "safety operator" doesn't work.
        • by I4ko ( 695382 )

          Wrong. At the speed the car was moving, the operator should have expected the vehicle to start braking about 3 car lengths before it entered the shadow of the bridge (so as not to put the passengers through the windshield). If the operator had punched the brake pedal through the floor as the hood was entering the shadow, the hit would not have been fatal, and a full stop was possible. It is a very wide bridge, several lanes of highway.

          • by Xylantiel ( 177496 ) on Monday May 07, 2018 @04:07PM (#56569278)
            My argument is that the obstacle was too obvious. The safety operator also had to overcome their expectation that the car would do the right thing. The engineers doing the post-crash analysis appear to have only a vague idea of why the car didn't appropriately evaluate this blatantly obvious obstacle. But the safety operator was supposed to figure this all out in less than 3 car lengths.
            • "But the safety operator was supposed to figure this all out in less than 3 car lengths."

              They had a lot more than 3 car lengths to realize the car wasn't slowing down.

              • by I4ko ( 695382 )

                Exactly. The bridge is 16 lane widths wide, and crossed almost at a 45-degree angle at that. It is very, very wide. Heck, at that angle, at the 45 mph speed limit, I need 3 lane widths (effectively 4.24 at that angle) to bring the car to a complete stop.
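
                A quick check of that 4.24 figure (illustrative arithmetic only, not data from the crash investigation): at a 45-degree crossing angle, covering 3 lane widths means travelling 3 / cos(45°) lane widths of path.

                import math

                stopping_distance_lanes = 3    # stopping distance, in lane widths
                crossing_angle_deg = 45        # angle of travel relative to the lanes
                path_length = stopping_distance_lanes / math.cos(math.radians(crossing_angle_deg))
                print(round(path_length, 2))   # ~4.24, the figure quoted above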

          • Refresh my memory... the next lane over was clear? Braking is the wrong action at this distance; rather than going for a full stop, you should instead swerve into the clear lane.

        • by msauve ( 701917 ) on Monday May 07, 2018 @04:19PM (#56569382)
          Does the safety driver get any feedback on what the autopilot is planning? I'd think that even a simple green/yellow/red indication to show what it's perceiving (everything's OK/I see something and am prepared to take action/I am taking action) would be useful.

          I could see the car recognizing a potential hazard well in advance of a need to take action - that info should be given to the safety driver. If they in turn take action before the autopilot would have, perhaps an algorithm needs tweaking. And, if the driver sees a potential hazard first, they should be able to provide feedback on that, too, so they can figure out why the human is doing a better job.
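
          A rough sketch of what that green/yellow/red feedback could look like (purely illustrative; the state names and logic here are assumptions, not anything Uber's system is known to expose):

          from enum import Enum

          class SafetyIndicator(Enum):
              GREEN = "everything OK"
              YELLOW = "object tracked, prepared to act"
              RED = "taking action"

          def indicator_state(tracked_hazards: int, action_planned: bool) -> SafetyIndicator:
              # Escalate to YELLOW whenever anything is being tracked as a potential
              # hazard, and to RED once the planner has decided to brake or swerve.
              if action_planned:
                  return SafetyIndicator.RED
              if tracked_hazards > 0:
                  return SafetyIndicator.YELLOW
              return SafetyIndicator.GREEN

          print(indicator_state(tracked_hazards=1, action_planned=False))  # SafetyIndicator.YELLOW
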
        • by Holi ( 250190 )
          You mean when she looked up and said "OH SHIT" because she wasn't looking at the road?
      • Exactly.
        This being test technology, it was their job to override the car when it made a bad decision. And given that Uber's self-driving cars are years behind other makers such as Google, the safety driver should have been much more vigilant.

      • by Raenex ( 947668 )

        I'm not convinced that particular spot was "well lit". She was obviously coming out of a shadow, and there seemed to be only the one light in the area with no overlap.

        • The road there is about as well lit as you would expect a big road to be; it's not as dark as the video implies. Granted, most of the time I'm down there I'm there for a concert and they might have additional lights on, but the video was obviously darker than what a person would see. At the place where she was crossing, the driver should have been able to see her crossing the road for several hundred feet at least.

      • by barc0001 ( 173002 ) on Monday May 07, 2018 @05:00PM (#56569712)

        > Probably the person playing on their phone as it was their job to override decisions made by buggy software.

        I dunno, looking at the video of the crash, the victim crossed the road outside of a crosswalk and wasn't even LOOKING in the direction of potential traffic. I'd assign the lion's share of the blame to the person who literally walked into the path of a brightly lit car without noticing.

      • Also the video Uber released is highly altered. I drive on that street frequently and it is very well lit.

        That doesn't mean the video was altered. It's just a shitty camera.

      • Did you see the entire video? The street is very well lit all the way through, except at the point where the woman was crossing.

        • Re: (Score:3, Insightful)

          by sexconker ( 1179573 )

          I've seen plenty of videos. The street is very well lit in all of them except Uber's video.

          Interestingly, the camera facing the human "driver" is crisp and clear, using your standard "night vision" mode.
          The Uber video is either doctored or doctored.

    • by goombah99 ( 560566 ) on Monday May 07, 2018 @03:35PM (#56569024)

      This is a clinical trial. The FDA has long long long long long experience in conducting clinical trials. Now, one can argue whether the FDA's caution is excessive, but even in the worst case everyone would agree they have a well-established process for assuring something is safe and effective before it is released onto the public.

      Uber is conducting experiments on the public.

      If this were a new drug or treatment or medical procedure they would be shut down.

      This is actually far worse than that because most new drugs or treatments have clear lineages from prior ones that give us high expectations of what the outcome will be.

      The argument that something has to be allowed prematurely because in the long run it will save lives is a failed argument for medicine.

      In this case there is nothing to support the claim that this will save lives in the long run. Sure, one could imagine that it would. But I don't think that's very well established. And if this were a drug study, people would have spent the time and money to establish that.

      The claim that they have conducted 5 million miles of testing (or whatever the figure is) is rubbish. Those are not statistically valid tests. Were execs dashing in front of the cars going 50 miles per hour in any of those tests? I assure you that did not happen.

      Moreover, we already have evidence from those tests that driver re-acquisitions do happen frequently, and that there is a substantial lag in the handover due to human inattention. The fact that they only had one driver in it says Uber is negligent.
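
      For scale on that "5 million miles" figure, a rough back-of-envelope sketch. The ~1.2 fatalities per 100 million vehicle miles number is the commonly cited US human-driver baseline, an assumption on my part, not something from TFA:

      human_fatality_rate = 1.2 / 100_000_000   # fatalities per mile (assumed US human baseline)
      uber_test_miles = 5_000_000                # the figure quoted above

      expected_fatalities = human_fatality_rate * uber_test_miles
      print(f"Expected fatalities at the human rate: {expected_fatalities:.2f}")  # ~0.06
      # So even one fatality in a few million test miles is far above the human baseline,
      # and a few million miles is nowhere near enough driving to demonstrate
      # human-level safety statistically.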

      • It's not a clinical trial unless all the "patients" give informed consent. This is testing on people who have not been forewarned and have given zero consent. That is a very different animal.
        • > This is testing on people that have not be forewarned and have given zero consent

          Have you given your consent to the guy down the street having his first epileptic seizure while driving past your kids playing?

          Framing self driving car tests in drug trial language is useless.

      • So are all the other "autopiloting" car manufacturers. And drive-by-computer, a.k.a. no mechanical links to the brakes, steering, and throttle, is also robotic, which will bite us some day.

      • Skin in the game (Score:5, Informative)

        by iMadeGhostzilla ( 1851560 ) on Monday May 07, 2018 @05:00PM (#56569722)

        Excellent point re execs. I read that in England, sometime in the Middle Ages, bridge engineers were required to sleep under the bridge for two weeks after construction -- with their families.

      • >Uber is conducting experiments on the public.

        At some point you have to test tech like this in the real world.

        >If this were a new drug or treatment or medical procedure they would be shut down.

        Would it? If this "new drug" had the potential, even with a couple of side effects, to replace or supplant a known drug that was already killing 40,000 people and maiming hundreds of thousands per year in the US alone?

        • by I4ko ( 695382 )

          It's not like movie studios don't have "fake" real towns built on their premises in California. Why weren't these used to test the car, with individuals who consented to be stand-ins for cyclists, pedestrians, and other drivers?

      • If this were a new drug or treatment or medical procedure they would be shut down.

        I'm not sure how you can draw an analogy there. In a clinical drug trial, the drug doesn't go out and kill someone not part of the trial.

      • (Should we have) execs dashing in front of the cars going 50 miles per hour in any of those tests?

        This is a really cool idea. It would "drive" home the point on system safety. Think how much more thought there'd be about operating safely when the execs gotta put their lives on the line for the work of their minions.

      • by stephanruby ( 542433 ) on Monday May 07, 2018 @05:39PM (#56569958)

        If this were a new drug or treatment or medical procedure they would be shut down.

        Uber self-driving tests have been (mostly) shut down.

        Uber makes it sound like they suspended their testing operations voluntarily, but the fact is they lost their testing permits in Arizona, California, and one other state.

        And if there is any testing going on with Uber now, it's only happening in computer simulations, or in mocked-up urban environments with fake pedestrians and bicyclists.

    • The head of QA. How many tests on a track with mannequins did they do? It probably should have been hundreds or more.

    • Start at the top: the Uber VP / CEO needs to go to court and take the fall for the full outsourced mess. Otherwise the next one can just outsource things so much that no one person is responsible, and it takes a year or more just to fully understand the outsourced mapping of who did what.

    • Nobody, since the case was settled out of court.

      • Settlement out of court is for civil cases, not criminal cases. The equivalent is a plea bargain, which happens after arraignment.
    • by Holi ( 250190 )
      The safety driver who was too busy looking at her phone instead of the road.
    • Both venture capitalists and politicians.
    • No one, because there was no vehicular manslaughter.

      The woman was found to be at fault [azcentral.com] for not checking that the road was clear before stepping out of the shadows to cross illegally, something she could easily have done since it was dark and the vehicle's headlights were on.

      A large median at the site of the crash has signs warning people not to cross mid-block and to use the crosswalk to the north at Curry Road instead.

  • Oh good. (Score:4, Interesting)

    by Ichijo ( 607641 ) on Monday May 07, 2018 @03:28PM (#56568960) Journal

    The autonomous programming detects items around the vehicle and operators fine-tune its sensitivity to make sure it only reacts to true threats (solid objects instead of bags, for example).

    Then it's an easy fix. Just move the "sensitivity" slider a little to the left.

    Actually, it's kind of terrifying that all that stands between life and death is a sensitivity setting.

    • The autonomous programming detects items around the vehicle and operators fine-tune its sensitivity to make sure it only reacts to true threats (solid objects instead of bags, for example).

      Then it's an easy fix. Just move the "sensitivity" slider a little to the left.

      Actually, it's kind of terrifying that all that stands between life and death is a sensitivity setting.

      Absolutely.

      I hope they use material design so the settings are all hard to see.

      And I really hope it's totally ambiguous whether you have to click Save, or if the changes to the Sensitivity slider will just save automatically, just because you touched them or something.

    • Re: (Score:2, Interesting)

      by Carewolf ( 581105 )

      The autonomous programming detects items around the vehicle and operators fine-tune its sensitivity to make sure it only reacts to true threats (solid objects instead of bags, for example).

      Then it's an easy fix. Just move the "sensitivity" slider a little to the left.

      Actually, it's kind of terrifying that all that stands between life and death is a sensitivity setting.

      It is not the setting that is the problem. The problem is so-called AIs with less intelligence than a cockroach being put behind the wheel of cars.

    • That is generally true about the world.
      If you are a driver and you are distracted or not fully focused on the world around you, your sensitivity setting is just off too. The biggest reason motorcycles get into accidents is because automobile drivers fail to see them, just because they may not be expecting a motorcycle, so their eyes are on the lookout for fast-moving objects that fill up at least 2/3 of the lane. This fact that we fail to comprehend things we don't expect is how magicians trick us.

      • is because automobile drivers fail to see them, just because they may not be expecting a motorcycle,

        The reason I didn't see them is because they were coming up between two lanes of moving traffic. You're right, I am not set to expect that.

    • Does the autonomous sensitivity need to be changed all the time? Time of day? Weather? Urban vs. rural?
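
      A purely illustrative sketch of how a context-dependent "sensitivity" might look; the conditions and numbers here are invented for the example, not taken from the article:

      def detection_threshold(time_of_day: str, weather: str, area: str) -> float:
          threshold = 0.5            # baseline confidence needed to treat a track as a real obstacle
          if time_of_day == "night":
              threshold -= 0.1       # darker scene: accept weaker evidence
          if weather in ("rain", "fog"):
              threshold -= 0.1       # noisier sensors: err toward caution
          if area == "urban":
              threshold -= 0.1       # pedestrians far more likely
          return max(threshold, 0.2)

      print(round(detection_threshold("night", "clear", "urban"), 2))  # 0.3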

    • Just move the "sensitivity" slider a little to the left.

      Actually, it's kind of terrifying that all that stands between life and death is a sensitivity setting.

      There is no "correct" setting for sensitivity because the software is broken. Where it is right now is both "incorrectly classifying a safe situation as dangerous" AS WELL AS "incorrectly classifying a dangerous situation as safe" (probabilities apply).

      Which way do you want the slider to move? It's already too far from the correct position for both classifications.

    • By the way, this was a false negative (in detection), not a false positive.

      A false negative means it decided there was nothing there, i.e. that the data did not indicate an object. But that was false.

      A false positive would mean the system decided something was in front of the car when there wasn't anything there, or at least nothing significant.

      The problem is that when you set the parameters to lower the rate of false negatives (a good thing if you happen to be a pedestrian), the rate of false positives goes up.
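
      A minimal sketch of that trade-off (toy scores and a made-up confidence threshold, not Uber's actual classifier): lower the threshold and you stop missing real obstacles but start braking for bags; raise it and the reverse happens.

      # (confidence score from the perception stack, whether it is a real obstacle)
      detections = [
          (0.95, True),    # pedestrian, clearly visible
          (0.40, True),    # pedestrian, partially occluded
          (0.30, False),   # wind-blown plastic bag
          (0.10, False),   # sensor noise
      ]

      def count_errors(threshold):
          false_negatives = sum(1 for score, real in detections if real and score < threshold)
          false_positives = sum(1 for score, real in detections if not real and score >= threshold)
          return false_negatives, false_positives

      for threshold in (0.2, 0.5, 0.8):
          fn, fp = count_errors(threshold)
          print(f"threshold={threshold}: misses {fn} real obstacle(s), brakes for {fp} phantom(s)")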

  • Too large! (Score:5, Interesting)

    by HornWumpus ( 783565 ) on Monday May 07, 2018 @03:30PM (#56568974)

    I understand that programmatically telling a blowing plastic bag from a child's toy is difficult.

    But she (and her bike) were clearly large enough to damage the vehicle. Even if the code saw her as debris, the car should have avoided it.

    I think the code had to have dismissed her as lens flare or something similar.

    • Re: (Score:2, Insightful)

      by Xylantiel ( 177496 )
      But that's the danger of machine learning, which seems to pass for AI today: you often don't really know why it does anything that it does. That makes it difficult to test for correct function, to say the least. I guess this test failed. Maybe they shouldn't be testing this on public streets.
    • I think the code had to have dismissed her as lens flare or something similar.

      Damn you Michael Bay!

  • Uber cuts corners (Score:5, Insightful)

    by DogDude ( 805747 ) on Monday May 07, 2018 @03:35PM (#56569026)
    Uber's entire business model is based on cutting corners (not paying employees as employees, not following local taxi laws/regulations, etc.). I wasn't at all surprised to hear that one of their self-driving test cars killed somebody. I immediately assumed that it was the result of yet another corner that they cut.
    • It seems to me that all automated cars are based on cutting corners. Since they are doing poorly, they could add sensors to eliminate a lot of issues, but they don't, because trying to solve this with programming is easier. That is, if these sensors are actually as capable as Slashdotters SAY they are, the only conclusion can be that there aren't enough of them. The car is checking a thousand times a second, after all.
      • I think a big part of the problem is unwillingness to spend on a really really good sensor array. Being confused by stationary objects is only a thing because the computer needs to make guesses with choppy data that is not actually reliable. So, instead of making the occupants seasick with lots of popping on the brakes for no apparent reason, they try to teach the car to ignore some signals.

    • Uber's entire business model is based on cutting corners (not paying employees as employees, not following local taxi laws/regulations, etc.). I wasn't at all surprised to hear that one of their self-driving test cars killed somebody. I immediately assumed that it was the result of yet another corner that they cut.

      And I think your assumption is pretty valid. I believe there will be some attorneys that agree with me as well. That wrongful death suit is going to be very expensive and damaging to Uber.

      • That wrongful death suit is going to be very expensive and damaging to Uber.

        You think so? Because from what I see, Uber settled confidentially with the woman's family within 11 days after the accident.

    • Maybe that's a fair immediate assumption, but did you view the video and, if so, did it cause you to reevaluate your initial assessment?

      I started by assuming that the technology was still in its infancy and hence crap[1], then I saw the video and realized that not only is the technology crap, but it's literally a person jumping out from a shadow at the last possible moment on a large thoroughfare nowhere near a crosswalk.

      [1] Not even a judgment on Uber TBQH, could have been Tesla or GM or Toyota. I've seen enough technologies come up to realize that the cutting edge is riddled with snakes. By the time it's thoroughly ironed out, it's also super boring.

      • by quantaman ( 517394 ) on Monday May 07, 2018 @05:01PM (#56569728)

        Maybe that's a fair immediate assumption, but did you view the video and, if so, did it cause you to reevaluate your initial assessment?

        I started by assuming that the technology was still in its infancy and hence crap[1], then I saw the video and realized that not only is the technology crap, but it's literally a person jumping out from a shadow at the last possible moment on a large thoroughfare nowhere near a crosswalk.

        [1] Not even a judgment on Uber TBQH, could have been Tesla or GM or Toyota. I've seen enough technologies come up to realize that the cutting edge is riddled with snakes. By the time it's thoroughly ironed out, it's also super boring.

        I did view the video.

        And like most people I came to the conclusion that Uber was either using ridiculously bad cameras or the video was altered. This impression was only compounded when 3rd party videos came out that showed the road in question was actually quite well lit.

        Either way, Uber was still fully to blame for the collision; the tech was obviously not ready for testing on live roads, especially not with a single driver who was prone to being distracted. Authorizing that test is damn close to negligent homicide.

      • by Green Mountain Bot ( 4981769 ) on Monday May 07, 2018 @05:44PM (#56569992)

        ... it's literally a person jumping out from a shadow at the last possible moment on a large thoroughfare nowhere near a crosswalk.

        If by "jumping out from a shadow" you mean "slowly crossing the street", and by "at the last possible moment" you mean "and had nearly crossed all three lanes", ignoring that there's plenty of evidence that the released video did not even vaguely show the actual level of light in the location.

  • Criminal case! Let's see the Uber CEO in Tent City jail for some time.

    Also, with a criminal case you can't hide behind EULAs or a big list of subcontractors.

  • In autopilot software (airplanes / FAA), this would be tuned in testing and code review before it made it to real use.

  • My guess is that the problem is actually much more complicated than Uber is making it out to be. The easiest way out for them has become "oops, we didn't set the software up right." This allows them to save face, because even if it isn't true, they just have to avoid the same circumstance in their testing (like only driving in the day) and then everyone thinks they fixed their software. I really hope they have to provide absolute proof that the problem is exactly what they are saying it is.
  • In all likelihood the AI did detect the woman, but then decided she wasn't attractive enough to harass and switched to "ignore" mode.

  • TFA doesn't mention how much time, if any, was allowed for compliance. Compliance errors were even documented in an old movie.

    https://www.youtube.com/watch?... [youtube.com]

  • by MobyDisk ( 75490 ) on Monday May 07, 2018 @04:51PM (#56569656) Homepage

    I still want to know why nobody seems to care that the driver wasn't looking at the road. The software bug is secondary.

  • Uber software did not "see but ignore woman".

    Uber software processed some pixel data and erroneously concluded that there was no significant solid object right in front of the car.
    Simple as that.

    To imply that the software "saw" a person there but ignored the person is pejorative, sensationalist language, designed to troll.

  • So... this is a QA cycle at the expense of 'users'? Did they not test it in a more controlled environment? I hope they (all of them) will now.
