Transportation Technology

Uber Self-Driving Cars Allowed Back On California Roads (bbc.com) 30

Nearly two years after one of Uber's self-driving cars was involved in a fatal crash in Arizona, the ride-hailing firm has been allowed to test its autonomous vehicles on public roads in California. "Receiving a permit in California -- which has granted permits to 65 other transport firms -- is the latest step in Uber's revival of the program," reports the BBC. From the report: California allows companies to test self-driving technology with a backup driver in the car. Before the fatal crash, Uber's self-driving cars were being tested in four locations in the United States -- Phoenix, Toronto, Pittsburgh and San Francisco. Uber said it is considering using San Francisco, where the company is based, again. It did not give a timeline for when it will resume testing. California has granted permits to 66 companies in total to test autonomous vehicles, but Uber is the only one that has been involved in a fatal crash.
  • Christine.
  • by 0100010001010011 ( 652467 ) on Wednesday February 05, 2020 @06:59PM (#59695342)

    Uber's self-driving division exists only insofar as they can pretend to have enough of a self-driving division to get bought out and walk away, despite having no actual decent IP.

    They're trying to unicorn their way into functional safety, and their lack of experience shows. First they went in and just poached Carnegie Mellon's entire top robotics lab to build self-driving cars.

    Mistake 1. Throwing unlimited money at a bunch of PhD students who have never worked outside academia does not produce an actual product.
    Then, to rein them in, they moved in some Valley folk.

    Mistakes 2 & 3. You can't just herd PhD students, they're skittish. You can't node.js development style your way into Functional Safety.

    Then they decided "Maybe we should hire functional safety people", or at least people who have heard of ISO 26262.

    Mistake 4. They started looking last for what they needed first. Meaning they skipped over *everything*, like proper requirements. The recruiter was proud they were tearing it apart and "rewriting it from scratch weekly."

    dSPACE GmbH makes a device explicitly designed as a test environment for testing an ECU's Autonomous Emergency Braking (AEB) function with hardware-in-the-loop simulation [dspace.com]. When asked if they were running HIL tests, the recruiter said "They're getting to it". (This was months after the woman was killed.)
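
    For the curious, here's the shape of such a closed-loop AEB test -- a toy Python sketch, not dSPACE's actual tooling; the plant model, thresholds, and names are all invented:

        # Toy closed-loop "HIL-style" AEB test: a simulated plant (vehicle
        # approaching a stationary pedestrian) feeds sensor readings to the
        # logic under test every tick; the test asserts the vehicle stops short.

        def aeb_triggered(gap_m: float, speed_mps: float) -> bool:
            """Logic under test: request emergency braking below 1.5 s time-to-collision."""
            return speed_mps > 0.0 and gap_m / speed_mps < 1.5

        def run_scenario(initial_gap_m: float, speed_mps: float, dt: float = 0.01) -> float:
            """Step the plant until the vehicle stops; return the remaining gap in metres."""
            gap, v, braking = initial_gap_m, speed_mps, False
            while v > 0.0:
                braking = braking or aeb_triggered(gap, v)        # latch: once on, brake to a stop
                v = max(0.0, v - (8.0 * dt if braking else 0.0))  # 8 m/s^2 emergency decel
                gap -= v * dt
                if gap <= 0.0:
                    return gap  # collision
            return gap

        # "Test case": 50 km/h approach toward a pedestrian 30 m ahead.
        final_gap = run_scenario(initial_gap_m=30.0, speed_mps=50 / 3.6)
        assert final_gap > 0.0, "AEB failed: vehicle hit the pedestrian"
        print(f"stopped {final_gap:.1f} m short of the pedestrian")

    A real HIL rig runs the same loop against the physical ECU with simulated sensor electronics, which is exactly the part you can't skip.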

    Volvo's semi-truck division demonstrated its emergency braking algorithms back in 2013 [youtube.com]. Algorithms that Uber just turned off in the production vehicle they hacked apart.

    Uber is playing fast and loose with everything safety-related for a cash grab; more people are going to die.

    Tesla's customers think they have ADAS5 when it's closer to ADAS2.9 (TODO: Broad side of semi trailers).

    Uber is trying to sell itself as ADAS4 to the public when really it's just a bunch of ADAS 0.9s strung together, poorly.

    • Well, everyone wanted a solution to the homeless problem but nobody wanted to spend money on it. Are you surprised that this is the result?

      • Nobody wants to be the sacrificial lamb in rapid development, but if robo cars save 1/3 of the lives lost to human drivers, that's well over 10,000 a year in this country alone.

        If that's delayed 10 years because we don't wanna kill 4 pedestrians a year: "Congrats, Masters. Your principles killed 99,960 people," said House with matter-of-fact disgust. (That's the 100,000 saved over the decade, minus the 40 pedestrians.)

    • Cash grab. You just nailed it. Uber is nothing but an elaborate scam.
    • by 0100010001010011 ( 652467 ) on Wednesday February 05, 2020 @07:50PM (#59695666)

      If anyone is interested, here are the tools and testing that you *should* be doing:

      - dSPACE GmbH [dspace.com] has been making HIL test benches for some time. Its competitor ETAS [etas.com] just turned 25.
      - Vector Informatik [vector.com] develops software tools and components for networking of electronic systems based on the serial bus systems CAN, LIN, FlexRay, MOST, Ethernet, AFDX, ARINC 429, and SAE J1708 as well as on CAN-based protocols such as SAE J1939, SAE J1587, ISO 11783, NMEA 2000, ARINC 825, CANaerospace, CANopen and more.
      - VectorCAST [vector.com] for software test automation, often on the target microprocessor (communicating over CAN/JTAG)
      - MATLAB & Simulink [mathworks.com] for the DO-178C/ISO 26262 certification bits.
      - Polyspace [mathworks.com] to formally prove the absence of critical run-time errors without executing code, check coding rules, security standards, and code metrics, and find bugs, for both C and Ada.

      They usually have conferences that are pretty technically in-depth. [dspace.com]

      Uber's method was "we'll pay someone to text behind the wheel".
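
      To make the contrast concrete, here's a toy version of the requirements-traced, table-driven testing that tooling like VectorCAST automates -- plain Python rather than any vendor's actual harness, with made-up requirement IDs and thresholds:

          # Each test vector traces to a (made-up) requirement ID, so a coverage
          # report can show exactly which requirements were exercised.

          def forward_collision_warning(range_m: float, closing_speed_mps: float) -> bool:
              """Unit under test: warn when time-to-collision drops below 2.0 s."""
              if closing_speed_mps <= 0.0:
                  return False  # gap is static or opening: never warn
              return range_m / closing_speed_mps < 2.0

          # (requirement_id, range_m, closing_speed_mps, expected_warning)
          TEST_VECTORS = [
              ("SWR-FCW-101", 50.0, 10.0, False),  # TTC 5.0 s: no warning
              ("SWR-FCW-102", 15.0, 10.0, True),   # TTC 1.5 s: warn
              ("SWR-FCW-103", 10.0,  0.0, False),  # no closing speed: never warn
              ("SWR-FCW-104", 10.0, -5.0, False),  # gap opening: never warn
              ("SWR-FCW-105", 20.0, 10.0, False),  # TTC exactly 2.0 s: boundary, no warn
          ]

          for req_id, rng, v, expected in TEST_VECTORS:
              got = forward_collision_warning(rng, v)
              assert got == expected, f"{req_id} FAILED: got {got}, expected {expected}"
              print(f"{req_id}: pass")

      The point isn't the ten lines of code; it's that every threshold and boundary condition is written down, traceable, and re-run on every change -- the *everything* Uber reportedly skipped.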

    • Tesla's customers think they have ADAS5 when it's closer to ADAS2.9 (TODO: Broad side of semi trailers).

      Uber is trying to sell itself as ADAS4 to the public when really it's just a bunch of ADAS 0.9s strung together, poorly.

      Agree about Tesla Autopilot being just Level 2+. However, nitpicking terminology, ADAS is always Level 2 in the SAE autonomous driving levels. Uber is also ADAS and Level 2 because the driving autonomy levels don't mandate how well the functionality is implemented. Tesla's Level 2 is better than Uber's Level 2, but Autopilot also has its issues.
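
      For reference, the SAE J3016 levels being mapped onto here, paraphrased from memory (treat the wording as approximate):

          # SAE J3016 driving automation levels, paraphrased:
          SAE_LEVELS = {
              0: "No automation (warnings/momentary assistance only)",
              1: "Driver assistance (steering OR speed support)",
              2: "Partial automation (steering AND speed; driver supervises constantly)",
              3: "Conditional automation (system drives; driver must take over on request)",
              4: "High automation (no driver needed, but only in a limited domain)",
              5: "Full automation (no driver needed anywhere)",
          }
          print(SAE_LEVELS[2])

      Everything Uber and Tesla ship still requires a supervising human, which is what caps them at Level 2 regardless of marketing.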

    • Algorithms that Uber just turned off that were in the production vehicle they hacked apart.

      I agree with everything you said except for this. When testing automation systems, you have to disable other competing algorithms; otherwise you mask problems in your testing or, worse, mistrain your algorithm, making it specific to circumstance/equipment.

      Now, when you disable such things, you need supervision.

      It helps when that supervision isn't watching YouTube videos.
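
      The masking effect is easy to demo in a toy sim: leave a stock braking algorithm running alongside a deliberately broken system under test, and the run "passes" anyway. All numbers invented, nothing vendor-specific:

          # With the stock system enabled, the broken controller's bug never
          # shows up in the result -- which is why you disable it (and then
          # actually supervise the vehicle).

          def controller_under_test(gap_m: float, speed_mps: float) -> float:
              return 0.0  # deliberately broken: never brakes

          def stock_aeb(gap_m: float, speed_mps: float) -> float:
              return 8.0 if speed_mps > 0 and gap_m / speed_mps < 1.5 else 0.0

          def simulate(stock_enabled: bool, gap: float = 30.0, v: float = 13.9, dt: float = 0.01) -> bool:
              """Return True if the run ends without a collision."""
              while v > 0.0:
                  decel = controller_under_test(gap, v)
                  if stock_enabled:
                      decel = max(decel, stock_aeb(gap, v))  # stock system overrides
                  v = max(0.0, v - decel * dt)
                  gap -= v * dt
                  if gap <= 0.0:
                      return False
              return True

          print("stock AEB on: ", "no collision" if simulate(True) else "collision")
          print("stock AEB off:", "no collision" if simulate(False) else "collision")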

      • Algorithms that Uber just turned off that were in the production vehicle they hacked apart.

        I agree with everything you said except for this. When testing automation systems, you have to disable other competing algorithms; otherwise you mask problems in your testing or, worse, mistrain your algorithm, making it specific to circumstance/equipment.

        Now, when you disable such things, you need supervision.

        It helps when that supervision isn't watching YouTube videos.

        So they should create a new AI to observe the supervisor and give increasing electric shocks when it detects distraction or wandering attention. I guess that would then need a human supervisor to make sure it doesn't deliver fatal shocks whenever the supervisor blinks. Then an AI observer for that supervisor, and on and on, until you end up with basically a bus full of people holding devices, nervously looking at each other.

  • Dodging needles and human crap on the sidewalk is bad enough; now there's gonna be Uber autobots running people down on the roads, and the sidewalks, ... and the grass. (Bonus points to anyone who knows that reference. Hint: Steve Jackson.)
  • ... next.

  • No, seriously, why?

    Even the engineers say we won't have functional self-driving cars until 2030.

    • I predict we'll have functionally safe self-driving cars for about 2.5 hours before Microsoft steps in and somehow convinces them all their cars need to be Office-compatible. Then it'll be the Melissa virus all over again, only with real crashes.

    • Corollary: what has Uber done to warrant this change?
      Aside: did the BBC's "journalist" even ask that question? Why not? What did they think they were doing, stenography?
    • We won't have them well past 2030 either, not until they figure out how real AI can be done (if at all). What they're using now will always fall short because it has no capacity to think.
      • by HiThere ( 15173 )

        Define your terms.

        When you say it can't "think", I don't know what you mean, so I can't tell whether you're reasonable, pessimistic, or a soulist.

        • If you could define think, it would be a much easier game.
          • by HiThere ( 15173 )

            The thing is, I've got several definitions for think that are each useful in different areas.

            When I'm talking normally, it means approximately "estimate as having a high probability of occurrence", but there are also contexts where it means "reason logically" or "recognize or predict a pattern". I'm sure there are other contexts that aren't occurring to me at the moment. ... OK, here's one. "A thermostat controlled system executes the minimum amount of thinking possible to maintain a homeostasis." That d

            • "A thermostat controlled system executes the minimum amount of thinking possible to maintain a homeostasis."

              No, that's called 'anthropomorphizing'; a thermostat does not in any way, shape, or form 'think', it merely reacts according to well-defined physics and provides a very simple, pre-determined function. An amoeba has more cognitive capability than any so-called 'AI' they keep trotting out; a fruit fly is like a 200-IQ genius in comparison to even the biggest, most expensive 'AI' anyone has ever produced. You cannot define what makes 'human thought' work, therefore you cannot build machines or write software that
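
              To the parent's point about how little a thermostat actually does: the entire control law is one comparison with hysteresis. A toy Python sketch, with made-up setpoints and room model:

                  def thermostat(temp_c: float, heating: bool, low: float = 19.5, high: float = 20.5) -> bool:
                      """Bang-bang control with hysteresis: heat below low, stop above high."""
                      if temp_c < low:
                          return True
                      if temp_c > high:
                          return False
                      return heating  # inside the deadband: keep doing whatever we were doing

                  # Tiny room model: the heater adds heat, the room leaks it.
                  temp, heating = 17.0, False
                  for minute in range(120):
                      heating = thermostat(temp, heating)
                      temp += 0.3 if heating else -0.1
                  print(f"after 2 h: {temp:.1f} C, heater {'on' if heating else 'off'}")

              Whether you call that one comparison "thinking" is exactly the definitional question the GP raised.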

          • If we could define 'think', then we'd be able to build walking, talking general AI instead of the half-assed excuse for it they keep trotting out in the media, and we wouldn't need to have this discussion at all; we'd have silicon minds that could be taught how to drive the way we teach humans, and it would all be moot.
        • We don't even understand 'thinking' (i.e. 'cognition') well enough to define what it is, let alone build machines that can do it. 'Deep learning algorithms' clearly are not it, though. A human can 'think' their way through a situation they're not prepared for, granted with variable results. A so-called 'self-driving car' cannot, because it has no capability to do so, so it just comes to a halt -- or perhaps ignores something a human would have paid attention to, like an obstacle (or a person it ends up
    • Weird. I rode in a self-driving Lyft car in Las Vegas a month ago. I must have skipped a decade.

  • Who will be the first person to die? If Uber is grossly negligent, who goes to jail?
    • The poor sucker they roped into clicking "Next" on a bunch of IBM DOORS requirements.

    • by tflf ( 4410717 )

      People are going to die no matter what controls the vehicle. Death and serious injury happen every day while riding in a vehicle. While increased safety standards, better equipment, and better roads have made driving safer, the biggest challenge to safe vehicular travel remains the nut behind the wheel. At the end of the day, most people accept the relatively low chance of death or serious injury as a reasonable trade-off for the benefits of vehicular travel.
      Let's just forget expecting Uber (or anyone else)

  • I need an app that tells me where all the self-driving cars are in real time, so I can avoid them.
  • four locations in the United States -- Phoenix, Toronto, Pittsburgh and San Francisco

    Who knew?
