
Inside Google's Self-Driving Car Test Center (medium.com)

An anonymous reader writes: Steven Levy reports on his trip to the facility where Google tests its autonomous vehicles (here's a map). The company apparently has a four-week program to certify people to not-drive these cars, and they gave Levy an abbreviated version of it. "The most valuable tool the test team has for making sure things are running smoothly is the laptop on the co-driver's lap. Using an interface called x_view, the laptop shows the world as the car sees it, a wireframe representation of the area that depicts all the objects around the car: pedestrians, trees, road signs, other cars, motorcycles—basically everything picked up by the car's radar and laser sensors.

X_view also shows how the car is planning to deal with conditions, mainly through a series of grid-like "fences" that depict when the car intends to stop, cautiously yield, or proceed past a hazard. It also displays the car's path. If the co-driver sees a discrepancy between x_view and the real world, that's reason to disengage. ... At the end of the shift, the entire log is sent off to an independent triage team, which runs simulations to see what would have happened had the car continued autonomously. In fact, even though Google's cars have autonomously driven more than 1.3 million miles—routinely logging 10,000 to 15,000 more every week—they have been tested many times more in software, where it's possible to model 3 million miles of driving in a single day."
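Google's tooling itself is not public, but the core check x_view supports (compare the car's world model against what the co-driver actually sees, and disengage on any mismatch) is easy to sketch. Below is a minimal, hypothetical Python illustration; the class names, fence kinds, and the disengage rule are all assumptions, not Google's actual interface.

```python
# Hypothetical sketch of an x_view-style world model. All names and
# structure here are illustrative assumptions; the real interface is
# not public.
from dataclasses import dataclass
from enum import Enum

class FenceKind(Enum):
    STOP = "stop"        # car intends to come to a halt at this line
    YIELD = "yield"      # car will proceed cautiously past a hazard
    PROCEED = "proceed"  # car will pass the hazard at speed

@dataclass
class TrackedObject:
    kind: str            # "pedestrian", "car", "road_sign", ...
    x: float             # meters, car-relative
    y: float

@dataclass
class Fence:
    kind: FenceKind
    distance_m: float    # how far ahead the fence is drawn

def needs_disengage(perceived: list[TrackedObject],
                    ground_truth: list[TrackedObject],
                    tolerance_m: float = 1.0) -> bool:
    """Crude discrepancy check: if the co-driver sees an object the
    car's wireframe does not (or the positions differ wildly), that
    is reason to take over, per the article."""
    for real in ground_truth:
        matched = any(
            obj.kind == real.kind
            and abs(obj.x - real.x) < tolerance_m
            and abs(obj.y - real.y) < tolerance_m
            for obj in perceived
        )
        if not matched:
            return True
    return False

# Example: the car plans a yield fence, but its sensor stack has
# missed a pedestrian the co-driver can see.
fences = [Fence(FenceKind.YIELD, 18.0), Fence(FenceKind.STOP, 30.0)]
seen = [TrackedObject("car", 12.0, 0.5)]
truth = [TrackedObject("car", 12.0, 0.5),
         TrackedObject("pedestrian", 6.0, -2.0)]
assert needs_disengage(seen, truth)  # co-driver should disengage
```

In the real system the "ground truth" is the co-driver's own eyes rather than a second data feed; the point of the sketch is only that disengagement is triggered by a perception mismatch, not by a crash.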

  • by Gravis Zero ( 934156 ) on Friday January 15, 2016 @04:37PM (#51310061)

    it's not bigger on the inside. :(

  • by 110010001000 ( 697113 ) on Friday January 15, 2016 @04:39PM (#51310087) Homepage Journal
    Why doesn't medium.com just buy Slashdot and get it over with?
  • Is it fair to say these cars are currently safer than human drivers when a dedicated co-pilot is nearly always present, double-checking the car, and shutting it off for a human to take over at the slightest problem? How many accidents have been prevented?
    For those of you who haven't seen the University of Michigan report [govtech.com]
    • No, it isn't fair to say that. Google's cars depend on extensive pre-mapping of the driving space; that is why the tests are limited to certain areas. If you put one of these cars in the middle of Philadelphia, it wouldn't work. Basically, autonomous driving is a con job at this point.
      • Who exactly is Google conning? So far, the only ones losing money on self-driving cars are the companies like Google that are researching them.

        • by MrL0G1C ( 867445 )

          Indeed, Google is conning itself. They've said their goal is autonomous cars by 2020, but they still haven't actually said how they hope to achieve this; primarily, will it be with or without constant, extensive mapping?

          And now they've revealed that perhaps the only reason the cars didn't crash is that humans took control of them whenever they made mistakes.

    • by khasim ( 1285 )

      The problem with that report is that it only covers reported accidents.

      ANY accident that involves an autonomous car gets reported.

      A similar accident between two humans may not be reported.

      And, finally, the report even states that NONE of the accidents were the fault of the autonomous cars. They were ALL the fault of the human drivers. So, yes, the autonomous cars are better drivers than the humans in those instances.

      • Two things.
        1) The first sentence of the report says "after correcting for underreported accidents".
        2) Most of the mileage of autonomous cars is at 25 mph on residential streets or on open, non-congested highways under perfect driving conditions.
        So I'm not sure it's comparing apples to apples. I haven't been able to find a non-paywalled version; if anyone has one, I'd like to read it, but I'm not paying.
        • by khasim ( 1285 )

          From the abstract:

          Second, the corresponding 95% confidence intervals overlap. Therefore, we currently cannot rule out, with a reasonable level of confidence, the possibility that the actual rates for self-driving vehicles are lower than for conventional vehicles.

          So they are admitting that their "higher" rate may not be correct. Which is what I said.

          Secondly, yes, it is "apples to apples", because the human drivers involved in the accidents with the autonomous cars were all driving under the exact same conditions.
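The overlapping-intervals point from the abstract is easy to reproduce with toy numbers. A minimal sketch, assuming made-up crash counts and simple Poisson-style intervals per million miles; none of these figures come from the paper:

```python
# Toy illustration of why overlapping confidence intervals prevent a
# firm conclusion. Crash counts and mileages are invented, not the
# paper's data.
import math

def poisson_rate_ci(events: int, miles: float, z: float = 1.96):
    """Approximate 95% CI for a crash rate per million miles, using a
    normal approximation to the Poisson count."""
    rate = events / miles * 1e6
    half_width = z * math.sqrt(events) / miles * 1e6
    return rate - half_width, rate + half_width

# Hypothetical: 11 reported crashes in 1.3M autonomous miles, vs. a
# human rate estimated from vastly more miles.
av_lo, av_hi = poisson_rate_ci(events=11, miles=1.3e6)
hu_lo, hu_hi = poisson_rate_ci(events=4100, miles=1e9)

print(f"AV:    {av_lo:.2f} .. {av_hi:.2f} crashes per million miles")
print(f"Human: {hu_lo:.2f} .. {hu_hi:.2f} crashes per million miles")
# If the intervals overlap, we cannot rule out that the AV rate is
# actually the lower one, which is all the abstract claims.
print("overlap:", av_lo <= hu_hi and hu_lo <= av_hi)  # True here
```

With so few autonomous miles, the AV interval is wide enough to swallow the human estimate, which is exactly the ambiguity the abstract describes.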

          • Nice goalpost moving. You said reported accidents. You barely skimmed the paper before replying. So, no, your argument is invalid, as I'm sure your armchair guesswork is better than peer-reviewed papers.
            • by khasim ( 1285 )

              Nice goalpost moving.

              Don't use terms you do not understand.

              You claimed that the autonomous cars were WORSE drivers because they had MORE accidents. I said that 100% of accidents involving autonomous cars had to be reported, but not accidents between humans, so there is a question of whether the comparison is accurate.

              You claimed that the paper said that they had adjusted for that.

              I quoted the summary saying that those researchers admitted that they were NOT sure that the adjustment was correct.

              And then I pointed

              • by khasim ( 1285 )

                Or let's try this a different way, as a thought experiment. Think of both sides as two humans: Autonomous Alice and Bob.

                Alice drives less than Bob. And Alice only drives under perfect conditions in a limited area. Bob drives everywhere in all conditions.

                Bob does not report every accident he has to his insurance company. But Alice does. The insurance company sees that, on average, Alice reports more accidents than Bob. And the insurance company tries to adjust for Bob's under-reporting.

                But every single accident Alice has is on record, so her numbers look worse even if she is actually the better driver.
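The Alice/Bob argument comes down to one line of arithmetic: the adjusted comparison is only as good as the assumed reporting rate. A toy illustration with invented numbers:

```python
# Alice/Bob under-reporting arithmetic. Every figure here is invented
# purely to illustrate the argument above.
alice_crashes = 3               # autonomous: 100% of accidents reported
alice_miles = 1.0e6

bob_reported = 2                # human: minor accidents go unreported
bob_miles = 1.0e6
assumed_reporting_rate = 0.40   # the insurer's correction is a guess

bob_adjusted = bob_reported / assumed_reporting_rate   # 5.0 estimated crashes

print(alice_crashes / alice_miles * 1e6)   # 3.0 per million miles
print(bob_adjusted / bob_miles * 1e6)      # 5.0 per million miles
# Change the guess to 0.80 and Bob's adjusted rate drops to 2.5,
# below Alice's: the conclusion flips on an unobserved parameter.
```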

      • Plus, you failed to take into account that these cars are piloted by humans who, in all likelihood, prevented accidents from happening, which was my original point. For a fair safety study, each time the human took control in an unsafe situation should be counted as a crash. That was never documented or released to the public.
  • Self-driving car = Google
    Driverless car = Everyone else
    • by GuB-42 ( 2483988 )

      Google's cars, the ones that go into actual traffic, are self-driving, not driverless, because there is still a driver in the car in case of emergency.
      Passenger airplanes nowadays are pretty much self-flying, but they are not pilotless.

  • In fact, even though Google's cars have autonomously driven more than 1.3 million miles—routinely logging 10,000 to 15,000 more every week—they have been tested many times more in software, where it's possible to model 3 million miles of driving in a single day."

    That would be three million miles over the same few miles of well-understood track and roads. The real world is much more varied than that.

  • by smooth wombat ( 796938 ) on Friday January 15, 2016 @05:40PM (#51310521) Journal
    Google revealed that their vehicles had been involved in 341 "disengagements" (times when the driver had to take over) between September 2014 and October 2015. Of those, 79.8% were due to a failure of the autonomous system.

    Read the details here [yahoo.com] which outline the results of the report.
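As a quick sanity check on those numbers, the split below just re-derives the percentage from the comment above:

```python
total_disengagements = 341      # Sep 2014 - Oct 2015, per the report
system_failure_share = 0.798    # 79.8% attributed to the autonomous system

system_caused = round(total_disengagements * system_failure_share)
print(system_caused)                         # 272 system-initiated handoffs
print(total_disengagements - system_caused)  # 69 for other reasons
```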
    • by Kjella ( 173770 )

      I wouldn't say it failed. As the article explains, it does not mean the sensors failed to detect something; it means the sensors detected a glitch/malfunction/blockage and alerted the driver to take over. Presumably that means the test models lack redundancy and the ability to "vote out" malfunctioning gear; those are things that are relatively trivial to fix, since it's purely a technical issue and not about the car's understanding of the surroundings. In fact, over the test period they show about a 7x improvement on that, to once per >5,000 miles; it's the other 20% that are interesting after simulations. (A generic sketch of such sensor voting appears after this sub-thread.)

      • by MrL0G1C ( 867445 )

        Presumably that means the test models lack redundancy and the ability to "vote out" malfunctioning gear; those are things that are relatively trivial to fix, since it's purely a technical issue and not about the car's understanding of the surroundings. In fact, over the test period they show about a 7x improvement on that, to once per >5,000 miles; it's the other 20% that are interesting after simulations.

        That's a rather massive presumption and could be completely wrong; for instance, the failure could be on
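"Voting out" malfunctioning gear, as suggested above, is standard redundancy engineering; triple modular redundancy is the classic form. A minimal sketch, assuming three redundant range sensors and a simple median vote; this is generic practice, not Google's actual design:

```python
# Minimal majority/median voting over redundant sensors: a generic
# TMR-style sketch, not Google's implementation.
from statistics import median

def voted_reading(readings: list[float], max_spread: float = 0.5):
    """Return the median of three redundant sensor readings, flagging
    any sensor that disagrees with the median by more than max_spread.
    With a single faulty sensor, the median is still trustworthy."""
    m = median(readings)
    faulty = [i for i, r in enumerate(readings) if abs(r - m) > max_spread]
    return m, faulty

# One range channel glitches high; the vote masks it.
value, suspects = voted_reading([14.9, 15.1, 42.0])
print(value)     # 15.1 -- the glitching sensor is outvoted
print(suspects)  # [2]  -- flag sensor 2 for maintenance / driver alert
```

A car that can outvote a single bad sensor can keep driving and merely log the fault, rather than handing control back to the human, which is presumably why such disengagements are considered easy to engineer away.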

  • Why would a car test center need to drive?

  • In TFA, the car has to be taken off auto-drive because it comes to a construction area and slows down so much that it's barely moving.

    These things are not going to work in this form, and pushing them onto the market will be a disaster.

    Car AI is much, much improved, and I can see groups of electric semi-trucks following one lead driver on an interstate, but that's about it.
