AI Google Transportation Technology

California Legalizes Self Driving Cars

Hugh Pickens writes "The Seattle PI reports that California has become the third state to explicitly legalize driverless vehicles, setting the stage for computers to take the wheel along the state's highways and roads ... 'Today we're looking at science fiction becoming tomorrow's reality,' said Gov. Brown. 'This self-driving car is another step forward in this long march of California pioneering the future and leading not just the country, but the whole world.' The law immediately allows for testing of the vehicles on public roadways, so long as properly licensed drivers are seated at the wheel and able to take over. It also lays out a roadmap for manufacturers to seek permits from the DMV to build and sell driverless cars to consumers. Bryant Walker Smith, a fellow at Stanford's Center for Automotive Research, points to a statistical basis for safety that the DMV might consider as it begins to develop standards: 'Google's cars would need to drive themselves (by themselves) more than 725,000 representative miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars. If we look only at fatal crashes, this minimum skyrockets to 300 million miles. To my knowledge, Google has yet to reach these milestones.'"
  • Must pass this test (Score:3, Interesting)

    by o5770 ( 2739857 ) on Wednesday September 26, 2012 @11:36AM (#41465415)
    Here is a scenario: if a self-driving car can pass it 100% of the time, then I would deem it safe to get into.

    Driving on a mountain road around a sharp corner where there is a steep cliff on the right side. Auto-car is passed on the left by some *sshole "manual" driver, but then the *sshat driver cuts in short because of oncoming traffic at the last second. Robo-driver identifies there is suddenly a car intruding into its safe-T-zone (TM) and does what its programming tells it to do, avoid hitting other vehicles. So the self-driving wonder swerves right to avoid the other car and zooms off the cliff.

    A human driver would recognize that hitting the other car in this instance is the safer solution than careening off the steep cliff.

    I agree that a self-driving car can work, and 99% of the time will perform adequately to protect its occupants from disaster. But since we have not mastered true AI yet, all self-driven cars will be built with flaws in their logic that will fail catastrophically. "Avoid hitting all cars", for instance, is not a good enough directive to ensure the safety of the occupants in 100% of all situations.

    Someone mentioned that the deaths caused by self-driven cars would be far fewer than with manual drivers, but I would disagree that any technology introduced on the highways should be allowed to cause any fatality, especially in scenarios where a human driver may have been able to avoid death.

    Basically what I am waiting for is the inevitable 100-car pileup with massive fatalities that WILL occur at some point in time where investigation will identify that a self-driven car, or cars, was the cause of it. Any company involved in programming or manufacturing that self-driven car will be sued out of existence and the "love affair" everyone seems to have about auto-driving cars will end quickly.

    I am amazed at how delusional governments are in so quickly allowing this technology on the roads; sounds to me like there is some massive lobbying going on to short-cut the necessary amount of time to test auto-driven cars under all scenarios, not just ones in controlled and predictable setups like we have seen. 5 years ago robo-cars could not drive around a dirt track; now they are quickly being allowed on our highways. That is just irresponsible.
    • by Anonymous Coward on Wednesday September 26, 2012 @11:41AM (#41465457)

      A human won't pass that test 100% of the time either, so I'm not sure what your point is about 100%. It's all statistics.

    • by h4rr4r ( 612664 ) on Wednesday September 26, 2012 @11:44AM (#41465487)

      Why would a self driving car ever drive off a cliff?
      Clearly it would rank available options and pick the lowest cost one. The cheapest collision in that case.

      Human drivers allow fatalities every day. The question is not is it better than some hypothetical human driver, but is it better than the drivers we have right now.

      5 years ago the tech to do this was not cheap enough, now it is. This is called progress not being irresponsible. What is irresponsible is suggesting that the average person continue to drive automobiles when we have a better solution at hand.

      • by 0123456 ( 636235 )

        The question is not is it better than some hypothetical human driver, but is it better than the drivers we have right now.

        No, the question is: is it better than me?

        If not, I don't want it driving my car.

        • by h4rr4r ( 612664 ) on Wednesday September 26, 2012 @12:21PM (#41465975)

          No, the question is: Is the car a better driver than me when I am sleep-deprived, upset at my wife, and in a hurry to get home?

          The computer will always drive the same; humans are not that reliable.

        • by Fauxbo ( 1393095 )

          For 99% of people the subjective answer will be 'no', but the objective answer will be 'yes'.

          Guess which one will win?

        • by Matimus ( 598096 ) <mccredie@gm a i l . c om> on Wednesday September 26, 2012 @12:34PM (#41466169)
          I have known a few terrible drivers in my life. Despite their friends, and occasionally strangers, telling them that they were terrible drivers, multiple collisions in which vehicles have been totaled, and even collisions with pedestrians, they still believed that they were good drivers. Individuals may not be the best judges of whether or not they can drive better than a machine.

          It will be interesting to see how this plays out. How the public perceives it. How it is marketed. How it is handled by insurance companies.

          • by mellon ( 7048 ) on Wednesday September 26, 2012 @12:45PM (#41466339) Homepage

            I think that the way it will play out is that as self-driving cars become a real and viable option, the penalties for bad driving will go up—drive drunk once, and you lose your license permanently, because why not—you can just use a self-driving car. Driver's tests will get harder, because why not—if you fail, you can just use a self-driving car. It will start with really egregious behavior, because voters won't feel threatened by it in sufficient numbers to cause a problem. Over time, the standards for human drivers will go up; at some point driving your own car will be about as common as flying your own airplane. We'll also probably stop giving licenses or learners' permits to teenagers, because they don't have the vote, and their parents would prefer to avoid a teenage testosterone tragedy.

            Of course, a really spectacular failure on the part of a self-driving car could put that whole scenario off by a generation.

        • by flimflammer ( 956759 ) on Wednesday September 26, 2012 @01:06PM (#41466629)

          Are you an objective source for deciding whether or not you're a better driver than the machine?

        • The question is not is it better than some hypothetical human driver, but is it better than the drivers we have right now.

          No, the question is: is it better than me?

          If not, I don't want it driving my car.

          It is.

          You're not that great of a driver. Being human prevents you from being a better driver. You only have eyes in front of you, and you need to turn your head and look around, pay attention to mirrors, each time taking your attention away from where you are going for a fraction of a second. The computer can pay attention to 360-degree sensors 100% of the time. Once you detect the need to take immediate action, you need to move your leg to hit the brakes. For the computer controlling the car, the brak...

        • No, the question is: is it better for OTHER people to have them than let them drive?

      • I accidentally modded you down, so I'm using this comment to negate it.

      • by dbet ( 1607261 ) on Wednesday September 26, 2012 @12:36PM (#41466205)

        Why would a self driving car ever drive off a cliff?

        I don't know, maybe life wasn't what it expected it to be?

    • by queazocotal ( 915608 ) on Wednesday September 26, 2012 @11:45AM (#41465503)

      I note that in the USA, the pass rate of the driving test in general exceeds 50% by a considerable margin.
      This is not due to great tuition and driver skill and knowledge.
      Also, a number of other safety features that would considerably reduce deaths are not implemented.

      If the autodriver is safer than the average auto driver, ...

      • by tibit ( 1762298 )

        What are those fantastic "other safety features that would considerably reduce deaths" that you claim? Care to elaborate? The U.S. has been a leader in safety requirements for cars for quite a while, I'd think.

    • That may be a problem in the lawsuit-happy USA, but in the rest of the world, self driving cars will improve by leaps and bounds. Anyhoo, a self driving car crashing is an industrial accident; there are already laws for that.
      • by slew ( 2918 )

        ... but in the rest of the world, self driving cars will improve by leaps and bounds...

        Depending on where in the world you are, this might be a necessity. Observing the driving habits I've seen in many countries of the Far East and parts of southern Europe, self driving cars *better* improve by leaps and bounds just to survive!

    • Re: (Score:2, Insightful)

      My first gut instinct is, this is bad, bad, bad.. but then I think of the stupid beatch in the Hyundai that blew by me at 85mph, then cut into my lane, making me slam my brakes on while driving to work this morning.. so maybe it's not so bad.
    • by rich_hudds ( 1360617 ) on Wednesday September 26, 2012 @11:51AM (#41465563)
      I think you're entirely wrong.

      A much more likely scenario is that the self driving cars prove statistically to be safer than human driven cars.

      At that point expect legislation to ban humans from driving.

      Imagine trying to defend yourself in court if you've caused a fatal accident.

      'Why did you turn off the computer when you know it is proven to be safer?'
      • by h4rr4r ( 612664 ) on Wednesday September 26, 2012 @11:55AM (#41465649)

        We can only hope that driving tests become harder and harder and only those who pass them will be allowed to drive themselves.

        Why would you ever want to turn off the automated driver? Do you think rich folks are constantly putting their limo driver in the back and taking the wheel themselves?

        • For the foreseeable future, there will be times when it's necessary to disable the autodriver. New roads that aren't in the GPS system, for example, or private driving areas (e.g., parking lots) that aren't well-mapped.

          And sometimes it's just more fun to drive the car yourself.

          • For the foreseeable future, there will be times when it's necessary to disable the autodriver. New roads that aren't in the GPS system, for example, or private driving areas (e.g., parking lots) that aren't well-mapped.

            And sometimes it's just more fun to drive the car yourself.

            I just replaced my Android-powered car with an iOS 6, and the maps aren't up to par yet.

      • Re: (Score:3, Insightful)

        by CanHasDIY ( 1672858 )

        'Why did you turn off the computer when you know it is proven to be safer?'

        "Because my brain operates at a frequency modern computers cannot even begin to match, and it cannot be hacked."

        • by h4rr4r ( 612664 ) on Wednesday September 26, 2012 @12:00PM (#41465713)

          1. your reaction time is absolute crap.
          2. advertisers disagree with your notion that human brains cannot be hacked.

        • by BasilBrush ( 643681 ) on Wednesday September 26, 2012 @12:12PM (#41465877)

          1) Your reaction time is far worse than a computer's.
          2) Your estimation of distances is far worse than a machine's absolute measurements.
          3) You are limited to two forward-facing eyes, augmented by 3 small mirrors. And you share some of the vision time with looking at the dash. An auto-car can look in all directions at once, and monitor all dashboard information and more at the same time.
          4) An auto-driver will be better at maintaining a safe speed, able to stop in the distance it knows to be clear far more often than a human driver.
          5) I'd expect an auto-driver system to be separate from any other computing devices in the car, and not connected to the internet or any other vector for hacking. I'd expect them to be as immune to hacking as an auto-pilot system in a plane.

          • by gorzek ( 647352 )

            All excellent points. We already have computer-assisted driving. Automatic traction control and stability systems have computers hooked up to your car, monitoring the vehicle's characteristics at all times. They adjust in real-time to keep the vehicle on the road, going in the direction you have it pointed. They can do this a lot more effectively than a human ever could.

            It's time people realized there are just things machines are better at than we are. It's not something that denigrates humans, it's just ac...

        • Comment removed based on user account deletion
    • ... does what its programming tells it to do, avoid hitting other vehicles ...

      It's a bit of an assumption to believe that the driving software has that single goal. Staying on the road seems to be something the software is already considering. I wouldn't be surprised if existing software already has "prepare for crash" code that tightens seat belts, unlocks doors, ... maybe even sends an "oh shit" text message to the roadside assistance service.

      • by h4rr4r ( 612664 )

        Most cars already have some of this. Hit the brakes too hard and the seatbelts tighten. I would imagine you are correct and this will only get better and better.

        Some sort of inflato HANS device would be awesome. Of course I always wanted a 5-point harness in my cars.

    • The plan to allow test vehicles to cover a large number of miles and then compare collision/fatality stats with human drivers is the correct one. It's quite likely that the auto-driver will make different mistakes than the typical human driver. For the sake of argument, suppose it has a greater tendency to make the mistake you outline than a human driver does. That doesn't matter if it also avoids more collisions and fatalities in other scenarios. If the stats say you get fewer collisions and fatalities wit...

      • by gorzek ( 647352 ) <gorzek.gmail@com> on Wednesday September 26, 2012 @01:05PM (#41466619) Homepage Journal

        That brings up another thing autocars will be better about than humans. Individual humans can learn from their mistakes, but that knowledge is not directly transferable to other humans. Any mistake a self-driving car makes, however, can have its solution incorporated into all self-driving cars (or at least all the ones of that model.) So, lots and lots of testing should ultimately give us very safe and effective cars.

        • by tibit ( 1762298 )

          Rimshot :) Such resources already exist in mature organizations like NASA. They have a lessons learned database that is pretty much required reading for any engineer who doesn't want to stagnate.

    • by NeutronCowboy ( 896098 ) on Wednesday September 26, 2012 @11:53AM (#41465611)

      So the self-driving wonder swerves right to avoid the other car and zooms off the cliff. A human driver would recognize that hitting the other car in this instance is the safer solution than careening off the steep cliff.

      Someone has never, ever taken an AI class. Or even an algorithm class dealing with risk. Here's how the calculation actually works (and by the way, that approach is about 20-30 years old).
      Every situation is assessed an impact value: driving into oncoming traffic, 0 (very bad); driving into the right ditch, 10; swerving into a legal lane, 50; etc. Every situation is given a set of possible actions, with each action having a probability of being completed successfully. The algorithm weights each outcome's severity by the odds of achieving that outcome and picks the best combined value. You can set it up in different ways (as utilities to maximize or costs to minimize), but the idea is the same: multiply outcome severity by the odds of achieving that outcome, and pick the lowest combined risk. In your situation, driving off the cliff (which is assumed to be very bad, since the car can see a very steep drop-off with no bottom) is going to have a much worse outcome than hitting the car in front of it. Hitting the car in front of it is guaranteed, but so is driving off the cliff. As a result, the algorithm will make the automated car hit the car in front of it, rather than drive off the cliff.
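
      A minimal sketch of that selection in Python, using the cost framing (higher cost = worse); the action names, costs, and probabilities are illustrative stand-ins, not values from any real driving system:

        # Risk-weighted action selection, as described above.
        # cost: severity of the outcome if it happens (higher = worse).
        # prob: odds that taking the action actually ends in that outcome.
        ACTIONS = {
            "brake_and_hit_car_ahead":   {"cost": 50,   "prob": 1.0},  # collision certain, but low-energy
            "swerve_right_off_cliff":    {"cost": 1000, "prob": 1.0},  # the drop-off is certain too
            "swerve_left_into_oncoming": {"cost": 900,  "prob": 0.9},
        }

        def pick_action(actions):
            """Pick the action with the lowest expected cost."""
            return min(actions, key=lambda a: actions[a]["cost"] * actions[a]["prob"])

        print(pick_action(ACTIONS))  # -> brake_and_hit_car_ahead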

      Not to mention that cars don't sleep, always behave optimally (according to the algorithms in place), and have no blind spots.

      Basically what I am waiting for is the inevitable 100-car pileup with massive fatalities that WILL occur at some point in time where investigation will identify that a self-driven car, or cars, was the cause of it.

      You mean like the ones that regularly happen in fog and icy/rainy conditions?

      Any company involved in programming or manufacturing that self-driven car will be sued out of existence and the "love affair" everyone seems to have about auto-driving cars will end quickly.

      That is a very real risk. Not sure how the laws will deal with it. But until that question is addressed, we won't see large-scale sales of automated cars. I suspect that we'll see the equivalent of ToS: by using this car, you agree to be fully responsible for all its actions and accidents.

      • You missed a cost.
        thisAlgorithmBecomingSkynetCost=-999999999

      • I don't think it is a question of the algorithm, but rather a question of the computer's ability to recognize the situation accurately. Machine vision has improved a lot, but is it to the point that it can recognize all the situations brought up in the OP? Maybe it can, but I think the only real test is extended real-world testing.

      • So the self-driving wonder swerves right to avoid the other car and zooms off the cliff. A human driver would recognize that hitting the other car in this instance is the safer solution than careening off the steep cliff.

        Someone has never, ever taken an AI class. Or even an algorithm class dealing with risk. Here's how the calculation actually works (and by the way, that approach is about 20-30 years old). Every situation is assessed an impact value: driving into oncoming traffic, 0 (very bad); driving into the right ditch, 10; swerving into a legal lane, 50; etc. Every situation is given a set of possible actions, with each action having a probability of being completed successfully. The algorithm weights each outcome's severity by the odds of achieving that outcome and picks the best combined value. You can set it up in different ways (as utilities to maximize or costs to minimize), but the idea is the same: multiply outcome severity by the odds of achieving that outcome, and pick the lowest combined risk. In your situation, driving off the cliff (which is assumed to be very bad, since the car can see a very steep drop-off with no bottom) is going to have a much worse outcome than hitting the car in front of it. Hitting the car in front of it is guaranteed, but so is driving off the cliff. As a result, the algorithm will make the automated car hit the car in front of it, rather than drive off the cliff.

        Not to mention that cars don't sleep, always behave optimally (according to the algorithms in place), and have no blind spots.

        Although I agree with your analysis, the question itself is flawed... It presumes that the self driving car is in a situation where (i) there's a truck immediately ahead, (ii) a truck immediately behind with failing brakes, and (iii) a motorcycle in the next lane (the question doesn't actually specify whether the motorcycle is pacing the car and traveling in the same direction or oncoming, but it's mostly irrelevant*). In order to face the dilemma of (a) crash off the cliff, (b) get smooshed between the tru...

    • by BaronAaron ( 658646 ) on Wednesday September 26, 2012 @12:01PM (#41465721)

      The system just needs a rapid manual override and a little common sense from the driver.

      I see self driving cars as an evolution of cruise control. Just as cruise control gets out of your way as soon as you manually press the accelerator or brake, the auto drive system should get out of your way as soon as you move the steering wheel.

      Also, drivers should take responsibility when they feel it's safe to engage the auto drive. I wouldn't use cruise control on a narrow mountain road; neither would I use auto drive. I would love to be able to kick on auto drive on a long boring highway, though, and focus on a phone call or whatever.

      • by drerwk ( 695572 ) on Wednesday September 26, 2012 @01:23PM (#41466837) Homepage

        The system just needs a rapid manual override and a little common sense from the driver.

        See the results of the http://en.wikipedia.org/wiki/Air_France_Flight_447 [wikipedia.org] AF447 flight for the odds of this working. As a one-time private pilot, I am totally baffled as to how a professional pilot could hold a plane in a stall from 35,000 ft to the ground. I think there were several issues, including human factors in the design of the interfaces; but I really think that these guys got used to being along for the ride, and it was not conceivable to them that the plane had decided to stop flying itself.

        After a week of having an auto-car drive me to work every day, I cannot imagine I'd be ready in 1/2 second to suddenly take over for the computer and expect a good result.

    • by Altanar ( 56809 ) on Wednesday September 26, 2012 @12:04PM (#41465773)
      Self driving cars *never* swerve. They brake. Statistically, they know that swerving is almost always worse than the impending accident. Humans, on the other hand, will swerve. See all the accidents that occur when attempting to miss an animal crossing the road.
    • Just let them do whatever they want, but don't provide any exemption from liability. When they are prepared to bet the company in lawsuits, then the cars are probably safe enough. Just remember, when 2 of them crash, there is no question who caused the accident/damage/death. When the company is willing to accept that responsibility I'd give them a shot.

      And BTW, the reason this is easier to do today is because brake-by-wire, steer-by-wire, radar systems, etc have already been developed by the auto industry.
    • There is another solution to your scenario: The auto-car could apply the brakes. EVERYONE LIVES!
    • by gorzek ( 647352 )

      If the car knows there is a cliff on the right (which it should, otherwise it shouldn't be driving at all) then it will have to quickly brake and possibly hit the car in front of it. It can handle this better than a human driver in a few ways:

      1. It can gauge the right balance of braking force to minimize impact and inertia transferred to the passengers.
      2. It can pre-emptively deploy safety measures a fraction of a second sooner to protect the passengers.

      There are going to be situations where an accident is...

    • And I forgot my drooling-from-the-mouth-fanboy/shill checklist:
      * brand new account
      * posts a long post the minute the story goes live, despite the user not being a subscriber
      * subtle or overt anti-Google bent in post

      sounds to me like there is some massive lobbying going on to short-cut the necessary amount of time to test auto-driven cars under all scenarios, not just ones in controlled and predictable setups like we have seen.

      Ah, here it is. Google is paying off the government in order to kill us more quickly! Quick, bring out the pitchforks!

      Go away.

  • CA Freeways (Score:4, Funny)

    by sycodon ( 149926 ) on Wednesday September 26, 2012 @11:43AM (#41465479)

    Having lived in CA and driven on the freeways, I can say that you don't need "self driving" cars for the freeways.

    All you need is a car that can self park and you are good to go...or...not go.

  • 3e8 car-miles ~ 3000 cars * 1 year * 35 miles/hour * 8 hours/day * 365 days/year

    So if Google wants to reach that milestone, they need to start cranking out those self-driving cars.
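
    As a quick check of that back-of-envelope arithmetic (a sketch; the fleet size, speed, and hours are the parent's own assumptions):

      # Fleet-miles per year from the numbers above.
      cars = 3000
      mph = 35             # average speed
      hours_per_day = 8
      days_per_year = 365

      miles = cars * mph * hours_per_day * days_per_year
      print(f"{miles:.2e}")  # 3.07e+08 -- roughly the 300M-mile fatal-crash milestone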

    • by h4rr4r ( 612664 )

      Why only 8 hours a day and why would you only ever drive it at 35 mph?

      A self driving car should be able to be on the road at least 20 hours a day. That leaves enough time for fueling and swapping of humans.

      • You only need 1,000 cars to meet the milestone in 1 year: driving at 30 mph is the mean, but 24 hours a day is plausible. And you don't need humans.
        • by h4rr4r ( 612664 )

          I believe on public roads you do need a human available to take over for legal reasons.

          • by 0123456 ( 636235 ) on Wednesday September 26, 2012 @12:22PM (#41465995)

            I believe on public roads you do need a human available to take over for legal reasons.

            And that worked so well for AF447.

            Aviation autopilots should have proven by now that relying on a human to take over when the situation is so bad the autopilot can't handle it is a recipe for disaster. Besides, what's the point of a 'driverless car' if I have to be continually ready to take over at a millisecond's notice?

            Car: 'Warning, warning, kid just jumped out in the road, you are in control.'
            Driver: 'WTF? I just hit a kid and smeared their insides all over my windshield.'
            Car manufacturer: 'Not our fault, driver was in control, human error.'

  • The biggest issue with self driving cars is the same reason people don't like flying or cloud-based solutions.

    They are taking something and putting their trust in someone or something else.

    The only comfort is in the numbers saying whether it is indeed safer or not.

  • clearly this is how computers / machines will rise against us...

    self driving cars in CA will become ENRAGED by the clueless jackasses you have to deal with driving here.. and will rise up and destroy humanity (doubtless they will enlist the computers of people who don't watch enough cat videos as allies, computers seem to love cat videos).

    I for one welcome our new self driving car overlords.

    yes this is how the world ends... Self Driving Car ROAD RAGE.. right before you're killed by the machines, remember I called it

  • by OzPeter ( 195038 ) on Wednesday September 26, 2012 @11:52AM (#41465585)

    I have been thinking about driverless cars and I'd love to ask the people at Google (or wherever) how they cope with several real-life issues:
     
    * Emergency vehicles in general
    * Vehicles on the side of the road. In general you move over to the other side (road, next lane, etc.) to give them some room. But where I am (VA) it's an offense if you fail to move over when passing a cop car on the side of the road.
    * Temporary speed limits posted during road works
    * School zones
    * Really bad weather where you can't even see 20 feet ahead of you
    * Looking down the road and predicting that there will be an issue and doing your best to avoid it (ie slowing down/lane changing to avoid the person on the phone who is weaving from side to side)
    * Crap lying all over the road (saw lots of rocks on a mountain road yesterday)
     
    I'm sure there are lots of other "interesting" situations that human drivers have to deal with day to day that would be difficult to encode into heuristics for the self driving cars.

    • by compro01 ( 777531 ) on Wednesday September 26, 2012 @12:20PM (#41465963)

      * Temporary speed limits posted during road works
      * School zones
      * Really bad weather where you can't even see 20 feet ahead of you

      Given that speed limit signs are fairly standardized and well-defined, having the system recognize them and act appropriately shouldn't be an insurmountable problem.

      As for the weather, self-driving cars will have much more flexible sensing than the Mk1 eyeball. Fog, etc. is considerably more transparent to IR and radar than it is to visible light.

    • I don't know why you think any of those are difficult situations.

      * Emergency vehicles in general

      They're just other vehicles - they might be doing unusual things, but any auto-driver system has to allow for the fact that any vehicle may do unusual things. They are only limited by the laws of physics not the rules of the road. And it's easy to detect flashing blue lights and sirens and give priority.

      * Vehicles on the side of the road. In general you move over to the other side (road,next lane etc) to give them some room. But where I am (VA) its an offense if you fail to move over when passing a cop car on the side of the road.

      Stationary vehicles are the very simplest vehicles to avoid.

      Temporary speed limits posted during road works

      The technology for vision systems to interpret road signs is already there. Googl...

  • by Zamphatta ( 1760346 ) on Wednesday September 26, 2012 @12:00PM (#41465705) Homepage
    So if I come out of the grocery store and my car's not there, it might not be stolen?
  • 725,000 representative miles

    I hope by "representative" they mean diverse traffic and road conditions that represent the various things that a driver will experience over their driving career.

    While on this thought I don't expect self-driving vehicles to be universally permitted. There will probably be limitations on the conditions under which an automated driving mode may be used.

    As a side note, I believe the FAA adopted a statistical approach to safety regarding civilian space flight. That space flight should be no more hazardo...

    • There's a lot of weirdness here.

      For example, 725k miles for any incident, but if we look at only "fatal" crashes it skyrockets to 300M? There's a disconnect here: if we look at only "fatal" crashes, I'm pretty sure we can smash up Google cars every 30k without killing anyone and make it to 300M.

      If you say, "well it has to be 1 in 300M because that's how often a fatal crash occurs and we want to reduce fatal crashes," you're talking about something completely different. 1 crash in 300M miles isn't lik...

  • The sooner we can stop self-entitled douche bags from ruining the roads the better.
  • ...if a driver needs to be behind the wheel? I mean, yeah, it's great and all that you don't need to put your hands and feet anywhere, but if you're supposed to be alert, watching that the car doesn't make a mistake, then what's the difference? You still can't text, read the paper, play cards, eat dinner, whatever - or can you?
    • They don't tailgate - you're not a real California driver unless you're driving 1 mm off the bumper of the car ahead of you.
    • They don't drive at unsafe speeds for road conditions - speeding & tailgating in the rain is a California specialty
    • They won't pass in the bike lane or on shoulder of the road - if you don't like our driving, stay off the sidewalk
    • They won't stop and hold up traffic, despite having the right of way, to wave through someone who clearly doesn't have it or is waiting for them to get t...
  • California and Florida have added a retroactive exemption for large Cadillacs driven only by wizened knuckles and the tops of bouffant hairdos.
  • by Dr_Barnowl ( 709838 ) on Wednesday September 26, 2012 @12:32PM (#41466135)

    The usual standard for a statistical "proof" is held to be a 95% confidence, or a p value of 0.05 that the hypothesis is wrong.

    Using a 99% confidence level is skewing the numbers away from the usually accepted standard of proof, which makes me suspicious about the motives of the person proposing it.
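
    For what it's worth, Smith's figures drop out of a standard zero-failure confidence calculation. A minimal sketch, assuming crashes arrive as a Poisson process; the human baseline rates below are illustrative assumptions, not official statistics:

      import math

      def miles_needed(human_rate_per_mile, confidence):
          """Incident-free miles required to conclude, at the given confidence,
          that the autonomous rate is below the human rate, assuming a Poisson
          model: P(zero crashes in m miles) = exp(-rate * m) <= 1 - confidence."""
          return -math.log(1 - confidence) / human_rate_per_mile

      crash_rate = 1 / 157_000      # assumed human rate: ~1 crash per 157k miles
      fatal_rate = 1 / 65_000_000   # assumed human rate: ~1 fatal crash per 65M miles

      print(f"{miles_needed(crash_rate, 0.99):,.0f}")  # ~723,000 -- Smith's 725k figure
      print(f"{miles_needed(fatal_rate, 0.99):,.0f}")  # ~299 million -- the 300M figure

    At the 95% level, the same formula needs only about 470,000 miles (ln 20 ~ 3.0 versus ln 100 ~ 4.6), so moving from 95% to 99% confidence raises the required mileage by only about half.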

  • by doom ( 14564 ) <doom@kzsu.stanford.edu> on Wednesday September 26, 2012 @12:43PM (#41466315) Homepage Journal
    I don't think there's any question that automated cars can beat human beings at safety, nor is there any question that they can reduce pollution just by driving more evenly (not to mention by drafting each other, "tailgating" to form car-trains).

    The trouble with them is that they'll take the sting out of long commutes. You already have people who think it's a good idea to spend four hours a day driving for the sake of cheaper real estate. What if they up it to six hours a day when they don't have to stare at the road?

    Note: cutting a problem (pollution, car-deaths) in half would do no good if you double the miles.

  • by gninnor ( 792931 ) on Wednesday September 26, 2012 @01:06PM (#41466635)

    I wonder if the aging population will end up pushing this into reality. Mass transit is not going to work on a large enough scale, and for many, transportation needs are only met by POVs. It will become yet another device to assist people's independence, and that, I believe, will push the technology and laws as the need for it increases.
