
 



Google Transportation Robotics Hardware

Blind Man Test Drives Google's Autonomous Car 273

Velcroman1 writes "'This is some of the best driving I've ever done,' Steve Mahan said the other day. Mahan was behind the wheel of a Toyota Prius tooling around the small California town of Morgan Hill in late January, a routine trip to pick up the dry cleaning and drop by the Taco Bell drive-in for a snack. He also happens to be 95 percent blind. Mahan, head of the Santa Clara Valley Blind Center, 'drove' along a specially programmed route thanks to Google's autonomous driving technology. Google announced the self-driving car project in 2010. It relies upon laser range finders, radar sensors, and video cameras to navigate the road ahead, in order to make driving safer, more enjoyable and more efficient — and clearly more accessible. In a Wednesday afternoon post on Google+, the company noted that it has hundreds of thousands of miles of testing under its belt, letting the company feel confident enough in the system to put Mahan behind the wheel."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • human factor (Score:5, Insightful)

    by jamesh ( 87723 ) on Thursday March 29, 2012 @07:21AM (#39507261)

    Driving home tonight there was a young kid playing quite near the road, so I dropped my speed in anticipation of him doing something stupid. He didn't, but I did wonder about the Google car making those sorts of calls. I'm sure these Google guys are pretty clever and have thought of all these things... are there any videos of self-driving cars reacting to these sorts of situations?

    It's like that feeling you get when you see someone on or near the road and you aren't completely sure they've seen you, so you lower your speed to avoid a potential collision. That's gotten me out of trouble a few times. If there had been an accident you probably wouldn't have been at fault, but you've gone one better: you saw the accident coming and avoided it.

    I'd want to see lots of video evidence of a self-driving car doing this sort of thing before I'd be happy sharing the road with one.
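The anticipatory slow-down the parent describes can be sketched as a simple heuristic: reduce speed in proportion to how close a pedestrian is to the roadway. The function name, distances, and thresholds below are purely illustrative assumptions, not anything from Google's actual system.

```python
# Hypothetical sketch of defensive speed reduction near a pedestrian.
# All thresholds are invented for illustration.

def defensive_speed(current_speed_kph, pedestrian_distance_m):
    """Return a reduced target speed when a pedestrian is near the roadway."""
    if pedestrian_distance_m > 10.0:      # far from the road: no change
        return current_speed_kph
    # Scale speed down as the pedestrian gets closer to the road edge,
    # but never below half the current speed in this toy model.
    factor = max(pedestrian_distance_m / 10.0, 0.5)
    return current_speed_kph * factor

print(defensive_speed(50.0, 20.0))  # 50.0 -> pedestrian far away, no change
print(defensive_speed(50.0, 2.0))   # 25.0 -> slow to half speed
```

A real system would of course weigh many more cues (the pedestrian's age, motion, gaze), but the point is that "slow down near uncertain actors" is an encodable rule, not an ineffable human instinct.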

  • by AntmanGX ( 927781 ) on Thursday March 29, 2012 @07:24AM (#39507273)

    Yes, because Google (and the authorities letting these cars on the roads) would have *never* thought of the possibility of pedestrians running in front of these cars.

    Quick! Get in touch with them and bring this to their attention!

  • by bgarcia ( 33222 ) on Thursday March 29, 2012 @07:34AM (#39507341) Homepage Journal
    Exactly!

    And to go a little further, technology doesn't get sleepy. Technology doesn't get distracted by cell phones, GPS systems, or the radio. Technology won't have a blind spot. This is going to be an incredible advance. I'm much less worried about a driverless car hitting a pedestrian than I am the average driver hitting one.

  • by msobkow ( 48369 ) on Thursday March 29, 2012 @07:49AM (#39507471) Homepage Journal

    Automated cars are also unlikely to rip along at 80-90 kph in a 50 zone like the psycho-cabbie who nearly ran me down on my wake-up walk half an hour ago.

  • Re:human factor (Score:2, Insightful)

    by Anonymous Coward on Thursday March 29, 2012 @07:49AM (#39507475)

    I'm inclined to favor greatly improved reaction time and unerring robotic focus over your spidey sense.

  • by aaaaaaargh! ( 1150173 ) on Thursday March 29, 2012 @08:01AM (#39507555)

    The problem is technical and social/moral.

    We are willing to accept the fact that a human occasionally makes an error. We are, however, not willing to accept a machine making an error, let alone the occasional errors made by software engineers. That's the social or moral problem. Whose fault is it if there is a software or hardware glitch?

    But there is also a technical issue. I personally would not ride in a car programmed by Google engineers, because I am not confident that these people have the experience to develop such high-integrity systems. I'd want to see the CVs of these engineers first. How many high-integrity systems have they worked on so far? I know plenty of people with an AI background, and trust me, I don't want them programming my car. I'd also need to know which programming languages and development tools they used, see the source code, and know which formal software and hardware verification methods were used to verify it.

    It would also be a good idea to publish the source code for software used in planes like those of Airbus, but unfortunately we haven't gotten that far yet. Still, there are plenty of reasons to have more confidence in airplane safety than in the safety of autonomous vehicles developed by Google. BTW, one reason is that it's technically much easier to fly a plane (apart from landing, which is still mostly done manually) than to programmatically steer a car or make a robot walk.

  • by Smidge204 ( 605297 ) on Thursday March 29, 2012 @08:03AM (#39507571) Journal

    Did they think of the possibility of driving over a cliff-edge while out of GPS reception?

    If the Internet is to be trusted at all, I'll take the chance of a self-driving car careening off a cliff due to lack of GPS reception over the chance of a human careening off a cliff because of GPS reception.

    Does it detect ice, snow, oil, sand before the wheels are there?

    Humans certainly don't... but there are already automatic traction control systems that do an excellent job maintaining the vehicle's footing in all but the most extreme situations - I can't imagine it would be that hard to send that data to the pilot AI and have it react by slowing down. Also I'd imagine it would be easier for the computer to detect ice and such using sensor data (IR cameras to detect road surface temp, lasers to reveal changes in surface reflective properties, etc.)
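The "send traction data to the pilot AI" idea above amounts to a simple feedback rule: as measured wheel slip rises, the target speed drops. The sketch below is a minimal toy model; the function, slip ratios, and speed multipliers are assumptions for illustration, not drawn from any real vehicle API.

```python
# Hypothetical sketch: couple traction-control slip measurements to the
# driving logic's target speed. Thresholds are invented for illustration.

def target_speed(nominal_kph, wheel_slip_ratio):
    """Reduce target speed as measured wheel slip increases."""
    if wheel_slip_ratio < 0.05:       # good grip: drive normally
        return nominal_kph
    if wheel_slip_ratio < 0.20:       # moderate slip (wet road, sand)
        return nominal_kph * 0.5
    return nominal_kph * 0.25         # severe slip (ice): crawl

print(target_speed(100.0, 0.02))  # 100.0
print(target_speed(100.0, 0.10))  # 50.0
print(target_speed(100.0, 0.30))  # 25.0
```

The appeal of this loop is that the computer gets the slip measurement on every control cycle, whereas a human driver only notices loss of grip after the car has already started to slide.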

    What about kids throwing stones off the top of a bridge onto the passing cars (common problem in the UK - someone died just the other month from this)? Is the car looking UP too and determining their intent?

    Again, I doubt humans would do much better. The radar systems on an automated car could conceivably be used to detect objects that may hit the car even from above and some evasive/mitigating action could take place - with better reaction times than a human driver.

    This is why even a jumbo jet - some of the most highly automated and tested machines in the world - has TWO HUMAN OPERATORS. And even there, they have TWO because the first can't be trusted on their own (proven by that recent thing with the pilot).

    Again, though I don't keep careful track of these things, there seem to be more incidents related to human error than to automation error. Specifically, humans overriding the automated systems to correct for a problem that didn't actually exist.

    If you honestly, seriously, think that you can reliably determine the outcome of a machine complex enough to obtain all that data, you're an idiot.

    Humans are essentially machines far more complex than that, with tens of thousands of years' worth of historical precedent for doing incredibly stupid things despite having accurate information - yet somehow they are more trustworthy than a machine just by virtue of not being a machine? This kind of argument instantly refutes itself.

    How do you test the system for these things? Tens of thousands of hours of real-world driving. Considering all a human needs to legally operate a 2-ton projectile is roughly twenty minutes worth of testing (if you're lucky!) I'll take my chances with the machine.
    =Smidge=

  • by Smidge204 ( 605297 ) on Thursday March 29, 2012 @08:07AM (#39507601) Journal

    Why would it need to tell the difference between a human and a non-human? If it's not in the way, it's not in the way. If it's moving in a manner that will cause it to be in the way, then react accordingly. A human standing at the curb and suddenly running out into the street is no different, functionally, than an empty trash can getting thrown into the street by a gust of wind. An obstacle is an obstacle regardless of what it's made of and regardless of whether or not it's sentient.
    =Smidge=
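Smidge204's "an obstacle is an obstacle" point reduces to a geometric test: react to anything whose predicted path intersects the car's lane, regardless of what the thing is. Here is a toy constant-velocity version; the function name, lane bounds, and time horizon are all hypothetical, and real systems use far richer motion models.

```python
# Toy illustration: decide whether to react to an object based purely on
# whether its predicted path enters the lane, not on what the object is.

def will_enter_lane(x, vx, lane_left, lane_right, horizon_s):
    """Given an object's lateral position x (m) and lateral speed vx (m/s),
    predict whether it is in, enters, or crosses the lane within the
    horizon, assuming constant velocity."""
    future_x = x + vx * horizon_s
    inside_now = lane_left <= x <= lane_right
    inside_later = lane_left <= future_x <= lane_right
    # Fully crossing the lane within the horizon also counts.
    crosses = (x < lane_left and future_x > lane_right) or \
              (x > lane_right and future_x < lane_left)
    return inside_now or inside_later or crosses

# A pedestrian standing at the curb (3 m right of lane, not moving): ignore.
print(will_enter_lane(3.0, 0.0, -1.5, 1.5, 2.0))   # False
# The same pedestrian running toward the road at 2 m/s: brake.
print(will_enter_lane(3.0, -2.0, -1.5, 1.5, 2.0))  # True
```

A thrown trash can and a running child both trigger the same test; classification only matters when avoidance options conflict, as the next reply points out.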

  • by TheRaven64 ( 641858 ) on Thursday March 29, 2012 @08:44AM (#39507887) Journal

    Why would it need to tell the difference between a human and a non-human?

    There are a few situations where this could be important. Suppose a cat runs into the road from the right and a child runs in from the other side to chase the cat out of the road. A human would typically prioritise not hitting the child. If the AI doesn't, and hits the child in preference to the cat, it's not going to look very good. If there's only one obstacle, you want to avoid it. If there are two, you want to avoid the most valuable one, and generally we consider humans to be more valuable than anything else you're likely to collide with.
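The prioritisation TheRaven64 describes can be sketched as cost minimisation: when every available path hits something, pick the path whose obstacles are least "valuable". The categories and cost numbers below are invented purely for illustration.

```python
# Hedged sketch of obstacle prioritisation: assign each obstacle class a
# collision cost and choose the path with the lowest total. Costs are
# invented for illustration, not a real ethical weighting.

OBSTACLE_COST = {"human": 1000, "animal": 100, "debris": 1}

def choose_path(paths):
    """Each path is a list of obstacle types it would hit; an empty list is
    a clear path. Return the path with the lowest total collision cost."""
    return min(paths, key=lambda hits: sum(OBSTACLE_COST[h] for h in hits))

# Cat on one side, child on the other, no clear path: swerve toward the cat.
print(choose_path([["human"], ["animal"]]))        # ['animal']
# If a clear path exists, it always wins (cost 0).
print(choose_path([["human"], [], ["animal"]]))    # []
```

Of course, the hard part isn't the arithmetic but reliably classifying a small fast-moving shape as "child" rather than "cat" in the fraction of a second available.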
