Why Self-Driving Cars Are Still a Long Way Down the Road
moon_unit2 writes "Technology Review has a piece on the reality behind all the hype surrounding self-driving, or driverless, cars. From the article: 'Vehicle automation is being developed at a blistering pace, and it should make driving safer, more fuel-efficient, and less tiring. But despite such progress and the attention surrounding Google's "self-driving" cars, full autonomy remains a distant destination. A truly autonomous car, one capable of dealing with any real-world situation, would require much smarter artificial intelligence than Google or anyone else has developed. The problem is that until the moment our cars can completely take over, we will need automotive technologies to strike a tricky balance: they will have to extend our abilities without doing too much for the driver.'"
Don't have to be perfect, just better (Score:5, Insightful)
This writer makes a fundamental mistake: believing that if full driverless technology is not perfect or at least near-perfect, it is therefore unacceptable. But this is not true. Driverless technology becomes workable when it is better than the average human driver. That's a pretty low bar to clear. I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.
Re:Don't have to be perfect, just better (Score:5, Insightful)
I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.
That depends entirely on the failure mode.
"Fail to Death" is actually acceptable to society as a whole as long as the dead person held his fate in his own hands at some point in the process. This is why we get so incensed about drunk drivers that kill others. A person doing everything right is still dead because of the actions of another. But if you drive under that railroad train by yourself, people regard it as "your own damn fault".
When the driverless car crosses the tracks in front of an oncoming train, it will be regarded differently. It doesn't matter that the driver was a poor one with a history of fender benders; most of those aren't fatal.
In spite of that, I believe Google is far closer than the author gives them credit for. They have covered a lot of miles without accidents.
Granted, we don't know how many times there were errors in the Google cars, where one part of the system says there is nothing coming so change lanes, and the human or another part of the system notices the road is striped for no-passing and prevents it. Google is still playing a great deal of this pretty close to the vest.
Re:Don't have to be perfect, just better (Score:4, Interesting)
This is why we get so incensed about drunk drivers that kill others. A person doing everything right is still dead because of the actions of another. But if you drive under that railroad train by yourself, people regard it as "your own damn fault".
Ah, but see here, the driverless cars have full 360-degree vision, and they already stop for trolleys and errant pedestrians crossing the road illegally. They do so without honking and swearing at the git too. So, as you say, if you ignore the copious warnings the driverless car's self-diagnostic gives you that its optics are fucked, and manage to override the fail-safe, then wind up under a train, then yes, people will still regard it as "your own damn fault". Not only that, but YOUR insurance is paying to clean up the mess.
Re:Don't have to be perfect, just better (Score:5, Insightful)
One problem it won't solve is the mindless moron behind you who nearly slams into your back end because you had to brake suddenly. This happened today, in fact - I was on my way back from the store, doing about 35mph in a zone so marked, when a stray dog started to cross the street in front of me, and I instinctively stomped the brakes to avoid killing an innocent animal. Some little dipshit on a little blue moped was following too close and sounded his horn, then flipped me off two or three times, as if I was the one in the wrong.
Re: (Score:3)
One problem it won't solve is the mindless moron behind you who nearly slams into your back end because you had to brake suddenly. This happened today, in fact - I was on my way back from the store, doing about 35mph in a zone so marked, when a stray dog started to cross the street in front of me, and I instinctively stomped the brakes to avoid killing an innocent animal.
This is a problem driverless cars will fix. Unlike you, a computer doesn't react instinctively. It has 360-degree vision, knows where all of the surrounding vehicles are, including distances and velocities, and is capable of accurately deciding whether or not braking is safe. Assuming it can also distinguish dogs from children (very likely, I think, though I don't actually know), it should be capable of deciding that it's safer to simply run the dog over if that's the situation.
Re:Don't have to be perfect, just better (Score:4, Insightful)
Re: (Score:3)
Re: (Score:3)
They do so without honking and swearing at the git too.
So they're not quite there yet?
Re: (Score:3)
That's not an argument against self-driving cars, that's an argument against driver-driven cars. Don't be surprised if manually driving your car becomes illegal, at least on highways.
Re: (Score:3)
Also, I've found driving outside the US that people give way when not required to by law much more than in the US.
I've found that observing the local driving habits and mimicking them makes for the safest and most efficient driving.
Re: (Score:2, Insightful)
Exactly. Just because a self-driving car can't respond to outlier situations as well as a human can doesn't mean the car shouldn't be driving. By definition, those outlier situations aren't the norm. Most accidents are caused by someone picking something up they dropped, looking the wrong way at the time, changing the radio station, etc., or inebriation. These problems would be solved by a self-driving car, and while I don't have any numbers to back it up, something tells me that eliminating these will far ou
Re: (Score:3)
Most accidents are caused by someone picking something up they dropped, looking the wrong way at the time, changing the radio station, etc., or inebriation.
Really? I'd like to see the source of your assertion.
Granted distracted driving is the darling of the press these days. But that doesn't make it the major contributor to fatalities.
In fact, fatalities from all causes are on a steady year-by-year decline [census.gov] and have been for 15 years.
Drunk driving [edgarsnyder.com] still accounts for a great deal: 31% of all traffic fatalities in 2010. Half of traffic deaths among ages 21 to 25 involved drinking drivers.
Distracted driving hovers around 16% of fatal crashes by comparison. [census.gov]
Re:Don't have to be perfect, just better (Score:5, Interesting)
Having self-driving cars would eliminate the "drunk driving" cause of traffic fatalities, as well as all of the "distracted driving" fatalities. That is, according to your own numbers, 47% of all driving fatalities.
Also, according to Wikipedia [wikipedia.org], the speed that someone was driving was listed as the cause of 5% of fatal crashes, and "driving too fast for the road conditions" was listed as another 11% of fatal crashes. A self-driving car would also probably eliminate almost all of these crashes... so there is another 16% of all fatal crashes.
In fact, there was a study done in 1985 [wikipedia.org] that concluded that the human factor was SOLELY responsible for 57% of all fatal car crashes. It also found that the human factor was entirely or partly responsible for 93% of all fatal crashes... these are the situations where there may have been bad road conditions or other outside factors, but the driver didn't react in the best way to avoid an accident. These numbers are almost 30 years old, but they are probably still representative of the current stats.
A self-driving car should eliminate nearly all of the 57% of crashes where the human factor was solely responsible. It should also eliminate a healthy portion of the remaining 36% of fatal crashes where the human factor was a contributor, as the car can be programmed to respond in the best possible way to a huge number of road conditions.
Based on these numbers, I believe it is reasonable to say that self-driving cars should eliminate about 75% of all fatal crashes. Technology and machinery are also, compared to the human factor, extremely reliable... particularly when designed correctly. I have no problem saying that self-driving cars will eliminate 75-90% of all fatal crashes.
I am also certain that there will be some outlier situations where, if the driver had been in control the entire time, a fatal accident would have been avoided... the technology failed and the car drove over a cliff, or the signal sent to cars about a railroad crossing didn't activate, etc., etc... but those incidents would be offset by a HUGE margin by the number of incidents that the self-driving cars prevented. These same arguments were made against seat belts! There have probably been several examples where someone who was wearing their seat belt drove into a lake and drowned because they couldn't get free of the car because of the seatbelt. But it is proven that seat belts save FAR more lives than they cost.
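The arithmetic in this thread can be sketched out in a few lines. The 57%/36% split is the 1985 study's figure quoted above; the per-category elimination rates are illustrative assumptions, not data:

```python
# Back-of-envelope estimate of fatal-crash reduction from self-driving cars.
# The 57% (human factor solely responsible) and 36% (partly responsible)
# figures are from the 1985 study cited above; the elimination rates below
# are assumptions for illustration only.

human_sole = 0.57        # fatal crashes caused solely by the human factor
human_partial = 0.36     # fatal crashes where the human factor contributed

eliminate_sole = 0.95    # assume nearly all sole-cause crashes are prevented
eliminate_partial = 0.50 # assume half of the partial-cause crashes are prevented

reduction = human_sole * eliminate_sole + human_partial * eliminate_partial
print(f"Estimated reduction in fatal crashes: {reduction:.0%}")
# prints "Estimated reduction in fatal crashes: 72%"
```

Which lands close to the ~75% figure argued for above; pushing the assumed elimination rates higher gives the 75-90% range.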
Re: (Score:3)
Re: (Score:3, Insightful)
Re: (Score:2)
And those bad drivers get their licence revoked, just like bad software should be banned. It's not singling out autonomous cars.
Re: (Score:2, Insightful)
Point is, it is *not* better. Yes, it is probably better in normal driving situations, and would probably be very welcome on a nice open highway. I imagine it would excel at preventing fender benders, which would be very nice indeed.
However, how would it react to sudden situations at high speed? According to the article and everything I know so far... not well. And the point of the article is that the automation would still require a driver for that, but the automation would actually cause the
Re: (Score:2, Funny)
...probably better at normal driving situations
Will auto-pilot have the blood lust to take out the squirrel crowding your lane? Humans are number one!
Re: (Score:2)
Re:Don't have to be perfect, just better (Score:5, Insightful)
However, how would it react to sudden situations at high speed? According to the article and everything I know so far... not well.
In principle, at least, an automated system could react better than a human to sudden emergency situations... because a computer can process more input and make faster decisions than a human can, and also (just as importantly) a computer never gets bored, sleepy, or distracted.
Dunno if Google's system reaches that potential or not, but if not it's just a matter of improving the technology until it can.
Re:Don't have to be perfect, just better (Score:5, Insightful)
I have to respectfully disagree. All of the situations that you've mentioned thus far are well within the realm of possibility given current and near-term technological advancements. Human beings will always be limited to 1) their imperfect memory of the route being driven, assuming they've driven it before, and 2) their sole source of input, which is the visible electromagnetic spectrum. Driving doesn't require true AI, in my opinion. There are only so many things that can happen from a programmatic standpoint, and it really boils down to collision avoidance. You have a route and a volume of space that you occupy along that route at any given time. Either something (an object, person, animal, etc.) is going to occupy the same volume at the same time or it isn't. Collision detection is very easy to program, and the technology is sufficiently advanced at this point to detect objects both big and small and make real-time assessments to determine the action that leads to the best chance of survival for both the object and the car. Those calculations are performed by a computer operating much faster, and with near-instantaneous reaction time, compared to its human equivalent, which has to spend time deciding whether it's best to accelerate, brake, or swerve (or a combination of those) and then perform the muscle actuations to initiate that action.
Remember, too, that a computer system can have access to near-perfect data such as GPS records for the route, as well as other object/road input systems beyond just the visual spectrum. That's not to say that right this minute we have a perfect set of data for every road, but certainly for the majority of traveled roads we have a pretty complete picture which could be used to provide the car's route in the absence of visual feedback. When it snows here, 4 lanes turn into 2 because humans can't see the lane boundaries, but that's not a limitation for a computer system programmed with the road trajectory to within inches. You may be able to interpolate where the road is 300 feet in front of you, but someone not familiar with the area might not. I drive on some country roads around here where, during a blizzard, you have no feedback about the road's location except for the random house every 1/4 mile. If I didn't know that road was perfectly straight, I'd be off it in no time. An advanced optics system can see further, clearer, and more completely than ANY human.
I'm not trying to say that this whole process is trivial and there are reasons why it will take a long time to develop and implement. But I don't believe that there are any reasons which can't be overcome with present day technology.
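The "same volume at the same time" framing above can be sketched as a toy check: extrapolate the car and the object forward at constant velocity and test whether they ever come within a safety radius. Everything here - the function name, the circle model, the 0.1 s step - is an illustrative assumption, not any real system's logic:

```python
# Minimal sketch of the occupied-volume collision check described above.
# Car and object are modeled as points with a shared safety radius, moving
# at constant velocity; all names and thresholds are hypothetical.

def will_collide(p_car, v_car, p_obj, v_obj, radius=2.0, horizon=5.0, dt=0.1):
    """Return True if the car and the object come within `radius` metres
    of each other at any time in the next `horizon` seconds."""
    t = 0.0
    while t <= horizon:
        cx, cy = p_car[0] + v_car[0] * t, p_car[1] + v_car[1] * t
        ox, oy = p_obj[0] + v_obj[0] * t, p_obj[1] + v_obj[1] * t
        if ((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5 < radius:
            return True
        t += dt
    return False

# Car heading east at 15 m/s; pedestrian crossing its path from the south.
print(will_collide((0, 0), (15, 0), (30, -3), (0, 1.5)))  # prints True
```

A real system would use measured accelerations, uncertainty estimates, and far richer object models, but the core question - will two predicted trajectories intersect - is exactly this shape.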
Re:Don't have to be perfect, just better (Score:4, Insightful)
The problem is that a computer only reacts to things it has been programmed to deal with, humans can be creative.
Humans "can be" creative, but in general aren't. A computer can be programmed to deal with anything. It's called "fail safe". Windows is programmed to deal with everything: if it's not sure what to do, it locks up with a blue screen. That's not useful, but it is actually programmed for "everything."
It isn't hard to program it to slow down and pull over when things are beyond the basic programming. If there is an unavoidable crash (someone over the center line on a road with a rock wall on one side and a cliff drop on the other - guardrail or not, so nowhere to go, and even if you stopped in time, a crash would happen), then likely the best action is to slow as much as practical. The human may decide to drive into the rock wall to reduce the chance of going over the cliff drop, but the computer will just have you slow. Perhaps not what you'd do, and it may take multiple traffic engineers a week to decide on the best course of action in that situation. It may even end up being something counter-intuitive and practically impossible for a human, like the best result for your occupants being to accelerate and steer into the other car to meet it at nearly the same speed in a flat head-on (not an offset one), minimizing the chance of going over the edge and allowing all belts and bags to work as intended. Chances are no regular human would make that choice as a reflex. A computer might, since it would be acting on rules programmed by engineers and many others smarter than the average driver.
Re:Don't have to be perfect, just better (Score:4, Insightful)
I'm sorry, but your comment is extremely clueless. You are asking about sudden situations at high speed. How would one end up in such a situation? By ignoring the law and safe driving practices. Just by obeying the speed limit, and reducing speed further when sensor data gives warnings, the car can avoid such problems in the first place.
Almost 90% of human drivers are confusing their luck at driving 80+ mph in bad weather (which is equal to playing Russian roulette with a gun with a slightly higher ammo capacity) with their driving abilities.
Re: (Score:2)
Is it a mistake? Or your unsupported bias against human drivers?
That's a claim I keep hearing... but I don't buy it. For all the really bad drivers supposedly out there, in a wide variety of road conditions, lighting, weather, etc... there's
Re:Don't have to be perfect, just better (Score:5, Informative)
http://www.census.gov/compendia/statab/cats/transportation/motor_vehicle_accidents_and_fatalities.html [census.gov]
The catch is, nearly all traffic accidents are preventable by one of the parties involved. Most are at low speeds, and most are due to the driver not paying attention to the situation around them. Next time you are at a busy traffic light, count the cars around you. Chances are one of them will be in an accident that year. Now do that every time you stop at a traffic light....
Re: (Score:3)
okay... how about some statistics to back up the claim: Human factors in Driving Fatalities [wikipedia.org]
The human factor is SOLELY responsible for 57% of ALL driving related fatalities.
The human factor is partially responsible for another 36% of all driving related fatalities.
This means that the human factor is a factor in 93% of all driving related fatalities. With a self-driving car, even if it only eliminates HALF of the fatalities where the human factor was only partly responsible, you're still reducing the number of
Re:Don't have to be perfect, just better (Score:4)
I don't think this is true. No auto maker wants to deal with the insurance overhead involved in installing a "marginally flawed driverless system." Can you imagine that meeting? The moment when some insurance executive is told by GM, Ford, Honda, Toyota, Mercedes, BMW, whoever, that their next car will include a full-on autodrive system? The insurance company will raise the manufacturer's insurance through the roof! Imagine the lawsuits after a year of accidents in driverless cars. Everyone would blame their accidents on the automated system. "Not my fault, the car was driving, I wasn't paying attention, can't raise my rates, I'll sue the manufacturer instead." A few go-rounds like that and no one will want the system installed anymore.
Re: (Score:2)
We'll just have to install them ourselves.
Re: (Score:2)
They could get around this by having the insurance company work with the manufacturer, so you lease the automated driving software and one of the terms of the lease is to pay for the manufacturer's insurance cost for the use of the automated car. Or you have the insurance contract written to support both automated and non-automated driving. Automated driving should have cheaper insurance if it works better than the average driver, which makes it a win for everybody.
Re:Don't have to be perfect, just better (Score:5, Insightful)
Re: (Score:3)
But for fleet operations, that won't matter - what GM will have to pay is what UPS will save in terms of insurance costs. UPS will know how much they will save in terms of insurance, and will be willing to pay more for a car.
Re:Don't have to be perfect, just better (Score:4, Insightful)
There are two obvious solutions to that problem:
1) Subsidize the insurance carried by the car manufacturers in some way. It doesn't have to be done via the government either, since the manufacturers could simply introduce a new, ongoing cost with each purchase that arranges for the purchaser of the vehicle to pay their portion of the insurance cost being borne by the manufacturer. It's simple to do and the cost per person will go down over time as more people purchase the cars and rates go down due to a decreasing number of accidents as a result of human error. Of course, initial costs will be higher, but...
2) Eventually it will become safer to be driving in an automated car than not, and when that happens, there will start to be incentives from insurance companies to switch over to the self-driving cars. As a result, costs for personal insurance will eventually go down for passengers in driverless cars and rates will go up for people who choose to drive their own cars, since they'll increasingly be the ones responsible for causing accidents. As such, simple economics will eventually force most people to switch to driverless cars.
The second will happen on its own. Whether the first happens or not remains to be seen, but it'd certainly ease things along.
Compare to airlines (Score:3, Interesting)
Airlines are liable for around $175,000 for each passenger death, a figure set by IATA. A similar figure could and should be set by law for autonomous vehicles. So you do the math and find that per car, with a reasonably safe driving system, that's no big deal, whether it's covered by your car insurance or the manufacturer's liability.
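The "do the math" step can be made concrete. The $175,000 figure is from the comment above; the fatality rate and annual mileage below are illustrative assumptions (roughly in line with recent US averages), not sourced figures:

```python
# Rough per-car annual liability cost implied by the parent's figures.
# $175,000 per death comes from the comment; the fatality rate and annual
# mileage are assumptions for illustration.

liability_per_death = 175_000            # USD, from the comment
fatalities_per_mile = 1.1 / 100_000_000  # assumed rate per vehicle-mile
miles_per_year = 12_000                  # assumed typical annual mileage

expected_cost = liability_per_death * fatalities_per_mile * miles_per_year
print(f"Expected liability per car per year: ${expected_cost:.2f}")
# prints "Expected liability per car per year: $23.10"
```

A couple of tens of dollars a year per car, even at human-driver fatality rates; a system safer than the average human would imply proportionally less, which is the comment's "no big deal" point.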
Re:Don't have to be perfect, just better (Score:4, Insightful)
It'll be adopted just as quickly as nuclear power, which is orders of magnitude safer than coal, has taken off despite a few relatively minor and contained accidents.
Re: (Score:3)
Except the "logic" of crippling fear from badly-estimated risks doesn't apply so much here. While the general public is easily convinced that a single nuclear failure might kill thousands and render cities uninhabitable, it would be harder for the "anti-automatic-car lobby" (who the heck would that be? "big auto" would love to sell everyone new cars) to make the masses terrified of automatic cars going on mass-murder sprees. Especially since, unlike nuclear plants, cars are something the general public can o
Re: (Score:3)
I know I don't want just any car that could, on its own, up and slam into people waiting in line at the movies. I'm sure they'll all be networked together as well, so it will be the start of the robot uprising, with thousands of cars running people down at random. I saw the documentary of the last time it happened; it was called Maximum Overdrive.
Re: (Score:2)
Re: (Score:3)
How do you fail over when road conditions exceed the thresholds of the car? Uh... slow down or stop, maybe???
Just because you might need to transfer control to the driver doesn't mean you need to do it at 100 miles per hour.
Computers are far better at understanding their limitations than human drivers, and when they start getting reduced sensor data or confusing conditions they will be programmed to be conservative, unlike the human drivers who keep barreling on until they are at the edge of disaster.
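That conservative-fallback behavior could be sketched as a speed cap scaled by sensor confidence, with a pull-over threshold at the bottom. The function name, the 0-to-1 confidence score, and the thresholds are all hypothetical, not any real vehicle's control logic:

```python
# Sketch of the conservative-fallback idea: cap the target speed by sensor
# confidence, and below a threshold signal "slow to a stop and pull over".
# All names and numbers are illustrative assumptions.

def target_speed(limit_mph, sensor_confidence):
    """Scale the allowed speed down as sensor confidence (0..1) drops;
    below 0.3, return 0 to signal a stop-and-pull-over maneuver."""
    if sensor_confidence < 0.3:
        return 0.0
    return limit_mph * sensor_confidence

print(target_speed(70, 1.0))  # 70.0 - clear conditions, full speed
print(target_speed(70, 0.5))  # 35.0 - degraded sensors, slow down
print(target_speed(70, 0.2))  # 0.0  - too little data, pull over
```

The contrast with human drivers is the point: the fallback is automatic and monotonic, rather than depending on the driver noticing the conditions and choosing to slow down.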
Re:Don't have to be perfect, just better (Score:4, Funny)
I completely agree. You can imagine any emergency scenario you want with combinations of black ice, sand, curves, blowouts, etc. But a human being has to rely on what they've experienced to help them react to those situations, and I'm willing to bet that most people are unprepared for that scenario. On the other hand, a computer which has been programmed for that scenario, or has learned from it, can easily benefit the entire population of cars via a software update. A computer can make real-time assessments of weather, tire wear, etc. to determine the best possible course of action. Most people I see around here can't even be expected to turn on their lights in inclement weather, let alone know how to react to uncommon emergency situations.
Re: (Score:3)
This writer makes a fundamental mistake: believing that if full driverless technology is not perfect or at least near-perfect, it is therefore unacceptable. But this is not true. Driverless technology becomes workable when it is better than the average human driver. That's a pretty low bar to clear. I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.
It becomes workable when it achieves ALL of these things:
the first obvious application is to replace cab drivers.
Re: (Score:3)
Driverless technology becomes workable when it is better than the average human driver. That's a pretty low bar to clear. I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.
And herein lies the problem.
Driverless cars will have to live side by side with human-driven cars for years, if not decades, and as you pointed out, there are a lot of bad drivers out there. A self-driving car will need to be proactive in protecting its passengers from drivers who are pretty damn unpredictable. Right now we can't even get everyone doing the same speed or indicating; a driverless car has no hope of dealing with that at our current level of technology.
Right now, you can't even g
Re: (Score:3)
Or a buggy automated system under the direction of a buggy human?
The manufacturers will never assume all responsibility, and no one would even suggest that as a viable business model. Some things (like our current cars, guns, stairways, electricity, etc) carry inherent dangers. Product liability law does not mandate that there be zero risk.
There are design standards that, if adhered to, pretty much absolve the manufacturer of responsibility. There is no reason to believe something like this will not be i
Taxis first (Score:3, Insightful)
I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.
The first robot taxi company is going to make a mint when they integrate a smartphone taxi-summoning app with their robo-chauffeur.
Re: (Score:3)
Re: (Score:2)
Most governments limit taxi services to control safety.
It's the taxi companies (and unions) themselves that lobby long and loud for control of quantity.
Re:Taxis first (Score:5, Insightful)
Maybe AUTO drive only express lanes that cars will (Score:5, Insightful)
Maybe AUTO-drive-only express lanes, where cars will go at high speed nearly bumper to bumper.
Re: (Score:3)
Screaming down the highway is much easier, actually.
Re: (Score:2)
More likely to kill you in the case of a malfunction, though.
actually, probably less issues on the highway (Score:3)
Screaming down the highway at 75MPH is *exactly* where I want a self-driving car. I live in the Canadian prairies, the nearest large city is 5hrs of highway driving, next nearest is 7hrs. I would _love_ to put my car on autopilot for that trip.
Also, on the highway you generally have long straight sections, sight lines are long, cars are further apart, there are no pedestrians, and often you have divided highways so you don't even need to worry about oncoming traffic.
Re: (Score:2)
Re:Taxis first (Score:5, Informative)
I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.
On the contrary, "screaming down the highway at 75mph" (never been on the Autobahn, have you?) is a lot easier to automate than driving around a city block. Similarly the easier part of a plane's autopilot is the part that handles cruising at 500mph at 30,000 feet. The numbers are impressive, but the control is comparatively easy.
On a highway there are no traffic lights or stop signs, and there are nicely marked lanes and shoulders. Just stay between the lines at a constant speed and hit the brakes if something appears in front. Compare that to trying to figure out if some guy who's not watching is going to step off the curb and into your way, or if the car pulling out of a parking spot is going to wait for you to pass.
Re: (Score:2)
I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.
This makes sense. I would be willing to ride in a self-driving car in a crowded city, where the maximum speed is 25 and the risk of serious injury is minimal if I get in a collision. Driving at 75 down the highway carries a much bigger risk if things go wrong.
Of course, looking at it the other way, I wouldn't want to be a pedestrian with a bunch of self-driving cars going around me. A small sensor error could lead to catastrophe.
Re: (Score:2)
maximum speed is 25, and the risk of serious injury is minimal if I get in a collision
Warn me beforehand - I don't want to be a pedestrian in that city.
Re: (Score:2)
Driving straight is not the challenge. The failure mode is much more severe on a highway, and extreme conditions are probably just as difficult for an autopilot to manage as city traffic, if not more so. For example, what does the AP do when
- the car hits a big pothole and perhaps blows a tire? There's too little time for a human to take over. Sometimes you need to make a choice, whether to stop abruptly or not.
- lane markings disappear because of prior construction? Construction detour?
- truck blows a tire in
And when one is involved in an accident ... (Score:2, Insightful)
... because they will be, who is going to be sued?
If I was a car manufacturer I don't think I'd be mad keen on going down the self-driving route - it's only going to mean more lawsuits.
Re: (Score:2)
I'm totally fine with that. I feel like regardless of how smart cars will get, they should always have a fully mechanical failsafe, and should always be driveable in fully-AI-disabled mode. At that point, I'd argue an accident should only be the fault of the car manufacturer if it could be clearly proven that either a. no reasonable human driver would have done what the car did (like, say, it just decided to drive off a bridge suddenly and without warning), or b. the failsafe didn't work when engaged. I'd b
What's wrong with Google cars (Score:2, Interesting)
[Google] says its cars have traveled more than 300,000 miles without a single accident while under computer control. Last year it produced a video in which a blind man takes a trip behind the wheel of one of these cars, stopping at a Taco Bell and a dry cleaner.
Impressive and touching as this demonstration is, it is also deceptive. Google’s cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn’t purely to reassure pedestrians and other motorists. No system can yet match a human driver’s ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.
Re: (Score:2)
What a vague statement! This entire article is lacking in any real specifics or citations.
Re: (Score:2)
Re: (Score:2)
What's that, Google's hype is just, well, hype? Say it ain't so.
Numerous car manufacturers have been working on self-driving cars for many years. In 2014 Mercedes will actually have a car with some very limited self-driving capabilities (sort of cruise control on steroids - you can use it on the highway when you stay in your lane). As limited as it is, that's a lot more real world application than you're likely to see out of Google anytime soon. Contrary to the beliefs of some Silicon Valley and Google hy
Re: (Score:2)
In 2014 Mercedes will actually have a car with some very limited self-driving capabilities (sort of cruise control on steroids - you can use it on the highway when you stay in your lane).
FYI the article describes the author driving a 2013 Ford Fusion with exactly those features. Also mentions a 2010 Lincoln with auto-parking.
Re: (Score:3)
I RTFA, and I think that's impressive. What struck me about Mercedes is they're actually going to be selling cars with those features in less than 6 months. I don't think the Ford features are going to be sold soon, though the article wasn't clear. The automated parallel parking has been around for a while, and in all fairness should count as a limited form of self-driving car.
It reinforces my point about Google. While they're hyping completely self-driving demos, various car makers have been doing the ha
Re: (Score:2)
I don't think the Ford features are going to be sold soon, though the article wasn't clear
Looks like Ford's selling it now [ford.com].
Re: (Score:2)
Re: (Score:2)
The 2014 Sprinter [1] has this technology. It not only auto-brakes when someone tries a "swoop and squat" (easy insurance money in most states), but can compensate for wind, alerts the driver if someone is in a blind spot, and can automatically follow a lane. IIRC, there is even an adaptive cruise control that doesn't just hold a set speed but also automatically keeps a distance from the car in front, so cruise control can be used safely in more places.
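The distance-keeping idea behind that kind of adaptive cruise control can be sketched in a few lines. This is a minimal illustration, not any manufacturer's algorithm; the gains, time gap, and function names are all invented for the example.

```python
# Illustrative sketch of adaptive cruise control: hold a time gap to
# the lead car rather than a fixed speed. All constants are invented.

TIME_GAP_S = 2.0  # desired following gap, in seconds of travel
GAIN = 0.5        # illustrative proportional gain

def acc_speed_command(own_speed_mps: float, gap_m: float,
                      set_speed_mps: float) -> float:
    """Return the commanded speed: cruise at set_speed unless the gap
    to the lead car is shorter than the desired time gap."""
    desired_gap_m = own_speed_mps * TIME_GAP_S
    if gap_m >= desired_gap_m:
        return set_speed_mps  # free road: behaves like plain cruise control
    # Too close: slow down proportionally to the gap error.
    error_m = desired_gap_m - gap_m
    return max(0.0, own_speed_mps - GAIN * error_m)
```

At highway speed (30 m/s) with a car 40 m ahead, the desired gap is 60 m, so the controller commands a lower speed until the gap opens back up.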
[1]: Sprinters are weird beasts. Stick a Fr
Re: (Score:3)
Stick a Freightliner logo on the back of a new one, and the local homeowner's association starts sending notices that "company vehicles" are prohibited in driveways unless active work is being done. Pull two torx screws out of the flap between the double doors and attach a Mercedes logo, and the same HOA types now consider the vehicle an asset to the neighborhood due to the make.
Wow, yet another reason to stay away from HOAs.
Re: (Score:2)
What road do you know of where a human hasn't driven the route at least once?
The street view cars have driven the bulk of roads in most major US cities at least once documenting everything along the way.
Left unsaid by Google is how many human interventions were required (for whatever reason).
Re: (Score:2)
Can't find the article, but there was a mention that in the (small number of) incidents where a human operator took over control, they reviewed the logs afterwards, and in all cases the computer would have taken either the same or an equally safe action. These were mostly related to pedestrians and jaywalking, if I recall.
Re: (Score:3)
Impressive and touching as this demonstration is, it is also deceptive. Google’s cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn’t purely to reassure pedestrians and other motorists. No system can yet match a human driver’s ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.
Google has cars driving around almost everywhere for their map feature, I'd have no problems with a first edition limited to what they already know. And they're legally obligated to have a driver ready to take over, even if they wanted to go solo. Miiiiiiiiinor detail.
Re: (Score:2)
I bet that's also why Google has these people called "engineers" who worry about "miiiiiiinor details," like unsafe driving conditions (shallow water flowing across the road, and subsequent hydroplaning when an idiot human driver thinks "what harm can three inches of water do?" causes a lot of accidents); probably to a greater extent than even experienced drivers do. And, once a "detail" has been identified and solved, it's solved 100% of the time for 100% of the drivers (no matter how many people die a yea
Re:What's wrong with Google cars (Score:5, Insightful)
What happens next?
The driving computer sees the 4 feet of water ahead using the cameras/radar and stops because it determines the water is too deep to ford?
Re: (Score:2)
Re: (Score:2)
So, most trips in cars are repetitive - you drive to work every day. Your Google-O-Matic could 'learn' the route over time, follow your driving habits, confer with other Google-O-matics about their experiences. Maybe the first time you drive a route, you go manual. Not such a big deal.
Boats have had autopilots for years. Most of them are pretty primitive. Planes likewise. The captains of both kinds of vessel are responsible for the craft at all times. Same as with an autonomous car. The driver decides when
Re: (Score:2)
Re: (Score:3)
The Google cars don't REQUIRE a human. They CAN operate fine without one. They MAY NOT operate without one.
They have a human behind the wheel so that they comply with the various licensing regimes. As long as there is a human behind the wheel capable of assuming control the current laws in most places are fine with the computer controlling the car.
Great AI (Score:2, Funny)
Imagine if you had a car with a great AI, better than what is out there today. You just tell it to drive somewhere and it does. It never gets lost, knows where all addresses are, knows how to park, etc. Basically everything.
There'd still be people that did things like, "It seemed to be going too fast so I slammed on the brakes and the car spun out of control and into a ditch. If it weren't for your AI, this never would have happened! I want a million dollars." or "I was sitting in the driver's seat dru
Its not here yet but. (Score:3, Insightful)
Re: (Score:2)
what's really needed to make it work is a road infrastructure designed around this.
Came here to say that myself.
Also, the issue of liability is another major barrier; until the government figures out who to blame when something goes awry and one of these things causes damage to life/property, you can bet your bonnet that Uncle Sam will not allow driverless cars on "his" streets.
Re: (Score:2)
Uncle Sam already allows driverless cars, it's just that they're all test platforms 'insured' by the company doing the developing. It's for liability purposes that somebody is behind the wheel of the google cars currently.
Still, there are a number of possibilities I can see. Much like how the federal government set down specific rules on how nuclear power liability will be addressed, the same can be done with cars. I see several possibilities(but I'm writing this on the fly, so I'll probably miss stuff):
1
Manufacturer limited liability (Score:2)
Oh yeah, forgot part of #1: As part of making the manufacturer liable for accidents caused by the AD system, even limited, the manufacturer would build the liability into the price of the system, enabling dirt-cheap insurance if you can afford the auto-drive.
Field tests prove them wrong? (Score:2)
Google has apparently been using this technology for its StreetView cars, and those cars seem to meet all the straw-man requirements of the article.
So...do they have more bogus requirements that need to be met?
Driverless cars don't need to be able to handle any possible situation. Most drivers can't handle those situations either - witness the large number of accidents that happen every day. The driverless cars just have to be better, and have superior liability coverage, than human drivers.
Re: (Score:2)
I haven't seen Google participate in any independent test. They are making big claims, but I wouldn't just accept the words of a company praising its own product when they fail to provide proof for years.
Re: (Score:2)
How about semi-automated, remotely piloted? (Score:2)
If the car's AI cannot handle the situation, control of the car could be transferred to a central location where a human could take over. Another option would be to get the car to a safe spot and have a human come out and take over.
Also, the cars don't need to go anywhere at any time under any conditions to be useful; they just need to be able to follow pre-determined courses safely. In the event of an accident, detour, heavy traffic, or even bad weather, the automatic driving cars could be sent home or told t
Headline (Score:2)
Re: (Score:2)
I didn't, but I'd give you +1 funny for the visual (if I had any mod points, and if I hadn't already posted a comment :p)
Not about fully automated cars (Score:4, Informative)
The article is about semi-automated cars, not fully automated ones. Semi-automated cars are iffy. We already have this problem with aircraft, where the control systems and the pilot share control. Problems with being in the wrong mode, or incorrectly dealing with some error, come up regularly in accident reports. Pilots of larger aircraft go through elaborate training to instill the correct procedures for such situations. Drivers don't.
A big problem is that the combination of automatic cruise control (the type that can measure the distance to the car ahead) plus lane departure control is enough to create the illusion of automatic driving. Most of the time, that's good enough. But not all the time. Drivers will tend to let the automation drive, even though it's not really enough to handle emergency situations. This will lead to trouble.
On the other hand, the semi-auto systems handle some common accident situations better than humans. In particular, sudden braking of the car ahead is reacted to faster than humans can do it.
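That reaction-time advantage can be made concrete with a time-to-collision calculation. The following is a hypothetical sketch, not any production system's logic; the braking threshold and human reaction figure are illustrative assumptions.

```python
# Hypothetical sketch of automatic emergency braking: trigger on
# time-to-collision (TTC), with no human perception delay.
# Thresholds here are illustrative, not from any real system.

HUMAN_REACTION_S = 1.5   # typical perception-plus-movement delay (assumed)
TTC_BRAKE_S = 2.0        # illustrative braking threshold

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # gap is growing; no collision course
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, closing_speed_mps: float) -> bool:
    return time_to_collision(gap_m, closing_speed_mps) < TTC_BRAKE_S

# Car ahead brakes hard: 25 m gap, closing at 15 m/s gives a TTC of
# roughly 1.7 s. The automatic system brakes immediately; a human has
# already spent most of that window just perceiving and reacting.
```

The point is simply that the radar measures the closing speed every cycle, so the braking decision fires the instant the threshold is crossed.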
The fully automatic driving systems have real situational awareness - they're constantly watching in all directions with lidar, radar, and cameras. The semi-auto systems don't have that much information coming in. The Velodyne rotating multibeam LIDAR still costs far too much for commercial deployment. (That's the wrong approach anyway. The Advanced Scientific Concepts flash lidar is the way to go for production. It's way too expensive because it's hand-built and requires custom sensor ICs. Those problems can be fixed.)
Re: (Score:2)
The fully automatic driving systems have real situational awareness - they're constantly watching in all directions with lidar, radar, and cameras.
They have the sensory information needed for situational awareness, which can be a long way from having situational awareness.
Re: (Score:2)
I can't help but think we're going about this the wrong way. Automating transportation in 3D is really hard. Automating it in 2D is a lot easier. Automating it in 1D is dirt simple.
Perhaps wh
Answer (Score:5, Funny)
Because Skynet, that's why.
If a human has to be in the driver's seat (Score:5, Insightful)
ready to take over in case of an emergency, what is the point of the whole thing?
And assuming the human will be tweeting on his Facebook Amazon phone with his hands nowhere near the steering wheel and feet propped up on the dashboard, how is he going to take over control of the car in a split second when an emergency occurs? He can't. So that means he will have to be alert and in a ready driving posture and paying attention to the road like he's really driving. But then what is the point? Might as well have him drive it himself and save money by not buying the Google Car stuff in the first place.
Either make a car that can go 100% without a human driver, or go back to web advertising and forget about the whole thing.
Re: (Score:2)
I agree, and this is a very different situation from an automated aircraft. Things generally happen quite slowly in aircraft except for the few minutes around takeoff and landing during which times the pilots can be alert. In cruise flight if something goes wrong with the automation there are usually many seconds for the pilot to become aware of the problem and to take over. In the famous AF 447 flight it was several minutes between the beginning of the problem and the last time at which the aircraft cou
Re: (Score:2)
There are plenty of situations where a human driver could reasonably take over, with many seconds of warning: when the car starts having trouble with its sensors (or the lane markers disappear), when the map and reality diverge, and so on. The "sudden surprises" need to be 100% computer-handled on auto-pilot, but that's a whole different problem than the system's failsafes detecting that normal operation has degraded for whatever reason, including overdue maintenance.
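The distinction being drawn, between slow degradation that allows a timed handover and sudden surprises the computer must handle alone, can be sketched as a simple mode decision. All names and thresholds below are invented for illustration.

```python
# Illustrative sketch (names and thresholds invented) of graceful
# degradation: distinguish slow failures, which allow a warned
# handover to the human, from sudden surprises the computer must
# handle itself because no human could take over in time.

from enum import Enum

class Mode(Enum):
    AUTO = "auto"            # normal autonomous operation
    HANDOVER = "handover"    # human warned, seconds to take over
    EMERGENCY = "emergency"  # computer must act alone, right now

def assess(sensor_ok: bool, lane_markers_visible: bool,
           map_matches_reality: bool, obstacle_ttc_s: float) -> Mode:
    if obstacle_ttc_s < 2.0:
        # A sudden surprise leaves no time for a human handover.
        return Mode.EMERGENCY
    if not (sensor_ok and lane_markers_visible and map_matches_reality):
        # Degraded but not urgent: warn the driver well in advance.
        return Mode.HANDOVER
    return Mode.AUTO
```

Degraded sensors or a map mismatch produce a handover request with warning time; only an imminent obstacle forces the computer to act on its own.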
Other roadblocks (Score:2)
There are other fundamental problems with driverless cars: the illusion of control, the exchange of information, and unmodeled conditions.
The illusion of control is why some people are scared to death of flying, but feel confident of getting behind the wheel: even though the stats say that people are safer in an aircraft than on the road, people hate to give up the control. Even if the control is just an illusion (you can do little if a drunk driver slams into your car).
The exchange of information is crucial to
Driving is a human activity (Score:2)
To be an effective driver, an AI would need to understand my hand signals. Until then it is not safe for it to be on the road when I am on my bike.
Computers can't bluff (Score:4, Insightful)
As soon as people figure out that a computer is driving a car, they will pull out in front of it, knowing they won't be hit. They will change lanes into it, knowing that it will slow down and get out of the way. And they will loathe it, because it will never flatter their feelings.
Self-driving cars will be bullied off the road, because there is a lot more to driving with people than being able to control the car. There is a lot of social/herd/mental/aggression dynamics that are instinctive for people but not for computers.
Assisted driving first, insurance second (Score:5, Insightful)
But then the breakthrough will be that some company that has crossed a critical line of self-driving capability will include full liability insurance in the price of the car. Potentially they will even cover everything short of trees falling on the car and the like, because they will be sure the car can't cause an accident, and because with all the cameras and sensors some other fool can't blame you or your car if they caused the accident.
At this point my money would be on cars finally being marketed and sold as robotic self-driving cars. Shortly after this the tidal wave will wash away all the non-robotic cars as being a dangerous menace. The key here is that most cars by this point will be largely capable of being autonomous or very close to autonomous with only antiques being the hold outs.
But, and it's a big "but," some robotic car will drive off a cliff or into a train, and that single incident or small collection of incidents (and their YouTube videos) will get everyone saying, "Those things are death traps, I'll never let the car drive." This will temporarily postpone the inevitable, but going from 35,000 US annual road deaths to 35 will be too much reality for foolish people to fight for long.
Re: (Score:3)
There will never be a "self-driving" car because nobody can afford the liability of selling an autonomous car. In any accident, the manufacturer will obviously get sued. You always have to have a human "in control" so you have someone to point the finger of blame at.
That's blatantly false. The price of the liability insurance will just be folded into the price of the car. What manufacturers would worry about is getting sued for more than would normally be the case for any given accident, but I'm sure they can buy protection from that from their favorite senator cheaply enough.
Re: (Score:3)