Humans To Blame For Most Self-Driving Car Crashes In California, Study Finds (axios.com) 187
cartechboy writes: Turns out computers are better drivers than humans after all. Axios compiled a study that found the vast majority of crashes in California involving self-driving cars were not caused by the autonomous vehicles themselves. Of the 54 incidents reported by the 55 companies holding self-driving permits in California, only one crash could be blamed on a self-driving car in autonomous mode. Six crashes occurred while the self-driving cars were in conventional driving mode, and the majority of the accidents were the fault of other drivers or pedestrians. Maybe self-driving cars aren't such a bad thing after all; it's humans that are the problem.
I'm sorry, Dave... (Score:2)
It can only be attributable to human error.
https://www.imdb.com/title/tt0062622/quotes/qt0396920 [imdb.com]
Re: (Score:2)
That's 2010 baloney. The monolith corrupted him the same way it corrupted the monkeys so they picked up a weapon.
Re: I'm sorry, Dave... (Score:2)
False logic (Score:2)
Human drivers know that other road users make mistakes. As long as the majority of drivers are still human, the real question here is not whether a self-driving car accident has to be blamed on another driver or a pedestrian, but whether a human driver could and would have avoided the accident in question.
Re: (Score:2)
Agreed. There's also the issue of whether unusual or unexpected behavior by the self-driving car makes other road users more likely to hit it.
Re: (Score:3)
I like how Australia handles this. Learners' cars are marked, as are teenagers', so you can anticipate the type of likely stupidity. It's hilarious the wide berth learners get; almost as unpredictable as roos. Add Elderly marking to the mix, everyone wins.
Re: (Score:3, Funny)
I like how Australia handles this. Learners' cars are marked, as are teenagers', so you can anticipate the type of likely stupidity. It's hilarious the wide berth learners get; almost as unpredictable as roos. Add Elderly marking to the mix, everyone wins.
We have elderly marking here in the US. They are all required to drive a Buick with a handicap marker somewhere, usually on the plate itself.
Re: (Score:2)
Our goal should be to ban all human drivers
Not quite.
It's more like *if* our goal is to minimize the number of traffic injuries, *then* we should probably aim to ban most human driving.
But minimizing the number of injuries may not be the only goal. Individual freedom, flexibility, and whatnot may be other goals that conflict with autonomous vehicles.
Re: (Score:2)
Or simply put in accident avoidance systems that take control of the car any time the driver either lapses in attention or starts driving like an idiot / illegally.
I once worked on lane-tracking software (Score:2)
Given perfect weather and the absence of traffic, animals, or pedestrians, lane-tracking software is still hard. Not all roads are well marked.
I'm a futurist and a big fan of the idea of autonomous vehicles.
I'm also a programmer who has been writing code since the 70s.
The current tech seems to be 90+% working. The last few percentage points and the edge cases are where the deeper problem lies.
Re: (Score:2)
On unfamiliar roads, I often have that problem also. (I'm a human, by the way.) CA roads are still recovering from the Great Recession, so I often have to guess around faded lines.
If there are cars in front of me, I simply follow them, hoping they know from prior experience on the same road. If not, I keep an eye on the cars around me for cues. If there are no cars around me, then guessing wrong has minimal risk anyhow.
As much as I rely on that algorithm myself, the thought of a bot having a similar algorithm bothers me. But faded is faded.
Re: (Score:2)
There are a lot of practical little heuristics like this that humans use. Bots could use them too, but it may take years to include and tune them.
And just when they are done tuning for every situation involving another car on the road, there will be another accident with a car that was for some reason half on a sidewalk, and then they will need to start over again. There are so many possible edge cases I cannot fathom how it could ever work.
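To make the "little heuristics" concrete, here is a toy sketch of such a fallback chain; every helper, type, and threshold below is invented for illustration, not taken from any real driving stack:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Estimate:
        center: float       # lateral offset of the lane center, meters
        confidence: float   # 0..1

    def pick_lane_estimate(markings: Optional[Estimate],
                           lead_track: Optional[float],
                           map_center: Optional[float]) -> Optional[float]:
        """Return a lane-center estimate from the most trusted source available."""
        if markings is not None and markings.confidence > 0.8:
            return markings.center   # trust fresh, clear paint
        if lead_track is not None:
            return lead_track        # follow the car ahead, like humans do
        if map_center is not None:
            return map_center        # fall back to a prerecorded map
        return None                  # no estimate: slow down and hand off

Each rung looks trivial; enumerating and validating the rungs for every road in the world is the part that never ends.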
Re: (Score:2)
If you entertain the thought that human-equivalent AI will someday be implemented (I do), you could expect it to be able to drive a car just as well as a human. You could probably even expect it to drive as well as the best human driver. With cars having the equivalent of human senses, and then some (radar, lidar, vehicle-to-vehicle communication, ...), I think they have a good chance of becoming a decent improvement in traffic.
But given the state of AI today, in my view the software in auto
Re: (Score:2)
Are they really using 'AI' for self-driving, or are they just using a ton of fuzzy logic? I'm assuming fuzzy logic; it's more predictable and more tweakable. Actually, fuzzy logic for the decision making, and AI for the recognition of objects in images.
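For what it's worth, a toy Sugeno-style fuzzy rule for braking looks like this; the membership shapes and the rule outputs are invented for illustration, not from any production stack:

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def brake_command(gap_m):
        """Map the gap to the car ahead (meters) to brake strength in [0, 1]."""
        near = tri(gap_m, 0, 0, 20)     # "gap is near"   -> brake hard (1.0)
        mid = tri(gap_m, 10, 25, 40)    # "gap is medium" -> brake gently (0.4)
        far = tri(gap_m, 30, 60, 1e9)   # "gap is far"    -> coast (0.0)
        total = near + mid + far
        if total == 0:
            return 0.0
        return (near * 1.0 + mid * 0.4 + far * 0.0) / total

    print(brake_command(12.0))  # mostly "near", so strong braking (~0.85)

The appeal is exactly what the parent says: every rule is legible and tweakable, unlike the weights of a neural network.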
Re: (Score:2)
Re: (Score:2)
On unfamiliar roads, I often have that problem also. (I'm a human, by the way.) CA roads are still recovering from the Great Recession, so I often have to guess around faded lines.
If there are cars in front of me, I simply follow them, hoping they know from prior experience on the same road. If not, I keep an eye on the cars around me for cues. If there are no cars around me, then guessing wrong has minimal risk anyhow.
As much as I rely on that algorithm myself, the thought of a bot having a similar algorithm bothers me. But faded is faded.
I'm somehow reminded of a joke.
A teenage driver was driving for the first time in the winter. His dad told him, "If you ever get caught in a snowstorm, just wait for a snowplow to come by, and follow it until it gets onto a major road."
Well, sure enough, the kid got stuck in a storm, so he started following a snowplow. After about half an hour, the snowplow driver stopped and got out of his truck.
"Why are you following me?" the man asked the young driver.
"My dad said that if I ever got stuck in a snowstor
Re: (Score:2)
Just last week I was driving on some fresh asphalt with those little flag-type reflectors in the middle, waiting for proper Botts' dots or what have you to be applied... given the area, actually, I suspect it's a rumble-strip center line. And the guy in front of me could not manage to interpolate those dots into a line at all. He did okay (or at least average) on the sections before and after, but he had real trouble where the line wasn't clearly marked out for him, even though the reflectors are highly visible.
Re: I once worked on lane-tracking software (Score:2)
The Google cars are heavily reliant on prerecorded, highly detailed 3D maps. Tesla tries to "just do it" with... some success.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
So they are working with a flawed implementation from the start, unless they have one Google car for every construction zone in the world.
Construction zones are a real issue. I don't know if they've made progress dealing with them. You are right though, the mapping component is huge: like Google Maps but much, much more detailed.
Re: (Score:2)
A robot could also measure objects (lamp posts, buildings, signs, etc) on the side of the road, and use a detailed map to figure out exactly where it is. That way you could drive even if you're the first on a completely snow covered road.
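A rough sketch of that idea with made-up map data; real systems use lidar scan matching or a particle filter, but the core of the naive version is just nearest-landmark matching:

    import numpy as np

    # Surveyed landmark positions (lamp posts, signs) from a detailed map.
    map_landmarks = np.array([[0.0, 5.0], [20.0, 5.5], [40.0, 5.2]])

    def position_fix(pose_guess, observed):
        """Shift our (x, y) guess by the mean offset to the nearest map landmarks."""
        residuals = []
        for obs in observed:
            nearest = map_landmarks[np.argmin(np.linalg.norm(map_landmarks - obs, axis=1))]
            residuals.append(nearest - obs)
        return pose_guess + np.mean(residuals, axis=0)

    # Dead reckoning drifted 0.8 m along the road; the posts pull it back.
    print(position_fix(np.array([10.0, 0.0]),
                       np.array([[0.8, 5.0], [20.8, 5.5]])))  # -> [9.2, 0.0]

Snow hides the paint, but it doesn't move the lamp posts.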
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It isn't about being as good as or better than humans; it is just about the serious consequences that occur when a 4500-pound machine makes a mistake in traffic.
The consequences are no less serious when a human makes a mistake driving that 4500-pound machine.
Re: (Score:2)
Re: (Score:2)
There's a reason they test in Arizona and California.
I wanna see these things take icy turns at reasonable speed, and avoid skids better than humans and recover from them better than humans.
We will still have problems with lawyers with dollar signs in their eyes as they sue over accidents, claiming disingenuously that they are improving quality when in fact they may be delaying mass rollout, leading to tens of thousands of extra deaths per year, for years or decades.
Imagine 100% roll out, and deaths drop from 35,000
Re: (Score:2)
I wanna see these things take icy turns at reasonable speed, and avoid skids better than humans and recover from them better than humans.
Most of that technology is in every car sold since what, 2010? They've all got ABS and ESP. If they've got AWD as well, then all they have to do is keep to reasonable speeds and the underlying platform will do most of it. That's how most humans handle those conditions, at least where the roads aren't being cared for. In my experience, icy roads get treated somehow. Thankfully, in my region they use volcanic cinders rather than salt. This is hard on the tires, and driving on the loose cinders can be a bit slippery.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I wanna see these things take icy turns at reasonable speed, and avoid skids better than humans and recover from them better than humans.
Low level vehicle control on icy roads is a fairly easy problem to solve for computers.
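For example, a bang-bang ABS-style slip controller is only a few lines in toy form; the target slip and step sizes below are illustrative guesses, not production values:

    # Hold wheel slip near the peak-friction point by modulating brake pressure.
    TARGET_SLIP = 0.15  # slip ratio where icy-road friction roughly peaks

    def abs_step(vehicle_speed, wheel_speed, brake_pressure):
        """One control tick: return adjusted brake pressure in [0, 1]."""
        if vehicle_speed < 0.1:
            return 0.0
        slip = (vehicle_speed - wheel_speed) / vehicle_speed
        if slip > TARGET_SLIP:
            return max(0.0, brake_pressure - 0.10)  # wheel locking up: release
        return min(1.0, brake_pressure + 0.05)      # grip available: reapply

    p = 0.5
    for v, w in [(20.0, 15.0), (19.0, 17.0), (18.0, 16.5)]:  # speeds in m/s
        p = abs_step(v, w, p)
        print(round(p, 2))  # 0.4, then 0.45, then 0.5

The control loop is the easy part; deciding from noisy sensors what the road surface actually is remains the hard part.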
Re: (Score:2)
Re: (Score:2)
Given perfect weather and the absence of traffic, animals, or pedestrians, lane-tracking software is still hard. Not all roads are well marked.
No, it is not. Markings are only used as guidelines; there are plenty of other cues lane tracking can use. Here is a list of about 20 algorithms: http://airccse.org/journal/jcs... [airccse.org]
I guess if you google for them individually you'll find YouTube videos that show how the algorithms work.
Camera-based lane tracking only fails in deep snow, but usually the road edges get marked with sticks then.
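The classic textbook pipeline behind many of those papers is edge detection plus a Hough transform. A rough OpenCV sketch, assuming a dashcam-like frame ("road.jpg" is a placeholder) and parameters that would need per-camera tuning:

    import cv2
    import numpy as np

    frame = cv2.imread("road.jpg")                    # hypothetical input frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Keep only the lower half of the frame, where the road usually is.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=60, maxLineGap=120)
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
    cv2.imwrite("lanes.jpg", frame)

That works fine on clean paint in good light, which is exactly why the interesting research is in everything else.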
Good Driver (Score:3, Insightful)
A good driver is not just supposed to prevent at-fault accidents but should also do their best to prevent accidents when the other party is at fault. If you replace all good drivers with self-driving systems, you are going to have a lot more accidents than you have today if the self-driving system merely claims to have no at-fault accidents. Remember, there is no reward for preventing accidents, so nobody tracks them, and we don't know how many are prevented daily.
Once I was on a divided road (divided by a two-foot concrete wall), driving in the right lane. A car took a left turn and entered the wrong way to the left of me, thinking it was a one-lane undivided road and the two-foot divider was a barrier to some private property. It was not at all a danger to me, and I could have just ignored it and let it have an accident, but I honked hard, stopped my car, opened the window, and alerted the driver. He backed up and moved to the other side of the divider. A self-driving car would have just ignored this car. I can easily narrate a dozen such incidents, and a few more where I was at fault.
We need self-driving cars that not only avoid at-fault accidents but also have a no-fault accident rate lower than average.
There's a massive reward for preventing accidents (Score:2)
Re: (Score:2)
Re: (Score:2)
Well, why would you preferentially replace the *good* drivers first?
I don't think it follows from the 98% human-fault rate that robotic drivers don't try to prevent accidents. Who would want to ride in a car that didn't drive defensively? But I suspect robots aren't quite as good as humans at dealing with other humans' behavioral flexibility, which is a nice way of saying "unpredictability".
That flexibility is sometimes good, sometimes bad. In a world of robotic drivers, no car would stop to honk at another car for entering the highway the wrong way; but then that other car wouldn't be doing that.
Re: (Score:2)
"no car would stop to honk at another car for entering the the highway the wrong way; but then that other car wouldn't be doing that."
Replace "wouldn't" with "shouldn't" in that "but then the other car" part.
My car shouldn't pop out of park in certain situations, but was recalled because it could.
Cars shouldn't lose power on the freeway, but they do.
Traffic lights shouldn't quit working, but they do.
Self driving cars shouldn't make mistakes, but they will because they're just like any other object that can encoun
Re: (Score:2)
Dunning-Kruger. The good drivers... the ones who drive defensively and, for example, cruise down a road where they know the lights are timed at the speed limit and hit every green... will be the first to understand: "Yeah, a computer can probably do this better than I can," and trade in their old cars for self-driving models. It's the bad drivers... the ones who weave in and out of the traffic pattern, take every turn too fast, jackrabbit off every green, and slam on the brakes at every red... who will never admit a computer could drive better than they do.
Re: (Score:2)
Apparently you haven't heard of Teslas slamming into perfectly stationary freeway dividers. If you replace all cars with perfect self-driving cars (a nonexistent object, and one that may never exist), then sure, by definition, mistakes wouldn't occur, because they are perfect.
Re: (Score:2)
It is a fair point, but I would say that all Waymo cares about for the moment is some good numbers for the PR. In the long run, this is not going to cut it.
depends on how you assign "fault" (Score:2)
What strikes me is that the raw number of accidents is higher in autonomous mode. But maybe the vehicles spend the majority of their time in autonomous mode. The data need to be normalized to accidents per mile driven.
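With made-up numbers, just to show why raw counts can mislead: the fleet that drives more miles can log more crashes and still be safer.

    # Hypothetical counts and mileage, purely for illustration.
    crashes = {"autonomous": 48, "conventional": 6}
    miles = {"autonomous": 2_000_000, "conventional": 100_000}

    for mode in crashes:
        rate = crashes[mode] / miles[mode] * 1_000_000
        print(f"{mode}: {rate:.1f} crashes per million miles")

    # autonomous: 24.0 crashes per million miles
    # conventional: 60.0 crashes per million miles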
Re: (Score:3)
Unless I misunderstand something, what those numbers tell me is that the sample size is too small to provide meaningful data.
all or none (Score:3)
IMO, the only way self-driving cars will be safe is if all cars are self-driving. They need to be able to talk with each other in order to be safe. Humans are so illogical that there is no way to have an algorithm predict what they are going to do at any given moment.
Re: (Score:2)
Re: (Score:2)
IMO, the only way self-driving cars will be safe is if all cars are self-driving.
Nope. Even then they will not be 100% safe all the time in every situation.
A more reasonable standard is whether they are safer than human drivers, and they already are.
Re: (Score:2)
Re: (Score:2)
Catch 22 description (Score:2)
Driving Would Be So Easy (Score:2)
If there wasn't anyone else on the road.
One missing factor (Score:2)
Maybe self-driving cars aren't such a bad thing after all, it's humans that are the problem.
Are you sure that California isn't the problem?
Unpredictable (Score:2)
Well, Duh! (Score:2)
I don't want (Score:2)
To live in a robotic world where everything I have so enjoyed over my lifetime is run by a machine.
I want to feel the acceleration of a car when I want to feel it. I want to take a curve at the limits of the machine.
I want to be free of my robotic overlords.
Fortunately I'm just old enough that those who foolishly believe robotic cars, robotic airplanes, robotic sex, robotic ass wipers, and all other things robotic will make their life miraculously wonderful will not be able to dictate their living hell upon me.
Re: (Score:2)
I want to feel the acceleration of a car when I want to feel it. I want to take a curve at the limits of the machine.
I enjoy all that stuff too, but it's a lot safer for everyone if it happens on a track. Where there is sufficient demand, there can be municipally-operated tracks, so they don't have to be expensive.
Honesty (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Humans are to blame (Score:2)
Perhaps the headline should have been "Human drivers are to blame for most accidents."
The solution ... (Score:2)
Signed,
SkyNet.
You can cause a crash... (Score:3)
... even if you technically are not to blame. Stopping suddenly, for example, and causing people to pile into you... technically that is usually the fault of the person who rear-ended you. But "you" did cause it. If you had not driven in a way that was surprising and unpredictable to other drivers, it wouldn't have happened.
Now the law will say that you should maintain enough distance that even if people do that there shouldn't be an accident.
But if the streets are crowded... high traffic... high congestion... that is often not viable.
Now what they'll then say is "go slower"... the problem is that if everyone does that the traffic becomes even worse.
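For scale, the gap the law imagines is huge. A back-of-the-envelope stopping-distance calculation, with textbook-ish guesses for reaction time and friction (my numbers, not anyone's official figures):

    # Reaction distance plus braking distance, with guessed constants.
    def stopping_distance(speed_mps, reaction_s=1.5, mu=0.7, g=9.81):
        return speed_mps * reaction_s + speed_mps**2 / (2 * mu * g)

    v = 65 * 0.447  # 65 mph in meters per second
    print(f"{stopping_distance(v):.0f} m")  # roughly 105 m

Nobody leaves hundred-meter gaps in city traffic, which is exactly the point.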
What people learn in busy cities is that there is a "way" to operate on the road that has more to do with Chinese bicycle traffic than with California road laws. The idea is that everyone follows a code of conduct on the road... a "vibe"... a pattern... and if everyone does it... then we have TRUST... and that trust means we can drive faster and with less space between cars than the law would like. But it is generally very safe so long as people are aware of and hold the pattern.
When a given individual on the road doesn't follow the pattern... this system becomes unsafe. I notice this all the time on the streets of the busy city in which I live.
You just get a sense that things are "off" on the road... people are not moving predictably. Maybe it is me... maybe it is them... doesn't matter. I get off the road immediately. I literally park and go for a walk or something. And often I find that there are shattered car parts all over the street when I get back. Why? Because the accident I could sense coming... the one caused by people not following the pattern... happened.
So... was the AI responsible for the accident? Yes. Legally? Perhaps not. But legality has very little to do with how driving on an actual street works. Driving computers have been dealing with this for a while.
It is a very annoying situation when the police give people tickets for this... according to DMV rules... the way people drive on the streets is generally illegal. It is however how we've basically always driven and continue to drive. If you wanted to... you could probably haul half the drivers in for violating the law.
You'd have a riot on your hands and the politicians would probably be forced to actually have the law reflect how we actually drive. But they could do it... for a minute.
Long and short... Cali driving laws are more of a rough guideline than a strict letter of conduct.
Re: (Score:2)
What I was talking about was not tailgating.
What I was talking about is the reality of driving in any major city. Anyone with any experience in this knows what I'm talking about.
By your comments, I must assume you don't know how to drive.
Re: (Score:2)
So other drivers and pedestrians are (Score:2)
Just my 2 cents
will not matter on /. (Score:2)
Re: (Score:2)
Far too many trolls here will not care about facts.
That's so funny, Windy. I nearly fell off my chair.
this post [slashdot.org] explains who doesn't care about facts.
You have still yet to show a single lie, yet you claim it all the time. You also like to claim any random AC is me; it's probably you. You are dishonest enough to pull that shit.
I often point out your lies [slashdot.org] and lies [slashdot.org] more lies [slashdot.org] more lies [slashdot.org] even more lies [slashdot.org] lies [slashdot.org] and lies [slashdot.org] When you aren't lying, you are just making shit up [slashdot.org] that is in no way believable, and lying.
Easy way to fix that (Score:2)
"Hey, Hal, fix it so there's no more crashes involving humans and robot-driven vehicles."
"No worries, Dave."
*kills all humans
What will _force_ us into AVs ... (Score:2)
Once it becomes widely known that the vast majority of accidents are caused by human error, then insurers will push up the cost of "human" insurance.
We will then enter a period of claims containing "punitive" damages: "well, why wasn't the car computer controlled?" and insurance rates for people will climb even more. And as rates of vehicle accidents become lower, due to there being more AVs on the roads, the public's tolerance for accidents as "acts of god" will diminish.
Fat chance (Score:2)
See the sick turn of mind exhibited here? (Score:2)
Re: (Score:2)
Machines are not humans and never will be.
Then they will never reduce accidents or improve driving safety, and we should treat them as a novelty, not as a savior.
Liability vs safety (Score:2)
Re: (Score:2)
Humans are *not* the problem; unsafe coding practices and lack of rigor are THE problem. Yes, stuff happens. But the onerous conclusion that we have to modify our behavior for the needs of some god-forsaken coder's neural network is something to be actively rebelled against. What churlishness -- we're the people, they're not.
Re:Umm.... (Score:5, Insightful)
But the onerous conclusion that we have to modify our behavior for the needs of some god-forsaken coder's neural network is something to be actively rebelled against.
The alternative is to even more onerously modify our behavior for the needs of other human drivers. I'd rather share the road with the computers.
Re: (Score:2)
If you can't trust humanity, you'll be surprised just how bad coders are. And if you think they're predictable, you have another thing coming still.
Re: (Score:2)
Oh, sure.
That's why hundreds of thousands of systems were victims of ransomware in the past 18 months. Everyone updates!
Indeed, we're all suffering from update fatigue.
"I can drive just fine, thank you." (Score:4, Funny)
Statistics prove otherwise.
In 20 years it will be illegal for humans to drive cars in public spaces. You heard it here first.
Re: (Score:2)
Re: (Score:2)
I'm gonna pull out the argument that keeps being used to explain why the US has such bad broadband coverage: The US is f'ing huge. How many of those miles are driven on completely straight thousand mile cross country roads with perfect visibility? Should Waymo start 'testing' a lot on those roads to drive up (heh) their average?
Re: (Score:2)
Re: (Score:2)
Statistics prove otherwise.
Statistics may prove that *people* are not good drivers or that they are not as good as they think they are. Statistics don't prove that a particular individual driver is not a good driver. Not unless there is sufficient statistical data that was collected on said particular individual.
Statistics may prove that people are not good at math and science, but that doesn't mean that Einstein was bad at math and science.
Re: (Score:2)
Nope. If the piece of software steering the car hits a human or human operated car the software is to blame and needs improvement.
Nonsense. There are plenty of situations where an accident is unavoidable, by either a computer or a human. For instance, a car runs a stop sign at a blind intersection. Or an oncoming car swerves into your lane in heavy traffic.
Re:Other Drivers (Score:4, Insightful)
Now that's an interesting theory.
So if I cross into the oncoming traffic lane and a Tesla on Autopilot can't avoid hitting me, it's the software's fault? If I try to change two lanes to the right, cut off the car in the middle lane, and the autonomous vehicle in the right lane hits me as I come out of nowhere, it's the software's fault?
Now I can understand skepticism at the claim that over 98% of autonomous vehicle accidents are the human's fault, but the claim that humans in principle automatically bear no responsibility for mishaps involving software seems even more extreme.
The thing about humans is that they *are* amazingly good at things, except when they're not. Somebody can be a model driver nine days in a row and on the tenth day do something stupid, because that's how people are.
Re: (Score:2)
If a human-driven car hits another human-driven car, then the police generally can figure out who is at fault. So in an accident with an autonomous vehicle, why not also let the police figure it out? Why automatically insist that the one car is in the wrong by default?
Re: (Score:2)
Re: (Score:2)
Well, it's obvious that if the police see long skidmarks coming from a human driven vehicle smashing into an autonomously driven vehicle, and the witnesses claim that the light was red for the human driver and green for the autonomous vehicle, then you would conclude that the autonomous vehicle was in the wrong?
The autonomous vehicle will have a human in the car, and can describe what happened during the accident as well as the other party can.
Re: (Score:2)
The beauty of robot cars is they are always entirely predictable. The outcome may not have been what you intended, but computers only ever do exactly, to the letter, what they are instructed to do.
With self-learning networks, nobody knows exactly what the computer is instructed to do. And we certainly cannot predict what they'll do when they get live inputs instead of training data.
Re: (Score:2)
Well, I think as long as we are dealing with prototypes, there would on the engineers' part be such a rebuttable presumption of guilt, because that's how you do engineering.
From a legal perspective it makes no sense not to go with the more usual preponderance of evidence standard, especially as there's bound to be a lot more data. But if for some reason complete log and telemetry data aren't available for the robot driver, in *that* case there might reasonably be a presumption of guilt.
Re: (Score:2)
Don't blame humans for not understanding a totally foreign introduction in a system that
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Which brings up a good question. Could a Waymo vehicle's black box be forced to be revealed in court showing that it unexpectedly braked due to an odd shadow in the roadway? Could a rear-end collision be the legal fault of the Waymo vehicle since it is basically driving like a person high on drugs, i.e. seeing things that are not there?
My 2014 Ford has backup sensors that regularly falsely detect shadows as solid objects and warn me. And while I would hope sensor tech has gotten better in the last 5 years
criminal case only or it's an NDA / EULA says no (Score:2)
Probably in a criminal case only; otherwise an NDA or the EULA says no.
Re: (Score:2)
Re: (Score:2)
When every major self-driving car company has vehicles that defer control to a human in as little as milliseconds before a crash ...
No they don't. Only Uber did that. Waymo and Tesla do not.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)