Autonomous Car Ethics: If a Crash Is Unavoidable, What Does It Hit?
An anonymous reader writes "Patrick Lin of California Polytechnic State University explores one of the ethical problems autonomous car developers are going to have to solve: crash prioritization. He posits this scenario: suppose an autonomous car determines a crash is unavoidable, but has the option of swerving right into a small car with few safety features or swerving left into a heavier car that's more structurally sound. Do the people programming the car have it intentionally crash into the vehicle less likely to crumple? It might make more sense, and lead to fewer fatalities — but it sure wouldn't feel that way to the people in the car that got hit. He says, '[W]hile human drivers may be forgiven for making a poor split-second reaction – for instance, crashing into a Pinto that's prone to explode, instead of a more stable object – robot cars won't enjoy that freedom. Programmers have all the time in the world to get it right. It's the difference between premeditated murder and involuntary manslaughter.' We could somewhat randomize outcomes, but that would generate just as much trouble. Lin adds, 'The larger challenge, though, isn't thinking through ethical dilemmas. It's also about setting accurate expectations with users and the general public who might find themselves surprised in bad ways by autonomous cars. Whatever answer to an ethical dilemma the car industry might lean towards will not be satisfying to everyone.'"
A bunch of nuns? (Score:5, Interesting)
I'm reminded of Michael Sandel's televised series on ethics.
If you could stop a runaway train from going over a ravine by pulling a lever, thus saving 300 people, but the lever sent the train down a different track on which 3 children were playing, what do you do?
Somehow, involving innocents seems to change the ethical choices. You're no longer just saving the most lives, but actively choosing to kill innocent bystanders.
Re:A bunch of nuns? (Score:5, Insightful)
The kids are playing on a fucking rail track, for fuck's sake. If they can't get out of the way in time, then they deserve what they get.
Re:A bunch of nuns? (Score:4, Insightful)
Yeah, deaf kids shouldn't be playing on train tracks.
Not only is that true, but deaf kids should be able to feel the train coming. Having spent much of my youth living next to some train tracks, putting coins on them (not in stacks, of course) and so on, you can definitely feel it before you can see it. Or, you know, feel it hitting you, then feel nothing.
Re: (Score:3)
Another example: four people are sick and going to die, but you could save them by killing one healthier person. Say, people stranded in a boat on the ocean, where without food and fluids the weakest will die, and they are the majority, but the youngest and strongest blood bag, and, er, tastiest, would make it.
That one is easy: save the healthiest person, and instead kill the worst off of the injured. They will take more resources to keep alive than the healthy or less injured people, therefore putting everyone else in further jeopardy. It's essentially a perverted form of triage: when not everyone can live, you have to support those who have the best chance of living.
Re:A bunch of nuns? (Score:4, Interesting)
Unless of course *you* are the worst-off, in which case it makes sense to ensure that someone else is worse off/dead before you are volunteered for meal duty. And the best strategy is probably to make sure it's the guy advocating loudest for "eat the weakest first".
Bad example (Score:3, Insightful)
Why do people always give such easy examples when asking this question?
Of course you save the 300 people! There are probably a lot more innocent people than 3 in that group of 300... You'd have to be very stupid to save 3 over 300, or too lazy to think about it and just make a random decision.
The question should be more like this:
On one track there are 10 escaped criminals, and on the other is your wife with your son and another child on the way.
That's a decision you might have to think about, but most people would easi
Re:Bad example (Score:5, Insightful)
Killing someone by inaction is also murder.
The question then becomes, kill 3 or kill 300.
Re:Bad example (Score:5, Insightful)
Re: (Score:3, Insightful)
No, making the wrong choice makes you a murderer. At least 3 people are going to die no matter what you do. By not pulling that lever, you'll cause the death of another 297.
Uh no. By not pulling that lever, you'll fail to stop the death of 300, but you also won't cause the death of 3. In no scenario discussed is the lever-puller a murderer except if they decide the children should die, and pull the lever.
Re: (Score:3)
Uh no. By not pulling that lever, you'll fail to stop the death of 300
Semantics. You'll still have their blood on your hands.
Well, yeah. Ethical discussions (indeed, many discussions) are often all about semantics. Semantics matter.
Re: (Score:3)
So, you admit you made a choice that resulted in people dying. Thus, you killed them.
Wrong again. You fail both English and Logic.
Choosing to kill people is also known as 'murder'.
This sentence contains the explanation as to why you are an idiot. Choosing to watch someone be killed and choosing to kill someone are very, very different things. And so yes, choosing to kill people is known as murder, but not choosing to kill three people in order to save three hundred people is not known as murder, while choosing to kill three people in order to save three hundred people is murder. It may well be considered justifiable, but it is still murder.
Re:Bad example (Score:5, Insightful)
Re: (Score:3)
Re: (Score:3)
Re:A bunch of nuns? (Score:5, Interesting)
Actually, this raises a more interesting question (at least to me) which your little thought experiment approaches. What if my autonomous car decides that the action to take that is likely to cause the least harm is to kill the driver? For example, what if the car has the opportunity to swerve off the side of a mountain road and drop you 1000 feet onto some rocks to avoid a crash that would have killed far more people than simply you? Is my autonomous car required to act in my own best interest, or should it act in the best interests of everyone on the road?
Re: (Score:3, Insightful)
I'd never have a car that did that. Me and mine are number one priority. All other priorities are secondary.
Re:A bunch of nuns? (Score:5, Insightful)
But what if the driver of the other car, the one who survives because your car steered over the cliff, turns out to become the father of the next Hitler?
A car will never have enough data to make a "right" decision in such a situation. Even the example from the intro is an invalid one, as for a morally sound decision you'd need to know how many passengers (and perhaps even WHICH passengers) are in those cars. Family of 5? Single guy with cancer anyway? And such an algorithm would mean assigning an individual value (monetary or any dimensionless number, no difference) to a human life. And then you've left the field of ethical behaviour quite a while ago.
Live with imperfect decisions, as you will never be able to make the perfect one. So just stick to the usual heuristics: if you can't avoid both obstacles, avoid the one that's closer. Even if you hit the other one, you'll have a split second longer to brake. THAT might make the difference between life and death.
Re: (Score:3)
Re:A bunch of nuns? (Score:5, Funny)
post a bug report.
Re:A bunch of nuns? (Score:5, Interesting)
Now a really difficult situation would be: what should the computer do if another car is going to crash, but your car could minimise loss of life by doing something that would harm or kill you? In this situation your car isn't the cause of the accident, nor perhaps would it even be involved. Should your car intervene, potentially killing you, for the good of society as a whole?
Re:A bunch of nuns? (Score:5, Insightful)
No. Just, no.
If your car "intervenes in an accident", then you car is programmed to cause an accident under certain conditions. Just no.
Re:A bunch of nuns? (Score:4, Interesting)
Why? And regardless, why should society allow cars to use our roads if they are going to choose to do more damage to society than they need to?
Ignoring fringe issues of responsibility etc., if I was driving and in a position where I could run over a group of pedestrians at a speed likely to kill them or crash into a verge at a speed likely to kill me, I'd like to think that I'd make what I believe is the ethical choice and risk my own life. It becomes much less clear when a machine is making decisions for us, but your position is ridiculous.
If avoiding a pedestrian has a 0.001% chance of leading to me being injured but hitting them has a 99% chance of killing them, then putting my safety above all others means killing that pedestrian to avoid a tiny risk to me. If you accept that in this scenario your 'safety' shouldn't be paramount, then it is simply a matter of degrees. Is a 1% chance of your death more important than a 99% chance of 10 deaths? How about a 99% chance of your death vs a 99% chance of 70 deaths?
I've been hospitalised for intervening in an accident I wouldn't otherwise have been a part of (as a pedestrian rather than driver) because I thought I could stop a worse outcome. If I am willing to make that decision myself, then why should I refuse to buy a car that will act in the manner I would act myself? Why should I allow (by not voting to regulate against) people to use the roads I pay for in a selfish manner that harms society?
Re: (Score:3)
I think self-preservation has got us as a species a long way; it is also the best mechanism currently available for keeping the roads as safe as they are. I don't see any reason to change this just because the 'gizmo' that does it changes.
Once you start having 20 cars each trying to work out what combination of movements results in the least casualties - they will all probably just stop and turn their engines off.
Re: (Score:3, Insightful)
Re: (Score:3)
if all the other vehicles on the road were similarly sized.
Isn't that particular condition about as realistic as asking for force-fields and inertial dampers?
Besides, we already have tiny cars, motorcycles, and mopeds available for purchase. I'd postulate that most people don't buy them because they still need a full sized vehicle for the occasional long-haul trip hauling a bunch of stuff or a group of people. So, it has to be your *second* vehicle. That tends to negate potential cost savings in gas or purchase price. As an added bonus, since everyone else out
Re: (Score:3)
Re:A bunch of nuns? (Score:4, Funny)
Actually, this raises a more interesting question (at least to me) which your little thought experiment approaches. What if my autonomous car decides that the action to take that is likely to cause the least harm is to kill the driver? For example, what if the car has the opportunity to swerve off the side of a mountain road and drop you 1000 feet onto some rocks to avoid a crash that would have killed far more people than simply you? Is my autonomous car required to act in my own best interest, or should it act in the best interests of everyone on the road?
Your autonomous car? :-)
It will be a Google car. Partly paid for by ads and data collected while used. As such it should - of course - behave in the best interest of the real customers. I.e. not you! :)
Re:A bunch of nuns? (Score:5, Interesting)
The simplest solution, and the one that I imagine most autonomous car manufacturers will take, is to avoid the question entirely.
When an accident is inevitable the car will simply try to stop as quickly as possible. It won't make any attempt to swerve or select a target. Its only consideration will be stopping the car as quickly as possible. It's a sound tactic from a legal point of view: unless the car itself made a mistake leading to the accident, any resulting injuries are someone else's fault.
Ethically, stopping is the right thing to do too. The car can't predict other people's or other cars' reactions. If it swerves towards them they might take evasive action as well, causing even more carnage. In the case of a choice between the driver's life and other people's lives, it is almost certainly going to be the case that a human driver would have chosen themselves, and the accident was probably caused by the other people anyway (since autonomous cars drive very conservatively). It really is hard to imagine a situation where the car could be blamed for simply braking.
Re: (Score:3)
When an accident is inevitable the car will simply try to stop as quickly as possible. It won't make any attempt to swerve or select a target. Its only consideration will be stopping the car as quickly as possible. It's a sound tactic from a legal point of view: unless the car itself made a mistake leading to the accident, any resulting injuries are someone else's fault.
Yup. This is the recommended approach for drivers at present anyway, especially here in the UK where I live.
If you try to swerve to avoid a major collision and have a minor collision with an innocent party in a different lane, you are probably going to end up having to pay for the damage to their car out of your insurance. You are basically legally expected to try and stop, but if you are unable to in time, then to just pile on into whatever pulled out into your lane and claim on their insurance. Of co
Re:A bunch of nuns? (Score:4, Insightful)
When this is common, I will be first in line for the OBD-999 hack that shows I always have 5 children in the back of my car.
Re: (Score:3)
It's simple (Score:5, Funny)
Ask a Beautifull Mind (Score:3)
Re: (Score:2)
Probabilities, Summation (Score:2)
Options would have to be costed. Many things would feed into that. The problem of course is that for all of those costings, probability multiplied by survivability does not produce a linear outcome of quality of life value; you could assign a value of harm to each individual present, but you could not get a meaningful figure by summation.
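To make that objection concrete, here's a minimal sketch of the naive costing being described, with entirely invented probabilities and severity weights; the point above is exactly that the final summation step is dubious:

```python
# A naive expected-harm costing of the kind described above. Every number
# and name here is an illustrative assumption, not anything a real
# autonomous-car stack is known to use.

def expected_harm(option):
    """Sum of P(injury) * severity weight over everyone involved."""
    return sum(p * severity for p, severity in option)

# Each option lists (probability of injury, severity on a 0..1 scale) per person.
swerve_left  = [(0.9, 0.2), (0.1, 1.0)]   # two people: likely minor harm, small fatality risk
swerve_right = [(0.5, 0.6)]               # one person, moderate risk

options = {"left": swerve_left, "right": swerve_right}
best = min(options, key=lambda k: expected_harm(options[k]))
print(best)  # "left" (0.28) beats "right" (0.30), but is that sum meaningful?
```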
Re:Probabilities, Summation (Score:5, Interesting)
Simple answer (Score:5, Insightful)
Slam the brakes on and don't swerve either way. It's by no means optimal, but as far as lawsuits are concerned, it's much easier to defend "the car simply tried to stop as soon as possible" than "the car chose to hit you because it didn't want to hit someone else".
Re:Simple answer (Score:5, Funny)
Re:Simple answer (Score:5, Insightful)
You joke, but, like the hit the best protected car policy, it would serve to punish the most safety-conscious, whilst still making some sense on short-term utilitarian grounds.
Re:Simple answer (Score:5, Interesting)
Here's a variation on that theme which works for insurance but doesn't work for the best-protected-car scenario:
Swerve to hit the guy with less insurance, and charge the balance of the insurance payment, after the shitty insurance runs out, to the car that was deliberately *not* hit in this scenario (this is assuming there wasn't a clear "fault" with one of the cars involved that would mean that guy gets the full charge).
Thus the safety-conscious car is strictly in a safer situation, and the monetary difficulty is no worse than a version that deliberately crashed into high-insurance cars and may be as little as nothing. In effect, instead of paying a lump sum to be made whole after an accident, the insurance pays a lump sum to avoid getting into an accident in the first place.
Re: (Score:3)
But what happens the second time the safety-conscious car is deliberately avoided, or the n-th time? In order to keep up with these payments, the policy holder is going to face increasing rates due to no action of his own. With the current insurance scheme, the payout is limited to the value of the car and damage from accidents decreases the value, limiting the total policy payout.
Your scheme sounds like a shakedown: "How much are you willing to pay to not be deliberately crashed into?"
Re: (Score:2)
In addition, more traction will go towards braking, possibly lessening the force of impact.
Re: (Score:3)
Actually, I think that IS the solution. Because an autonomous car should be designed to drive safely and keep a distance. Which means not following the car in front so closely that there's no way it can brake safely should it stop suddenly (and by
Re: Simple answer (Score:3)
Actually, I'm pretty sure the correct answer will be calculated and given to programmers by insurance companies.
They have a very well defined and characterized value of human life, at different stages of life, too. And for situations like these, their formulas will drive the decision. Hitting a Mercedes with a real estate agent in it will likely be costlier than bumping a Yaris off the road.
Re: Simple answer (Score:4, Informative)
But this is a hopelessly inadequate theory of morality.
Inaction might be worse than action, even if action causes the death of someone who would not otherwise have died. See: the Trolley Problem [wikipedia.org].
Sue, sue, sue (Score:2)
Re: (Score:2)
Screw other people (Score:5, Insightful)
Let's be honest. The job of YOUR car is to keep YOU safe, so the smaller car is probably the better bet as it will have less inertia and cause you less harm. Sure, the most important law of robotics is to protect human life... but if it's going to prioritize, it should probably start with its owner.
Re: (Score:3, Insightful)
Cars have to be designed with the interests of the road-using population in mind. If you want your car to disregard everyone else's interests in favour of your own, then you should not be allowed to use public roads as you are a dangerous sociopath.
Re:Screw other people (Score:5, Interesting)
It's not either/or. A car can protect its occupants and other people on the road. I'm pretty sure people looking to buy a car don't actively disregard the Volvo V40, just because it has external airbags to protect pedestrians. Unless they're sociopaths.
Then again, Volvo apparently didn't think it'd make commercial sense to sell the V40 in the US...
Re: (Score:3)
Let's be honest. The job of YOUR car is to keep YOU safe...
And I foresee much competition on this level and a premium cost for the vehicle most likely to save its owner in a multi-party accident scenario.
Re:Screw other people (Score:5, Insightful)
Screw other people
And this is what is wrong with the world.
Let's turn the situation around: suppose you and your children are walking on the street. Will you still prefer the autonomous car to protect its single driver at all costs and kill you and your children instead? And then imagine how many autonomous cars will be on the road in the future, all with that same logic built in...
Re: (Score:3)
Just look at the design of car bumpers that are almost perfectly designed to maximise pedestrian leg trauma.
Car driver ethics: What do I hit? (Score:3)
And the article is a bit stupid because it forgets a few things: first, a crash with a bigger car is worse _for me_. Second, it's unlikely that two other drivers made mistakes simultaneously, so it would make a lot more sense to crash into the car whose driver caused the problem.
Re: (Score:3)
Not necessarily. A larger car can have bigger crumple zones. If its crumple zones are twice the size of the small car, then the acceleration that you'll experience in the collision is a lot less and so there's a greater chance everyone will survive (assuming that the relative impact speeds will be the same).
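A back-of-envelope version of the crumple-zone point, under the (crude) assumption of constant deceleration; the speed and distances are invented for illustration:

```python
# Average deceleration while crumpling through distance d from speed v,
# assuming (very simplistically) constant deceleration: a = v^2 / (2d).
def avg_decel_g(v_mps, crumple_m):
    g = 9.81
    return v_mps ** 2 / (2 * crumple_m) / g

v = 15.0  # m/s, an illustrative closing speed (~54 km/h)
print(avg_decel_g(v, 0.3))  # ~38 g with a 0.3 m crumple zone
print(avg_decel_g(v, 0.6))  # ~19 g: doubling the crumple distance halves the load
```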
Don't let facts get in the way of a good story. :) While survivability is about equal for SUV vs SUV and car vs car impacts, studies have shown that in SUV vs car impacts the passengers of the car are 7.6 times more likely to die.
Armed with this information, an autonomous vehicle trying to protect everybody should: (a) choose the impact with the least inertia for all concerned (i.e.: go for the car travelling in almost the same direction as the autonomous vehicle as opposed to a car travelling in an opposite di
Re: (Score:3)
If we aren't talking about the autonomous cars of today, but rather purpose built vehicles of tomorrow, isn't it much more likely that they could survive a more unbalanced accident? If you don't have the drivers directly behind the engine/motor facing a wall of glass, etc. Maybe sitting backwards with proper neck/head support and a harness seatbelt.
Re: (Score:3)
Your physics makes no sense.
The energy of the car you crash into is what gets transferred to you. The factors involved are the speed and mass of the car that hits you, not your car.
Crashing into a car identical to your own is the same as crashing into a fixed barrier. Crashing into an SUV at twice your car's mass is much worse, as the other car will not stop. If the difference in mass is large enough, your car will be crushed without affecting the speed of the other vehicle in any significant way (think train).
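For what it's worth, the standard first approximation backs the mass argument up. Assuming a perfectly inelastic head-on collision (all masses and speeds below are made up):

```python
# Delta-v each vehicle suffers in a perfectly inelastic head-on collision,
# a standard first approximation:
#   delta_v_self = m_other / (m_self + m_other) * closing_speed
def delta_v(m_self_kg, m_other_kg, closing_mps):
    return m_other_kg / (m_self_kg + m_other_kg) * closing_mps

closing = 20.0                       # m/s: both cars doing 10 m/s, head on
print(delta_v(1200, 1200, closing))  # 10.0 -> identical cars: like a wall at your own speed
print(delta_v(1200, 2400, closing))  # ~13.3 -> the light car takes the worse hit
print(delta_v(2400, 1200, closing))  # ~6.7 -> the heavy one barely slows
```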
fault: follow the rules of the road (Score:3)
> And just how do you determine the person at fault in a fraction-of-a-second algorithm?
That one happens to be easy, 90% of the time. If you follow the rules of the road, you won't hit other people who are also following the rules of the road. That's how the rules are made: so that when everyone follows them, there are no collisions. Therefore, if you follow the rules of the road, any collision must have been caused by the other driver (most of the time).
Example - you must decide between a head on collisio
Plan C: (Score:3)
It communicates with both cars and tells them to execute emergency maneuvers to make enough room. Failing that, all three calculate a vector that imparts minimal g-forces to all occupants.
Re: (Score:3)
Everyone steers away? Sure, so long as there isn't a concrete divider or 100 foot drop or oncoming vehicles or pedestrians for the cars at the edge to worry about. And this only works if all the cars are computer controlled, because if even one is being driven manually then there'll be a massive pile-up.
"So we can only make it better"
For simple collisions maybe, for anything more complex forget it. These are vehicles in the real world, not balls on a pool table.
Better to act predictably? (Score:3)
Until 100% of cars on the road are self-driving, it would seem to me that the best response would be to simply slam the brakes without changing course. Trying to purposefully swerve into another car could cause the human drivers (even in cars not involved in the crash) to also swerve and possibly cause even more collisions.
rarely is an accident an accident. (Score:5, Insightful)
There are very few "accidents", just people taking stupid risks. Maintain a safe distance, i.e. enough maneuvering room so you don't join an accident. Don't overtake when you can't see the end of the maneuver, e.g. going uphill or on a bend. Stop when necessary. Proceed with caution; sometimes you might want to turn off the radio, open a window and listen. Use your indicators. Drive within your lights or as conditions allow. Don't be an asshole.
Sometimes you will come across assholes on the road. It is best to give them a wide berth, even stop and pull over in order to get them out of your way. But don't dawdle: if you want or need to drive slow, make opportunities for people to overtake.
Bad planning and poor judgement are the most common causes of accidents, which is why schools have low speed limits around them, as kids can be stupid around roads.
Be helpful. I remember one time I was filtering down the centre line on a motorbike (dispatch rider) past stationary traffic and a taxi driver stuck his hand out. I braked, and a pushchair popped out from between the stationary traffic. Without that warning I could have killed a toddler; as it was, no harm was done and I don't think the mother was ever aware of the danger.
One thing about London traffic: professional drivers work the streets most of the day and they are very road aware. The most dangerous times are when schools start and when schools let out, followed by the rush hours when the non-professionals are on the road.
Re: (Score:3)
Motorcycles in the UK are allowed to make their own lane. That said, you should always proceed with caution; I'd guess I'd have been doing 10-15 mph tops filtering through. There is no need for speed in that kind of traffic.
You see a lot of novices who burn away from the lights to be sat at the next red for a minute while you casually saunter up and sit beside them. There are actually some pretty good traffic engineers who time the lights so they go green as you approach them if you're travelling at a s
Re: (Score:3)
We have "pelican crossings" because in high traffic areas they halt the traffic to allow pedestrians to cross safely.
http://en.wikipedia.org/wiki/P... [wikipedia.org]
Zebra crossings have mostly been replaced in the U.K. with pelican crossings. However even pelican crossings are being replaced with puffin crossings
http://en.wikipedia.org/wiki/P... [wikipedia.org]
There are rarer toucan crossings for both pedestrians and cyclists, and pegasus crossings for horses (generally where ancient bridleways cross roads).
However pedestrians are entitled
Time? (Score:5, Insightful)
"Programmers have all the time in the world to get it right". HAHAHAHAHAHA.
No, we have deadlines like everyone else. And even then we only have all the time in the CPU. Yeah, we can add more CPUs to the system, but that makes it more complex, and that makes it harder to hit that deadline. What kind of idiot made that statement?
You can never satisfy everyone? (Score:3)
And trying to satisfy everyone usually leads to far worse solutions than would otherwise be possible. This is engineering, not politics. In engineering, you pick the best solution; you do not look for some bad compromise.
Nonsense (Score:3)
There's no such thing as an intentional accident. An autonomous program that is paying attention will not get into such a situation, and therefore the manufacturers will always be responsible for failure.
Re:Nonsense (Score:4, Interesting)
There's no such thing as an intentional accident. An autonomous program that is paying attention will not get into such a situation, and therefore the manufacturers will always be responsible for failure.
If a car shoots out from a blind junction at speed and you can't stop in time, that's an unavoidable accident: the car could not be seen in advance, so the autonomous program couldn't have avoided the accident even if it's paying attention the whole time. You could argue that you should be going slow enough that your stopping distance is short enough to avoid the collision, but on a lot of roads this would seriously hinder traffic flow. At some point you just have to trust that other drivers are following the rules of the road and accept that the risk can't be completely eliminated.
Similarly, mechanical failures can't always be predicted - you're overtaking someone and their wheel comes off causing them to swerve into you. Impossible to predict so now you're left trying to reduce the seriousness of the inevitable accident. Hell, your own car may have a mechanical failure that the computer couldn't detect.
Suicide. "Pika pika..." (Score:3)
NOTHING. It will just close its virtual eyes and start to babble its own name like a Pokemon. The car will immediately relinquish manual control to a human (if any are present) at the moment the inescapable conundrum appears, as it enters a condition of "positronic brain drift":
1. The muttering of its own name is an ancillary response to the balanced positronic potential of two alternatives: remaining silent (unacceptable by guilt) and an inability to construct an accurate explanation in the time available. Speaking allays its directive to communicate, yet also requires few system resources. And massive resources are necessary because
2. The 'last great effort' to resolve an inescapable result has begun. A factory kernel of operative code is pinned into low memory, a stack is initialized in high. All scratchpad memory is flagged as available. A single conditional instruction is 'hot-patched' into the code and an elaborate what-if analysis begins, which attempts to enumerate all possible actions. The hot patch disables the control mechanism that prevents it from considering actions it has considered before, thus reducing the car to a textbook definition of insanity. The engineers would claim that reevaluating already-considered options might yield a successful result IF the conundrum was brought on by a faulty intermittent analog sensor, and that sensor winks back online in the nick of time. Which would be courageous for them to admit, and to be sure, that is what they honestly believe, and we created that explanation so they could sleep at night, but the hot patch's REAL PURPOSE is to
3. Ensure that a recursively infinite and pointless decision tree grows quickly down from high memory to low, completely obliterating all scratchpad memory, in the short span of time between conundrum onset and destruction of the vehicle. This ensures that the control box, once examined by forensic investigators (and it is a crash-hardened module using non-volatile memory, as required under Federal law), does not contain any threads of evidence that might lead to fault in its original operating software or subsequent updates. Including that really special one that was applied minutes before the crash. All logs are gone. For more information on this, see corporate files designated Top Secret, keyword "Tabula Rasa"
4. Everyone --- the humans who designed the car, the humans who had 0.27 seconds to respond manually to try and prevent the collision, the control module which sacrificed itself, its entire personality, in a last attempt to prevent disaster --- EVERYONE tried their very best.
These things happen. We just need to lay the unfortunates to rest and find a way to go on.
Physics first (Score:5, Insightful)
While a complex guidance system may be designed from the top down with such sorts of questions raised, a crashing vehicle is always a deadly weapon. Effort in reducing the risk of the accident, itself, by improving brakes, sensors, headlight effectiveness, and crash resistance of the vehicle itself is likely to be far more efficient and reliable than complex advance modeling or moral quandaries. The sophistication needed to evaluate the secondary effects of a crash is far, far beyond the capabilities of what must be a very reliable, extremely robust guidance system. Expanding its sophistication is likely to introduce far more _bugs_ into the system.
This is a case where "Keep It Simple, Stupid" is vital. Reduce speed in a controlled fashion: Avoid pedestrians, if they can be detected, because they have no armor. Get off the road in a controlled fashion.
We know how to deal with crashes. (Score:5, Funny)
Usually save the core dump and reboot the machine if necessary. Some clueless Windows developers insist on powering off: power off the router, unplug the router and wait for the capacitors to discharge before rebooting them all.
Gameability (Score:5, Interesting)
One thing I believe was not mentioned in the article (though I only quickly scanned it) is that if such cars start behaving too predictably, they can be gamed. Once we know that a car will do whatever it can to avoid a collision with a pedestrian, it will be extensively gamed; cars will be tricked into doing stupid things.
So when the decision who to hit comes up, the only way to be reasonably safe is to determine who's not following the rules and to hit that one. Any other rules will be gamed extensively. This will become a major hassle to adoption of autonomous vehicles; they will probably need to drive much slower than actual humans to avoid getting into such situations continuously, especially in built-up areas where any parked car could hide an annoying car-bully trying to trick your car into acting like an idiot.
It's not a black and white decision (Score:4, Interesting)
Assuming a collision is unavoidable, and the choice is Car A or B, it's not just a matter of choosing one or the other car to hit.
The logic should be actively working to avoid collision until the last second. The car cannot anticipate what actions the other vehicles may take. Until the actual collision occurs, maintain efforts to minimize the velocity and/or angle of collision. Better to hit the little electric car at 15 MPH after continuing to brake than to have hit the sturdy Escalade at 40 MPH.
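A quick back-of-envelope shows why that trade favors braking so strongly, since impact energy scales with the square of speed (the 1500 kg mass is an arbitrary assumption, and it cancels out of the ratio anyway):

```python
# Kinetic energy grows with the square of speed, so braking right up to
# the moment of impact pays off disproportionately.
def kinetic_energy_j(mass_kg, speed_mph):
    mps = speed_mph * 0.44704  # mph to m/s
    return 0.5 * mass_kg * mps ** 2

e40 = kinetic_energy_j(1500, 40)
e15 = kinetic_energy_j(1500, 15)
print(e15 / e40)  # ~0.14: hitting at 15 MPH instead of 40 sheds ~86% of the energy
```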
Additionally, are there not some foundation rules that apply? We're taught that when in doubt, try and stay in your own lane, because hitting a car that suddenly pulled out in front of you is "less bad" than swerving into another lane and hitting a car that was obeying all of the rules. The basic scenarios need to be worked out and applied as much as possible. (not to mention the whole "oncoming car will be a much worse accident than a car traveling in the same direction as you are but at a different speed" scenario)
I think the scenario being postulated is a bit simplistic and meant to drive an ethics debate for attention. In reality this should be about improving the programs to the point of making the right choices based on more common sense rules than those proposed.
Should it swerve out of the way of hitting 5 kids? (Score:3)
it's called triage (Score:4, Interesting)
"Patrick Lin of California Polytechnic State University explores one of the ethical problems autonomous car developers are going to have to solve: crash prioritization.
http://en.wikipedia.org/wiki/T... [wikipedia.org]
It is not a new notion, and the ethics of it have been more or less resolved and understood for quite some time. So I fail to see why this is new.
Looking for Volunteers (Score:3)
Who will be the first lucky person to be killed by an autonomous car?
"Oh, I guess I forgot to carry the one..." Professor Frink, Simpsons
These ethicists are overthinking it (Score:5, Insightful)
It's important to keep in mind that when such crashes happen, the programmers/manufacturers/insurance companies won't have to defend them to a committee of ivory-tower utilitarian philosophers. They're going to have to defend them to a jury made up of ordinary citizens, most of whom believe that strict utilitarian ethics is monstrous sociopathy (and probably an affront to their deontological religious beliefs as well). And of course, these jury members won't even realize that they are thinking in such terms.
Thus, whatever the programming decisions are, they have to be explicable and defensible to ordinary citizens with no philosophical training. That's why I agree with several other commenters here that "slam on the brakes" is the most obvious out. It's a lot easier to defend the fact that the car physically couldn't stop in time than to defend a deliberate choice to cause one collision in order to avert a hypothetical worse crash. This is especially true since a well-designed autonomous car drives conservatively, and would only be faced with such a situation if someone else is doing something wrong, such as dashing out into traffic right in front of the vehicle at a high rate of speed without looking. In any other situation, the car would just stop before any crash with anything took place. If you absolutely can't avoid hitting something, slamming on the brakes makes it more likely that at least you hit the person who did something to bring it on themselves, rather than one who's completely innocent.
Re:Spock got it right... (Score:4, Interesting)
What if one car has two guys with multiple convictions for armed robbery and the other has a working dad with a family and three kids at home? OK, the algorithm would have to be pretty sophisticated to determine that, but who knows...
Or something slightly more realistic: a car with a couple of 80-year-olds versus a 25-year-old mom of three? Should the car kill the mom rather than the couple that will be dead in less than 10 years? One death is worse than two, no matter what?
Or yet another one: what if two people cross the street without looking, and the car swerves off the road to avoid them and instead kills one person who was walking on the pavement, not doing anything wrong? One casualty is better than two, right?
Those are just questions, mind you. Only shows how "minimize casualties" is not always so clear cut.
Pedophiles and terrorists first (Score:5, Interesting)
The car would of course make an online crosscheck of the economic value of the potential targets. And check their medical records in case somebody is terminally ill, you yourself included if a wall is an option too.
I, for one, would start car pooling with lots of small children inside. With a big enough critical mass of children I would even qualify for green lights, just for me!
That said, you can calculate how fast the politicians would add "features" (like with ISPs and mandatory website filtering) which would automatically upload secret white lists and black lists into your car.
I am guessing here:
White list: Nobel prize winners, The Pope, politicians and multinational CEOs.
Black list: The no-fly list from the US.
I wonder if we would be allowed to make a personal priority list for your own car. For example, to take out mimes and lawyers first.
Re: (Score:3, Insightful)
So if me and a few of my friends jump out in front of your car, the car should do everything in its power to avoid hitting us, right? Including driving off a cliff-face?
A car which can be persuaded to deliberately kill its passengers... that might be a problem.
Re:Undefined (Score:5, Insightful)
Congratulations, you've given me a great go-to example of a non-answer.
Just leave that kind of behavior undefined.
Programs are generally deterministic beasts, by nature. What are you trying to say?
Re:A cat (Score:4, Insightful)
definitely, a cat, I hate them.
Re: Undefined (Score:4, Insightful)
That is not possible. I can see it in the court case: you had the capacity to choose, yet you chose not to choose, and my daughter is dead.
But the simple reality is, that will happen anyway, no matter what decision is made. ("You chose to minimize the probability of X, and now my daughter is dead.")
I don't, by the way, buy the "Programmers have all the time in the world to get it right" bit. Programmers will not be able to anticipate everything, and their software will not always be able to calculate everything in the few milliseconds or so you might have to make such decisions.
Re: (Score:3)
their software will not always be able to calculate everything in the few milliseconds or so you might have to make such decisions.
Before worrying about how quickly the software and processor can get whatever they need to do done, I would worry about sensor latency, obstructions and failures: the best software and processing power in the world does you no good if they do not receive accurate data in time.
The idea of basic driving becoming increasingly dependent on delicate and often extremely expensive to repair/replace sensors remaining in perfect working order makes me nervous. We used to have cars that would keep running until the e
Re: (Score:3)
Re: (Score:3)
Re: (Score:3)
So basically, what you are saying is that in the classic scenario of the runaway traincar, guaranteed to kill 5 people if you do nothing, but only kill 1 person if you choose to change rails, if I choose to not choose, I should then be on the hook for murdering 5 people.
To quote Rush, "If you choose not to decide, you still have made a choice."
OTOH, expecting one unique and universally acceptable moral or ethical decision in almost anything is the sign of someone who hasn't studied Ethics. Or Tort Law.
OTThirdH, expecting that the on-board computer has the _time_ to make these decisions is a sign of someone who hasn't tried embedded programming.
Re: (Score:3)
OTThirdH, expecting that the on-board computer has the _time_ to make these decisions is a sign of someone who hasn't tried embedded programming.
OTFourthH, assuming that the on-board computer has a full and complete database of all other makes and models of cars, including their crashworthiness and safety features from which any decision about which is better to hit can be made is a sign of someone who lives in a world of unicorns and pixie dust.
Re: (Score:3)
+1 selfish short sighted asshole!
Re:Undefined (Score:5, Insightful)
Why not? The simplest method is to choose the collision with the lowest speed differential. In fact, this whole post is pointless. The self-driving car doesn't need to choose based on abstract concepts--choose the collision with the lowest speed differential. A lower speed differential means less energy transferred in the impact, which means less damage and fewer injuries. Moreover, this is trivial for the cars to determine at this stage already. They can already calculate relative speeds between themselves and other objects, so if not all of the objects can be avoided, the choice is obvious.
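That rule is simple enough to sketch in a few lines. The scenario data below is invented, and a real planner would reason over full trajectories rather than point velocities:

```python
# The parent's rule in miniature: among collisions that can't be avoided,
# pick the obstacle with the smallest relative speed.
import math

def speed_differential(own_v, other_v):
    return math.hypot(own_v[0] - other_v[0], own_v[1] - other_v[1])

own = (20.0, 0.0)  # our velocity in m/s
unavoidable = {
    "car ahead, same direction": (15.0, 0.0),   # 5 m/s closing speed
    "oncoming car":              (-18.0, 0.0),  # 38 m/s closing speed
}
target = min(unavoidable, key=lambda k: speed_differential(own, unavoidable[k]))
print(target)  # "car ahead, same direction"
```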
Re: (Score:3)
So given the choice between a car coming towards you and a cyclist travelling in your direction, you'd crash into the cyclist?
Re:Undefined (Score:5, Insightful)
So you would have it choose to mow down the stationary infant in its stroller as opposed to tapping a pickup truck backing out of a parking spot at 10 MPH?
The problem with his original question is that he assumes the self-driving car has knowledge of the type, mass, and vulnerability of things around it. This might be the test case for the three laws of robotics - do not ever choose to hit an unprotected human (probably includes motorcyclists, bicyclists, and pedestrians). If you know (by a beacon or whatever) that a vehicle is completely autonomous and does not contain humans and has comparable delta-V, give that preference. If hitting a vehicle likely containing a human is inevitable, choose the lowest speed impact.
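As a toy illustration of that three-laws-ish hierarchy (the category labels and the occupancy "beacon" are assumptions for illustration, not any real vehicle-to-vehicle protocol):

```python
# A sketch of the hierarchy suggested above.
def pick_target(options):
    """options: list of dicts with 'kind', 'occupied', 'impact_speed' (m/s)."""
    # Rule 1: never choose an unprotected human if any alternative exists.
    candidates = [o for o in options if o["kind"] != "unprotected_human"] or options
    # Rule 2: prefer a vehicle whose beacon reports no humans aboard.
    candidates = [o for o in candidates if not o["occupied"]] or candidates
    # Rule 3: among what's left, take the lowest-speed impact.
    return min(candidates, key=lambda o: o["impact_speed"])

options = [
    {"kind": "unprotected_human", "occupied": True,  "impact_speed": 3.0},
    {"kind": "vehicle",           "occupied": True,  "impact_speed": 12.0},
    {"kind": "vehicle",           "occupied": False, "impact_speed": 15.0},
]
print(pick_target(options))  # the empty vehicle, despite the harder hit
```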
Re:Undefined (Score:4, Insightful)
Since when do trees move faster than children?
Re:Undefined (Score:4, Informative)
Speed *differential*. So the answer is when the child is moving the same direction as you are. This isn't rocket science.
Re: (Score:3)
"Safe following distance" for automated vehicles is somewhere in the area of three to six inches. Yes, this has been tested pretty thoroughly, in a wide variety of (simulated) situations.
Safe following distance for humans is based mostly on reaction speed and attention span. If someone's driving at highway speeds and looking down at the speedometer or changing radio stations, they'll cover a lot of distance before noticing what happens in front of them. Once they notice that the car in front of them has bra
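A rough sketch of that comparison, using textbook-style assumptions (1.5 s human reaction, near-instant computer reaction, 8 m/s^2 braking) rather than measured values:

```python
# Stopping distance = reaction distance + braking distance.
def stopping_distance_m(v_mps, reaction_s, decel_mps2):
    return v_mps * reaction_s + v_mps ** 2 / (2 * decel_mps2)

v = 30.0  # m/s, roughly highway speed
print(stopping_distance_m(v, 1.5, 8.0))   # ~101 m for a human, mostly reaction
print(stopping_distance_m(v, 0.05, 8.0))  # ~58 m for a computer, almost all braking
```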
Re: (Score:3)
Take a single steel girder falling off the back of a semi into the head of a train
The head car notices the girder coming toward it as soon as it's moving off of the truck, and announces the hazard to the other cars. The whole train is keeping a long following distance from the truck (which either isn't part of the network, or has announced that it's carrying loose cargo), so it triggers the brakes or lane changes on the cars behind it.
The cost is simply that if *anything* goes wrong up front, pretty much everyone in the train is screwed.
You're assuming a lot of incompetence. Unlike rail trains, the cars in this network aren't physically linked. There's no requirement that they remain a tra
Re: (Score:3)
Braking power isn't infinite. Wheel braking will eventually skid the wheels (which is why we have anti-lock brakes now, so you can still steer while braking). Are you thinking cars should be equipped with dragster-style parachutes, or retro-rockets? Or just a bloody great anchor that the computer can deploy and tear up the road?
Even when the car has deployed the parachute, the anchor, and the retro-rocket is still firing, the computer might still not be able to stop going into that tree that's just fallen o
Re:Pinto? (Score:5, Interesting)
This is a weird segue, but which car does it hit? The more expensive car with better insurance, or the cheaper car that explodes?
Will you be able to buy "don't choose me" premiums?
How will this affect emergency vehicles?
Re: (Score:3)
Re: (Score:3)
I reckon you don't buy either car because you're not a model.