Transportation Programming

Autonomous Car Ethics: If a Crash Is Unavoidable, What Does It Hit? 800

An anonymous reader writes "Patrick Lin of California Polytechnic State University explores one of the ethical problems autonomous car developers are going to have to solve: crash prioritization. He posits this scenario: suppose an autonomous car determines a crash is unavoidable, but has the option of swerving right into a small car with few safety features or swerving left into a heavier car that's more structurally sound. Do the people programming the car have it intentionally crash into the vehicle less likely to crumple? It might make more sense, and lead to fewer fatalities — but it sure wouldn't feel that way to the people in the car that got hit. He says, '[W]hile human drivers may be forgiven for making a poor split-second reaction – for instance, crashing into a Pinto that's prone to explode, instead of a more stable object – robot cars won't enjoy that freedom. Programmers have all the time in the world to get it right. It's the difference between premeditated murder and involuntary manslaughter.' We could somewhat randomize outcomes, but that would generate just as much trouble. Lin adds, 'The larger challenge, though, isn't thinking through ethical dilemmas. It's also about setting accurate expectations with users and the general public who might find themselves surprised in bad ways by autonomous cars. Whatever answer to an ethical dilemma the car industry might lean towards will not be satisfying to everyone.'"
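To make the idea of "crash prioritization" a little more concrete, here is a minimal, purely hypothetical sketch in Python. The candidate manoeuvres, the harm estimates, and the cost weights are all invented for illustration; no real autonomous-driving stack is being described. It also shows one way the "somewhat randomize outcomes" idea from the summary could look: pick at random among options whose estimated costs are nearly tied.

```python
import random
from dataclasses import dataclass

@dataclass
class Option:
    name: str                 # e.g. "swerve right", "swerve left", "brake in lane"
    harm_to_others: float     # estimated harm to other road users (0 = none, 1 = worst)
    harm_to_occupants: float  # estimated harm to this car's occupants (0 = none, 1 = worst)

def expected_cost(opt: Option, w_others: float = 1.0, w_occupants: float = 1.0) -> float:
    # Illustrative cost function: a weighted sum of the two harm estimates.
    # A real system would need far richer models (probabilities, uncertainty, liability).
    return w_others * opt.harm_to_others + w_occupants * opt.harm_to_occupants

def choose_action(options: list[Option], tie_margin: float = 0.05) -> Option:
    # Rank options by expected cost, then choose randomly among those that are
    # within tie_margin of the best -- the "randomize outcomes" idea.
    best = min(expected_cost(o) for o in options)
    near_best = [o for o in options if expected_cost(o) <= best + tie_margin]
    return random.choice(near_best)

if __name__ == "__main__":
    options = [
        Option("swerve right into the small car", harm_to_others=0.8, harm_to_occupants=0.2),
        Option("swerve left into the heavy car", harm_to_others=0.3, harm_to_occupants=0.5),
        Option("brake in lane", harm_to_others=0.6, harm_to_occupants=0.4),
    ]
    print(choose_action(options).name)
```

The ethical questions in the article are exactly about what goes into `expected_cost` and who gets to set the weights.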


  • A bunch of nuns? (Score:5, Interesting)

    by Bongo ( 13261 ) on Wednesday May 07, 2014 @05:23AM (#46937445)

    I'm reminded of Michael Sandel's televised series on ethics.

    If you could stop a runaway train from going over a ravine, by pulling a lever, thus saving 300 people, but the lever sent the train down a different track on which 3 children were playing, what do you do?

    Somehow, involving innocents seems to change the ethical choices. You're no longer just saving the most lives, but actively choosing to kill innocent bystanders.

  • Re:A bunch of nuns? (Score:5, Interesting)

    by something_wicked_thi ( 918168 ) on Wednesday May 07, 2014 @05:52AM (#46937543)

    Actually, this raises a more interesting question (at least to me) which your little thought experiment approaches. What if my autonomous car decides that the action to take that is likely to cause the least harm is to kill the driver? For example, what if the car has the opportunity to swerve off the side of a mountain road and drop you 1000 feet onto some rocks to avoid a crash that would have killed far more people than simply you? Is my autonomous car required to act in my own best interest, or should it act in the best interests of everyone on the road?

  • by Zocalo ( 252965 ) on Wednesday May 07, 2014 @05:57AM (#46937557) Homepage
    This notion kind of cropped up in last weekend's episode of "Continuum" where a next of kin was informed of a crash by an actuary in terms of write downs, compensation, loss adjustments and so on. Given the way insurers tend to operate and how in bed they are with the legal profession I can see that's exactly how this would go in the long run; an evaluation designed to produce the lowest price tag for those that ultimately get to pay the financial/legal bill. Looking at the problem another way, that means the structural integrity of the two cars in the example is probably moot; if the more structurally sound car is an expensive vehicle with a lone occupant owning a huge life insurance policy and the other is a decrepit bus full of uninsured kids, then it's probably not a good day to be one of the kids... or the driver of the car that crashes into them.
  • by michelcolman ( 1208008 ) on Wednesday May 07, 2014 @06:11AM (#46937615)

    What if one car has two guys with multiple convictions for armed robbery and the other has a working dad with a family and three kids at home? OK, the algorithm would have to be pretty sophisticated to determine that, but who knows...

    Or something slightly more realistic: a car with a couple of 80-year-olds versus a 25-year-old mom of three? Should the car kill the mom rather than the couple that will be dead in less than 10 years? One death is worse than two, no matter what?

    Or yet another one: what if two people cross the street without looking, and the car swerves off the road to avoid them, instead killing one person who was walking on the pavement and not doing anything wrong? One casualty is better than two, right?

    Those are just questions, mind you. They only show how "minimize casualties" is not always so clear cut.

  • Re:A bunch of nuns? (Score:5, Interesting)

    by N1AK ( 864906 ) on Wednesday May 07, 2014 @06:14AM (#46937637) Homepage
    Now this question I like; it's far more nuanced than the original one. I know I would buy a car with a bias towards keeping me alive (not at any cost) and that bias would likely get even stronger if I had family members in the car! But how plausibly can a car judge whether keeping me and my 2-year-old alive is more or less important than the unknown occupants of another car?

    Now a really difficult situation would be: what should the computer do if another car is going to crash but your car could minimise loss of life by doing something that would harm or kill you? In this situation your car isn't the cause of the accident, nor perhaps would it even be involved. Should your car intervene, potentially killing you, for the good of society as a whole?
  • Re:A bunch of nuns? (Score:5, Interesting)

    by AmiMoJo ( 196126 ) * on Wednesday May 07, 2014 @07:40AM (#46938003) Homepage Journal

    The simplest solution, and the one that I imagine most autonomous car manufacturers will take, is to avoid the question entirely.

    When an accident is inevitable the car will simply try to stop as quickly as possible. It won't make any attempt to swerve or select a target. Its only consideration will be stopping the car as quickly as possible. It's a sound tactic from a legal point of view. Unless the car itself made a mistake leading to the accident, any resulting injuries are someone else's fault.

    Ethically, stopping is the right thing to do too. The car can't predict other people's or other cars' reactions. If it swerves towards them they might take evasive action as well, causing even more carnage. In the case of a choice between the driver's life and other people's lives, it is almost certainly going to be the case that a human driver would have chosen themselves, and the accident was probably caused by the other people anyway (since autonomous cars drive very conservatively). It really is hard to imagine a situation where the car could be blamed for simply braking.

  • by Kiwikwi ( 2734467 ) on Wednesday May 07, 2014 @07:46AM (#46938029)

    It's not either/or. A car can protect its occupants and other people on the road. I'm pretty sure people looking to buy a car don't actively disregard the Volvo V40, just because it has external airbags to protect pedestrians. Unless they're sociopaths.

    Then again, Volvo apparently didn't think it'd make commercial sense to sell the V40 in the US...

  • Re:A bunch of nuns? (Score:2, Interesting)

    by Ihlosi ( 895663 ) on Wednesday May 07, 2014 @07:57AM (#46938101)
    If you could stop a runaway train from going over a ravine, by pulling a lever, thus saving 300 people, but the lever sent the train down a different track on which 3 children were playing, what do you do?

    The answer depends on whether I'm on the train and on whether any of those kids are mine.

  • Re:A bunch of nuns? (Score:4, Interesting)

    by N1AK ( 864906 ) on Wednesday May 07, 2014 @08:04AM (#46938157) Homepage

    The car should keep its occupants safe above all others.

    Why? And regardless, why should society allow cars to use our roads if they are going to choose to do more damage to society than they need to?

    Ignoring fringe issues of responsibility etc, if I was driving and in a position where I could either run over a group of pedestrians at a speed likely to kill them or crash into a verge at a speed likely to kill me, I'd like to think that I'd make what I believe is the ethical choice and risk my own life. It becomes much less clear when a machine is making decisions for us, but your position is ridiculous.

    If avoiding a pedestrian has a 0.001% chance of leading to me being injured but hitting them has a 99% chance of killing them, then putting my safety above all others means killing that pedestrian to avoid a tiny risk to me. If you accept that in this scenario your 'safety' shouldn't be paramount, then it is simply a matter of degrees. Is a 1% chance of your death more important than a 99% chance of 10 deaths? How about a 99% chance of your death vs a 99% chance of 70 deaths?

    I've been hospitalised for intervening in an accident I wouldn't otherwise have been a part of (as a pedestrian rather than driver) because I thought I could stop a worse outcome. If I am willing to make that decision myself, then why should I refuse to buy a car that will act in the manner I would act myself? Why should I allow (by not voting to regulate against) people to use the roads I pay for in a selfish manner that harms society?
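    The probability comparisons above are really just expected-value arithmetic. As a back-of-the-envelope illustration (using only the hypothetical numbers from this comment, nothing more):

```python
def expected_fatalities(p_outcome: float, deaths_if_it_happens: float) -> float:
    # Expected deaths = probability of the outcome times the deaths it would cause.
    return p_outcome * deaths_if_it_happens

print(expected_fatalities(0.01, 1))   # 1% chance of your death   -> 0.01 expected deaths
print(expected_fatalities(0.99, 10))  # 99% chance of 10 deaths   -> 9.9 expected deaths
print(expected_fatalities(0.99, 1))   # 99% chance of your death  -> 0.99 expected deaths
print(expected_fatalities(0.99, 70))  # 99% chance of 70 deaths   -> 69.3 expected deaths
```

    Whether a car should simply minimize that number, or weight its own occupants more heavily, is exactly the question being argued here.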

  • Re:Simple answer (Score:5, Interesting)

    by Your.Master ( 1088569 ) on Wednesday May 07, 2014 @08:10AM (#46938195)

    Here's a variation on that theme which works for insurance but doesn't work for the best-protected-car scenario:

    Swerve to hit the guy with less insurance, and charge whatever balance remains after the shitty insurance runs out to the car that was deliberately *not* hit in this scenario (assuming there wasn't a clear "fault" with one of the cars involved that would mean that guy gets the full charge).

    Thus the safety-conscious car is strictly in a safer situation, and the monetary difficulty is no worse than a version that deliberately crashed into high-insurance cars and may be as little as nothing. In effect, instead of paying a lump sum to be made whole after an accident, the insurance pays a lump sum to avoid getting into an accident in the first place.

  • by Craefter ( 71540 ) on Wednesday May 07, 2014 @08:12AM (#46938207)

    The car would of course make an online cross-check against the economic value of the potential targets. And check their medical records in case somebody is terminally ill, yourself included if a wall is an option too.

    I, for one, would start car pooling with lots of small children inside. With a big enough critical mass of children I would even qualify for green lights, just for me!

    That said, you can calculate how fast the politicians would add "features" (like with ISPs and mandatory website filtering) which would automatically upload secret whitelists and blacklists into your car.

    I am guessing here:

    White list: Nobel prize winners, The Pope, politicians and multinational CEOs.
    Black list: The no-fly list from the US.

    I wonder if you would be allowed to make a personal priority list for your own car. For example, to take out mimes and lawyers first.

  • Re:Pinto? (Score:5, Interesting)

    by killfixx ( 148785 ) * on Wednesday May 07, 2014 @08:19AM (#46938251) Journal

    This is a weird segue, but which car does it hit? The more expensive car with better insurance, or the cheaper car that explodes?

    Will you be able to buy "don't choose me" premiums?

    How will this affect emergency vehicles?

  • Re:Nonsense (Score:4, Interesting)

    by FireFury03 ( 653718 ) <slashdot@NoSPAm.nexusuk.org> on Wednesday May 07, 2014 @08:24AM (#46938295) Homepage

    There's no such thing as an intentional accident. An autonomous program that is paying attention will not have such a situation and therefore the manufacturers will always be responsible for failure.

    If a car shoots out from a blind junction at speed and you can't stop in time, that's an unavoidable accident - the car could not be seen in advance, so the autonomous program couldn't have avoided the accident even if it's paying attention the whole time. You could argue that you should be going slowly enough that your stopping distance is short enough to avoid the collision, but on a lot of roads this would seriously hinder traffic flow - at some point you just have to trust that other drivers are following the rules of the road and accept that the risk can't be completely eliminated.

    Similarly, mechanical failures can't always be predicted - you're overtaking someone and their wheel comes off causing them to swerve into you. Impossible to predict so now you're left trying to reduce the seriousness of the inevitable accident. Hell, your own car may have a mechanical failure that the computer couldn't detect.

  • Gameability (Score:5, Interesting)

    by zmooc ( 33175 ) <{ten.coomz} {ta} {coomz}> on Wednesday May 07, 2014 @08:52AM (#46938531) Homepage

    One thing I believe was not mentioned in the article (though I only quickly scanned it) is that if such cars start behaving too predictably, they can be gamed. Once we know that a car will do whatever it can to avoid a collision with a pedestrian, it will be extensively gamed; cars will be tricked into doing stupid things.

    So when the decision of who to hit comes up, the only way to be reasonably safe is to determine who's not following the rules and to hit that one. Any other rules will be gamed extensively. This will become a major obstacle to the adoption of autonomous vehicles; they will probably need to drive much slower than actual humans to avoid getting into such situations continuously, especially in built-up areas where any parked car could hide an annoying car-bully trying to trick your car into acting like an idiot.

  • by ThomasBHardy ( 827616 ) on Wednesday May 07, 2014 @09:34AM (#46938925)

    Assuming a collision is unavoidable, and the choices are Car A or Car B, it's not just a matter of choosing one or the other car to hit.

    The logic should be actively working to avoid the collision until the last second. The car cannot anticipate what actions the other vehicles may take. Until the actual collision occurs, maintain efforts to minimize the velocity and/or angle of collision. Better to hit the little electric car at 15 MPH after continuing to brake than to have hit the sturdy Escalade at 40 MPH.

    Additionally, are there not some foundation rules that apply? We're taught that when in doubt, try and stay in your own lane, because hitting a car that suddenly pulled out in front of you is "less bad" than swerving into another lane and hitting a car that was obeying all of the rules. The basic scenarios need to be worked out and applied as much as possible. (not to mention the whole "oncoming car will be a much worse accident than a car traveling in the same direction as you are but at a different speed" scenario)

    I think the scenario being postulated is a bit simplistic and meant to drive an ethics debate for attention. In reality this should be about improving the programs to the point of making the right choices based on more common sense rules than those proposed.
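    As a rough numerical illustration of the "keep braking until the last moment" point: impact energy grows with the square of speed (E = 1/2 m v^2), so scrubbing off speed before an unavoidable collision matters far more than which target is chosen. The mass and speeds below are made-up numbers, not data from any real scenario:

```python
def impact_energy_joules(mass_kg: float, speed_mph: float) -> float:
    # Kinetic energy at impact: E = 0.5 * m * v^2, with speed converted from mph to m/s.
    speed_ms = speed_mph * 0.44704
    return 0.5 * mass_kg * speed_ms ** 2

# Hypothetical 1500 kg car: braking down to 15 MPH versus hitting something at 40 MPH.
print(f"15 MPH impact: {impact_energy_joules(1500, 15):,.0f} J")
print(f"40 MPH impact: {impact_energy_joules(1500, 40):,.0f} J")  # roughly 7x the energy
```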

  • by luis_a_espinal ( 1810296 ) on Wednesday May 07, 2014 @10:01AM (#46939229)

    "Patrick Lin of California Polytechnic State University explores one of the ethical problems autonomous car developers are going to have to solve: crash prioritization.

    http://en.wikipedia.org/wiki/T... [wikipedia.org]

    It is not a new notion, and the ethics of it have been more or less resolved and understood for quite some time. So I fail to see why this is new.

  • Re:A bunch of nuns? (Score:4, Interesting)

    by Immerman ( 2627577 ) on Wednesday May 07, 2014 @10:35AM (#46939589)

    Unless of course *you* are the worst-off, in which case it makes sense to ensure that someone else is worse off/dead before you are volunteered for meal duty. And the best strategy is probably to make sure it's the guy advocating loudest for "eat the weakest first".

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...