AI Transportation

Drivers Prefer Autonomous Cars That Don't Kill Them (hothardware.com) 451

"A new study shows that most people prefer that self-driving cars be programmed to save the most people in the event of an accident, even if it kills the driver," reports Information Week. "Unless they are the drivers." Slashdot reader MojoKid quotes an article from Hot Hardware about the new study, which was published by Science magazine. So if there is just one passenger aboard a car, and the lives of 10 pedestrians are at stake, the survey participants were perfectly fine with a self-driving car "killing" its passenger to save many more lives in return. But on the flip side, these same participants said that if they were shopping for a car to purchase or were a passenger, they would prefer to be within a vehicle that would protect their lives by any means necessary. Participants also balked at the notion of the government stepping in to regulate the "morality brain" of self-driving cars.
The article warns about a future where "a harsh AI reality may whittle the worth of our very existence down to simple, unemotional percentages in a computer's brain." MIT's Media Lab is now letting users judge for themselves, in a free online game called "Moral Machine" that simulates the difficult decisions an autonomous self-driving car might someday have to make.
  • News at 5... (Score:5, Insightful)

    by x0ra ( 1249540 ) on Sunday June 26, 2016 @09:35PM (#52395851)
    People value their own lives.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      People value their own lives, fuck the rest of you.

      Fixed that for you.

      • by duckintheface ( 710137 ) on Sunday June 26, 2016 @10:45PM (#52396165)

        Self-driving cars will transfer the liability from the owner of the car to the manufacturer. This is already happening; otherwise, they could never sell a car to anyone. But if the liability is held by the manufacturer, you can be sure the crash algorithm will be one that minimizes total casualties (and thus total liability).

        And notice that this is the same issue behind the Will Smith film, "I, Robot". Will's character is rescued from drowning by a robot that lets a little girl drown instead. The robot had calculated the chances of saving each and Will won the AI lottery.
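A toy illustration of the liability argument above: if the manufacturer holds the liability, the cheapest crash policy is simply the one with the lowest total expected casualties, with no special weight on the occupant. All action names and numbers below are invented for the sketch.

```python
# Hypothetical crash-policy chooser: minimize total expected casualties.
# The figures are made up; a real system would estimate them from sensors.
actions = {
    # action: (expected occupant casualties, expected bystander casualties)
    "brake_straight": (0.1, 0.4),
    "swerve_left":    (0.6, 0.0),
    "swerve_right":   (0.0, 1.2),
}

def least_liability_action(actions):
    # A liability-holding manufacturer pays for every casualty equally,
    # so it just minimizes the sum.
    return min(actions, key=lambda a: sum(actions[a]))

print(least_liability_action(actions))  # -> "brake_straight"
```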

        • by Rande ( 255599 )

          How would that be different from any other lifeguard? A trained lifeguard is always going to choose to save the person that they can rather than the person they can't.

    • Re: (Score:3, Interesting)

      by Rei ( 128717 )

      Sigh... this issue is so bloody simple to resolve.

      1. Default to a default set of morals, which include a reasonable (but not excessive) degree of self-sacrifice - based around the sort of decisions a "typical" driver would make.

      2. Make a straightforward procedure for people to customize the vehicle's morals. Just run them through a series of scenarios on the screen to see where their cutoff is. Is this a person who would mow through a couple toddlers to avoid having to drive off the road, or a person who
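As a sketch of step 2's "series of scenarios," one hypothetical way to find an owner's cutoff is a simple bisection over group sizes. The function name and the 1-20 scale are invented here, and real preferences need not be monotone; this is an illustration of the parent's idea, not anyone's actual procedure.

```python
def elicit_cutoff(would_swerve, lo=1, hi=20):
    """Smallest number of pedestrians for which the owner accepts
    self-sacrifice. would_swerve(n) -> bool answers one on-screen
    scenario; assumes answers are monotone in n."""
    while lo < hi:
        mid = (lo + hi) // 2
        if would_swerve(mid):
            hi = mid          # willing at mid; try a smaller group
        else:
            lo = mid + 1      # unwilling at mid; needs a bigger group
    return lo

# Example: an owner who would self-sacrifice only for 5+ pedestrians.
print(elicit_cutoff(lambda n: n >= 5))  # -> 5
```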

      • Re:News at 5... (Score:5, Informative)

        by PopeRatzo ( 965947 ) on Sunday June 26, 2016 @10:03PM (#52395989) Journal

        Sigh... this issue is so bloody simple to resolve.

        1. Default to a default set of morals, which include a reasonable (but not excessive) degree of self-sacrifice - based around the sort of decisions a "typical" driver would make.

        That sounds anything but simple.

      • by Scoth ( 879800 )

        "You come upon a sweetroll in the middle of the road..."

      • by fahrbot-bot ( 874524 ) on Sunday June 26, 2016 @11:04PM (#52396247)

        1. Default to a default set of morals...

        Um, what?

        Make a straightforward procedure for people to customize the vehicle's morals.

        Okay. Anyone with a goatee dies first. Child molesters and people who talk in the theater are next in line (in homage to Shepherd Book). I'm flexible after that, but the list *will* include people on cell phones who don't pay attention to their surroundings and people who take more than 5 seconds to make a drink order at Starbucks. Any other suggestions?

      • Sigh... this issue is so bloody simple to resolve.

        1. Default to a default set of morals.

        2. Make a straightforward procedure for people to customize the vehicle's morals.

        Sort of. Realize that your moral choice will affect your insurance rates. Also most companies (manufacturers, renters, even taxi services) will default to protect people other than the passenger, because they have an agreement with the passenger that they can use to help limit their liability, but they don't have that agreement with third parties. The only way that changes is if they compete on morality--but that seems unlikely.

      • Your solution kills a lot of people, both drivers and bystanders. Just how well-tested will that multitude of settings be?

      • I really hate this whole line of AI driver philosophy, because it seems to me to be largely pointless blather about nothing. We live in a world where gigahertz processors are cheap and plentiful. To a computer that can take data samples thousands of times per second, a 60 mph car is traveling at a glacial speed. What kind of crazy, concocted scenario are you coming up with where the AI controlling the car has to make a Boolean decision that kills people? It might happen, but I would argue that if it was properly programmed, it wouldn't let itself be put into this sort of situation in the first place, slowing down to appropriate speeds around people.
        • if it was properly programmed, it wouldn't let itself be put into this sort of situation in the first place, slowing down to appropriate speeds around people.

          The Need4speed mod was first developed in Central America. A software firm had been hired by a wealthy client to develop the ultimate suite of functions for "emergency kidnap evasion". It took the design limits of the vehicle to the edge, implemented spin and bump tactics for armored cars and the 'bootleg turn', re-ordered the evasion pragma to sideline small object/animal/child avoidance. A complete new class of stratagem for high speed pursuit where pursuing vehicles are recognized and evasion conditio

        • by gsslay ( 807818 )

          What kind of crazy, concocted scenario are you coming up with where the AI controlling the car has to make a Boolean decision that kills people?

          My car is driving down a busy road at a safe and steady 30mph. There is traffic in the opposite direction travelling at 40mph. The sidewalk alongside is crowded with people.

          A child suddenly runs onto the road 4 feet in front of the car. There is nothing my vehicle can do to stop in that distance. It is mechanically not possible. However, it can swerve left or swerve right. One direction means a head-on collision, the other means mowing down a dozen pedestrians. Or maybe it does nothing and strikes the

          • My car is driving down a busy road at a safe and steady 30mph. There is traffic in the opposite direction travelling at 40mph. The sidewalk alongside is crowded with people.

            A child suddenly runs onto the road 4 feet in front of the car. There is nothing my vehicle can do to stop in that distance. It is mechanically not possible. However, it can swerve left or swerve right.

            Bad example. Your car is going 44 fps, so it has 0.09 seconds to do anything about this problem. In that time, it can't stop, and it
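For what it's worth, the parent's arithmetic checks out; a quick sketch (the braking figure is an assumed round number, purely illustrative):

```python
# Time to impact at a steady 30 mph with a child 4 ft ahead.
MPH_TO_FPS = 5280 / 3600           # 1 mph = 1.4667 ft/s
speed_fps = 30 * MPH_TO_FPS        # 44 ft/s
gap_ft = 4.0

time_to_impact = gap_ft / speed_fps
print(f"time to impact: {time_to_impact:.2f} s")       # ~0.09 s

# Even instant, maximum-effort braking (~0.8 g on dry asphalt, an
# assumed figure) barely scrubs off any speed in that window.
G_FPS2 = 32.2                      # ft/s^2
speed_at_impact = speed_fps - 0.8 * G_FPS2 * time_to_impact
print(f"speed at impact: {speed_at_impact:.1f} ft/s")  # ~41.7 ft/s
```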

      • Even simpler (Score:5, Insightful)

        by TheLink ( 130905 ) on Monday June 27, 2016 @12:19AM (#52396549) Journal
        Hahaha. It's even simpler than that. Everyone seems to be making the assumption that the cars will be such driving geniuses. That's not going to happen for quite a long while.

        0) We all know that stopping in the middle of the highway is dangerous, BUT the way the laws are written in most countries, it's practically always your fault if you drive into the rear of another vehicle, especially if it didn't swerve into your path and merely braked suddenly, or worse, was stationary for some time.

        1) Thus, for legal and liability reasons, the robot cars will strictly obey all convincing posted speed limits (even ones that are stupidly slow by some mistake, or by some prankster), and will stick to speeds at which they can brake in time to avoid collisions, or at least fatal ones. Whichever is slower (see the sketch after this comment).

        2) In most danger situations the robot cars will brake and try to come to a stop ASAP, all while turning on their hazard lights, which shouldn't be too difficult at those speeds.

        3) If people die because of tailgating, it's the tailgater's fault. Same if the driver behind doesn't stop.

        4) If there are hardware/software failures, then it's some vendor's fault.

        5) If braking won't avoid the problem even at "tortoise speeds", in most cases fancy moves wouldn't either. In the fringe cases where fancy moves would have helped but braking wouldn't AND it would be the robot car's fault if it braked, the insurance companies would be more than willing to take those bets.

        The odds of the car being designed to do fancier moves to save lives are practically zero. If I was designing the car I wouldn't do it - imagine if the car got confused and did some fancy moves to "avoid collision" and killed some little kids. In contrast, if it got confused and came to a stop ASAP and any little kids were killed, it would more likely be someone else's fault.

        If you are a human driver/cyclist/motorcyclist you better not tailgate such cars.

        Look at the Google car accident history: most of the accidents were due to other drivers. Perhaps I'm wrong, but my guess is it's because of "tailgating". Those drivers might still believe the AI car was doing it wrong, but the law wouldn't be on their side.
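A minimal sketch of point 1's "whichever is slower" rule, with assumed deceleration and lag figures (this illustrates the parent's policy, not any vendor's actual logic):

```python
import math

def max_safe_speed(clear_distance_m, decel_mps2=7.0, reaction_s=0.2):
    """Highest speed (m/s) from which the car can stop within
    clear_distance_m, allowing reaction_s of sensing/actuation lag.
    Solves clear = v*t + v^2/(2a) for v; all figures are assumptions."""
    a, t = decel_mps2, reaction_s
    return a * (-t + math.sqrt(t * t + 2.0 * clear_distance_m / a))

def commanded_speed(posted_limit_mps, clear_distance_m):
    # Obey the posted limit or the braking-safe speed, whichever is slower.
    return min(posted_limit_mps, max_safe_speed(clear_distance_m))

# 30 mph (~13.4 m/s) posted limit with 20 m of clear road ahead:
print(commanded_speed(13.4, 20.0))  # -> 13.4 (the limit binds here)
```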
        • 2) In most danger situations the robot cars will brake and try to come to a stop ASAP, all while turning on their hazard lights, which shouldn't be too difficult at those speeds.

          Turning on your hazard lights while driving is illegal in most states, and for good reason. Did you know that many makes and models use the exact same lights for your hazard lights as the turn and/or brake lights? And guess which behavior wins out? The hazard lights, of course. Do you know when the hazard lights are supposed to be used? When you're stuck on the side of the road or stalled in traffic. Not for "Oh no it's raining hard I want to make sure the people behind me notice the bad weather" or "he

    • Re: (Score:2, Insightful)

      by NotInHere ( 3654617 )

      Yeah. In fact, SUVs are well known to cause lots of damage to the normal vehicle in SUV vs. non-SUV crashes, while the SUV suffers only minor damage. The passengers of the normal vehicle are much more likely to die than the SUV's passengers. So people are already making this choice now.

    • by eth1 ( 94901 )

      People value their own lives.

      Yeah... my first reaction was "duh, just look at all the people that buy ginormous SUVs to protect themselves at the expense of everyone they might hit."

  • No one will ever program an autonomous vehicle to choose one life over another. That's a lawsuit waiting to happen, if not an outright murder charge.

    • by swalve ( 1980968 )
      No, of course not. This problem is solved. Program the autonomous driver to follow the same rules that we mere humans have to.
      • And where exactly is this "rule" that tells someone that the life of person A is more important than the life of person B?

    • It isn't even that. Why would you add processing time? Thinking about all this BS would end up killing more people in that extra 600 milliseconds it takes to think through all these scenarios. Just stop the damn car!

      I don't want a car that kills people while it is busy thinking about whether it is ethical to stop.

  • That's normal (Score:5, Insightful)

    by rrohbeck ( 944847 ) on Sunday June 26, 2016 @09:38PM (#52395863)

    Save the environment, reduce carbon emissions, save water, reduce debt... unless it affects me financially.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      I thought the point of you saving water was so I could use more.

  • The greater good is something people can be hypothetically happy about, unless it means they have to die.
    Nobody is going to choose to pay for a machine that would rather kill them than protect them.
  • Participants also balked at the notion of the government stepping in to regulate the "morality brain" of self-driving cars.

    This statement makes no sense to me. What do these people want, free market morality? The car should save the richest people? Who the hell else but the government is going to standardize what the right action is for a robot to take in that sort of scenario?

  • by turkeydance ( 1266624 ) on Sunday June 26, 2016 @09:57PM (#52395951)
    They're passengers. The drivers can't be killed because there are none.
    • The pictograms in the morality assessment were a bit odd, considering most cars can survive a head-on collision with a concrete wall. Also, younger adults are more easily repairable and more resilient physically than older adults. I understand that's not the point of the morality assessment, but you can't just ignore those variables. What kind of speeds are the cars traveling in an urban setting that they can't stop in that amount of time? Are we abandoning "pedestrian airbags"? Are passenger airbags a thi
  • by PPH ( 736903 ) on Sunday June 26, 2016 @09:57PM (#52395961)

    If we could get an AI that can kill for a parking space, I'd be fine with that.

    • If we could get an AI that can kill for a parking space, I'd be fine with that.

      If you go through the quiz, many of the situations involve people crossing the street against the hand... sometimes this was the only difference between the two groups, and you had to choose which to mow down. This reminds me a lot of Seattle, too. If people on foot knew that a driverless car isn't going to stop when they're crossing illegally, they might think twice about stepping off that curb.

    • You'll need to program your car to hunt down Murray and the rest of the city council. Only then will their war on cars come to its inevitable conclusion.

      But hey, if you want to let them keep replacing parking places with "parklets", it's on you. We're certainly not as hip over here on the Eastside, but at least we can find a parking spot.

    • by Ichijo ( 607641 )

      Why does a self-driving car need a parking space?

      • Well, realistically, the car does need to park somewhere if the occupant is going to be working for eight hours, or shopping for over ten minutes.

        But the autonomous car can drop someone off at work or the store, then drive a couple miles away to a central parking facility and wait to be summoned. The future parking facility could even be mechanized to rack-em-and-stack-em [smartparkingsolution.com] to maximize space.

  • So most people think that it's good to sacrifice a passenger in order to save many pedestrians, but they wouldn't want the car to sacrifice them. It's clear then that if they were the driver in their own car, they would choose to save themselves rather than the 10 pedestrians they are about to mow down.

    There are two future possibilities then:

    1. Self-driving cars will sacrifice the driver, which means they will be programmed to be more ethical than they are today.
    2. Self-driving cars will sacrifice the pedes

    • ...Either way we're not any worse off, so what's the problem?...

      It gets interesting when insurance is thrown into the mix. Who pays the insurance premiums for autonomous cars? The owner shouldn't have to because the owner is not the driver.

      However, if the owner chooses an autonomous car that targets pedestrians, then perhaps the owner should pay at least part of the insurance premiums.

      • Who pays the insurance premiums for autonomous cars? The owner shouldn't have to because the owner is not the driver.

        How often do you drive your HOUSE?

  • ...Participants also balked at the notion of the government stepping in to regulate the "morality brain" of self-driving cars....

    I dislike government regulation as much as (maybe more than) the next person, but....

    Should all autonomous cars, regardless of make, have the same morality rules regarding who gets killed in an accident?

    Or will I, as a pedestrian, need to be able to recognize the various brands of autonomous cars, know the morality of each, and decide which direction to jump in when one of those things is coming at me....

  • I'm just waiting for the next movie where the main character is being chased down either by a draconian government or some super hacker. The main character clearly knows the risk, so he's driving a 1969 Mustang, but suddenly, all the cars on the freeway start chasing him down and trying to run him off the road.

  • I demand the same of my autocar thingie mabob
  • Can I have the car that doesn't crash at all, instead? Guess I'll have to buy foreign again.
  • So, given the randomness and unpredictability of any specific situation, and given that any attempt at anything can fail, backfire, or be otherwise incomplete, living individuals prefer that effort be focused on survival rather than altruism.

    You know, I don't often get to say that those around me make sensible decisions, but in this case, I'm overjoyed to say that finally, possibly for the first time in human history, there's actually a consensus regarding the one and only sensible choice!

    $50 follow-up: w

  • Other than the occupants of the vehicle itself, isn't everything else just an obstacle to it?

    Seems to me if it used that logic and protected the only known life forms (i.e., the ones in the vehicle), we're fine. Don't give it the information to create the dilemma. Can it be sure that a person is a person 100% of the time? If not, then the only people whose lives it knows are in its hands are the ones inside it.

  • This debate is a red herring. An automated car would use its software and resources to avoid hitting pedestrians or other cars, but in the event it cannot avoid a collision, the safety of the passengers would come down to the construction and safety features of the car itself.

    This is what we have now and it won't change once the driving is automatic. The physical structure of the car and things like seat belts and airbags will be responsible for protecting the occupants as best it can, but of course the

  • A self-driving car should always be able to judge its stopping distance to a high degree of accuracy. None of these scenarios should ever happen. The car shouldn't be driving that fast to begin with.
  • A self-piloted car is less likely to be in a situation like that than a car driven by a human. A self-piloted car will only go where it knows it can safely go. And it will be surrounded by a network of sensors informing it of what is coming ahead.
  • First, the cases where the driver needs to be sacrificed involve either fantastically contrived edge cases or ones where the other party is a moron who has gotten into a position where Darwin needs to take them out.

    Nearly every case I can see has options something like: avoid the pedestrian by driving into the metal-spear tree artwork. First, the car should see the pedestrian long before and come to a gentle stop. If the pedestrian jumps out from concealment, then they deserve
  • by coldsalmon ( 946941 ) on Monday June 27, 2016 @08:57AM (#52398079)

    This is the same as the Trolley Problem, a famous philosophical dilemma, first proposed in 1967: https://en.wikipedia.org/wiki/... [wikipedia.org]
    Basically, a runaway trolley is going to kill five people. You can either do nothing and let the trolley kill them, or pull a lever to switch it to another track on which it will kill only one person. There are many variations, including one in which you push a fat man onto the tracks to stop the trolley. Philosophers have written a LOT about it. Here are some humorous variations:
    http://existentialcomics.com/c... [existentialcomics.com]
    https://xkcd.com/1455/ [xkcd.com]
    http://www.mcsweeneys.net/arti... [mcsweeneys.net]

  • A computer should serve its owner's interests with absolute priority over the interests of all other parties. Period. If it's my computer -- my agent -- then I am #1. By default (without my interaction) it should allow a million children to slowly burn to death if it means that I get to skip an ad. (That's a ludicrous example, but if people want to explore the edge cases of the policy I'm advocating, then there you go.)

    You're going to find that this strongly favors protecting other people anyway. The "someone must die, pick who" scenario is extremely rare to the point of non-existence, compared to the routine "avoid having any collision at all, so that no damage or injury happens" scenario. (Stop smoking before you drive yourself crazy with fear of being hit in the head by a meteorite!)

    That's not a global policy; that's just the policy for my computer. I don't mean I'm more important than you; I mean that to my computer I am more important than you. And your computer should serve you, too!
