Transportation

When Mercedes-Benz Starts Selling Self-Driving Cars, It Will Prioritize Driver's Safety Over Pedestrian's (inverse.com) 367

From a report on Inverse: When Mercedes-Benz starts selling self-driving cars, it will choose to prioritize driver safety over pedestrians', a company manager has confirmed. The ethical conundrum of how A.I.-powered machines should act in life-or-death situations has received more scrutiny as driverless cars become a reality, but the car manufacturer believes that it's safer to save the life you have greater control over. "You could sacrifice the car. You could, but then the people you've saved initially, you don't know what happens to them after that in situations that are often very complex, so you save the ones you know you can save," said Christoph von Hugo, Mercedes' manager of driver assistance systems. "If you know you can save at least one person, at least save that one. Save the one in the car. This moral question of whom to save: 99 percent of our engineering work is to prevent these situations from happening at all. We are working so our cars don't drive into situations where that could happen and [will] drive away from potential situations where those decisions have to be made." As long as they are better at driving and safety than humans, it is progress, in my opinion.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward on Sunday October 16, 2016 @11:36AM (#53085399)

    I'm working on self-walking pedestrian Gatling guns. Guess what *it* prioritizes?

    • I'm working on self-walking pedestrian Gatling guns. Guess what *it* prioritizes?

      These will be built into the new self-driving BMWs, enabling anesthesiologists to rule the Earth.

  • by Wycliffe ( 116160 ) on Sunday October 16, 2016 @11:42AM (#53085429) Homepage

    99% of the time, the correct action is to stop. If a crash is unavoidable, though, and you are solely concerned about the safety of the passenger, then it is safer for the passenger to hit a soft target like a crowd of people than something hard like a telephone pole. The passenger is much more likely to survive hitting a person than hitting a brick wall, but a human will usually choose the wall.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Are you so sure? SUVs are very popular in the US, and they are designed so that when they hit a normal car, they strike it higher up, where the car is "softer" than down low, where the crumple zone is. Unfortunately the soft part is partly made up of the occupants of the normal car.

      So people already have decided that they like the "crowd" variant and not the "brick wall" one.

    • by AmiMoJo ( 196126 ) on Sunday October 16, 2016 @12:03PM (#53085545) Homepage Journal

      What they are really saying here is that the car is designed with passenger safety in mind, and the AI won't even try to consider pedestrians and other drivers. It will just stop as quickly as possible and avoid things that might hurt the occupant, like most humans given a fraction of a second to act on mostly instinct would.

      The trolley problem relies on there being sufficient time to make a decision, but not enough to take any other action. It's unrealistic and was only ever intended as a thought experiment.

      • The trolley problem, to me, is incomplete. If only strangers are tied to the tracks (if it was a choice between a loved one and 10 strangers of course I would choose to save the loved one), then I would do the thing that lands me in the least amount of trouble with the authorities (I do not want to go to jail for a stranger).

        • I would do the thing that lands me in the least amount of trouble with the authorities (I do not want to go to jail for a stranger).

          Many versions of the "trolley problem" specify that no other person will be aware of your decision, so you shouldn't have to worry about the authorities, unless you blab about what you did.

          When I first heard the trolley problem, it seemed obvious to me to throw the switch to kill one guy rather than allowing five to die through inaction. I was surprised to learn that means I am a psychopath. It still makes no sense to me that so many normal people believe that "inaction" somehow absolves them of moral culpability.

        • by AmiMoJo ( 196126 )

          From a legal standpoint it's best to do nothing. If both choices are bad, that's the way to avoid liability. And that is precisely what the self-driving car manufacturers will do.

            From a legal standpoint it's best to do nothing. If both choices are bad, that's the way to avoid liability. And that is precisely what the self-driving car manufacturers will do.

            I'm not sure that generally works for most companies. E.g.:

            DESIGNER 1: "Hmm... should we put a guard on that spinning blade on our product so someone doesn't get cut?"

            DESIGNER 2: "Well, but if we put the guard on, doesn't that mean someone could stick his finger over here and get the whole finger chopped off?"

            DESIGNER 1: "True, but the guard should at least make it clear that we tried to prevent injury."

            DESIGNER 2: "But people could still get injured badly, and there's nothing we can do to prevent

      • The trolley problem relies on there being sufficient time to make a decision, but not enough to take any other action. It's unrealistic and was only ever intended as a thought experiment.

        Yes, the trolley problem is obviously unrealistic in almost all of its forms. However, its purpose was to tease out an ethical dilemma and perhaps expand that "split-second" decision to allow a person to think deeply about the most "moral" choice.

        Just because an AI car can be programmed to act like a human would act in a split-second decision-making process (i.e., "slow down fast, avoid stuff where possible") doesn't mean that manufacturers will avoid getting into legal trouble if the car ends up mowing

    • The article's logic is flawed. If everyone (car, bicycle, pedestrian) is following the laws of the road and the above logic is used, this could be considered homicide, almost to the point of being premeditated. I bet if this logic is used some countries will ban the sale of said vehicles.

      The thing that should be considered is that if a collision is unavoidable, the person in the vehicle will have WAY more protection than a bicyclist or pedestrian. In other words, the car should hit the wall, not the pedestrian.

      Mercedes ne

      • by Imrik ( 148191 )

        If everyone is following the laws of the road, an accident would be pretty unlikely. The problem is that this is rarely the case.

        • True, accidents are usually caused by a driver, pedestrian, or bicyclist not following road rules and/or not paying attention. This all works well if only vehicles are on the roads, but that is not usually the case in most major cities. Plowing into a pedestrian or bicyclist with a two-ton vehicle because the driver-assistance logic has been told to do exactly that seems questionable. I guess the problem here is whether we should allow the programming to make the decision to selectively edit the gene pool (for better or worse).
  • by swb ( 14022 ) on Sunday October 16, 2016 @11:43AM (#53085433)

    S-class & AMG Models: Maximum driver and driver property prioritization.

    E-class models: Minor driver prioritization, slightly better than 50/50 odds

    C-class models: Pedestrian prioritization

  • Logical (Score:2, Insightful)

    by sinij ( 911942 )
    Sounds logical to me. Otherwise, why would I pay Mercedes-Benz to save other people? I am not an altruist and don't aspire to be one in life-and-death situations.
    • Sounds logical to me. Otherwise, why would I pay Mercedes-Benz to save other people? I am not an altruist and don't aspire to be one in life-and-death situations.

      Given the number of accidents that have been caused, and drivers killed, by simple things like trying to avoid an animal crossing the road, I think your comment is well and truly off base.

      Most of the time human drivers will try to avoid death and don't get as far as thinking of their own lives in the process.

  • by Wrath0fb0b ( 302444 ) on Sunday October 16, 2016 @11:48AM (#53085457)

    Saving the occupants of the car is the only choice that makes sense in the context of potentially malicious input. For instance, if Mercedes stated that their car would swerve into a tree instead of hitting a crowd of 5 pedestrians, what's to stop me and 4 friends from jumping out in front of the car just to laugh as it crashes itself to "save" us?

    We have got to start embedding deep into the mind of every software engineer that any information from outside your system can be manipulated to cause maximum damage or disruption. It is your system's responsibility to safely handle malformed and malicious inputs. Until this becomes a common mode of thought, expect more IoT botnets, SQL injections, buffer overflows, DOS amplifiers and the entire realm of "oh crap someone somewhere could be evil, I only engineered for the happy case".
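    The "never trust outside input" rule the parent describes can be shown in a toy sketch. Everything here is illustrative and made up for this comment (SensorMsg, validate, and the plausibility bounds are not any real automotive API): the idea is simply to reject malformed or physically implausible readings before a planner ever acts on them.

```python
# Toy sketch of defensive input handling: treat every sensor/network
# message as untrusted until it passes basic plausibility checks.
from dataclasses import dataclass

@dataclass
class SensorMsg:
    obstacle_distance_m: float  # claimed distance to nearest obstacle
    obstacle_speed_mps: float   # claimed closing speed

def validate(msg: SensorMsg) -> bool:
    """Return True only for physically plausible readings; everything
    else is rejected instead of being acted upon."""
    if not (0.0 <= msg.obstacle_distance_m <= 500.0):
        return False  # negative or absurd distance: malformed input
    if not (-100.0 <= msg.obstacle_speed_mps <= 100.0):
        return False  # impossible speed: malformed or malicious input
    return True
```

The same habit of mind, applied to databases, is what parameterized queries are for; applied to network services, it is what length checks and rate limits are for.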

    • any information from outside your system can be manipulated to cause maximum damage or disruption

      So true

    • For instance, if Mercedes stated that their car would swerve into a tree instead of hitting a crowd of 5 pedestrians, what's to stop me and 4 friends from jumping out in front of the car just to laugh as it crashes itself to "save" us?

      How about the same thing that stops you dropping rocks on cars from a bridge over a road? You know, basic ethics and the consequences of breaking the law. In your example the problem is you and your psychopathic friends, not the decision made by the car. The best argument for the self-preserving algorithm is that this is what a human driver will instinctually do, so it is no worse in causing deaths than a human (and given the far faster reaction time almost certainly far better), and that nobody will ever bu

      • I think the point is that there are unethical people and lawbreakers. If the car cannot handle them correctly by identifying that the danger they face is one they created by their own incorrect behavior, then it is deficient.

        In other words, humans have an implicit understanding that "person jumping out in front of traffic" and "pedestrian minding their own business who is in the path of an accident" are in two vastly different ethical positions. Colloquially, "even a dog knows the difference between being kicked and being stumbled over."

    • what's to stop me and 4 friends from jumping out in front of the car just to laugh as it crashes itself to "save" us?

      This example is one of "malicious behavior", which is an issue for the courts. With any luck the "Just for laughs" comment would reach the judge.

      It is not an example of malicious input. The car correctly sensed a risk to human life/health and correctly identified the best alternative to maintain its "prime directive". The vehicle's decision would have been exactly correct (presuming there were no better alternatives, such as stopping). An example of "malformed/malicious input" would be when the side of [google.com]

      • The car correctly sensed a risk to human life/health and correctly identified the best alternative to maintain its "prime directive".

        It correctly sensed it but it did not accurately assess it. A risk to the life of a human who is a pedestrian innocently minding his own business is not ethically equivalent to the life of a human who jumps out in front of traffic, either maliciously or out of recklessness.

        In the US, the aphorism is "even a dog knows the difference between being kicked and being stumbled over". Intent & responsibility are things we all implicitly understand, but which are lost when you say that one should swerve into a t

    • I don't see why there has to be a single answer to this. There are already laws in place governing traffic flow. Violators of those laws (people running red lights, jaywalkers, etc) should be given lower priority in the "attempt to save" ranking.
      • If the vehicle calculates it has to leave the road to avoid an accident, but its trajectory would make it hit pedestrians on the sidewalk, protect the pedestrians and do not avoid the accident. Basically, accident avoidance in this case requires the vehicle to
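      The "attempt to save" ranking proposed a couple of comments up could be sketched as a toy score. This is purely illustrative (harm_score, the weights, and the options are invented for this example, not any manufacturer's logic): parties who created the danger by breaking traffic law count less heavily than law-abiding ones.

```python
# Toy sketch: pick the maneuver minimizing a law-weighted harm score.
# All weights and options are invented for illustration.

def harm_score(option):
    """Lower is better. Law-abiding parties weigh more heavily than
    parties who created the danger by violating traffic rules."""
    weight = {"lawful": 1.0, "violator": 0.25}
    return sum(weight[party] * risk for party, risk in option["risks"])

# Each option lists (party, risk-of-harm) pairs it would impose.
options = [
    {"name": "brake_in_lane",  "risks": [("lawful", 0.6)]},
    {"name": "swerve_to_curb", "risks": [("lawful", 0.1), ("violator", 0.9)]},
]
best = min(options, key=harm_score)
```

Whether the law would ever permit encoding such a weighting, rather than treating all lives equally, is of course exactly the dispute in this thread.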
  • by olsmeister ( 1488789 ) on Sunday October 16, 2016 @11:51AM (#53085477)
    There really is no other logical way to approach this. If they went the other way and prioritized the pedestrian, a psychopath could sprint back and forth across a busy freeway, causing accident after accident and injuring or killing lots of innocent passengers.
    • There really is no other logical way to approach this. If they went the other way and prioritized the pedestrian, a psychopath could sprint back and forth across a busy freeway, causing accident after accident and injuring or killing lots of innocent passengers.

      I agree with the first statement, but your argument does not hold water because with this priority setting that same psychopath can now just drive back and forth setting up situations in which the car mows down pedestrians. The problem here is that you have a psychopath; it has nothing to do with the decisions made by the car.

      • The psychopath can already do this.
        No need for self driving cars. So in that sense nothing has changed.
      • The argument is that the car (or generally the humans) need not accord the same ethical weight to running over a person who recklessly or maliciously jumps out in front of traffic as to a pedestrian that happened to be unfortunate and in the path of an accident.

        Of course the psycho (or just mental) person can still do it. The question is whether or not I'm required to risk my own limb to save the psycho or whether his risk is his own doing.

    • by uncqual ( 836337 )

      Well, until that old guy who clings to his 1964 Mustang comes along -- oops...

  • Seriously, it makes sense, as pedestrians and cyclists should be looking out for themselves as part of the activity of walking.

    Good to see some thought going into this.

  • Self-Driving Car Ethics [smbc-comics.com]

    • by afgam28 ( 48611 )

      I disagree that a utilitarian car should sacrifice the pedestrian.

      Every car on the road creates some amount of risk to the safety of the general public (which is why we're having this discussion in the first place) whereas the risk that pedestrians create for others is negligible.

      Programming cars to always sacrifice the pedestrian would send a strong message to society that it's safer to be a passenger in a self-driving car than a pedestrian, and encourage people to create more risk (which is then offloaded

  • If cars prioritized pedestrian safety over that of the driver, I can see a "challenge" developing where the same kind of morons who get burned in those "how much cinnamon can you swallow" games step in front of self-driving cars at the last second to see how close they can come to getting killed and/or how much damage they can inflict on a vehicle forced to avoid them.

    They'd probably call it "Bullfighting", or something similar.

    • by afgam28 ( 48611 )

      This game has been around for a long time, when I was in school it was called "chicken". Adults are programmed to stop for children, and it's not considered a bug or a design flaw. The people playing the game are the ones that need to change, and there are ways to do that without having cars run over people on purpose.

  • For more than a hundred years, millions of cars have shared the roads, driven by people who prioritize their own safety in an emergency, because self-preservation is part of human nature. Around that, codes and conventions have been built. That assumption is baked into every piece of existing infrastructure and equipment, and into the way the human drivers who will soon share the roads with AIs react to circumstances and the environment. It would actually be unsafe to turn that assumption around for

  • The side effect of your Mercedes choosing to impact the young mother with her baby stroller instead of the nearby telephone pole (ouch! that could hurt!) is that the customer's testicles fall off, and his dick never rises for the rest of his miserable, injury-free life (female customers sensibly snipped the wires on this pathetic contraption long ago).

    The Mercedes survivor can always tell his disappointed women, "not MY fault, the Mercedes made me do it". Mercedes! Modestly dressed women cross themselves.

  • >> As long as they are better at driving and safety than humans, it is progress, in my opinion.

    Well, in my opinion, everyone seems to be too quick to presume all automated cars are necessarily safer than all drivers.
    It's probably actually true for some people in the US at least, but not everyone. On my commute I frequently see people (especially women) texting and driving at the same time, even on the freeway. For example, on Friday evening in rush hour I saw a lone female Lexus driver (illegally) in H

  • Who would you prioritize, and why should the others not hate your guts and call you names as a result?
  • If I paid for the car you can be damn sure I want it to prioritize my safety over some outsider's. Particularly since they may have caused the problem themselves (assuming that the car AI has very high safety). Imagine if it was the opposite: anybody could jump in front of the car and laugh as it swerves and crashes into a tree to avoid you...
  • Since the occupants of the vehicle will have no input (except possibly as witnesses, but probably worse witnesses than the vehicle's instruments and recorders), there will be nobody in the frame for liability except those who were killed or injured by the collision and the organisation who defined the vehicle's behaviour in that situation.

    If that court finds there was any way that the vehicle makers could have avoided the "accident", they will assign liability and costs. So we can expect that on the one ha

  • by slazzy ( 864185 )
    People won't buy a car that will drive itself off a cliff if someone tosses a human-shaped meatbag on the road.
  • by holophrastic ( 221104 ) on Sunday October 16, 2016 @07:06PM (#53087521)

    "As long as they are better at driving and safety than humans, it is a progress, in my opinion."

    I'm not convinced. Right now, when people die in car crashes and I can blame a human driver for something, it's totally understandable. When humans die at the hands of other humans, and especially through the errors of other humans, that's just a reality that I can comprehend and accept.

    But when a self-driving car is ultimately responsible for killing a human, that's a different thing entirely. That's a lot closer to a humans-get-killed-at-random scenario. That's not something that I can accept.

    It's actually even worse than that. It's like a neighbourhood pet dog kills a neighbour. If your typically-well-behaved-and-friendly boxer suddenly kills your neighbour's teenager one day, what happens? Look, your dog killed one neighbour over the course of thirty years of you owning dogs. Most wild animals are far more dangerous than that. But I think we all know what happens. I think your dog is dead pretty quickly -- even if that teenager provoked your dog; even if it was a lot; even if your dog was defending its own life.

    I accept, today, that millions of humans driving millions of cars on millions of roads kill thousands of people every year. I'm not happy about it, but I accept it as a part of humans being free to not be perfect. But I don't think that I'd be accepting of millions of self-driving cars on millions of roads killing dozens of people every year.
