When Mercedes-Benz Starts Selling Self-Driving Cars, It Will Prioritize Driver's Safety Over Pedestrian's (inverse.com) 367
From a report on Inverse: When Mercedes-Benz starts selling self-driving cars, it will choose to prioritize driver safety over pedestrians', a company manager has confirmed. The ethical conundrum of how A.I.-powered machines should act in life-or-death situations has received more scrutiny as driverless cars become a reality, but the car manufacturer believes it is safer to save the life you have greater control over. "You could sacrifice the car. You could, but then the people you've saved initially, you don't know what happens to them after that in situations that are often very complex, so you save the ones you know you can save," said Christoph von Hugo, Mercedes' manager of driver assistance systems. "If you know you can save at least one person, at least save that one. Save the one in the car. This moral question of whom to save: 99 percent of our engineering work is to prevent these situations from happening at all. We are working so our cars don't drive into situations where that could happen and [will] drive away from potential situations where those decisions have to be made." As long as they are better at driving and safety than humans, it is progress, in my opinion.
What a coincidence (Score:5, Funny)
I'm working on self-walking pedestrian Gatling guns. Guess what *it* prioritizes?
Re: (Score:3)
I'm working on self-walking pedestrian Gatling guns. Guess what *it* prioritizes?
These will be built into the new self-driving BMWs, enabling anesthesiologists to rule the Earth.
The fringe cases are still going to be hard (Score:5, Informative)
99% of the time, the correct action is to stop. If a crash is unavoidable, though, and you are solely concerned about the safety of the passenger, then it is safer for the passenger to hit a soft target, like a crowd of people, than something hard, like a telephone pole. The passenger is much more likely to survive hitting a person than a brick wall, but a human will usually choose the wall.
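The physics is just stopping distance: for a stop from speed v over crush distance d, the average deceleration is a = v^2/(2d). A back-of-the-envelope sketch in Python (the crush distances are made-up illustrative numbers, not crash-test data):

    # Average deceleration a = v^2 / (2*d) for a stop from speed v over
    # an effective crush distance d. Distances are illustrative only.
    v = 50 / 3.6   # 50 km/h in m/s
    g = 9.81       # m/s^2

    for target, d in [("rigid wall, ~0.5 m of crush", 0.5),
                      ("deformable target, ~2 m effective", 2.0)]:
        a = v ** 2 / (2 * d)
        print(f"{target}: average deceleration ~{a / g:.0f} g")
    # rigid wall, ~0.5 m of crush: average deceleration ~20 g
    # deformable target, ~2 m effective: average deceleration ~5 g

Same speed, four times the stopping distance, a quarter of the average load on the occupant, which is the whole (grim) point.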
Re: (Score:2, Interesting)
Are you so sure? SUVs are very popular in the US, and they are designed such that when they hit a normal car, they strike it higher up, where the body is "softer" than down low, where the crumple zone is. Unfortunately, the soft part is partly made up of the occupants of the normal car.
So people already have decided that they like the "crowd" variant and not the "brick wall" one.
Re:The fringe cases are still going to be hard (Score:4, Interesting)
What they are really saying here is that the car is designed with passenger safety in mind, and the AI won't even try to consider pedestrians and other drivers. It will just stop as quickly as possible and avoid things that might hurt the occupant, like most humans given a fraction of a second to act on mostly instinct would.
The trolley problem relies on there being sufficient time to make a decision, but not enough to take any other action. It's unrealistic and was only ever intended as a thought experiment.
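Read as a control policy rather than an ethics engine, that's just "brake hard, then steer for least occupant risk." A minimal sketch in Python (the option names and risk numbers are entirely hypothetical, not anything Mercedes has published):

    # Sketch of the policy described above: always brake hard, then pick
    # the steering option with the lowest predicted risk to the occupants.
    # No trolley-style weighing of lives happens anywhere in here.
    candidate_options = {
        # option: assumed probability of serious occupant injury
        "stay_in_lane": 0.30,
        "swerve_left_into_pole": 0.60,
        "swerve_right_onto_shoulder": 0.05,
    }

    def choose_action(options):
        steer = min(options, key=options.get)  # least risk to occupants
        return ("full_brake", steer)

    print(choose_action(candidate_options))
    # -> ('full_brake', 'swerve_right_onto_shoulder')

Pedestrians and other drivers enter the picture only insofar as hitting things is also bad for the occupants, which is exactly why the trolley problem never comes up in such a design.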
Re: (Score:2)
The trolley problem, to me, is incomplete. If only strangers are tied to the tracks (if it was a choice between a loved one and 10 strangers of course I would choose to save the loved one), then I would do the thing that lands me in the least amount of trouble with the authorities (I do not want to go to jail for a stranger).
Re: (Score:2)
I would do the thing that lands me in the least amount of trouble with the authorities (I do not want to go to jail for a stranger).
Many versions of the "trolley problem" specify that no other person will be aware of your decision, so you shouldn't have to worry about the authorities, unless you blab about what you did.
When I first heard the trolley problem, it seemed obvious to me to throw the switch to kill one guy rather than allow five to die through inaction. I was surprised to learn that this means I am a psychopath. It still makes no sense to me that so many normal people believe that "inaction" somehow absolves them of moral culpability.
Re: (Score:2)
From a legal standpoint it's best to do nothing. If both choices are bad, that's the way to avoid liability. And that is precisely what the self-driving car manufacturers will do.
Re: (Score:2)
From a legal standpoint it's best to do nothing. If both choices are bad, that's the way to avoid liability. And that is precisely what the self-driving car manufacturers will do.
I'm not sure that generally works for most companies. E.g.:
DESIGNER 1: "Hmm... should we put a guard on that spinning blade on our product so someone doesn't get cut?"
DESIGNER 2: "Well, but if we put the guard on, doesn't that mean someone could stick his finger over here and get the whole finger chopped off?"
DESIGNER 1: "True, but the guard should at least make it clear that we tried to prevent injury."
DESIGNER 2: "But people could still get injured badly, and there's nothing we can do to prevent
Re: (Score:3)
The trolley problem relies on there being sufficient time to make a decision, but not enough to take any other action. It's unrealistic and was only ever intended as a thought experiment.
Yes, the trolley problem is obviously unrealistic in almost all of its forms. However, its purpose was to tease out an ethical dilemma and perhaps expand that "split-second" decision to allow a person to think deeply about the most "moral" choice.
Just because an AI car can be programmed to act like a human would act in a split-second decision-making process (i.e., "slow down fast, avoid stuff where possible") doesn't mean that manufacturers will avoid getting into legal trouble if the car ends up mowing someone down.
Re: (Score:2)
The article's logic is flawed. If everyone (car, bicycle, pedestrian) is following the laws of the road and the above logic is used, this could be considered homicide, almost to the point of being premeditated. I bet that if this logic is used, some countries will ban the sale of said vehicles.
The thing that should be considered is that if a collision is unavoidable, the person in the vehicle will have WAY more protection than a bicyclist or pedestrian. In other words, the car should hit the wall, not the pedestrian.
Mercedes ne
Re: (Score:2)
If everyone is following the laws of the road, an accident would be pretty unlikely. The problem is that this is rarely the case.
With Mercedes, I expect it by model (Score:5, Funny)
S-class & AMG Models: Maximum driver and driver property prioritization.
E-class models: Minor driver prioritization, slightly better than 50/50 odds
C-class models: Pedestrian prioritization
Re:With Mercedes, I expect it by model (Score:5, Funny)
If I'm driving an S class, I expect a little more granularity than that. I need to know that, given the possibility of hitting one or more people in a crowd, the avoidance decision process will go Wealthy > White People > Males > Everyone else.
Re: (Score:3)
A-Class: Doesn't matter, it's upside down anyway.
Before most people's time, but unless you inherited that ID you're old enough to remember what I'm talking about.
Logical (Score:2, Insightful)
Re: (Score:2)
Sounds logical to me. Otherwise, why would I pay Mercedes-Benz to save other people? I am not an altruist and don't aspire to be one in life-and-death situations.
Given the number of accidents that have been caused, and drivers killed, by simple things like trying to avoid an animal crossing the road, I think your comment is well and truly off base.
Most of the time human drivers will try to avoid death and don't get as far as thinking of their own lives in the process.
Re: (Score:2)
If you're worried to the point of stupidity/paralysis ("be prepared to be sued out of existence") then you've already chosen to never drive even a manually-operated car, because you were overwhelmed by your fears. Most people don't have that attitude going on, so they already drive cars anyway, where they face constant daily risk of injuring or even killing pedestrians.
And some of them end up occasionally doing it, to many people's grief. For whatever reason, society didn't give up and decide the existence of cars wasn't worth it.
Re: (Score:3)
That will be what's going to be investigated.
Did you manually drive onto the wrong side of the road and then enable the AI just to see what would happen, resulting in the car running over some pedestrian in an attempt to get off the road?
Most likely you will be liable.
Did the AI fuck up and take a wrong turn? Probably Mercedes is at fault.
The AI will never be held accountable, since it's just a program written by people.
There is not going to be a single answer for who is at fault for every case.
Resiliency in the face of malicious inputs (Score:5, Interesting)
Saving the occupants of the car is the only choice that makes sense in the context of potentially malicious input. For instance, if Mercedes stated that their car would swerve into a tree instead of hitting a crowd of 5 pedestrians, what's to stop me and 4 friends from jumping out in front of one just to laugh as it crashes itself to "save" us?
We have got to start embedding deep into the mind of every software engineer that any information from outside your system can be manipulated to cause maximum damage or disruption. It is your system's responsibility to safely handle malformed and malicious inputs. Until this becomes a common mode of thought, expect more IoT botnets, SQL injections, buffer overflows, DoS amplifiers, and the entire realm of "oh crap, someone somewhere could be evil; I only engineered for the happy case".
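For the SQL injection case on that list, the difference in mindset fits in a few lines of Python (sqlite3 here, with a hypothetical users table; a sketch, not a complete security lesson):

    import sqlite3

    def find_user_unsafe(conn, name):
        # BAD: outside input is spliced straight into the query string.
        # A name like  x' OR '1'='1  comes back with every row in the table.
        return conn.execute(
            "SELECT * FROM users WHERE name = '%s'" % name).fetchall()

    def find_user_safe(conn, name):
        # GOOD: the input is passed as a bound parameter, so the driver
        # treats it as data, never as SQL, no matter what it contains.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

The hostile-input version is the same amount of code as the happy-path version; the only difference is whether the author assumed the input could be evil.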
Re: (Score:2)
any information from outside your system can be manipulated to cause maximum damage or disruption
So true
Basic Ethics and the Law (Score:2)
For instance, if Mercedes stated that their car would swerve into a tree instead of hitting a crowd of 5 pedestrians, what's to stop me and 4 friends from jumping out in front of one just to laugh as it crashes itself to "save" us?
How about the same thing that stops you dropping rocks on cars from a bridge over a road? You know, basic ethics and the consequences of breaking the law. In your example the problem is you and your psychopathic fiends, not the decision made by the car. The best arguments for the self-preserving algorithm are that this is what a human driver will instinctively do, so it is no worse at causing deaths than a human (and given the far faster reaction time, almost certainly far better), and that nobody will ever buy a car programmed to sacrifice its owner.
Re: (Score:2)
I think the point is that there are unethical people and lawbreakers. If the car cannot handle them correctly by identifying that the danger they face is one they created by their own incorrect behavior, then it is deficient.
In other words, humans have an implicit understanding that "person jumping out in front of traffic" and "pedestrian minding their own business who is in the path of an accident" are in two vastly different ethical positions. Colloquially, "even a dog knows the difference between being kicked and being stumbled over."
Re: (Score:2)
what's to stop me and 4 friends from jumping out in front of one just to laugh as it crashes itself to "save" us?
This example is one of "malicious behavior", which is an issue for the courts. With any luck the "Just for laughs" comment would reach the judge.
It is not an example of malicious input. The car correctly sensed a risk to human life/health and correctly identified the best alternative to maintain its "prime directive". The vehicle's decision would have been exactly correct (presuming there were no better alternatives, such as stopping). An example of "malformed/malicious input" would be when the side of [google.com]
Re: (Score:3)
The car correctly sensed a risk to human life/health and correctly identified the best alternative to maintain its "prime directive".
It correctly sensed it but it did not accurately assess it. A risk to the life of a human who is a pedestrian innocently minding his own business is not ethically equivalent to the life of a human who jumps out in front of traffic, either maliciously or out of recklessness.
In the US, the aphorism is "even a dog knows the difference between being kicked and being stumbled over". Intent and responsibility are things we all implicitly understand, but which are lost when you say that one should swerve into a tree either way.
Re: (Score:2)
So now we're just negotiating the number of people we're running over. Offers, anyone? Five is clearly too low.
Re: (Score:2)
The black box will show that the driver initiated the swerve that made an accident inevitable. The driver will be civilly liable (and the car company/programmer might also be), and the driver will be criminally liable (the car company probably not, assuming they/the programmer were not grossly negligent).
Re: (Score:2)
The same thing that happens when a psychopath does it currently. People die, and hopefully he dies along with them or goes to jail. You can take your hands off the wheel as-is; it won't stop your car from barreling into a group of disabled kids. In the "future" this won't be any different: if you instruct your machine to kill, or cause it to kill intentionally, you are still responsible.
Re: (Score:2)
What stops you from shooting your rifle at a target within, but nevertheless distinct from, a crowd right now? Might there already be reasons to abstain from this behavior, in spite of the lack of technical restraints?
Re: (Score:2)
Your fantasy scenario
Fantasy scenario? On a daily basis we hear stories of people throwing bricks from bridges at cars for shits and giggles. I personally was nearly cut in half by some little shit who decided it was a good idea to take a safety barrier and position it sideways in the middle of a lane on a quiet road to see if someone would run into it (I got out, caught the shit, and put the fear of god into him so much he wet his pants).
Immoral and malicious is right, but this is far from some fantasy scenario.
Re: (Score:2)
The GP's example was of one where the malicious input was from someone other than the driver.
Re: (Score:2)
The above post deserves to be seen. It is the only logical response to the power-trip fantasies otherwise being proposed in this thread. Anything else makes Mercedes-Benz liable as a company.
I'm not sure you understand liability, but at least in the US, if a person recklessly or maliciously jumped out in front of traffic, a driver is not liable if that person is hit. Ethically, the driver should try to avoid it, if doing so would not risk his own life, but that's not legally mandatory.
And this is not a power trip; it's just a simple point that the car has to distinguish between hitting people who were unfortunately in the way of an accident and hitting people who deliberately or wantonly put themselves there.
This has to be the way it works. (Score:5, Interesting)
Agreed but bad argument (Score:2)
There really is no other logical way to approach this. If they went the other way and prioritized the pedestrian, a psychopath could sprint back and forth across a busy freeway, causing accident after accident and injuring or killing lots of innocent passengers.
I agree with the first statement, but your argument does not hold water, because with this priority setting that same psychopath can now just drive back and forth setting up situations in which the car mows down pedestrians. The problem here is that you have a psychopath; it has nothing to do with the decisions made by the car.
Re: (Score:2)
No need for self-driving cars for that. So in that sense, nothing has changed.
Re: (Score:2)
The argument is that the car (or generally the humans) need not accord the same ethical weight to running over a person who recklessly or maliciously jumps out in front of traffic as to a pedestrian that happened to be unfortunate and in the path of an accident.
Of course the psycho (or just mental) person can still do it. The question is whether or not I'm required to risk my own limb to save the psycho or whether his risk is his own doing.
Re: (Score:2)
Well, until that old guy who clings to his 1964 Mustang comes along -- oops...
Re: (Score:2)
Psychopath with a gun then. I know that if somebody was standing in the middle of the road aiming a gun at me (and not wearing the required uniform and signs that show him to be a police officer), I would not stop. I would either try to go around him or hit him.
Re: (Score:2)
If I see a person pointing a gun at me without clear reason to do so, I'd still run them over (or more likely try to avoid them, because the impact could render the car unusable), uniformed or not.
Re: (Score:2)
Works great until the day the car needs to be scrapped, then suddenly you have a revolution on your hands.
Pedestrians (and Cyclists) should be responsible (Score:2)
Seriously, it makes sense, as pedestrians and cyclists should be looking out for themselves as part of the activity of walking or cycling.
Good to see some thought going into this.
Obligatory SMBC (Score:2, Funny)
Self-Driving Car Ethics [smbc-comics.com]
Re: (Score:2)
I disagree that a utilitarian car should sacrifice the pedestrian.
Every car on the road creates some amount of risk to the safety of the general public (which is why we're having this discussion in the first place) whereas the risk that pedestrians create for others is negligible.
Programming cars to always sacrifice the pedestrian would send a strong message to society that it's safer to be a passenger in a self-driving car than a pedestrian, and encourage people to create more risk (which is then offloaded onto everyone else).
I even have a name for it... (Score:2)
If cars prioritized pedestrian safety over that of the driver, I can see a "challenge" developing where the same kind of morons who get burned in those "how much cinnamon can you swallow" games step in front of self-driving cars at the last second to see how close they can come to getting killed and/or how much damage they can inflict on a vehicle forced to avoid them.
They'd probably call it "Bullfighting", or something similar.
Re: (Score:2)
This game has been around for a long time, when I was in school it was called "chicken". Adults are programmed to stop for children, and it's not considered a bug or a design flaw. The people playing the game are the ones that need to change, and there are ways to do that without having cars run over people on purpose.
It would actually be less safe the other way around (Score:2)
For more than a hundred years, millions of cars have shared the roads, driven by people who prioritize their own safety in an emergency, because self-preservation is part of human nature. Codes and conventions have been built around that. That assumption is baked into every piece of existing infrastructure and equipment, and it's baked into the way the human drivers who will soon share the roads with AIs react to circumstances and the environment. It would actually be unsafe to turn that assumption around for self-driving cars.
algorithmic morality long-term side effects (Score:2)
The side effect of your Mercedes choosing to impact the young mother with her baby stroller instead of the nearby telephone pole (ouch! that could hurt!) is that the customer's testicles fall off, and his dick never rises for the rest of his miserable, injury-free life (female customers sensibly snipped the wires on this pathetic contraption long ago).
The Mercedes survivor can always tell his disappointed women, "not MY fault, the Mercedes made me do it". Mercedes! Modestly dressed women cross themselves.
Too big a thing to be decided by someone's opinion. (Score:2)
>> As long as they are better at driving and safety than humans, it is progress, in my opinion.
Well, in my opinion, everyone seems to be too quick to presume all automated cars are necessarily safer than all drivers.
It's probably actually true for some people, in the US at least, but not everyone. On my commute I frequently see people (especially women) texting while driving, even on the freeway. For example, on Friday evening in rush hour I saw a lone female Lexus driver (illegally) in the HOV lane
Decision subject to change (Score:2)
Since the occupants of the vehicle will have no input (except possibly as witnesses, though probably worse witnesses than the vehicle's instruments and recorders), there will be nobody in the frame for liability except those who were killed or injured by the collision and the organisation that defined the vehicle's behaviour in that situation.
If a court finds there was any way the vehicle's makers could have avoided the "accident", it will assign liability and costs. So we can expect that, on the one hand
A thought on progress (Score:3)
"As long as they are better at driving and safety than humans, it is a progress, in my opinion."
I'm not convinced. Right now, when people die in car crashes, and I can blame a human driver for something, then it's totally understandable. When humans die by the hands of other humans, and especially through the errors of other humans, that's just a reality that I can comprehend and accept.
But when a self-driving car is ultimately responsible for killing a human, that's a different thing entirely. That's a lot closer to a humans-get-killed-at-random scenario. That's not something that I can accept.
It's actually even worse than that. It's like a neighbourhood pet dog kills a neighbour. If your typically-well-behaved-and-friendly boxer suddenly kills your neighbour's teenager one day, what happens? Look, your dog killed one neighbour over the course of thirty years of you owning dogs. Most wild animals are far more dangerous than that. But I think we all know what happens. I think your dog is dead pretty quickly -- even if that teenager provoked your dog; even if it was a lot; even if your dog was defending its own life.
I accept, today, that millions of humans driving millions of cars on millions of roads, kills thousands of people every year. I'm not happy about it, but I accept it as a part of humans being free to not be perfect. But I don't think that I'd be accepting of millions of self-driving cars on millions of roads, killing dozens of people every year.
Re:What does this even mean ? (Score:5, Insightful)
It means that if they are much safer than human drivers, it doesn't really matter who they prioritize, since everyone will still benefit.
Re: (Score:3)
Self-driving cars don't need to be safer than the safest human driver; they only need to be safer than the average human driver. And average drivers are pretty disregarding of safety. So yes, maybe for a safe driver stepping into a self-driving car the risk will be increased, but for most people it will be lower.
This is declared intent to cause injury
If it crashes into the tree, it injures the driver. It will cause injury one way or another.
Re: (Score:2)
So yes, maybe for a safe driver stepping into a self-driving car the risk will be increased
This would be a huge deal, though... most people think their own driving is safer, even though it's not.
So how are you supposed to know if you are increasing or decreasing the risk to you by using a self-driving car?
This kind of doubt would hinder adoption of the technology.
Re: (Score:2)
So how are you supposed to know if you are increasing or decreasing the risk to you by using a self-driving car?
Test your reaction time. It takes a typical human about 1500 milliseconds from the time they see a hazard until they start depressing the brake pedal. An SDC takes about 1 ms.
Even if you ignore all the other advantages of SDCs, this difference in reaction time alone gives them a huge safety advantage.
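Back-of-the-envelope on what that difference buys (the two reaction times are from above; the 100 km/h speed is my own illustrative assumption):

    # Distance covered before braking even starts: d = v * t_reaction.
    v = 100 / 3.6   # 100 km/h in m/s
    for label, t in [("human, 1.5 s", 1.5), ("SDC, 1 ms", 0.001)]:
        print(f"{label}: {v * t:.2f} m travelled before braking")
    # human, 1.5 s: 41.67 m travelled before braking
    # SDC, 1 ms: 0.03 m travelled before braking

That's over 40 m of extra stopping distance for the human at highway speed before the brakes are even touched.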
Re:What does this even mean ? (Score:5, Insightful)
Re: (Score:2)
"Driving is about anticipation of events way more than reaction time."
This is how the drooling morons who tailgate and speed through heavy traffic and residential areas justify what they do.
Re: (Score:2)
unlike humans, it can look in all directions simultaneously.
It can look in all directions, but apparently (at least with some vehicles) it has been less reliable at identifying hazards than humans. Being really good at identifying any hazard in view may be more important than the number of directions and the number of ways you have of looking out.
To some extent they cover up this "driving like a granny" by taking advantage of superior reaction time and better eyes.
The self-driving car product is still immature.
Re: (Score:2)
unlike humans, it can look in all directions simultaneously.
It can look in all directions, but apparently (at least with some vehicles) it has been less reliable at identifying hazards than humans...
Highly debatable.
There are sleepy drivers, drunk drivers, distracted drivers, etc., etc.
A distinction must be made between "accidentally less reliable" and "consistently less reliable".
If an automated driving car is consistently less reliable than the average human driver, then it shouldn't be released.
Now, concerning "accidentally less reliable", that's not measurable, because human drivers are accidentally less reliable as well, for many, many reasons that automated driving can reliably avoid:
- kids fighting in the back seat
Re: (Score:3)
Who's the asshole here?
PS The mouth does not drive.
Re: (Score:2)
It takes a typical human about 1500 milliseconds from the time they see a hazard until they start depressing the brake pedal. An SDC takes about 1 ms.
So what? Why don't SDCs manage to be statistically better than the safest human drivers, then?
If SDCs as implemented are less safe in spite of this advantage, then that's all the more reason not to use an SDC.
The next-best alternative to an SDC might be an extremely safe human driver PLUS an automated braking system.
Having a self-driving car is not necessarily safer.
Re: (Score:2)
they only need to be safer than the average human driver.
Actually, not even that is true. It is likely that the worst human drivers would be the most likely to use SDCs, so even if the SDC was below average, it would still be an improvement.
I tend to daydream while driving, thinking about tech problems or whatever, and to be honest, I am a pretty bad driver. I would love an SDC, so I would no longer have to worry about my train of thought being interrupted by stuff happening on the road.
Re: (Score:2)
Self-driving cars don't need to be safer than the safest human driver; they only need to be safer than the average human driver.
False. Not if they want to survive in our current world full of trigger-happy litigators and sensationalist media.
You may not like it, but that's the reality of our world. Just for a second imagine that a self-driving car crashed into a school bus and killed some kids. It doesn't matter if a human would have ended up doing the same (or worse) in that scenario. What WILL happen is the headlines that night will be "KILLER ROBOT CAR CRASHES INTO SCHOOLBUS; KIDS DEAD!"
And you'll have lawyers jumping on it.
Re: (Score:3)
AI vehicles currently have about 4.5 times the fatality rate of human drivers per mile traveled.
I would say the averaging process that produced that 4.5x fatality rate is bogus, because there are too few accidents involving AI vehicles for the calculation to have any statistical foundation.
There have been, what, fewer than 10 accidents involving AI? Not even enough to rule out random chance, let alone calculate a meaningful average accident frequency.
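To put numbers on that: with only a handful of observed events, the exact Poisson 95% interval on the underlying rate is enormous. A quick sketch (Python, needs scipy):

    # Exact Poisson 95% confidence interval for an observed count k.
    from scipy.stats import chi2

    def poisson_ci(k, conf=0.95):
        a = 1 - conf
        lo = chi2.ppf(a / 2, 2 * k) / 2 if k > 0 else 0.0
        hi = chi2.ppf(1 - a / 2, 2 * (k + 1)) / 2
        return lo, hi

    for k in [1, 2, 5, 100]:
        lo, hi = poisson_ci(k)
        print(f"{k:>3} events: true mean plausibly in [{lo:.2f}, {hi:.2f}]")
    #   1 events: true mean plausibly in [0.03, 5.57]
    #   2 events: true mean plausibly in [0.24, 7.22]
    #   5 events: true mean plausibly in [1.62, 11.67]
    # 100 events: true mean plausibly in [81.36, 121.63]

With one or two fatalities on the books, the true rate could plausibly be several times higher or dozens of times lower than the point estimate, so a figure like "4.5x" is indistinguishable from noise.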
Re: (Score:2)
Plus, it'd be interesting to know exactly what metric is being measured here. Which fatalities are we talking about: just the occupants of the car itself, or others as well (occupants of other vehicles, pedestrians, etc.)? Are fatalities involving "AI vehicles" counted even when the vehicles are being driven by humans at the time? What about when they're placed under AI control in circumstances where they shouldn't be (so driver error in choosing to use the tool incorrectly, rather than the tool itself being at fault)?
Re: (Score:2)
AI vehicles drive less than 0.0001% of the miles driven by humans per year. The sample size is so small that no information at all can be extrapolated from it in any way. Plus, semi trucks skew that number extremely hard due to the excessive miles they drive each day.
Based on the National Safety Council's 2009 estimate of 10 million auto accidents per year, approximately 27,000 accidents occur every day caused by human drivers. There were 4 accidents caused by AI in 2016, and 3 of those accidents
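Normalizing by exposure, using only the figures quoted above (and taking the 0.0001% mileage share at face value):

    # If AI cars were exactly as accident-prone as humans, how many
    # accidents would their share of the miles predict?
    human_accidents_per_year = 10_000_000  # NSC 2009 estimate, quoted above
    ai_mileage_share = 0.0001 / 100        # 0.0001% as a fraction
    ai_accidents_observed = 4              # the 2016 figure quoted above

    expected = human_accidents_per_year * ai_mileage_share
    print(f"expected at the human rate: {expected:.0f}, "
          f"observed: {ai_accidents_observed}")
    # expected at the human rate: 10, observed: 4

Which looks favorable for the AI, but as said above, single-digit counts are far too small to support a conclusion in either direction.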
Re: (Score:2)
That's a big "if", is currently false and will be false for hundreds of years still.
Given the kind of advances we've had in computer control in the past 20 years, and the already spectacular record of current systems what the heck is your basis for "hundreds of years still"? Unhealthy pessimism? Do you own shares in companies against self driving cars?
As far as I can see it it's currently false, and that will change well within our lifetimes.
Re: (Score:2)
Six-year-olds could drive if we used your measure of safety.
Re: (Score:3)
Clearly you've never been to Paris. I'm not even sure if brakes exist there.
If you're lucky, they might blow the horn.
Re: (Score:2)
AI would need to break the laws of physics to be comparable.
Last time I checked, computers and human brains adhere to the same laws of physics.
Re:What does this even mean ? (Score:5, Insightful)
That's a big "if", is currently false and will be false for hundreds of years still. This is declared intent to cause injury, making it a bit past borderline illegal. It is poorly thought through immoral marketing buzz. There is no positive angle to this "story" or even much to say except Mercedez-Benz has decided to let the interns do PR.
Actually, it's not that big an if.
Earlier this year, at a roundtable on connected car security headed up by the NHTSA, the chairman of the NHTSA stood up and cited some interesting numbers. A bit more than 32,000 people had died in vehicle-related accidents the prior year, and about 97% of those were the direct result of, and I quote, "driver error or driver choice." He went on to point out that autonomous vehicles would, if done correctly, eliminate most of those deaths. A car that will refuse to drive if, for example, the tire pressure is too low on one or more tires, or the brakes require more than a certain amount of force to slow the car to a certain standard... these are the less-obvious ways in which such cars are safer. Obviously, they can't drive drunk, don't commit road rage, and don't have any sense of ego about saying that they are having trouble with their eyesight. The car can be objective about its limits, its skills, and any impairment it suffers due to weather, maintenance issues, or any other potential problems. Just the degree of data logging alone that is inherent to autonomous vehicles is already producing useful information about how to prevent crashes, and that's before there are any such vehicles for sale. (And I hear it now... "Tesla sells autonomous vehicles!"... but Tesla's system doesn't count, as evidenced by the fact that the maker of that system has cut ties with Tesla, basically saying "It's not supposed to be used that way!") Cars have reached the point where humans are the main source of the risk, and while the technology isn't quite ready for market, it's not "hundreds of years" away and it's very, very promising.
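Taking the chairman's figures at face value, the arithmetic on the addressable deaths is straightforward:

    # Upper bound on deaths addressable by autonomy, per the quoted NHTSA
    # figures: 32,000+ deaths, ~97% from "driver error or driver choice".
    deaths_per_year = 32_000
    driver_error_share = 0.97
    print(f"~{deaths_per_year * driver_error_share:,.0f} deaths/year "
          "traceable to driver error or choice")
    # ~31,040 deaths/year traceable to driver error or choice

Even eliminating half of that would be a massive win by any road-safety standard.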
And no, what Mercedes is saying is not intent to cause injury. It's a statement about which injury to try to prevent in situations where... as has been discussed for quite some time now... an injury is deemed inevitable. They have not said, "our cars will drive through schools for no particular reason, just to annoy Jzanu." They have said, "our car's logic knows what's in the car, what's going on with the car, and can directly control the car. It does not know that much about the rest of the world, so we believe the odds of the best possible outcome in a situation with no good outcomes lie with letting the car preserve its own passengers."
And there is absolutely nothing illegal about that whatsoever. It's the same logic behind why paramedics don't run, ambulances slow down through intersections where they can't see past a certain distance, and a whole bunch of other situations where you have to weigh risk of one bad outcome against risk of another one.
Re: (Score:2, Insightful)
It means that instead of rolling the car into a ditch when children jump out in front, like a normal person would do, the car will kill the children. The wipers may automatically activate to clean their gore off your windshield as well.
Re: (Score:2)
Anything is better than the driving skills of the typical Mercedes owner. There is a real reason why those people need lane departure warnings, blindspot warnings, vehicle is braking in front of you warnings, and autopilot....
Re: Real Problem (Score:2)
This is also evolution. But instead of using trial-and-error starting with a few chemicals which happened to be at the right place at the right time, it is intelligence-driven evolution starting with the knowledge gained through all that original evolution.
And the gazelle never drove a car. Driving is only a few generations old. That comparison is a huge stretch.
Problem is Law and Media, Not Common Sense (Score:2)
This is common sense. All the self driving car moral bullshit...
I agree, but the problem here is the law, which can rarely be accused of following common sense. A reasonable person would look at the number of lives saved by the car and decide that this was, on average, a very good thing. The law will look at one instance where a life was lost and, unlike with the "gut reaction" of a human, will show that this was a calculated decision (I expect "premeditated" might even be used) to kill a pedestrian, and will then sue the manufacturer, who has far deeper pockets than the driver.