Drivers Prefer Autonomous Cars That Don't Kill Them (hothardware.com) 451
"A new study shows that most people prefer that self-driving cars be programmed to save the most people in the event of an accident, even if it kills the driver," reports Information Week. "Unless they are the drivers." Slashdot reader MojoKid quotes an article from Hot Hardware about the new study, which was published by Science magazine.
So if there is just one passenger aboard a car, and the lives of 10 pedestrians are at stake, the survey participants were perfectly fine with a self-driving car "killing" its passenger to save many more lives in return. But on the flip side, these same participants said that if they were shopping for a car to purchase or were a passenger, they would prefer to be within a vehicle that would protect their lives by any means necessary. Participants also balked at the notion of the government stepping in to regulate the "morality brain" of self-driving cars.
The article warns about a future where "a harsh AI reality may whittle the worth of our very existence down to simple, unemotional percentages in a computer's brain." MIT's Media Lab is now letting users judge for themselves, in a free online game called "Moral Machine" simulating the difficult decisions that might someday have to be made by an autonomous self-driving car.
News at 5... (Score:5, Insightful)
Re: (Score:3, Insightful)
People value their own lives, fuck the rest of you.
Fixed that for you.
It's a liability issue (Score:5, Interesting)
Self driving cars will transfer the liability from the owner of the car to the manufacturer of the car. This is already happening. Otherwise, they could never sell a car to anyone. But if the liability is held by the manufacturer, you can be sure the crash algorithm will be one that minimizes total casualties (and thus total liability).
And notice that this is the same issue behind the Will Smith film, "I, Robot". Will's character is rescued from drowning by a robot that lets a little girl drown instead. The robot had calculated the chances of saving each and Will won the AI lottery.
Re: (Score:3)
How would that be different from any other lifeguard? A trained lifeguard is always going to choose to save the person that they can rather than the person they can't.
Re: It's a liability issue (Score:5, Funny)
Jesus Christ. Please never reference the film version of "I, Robot" ever again.
Why not? It's way better than the boring stories that dude Asimov ripped off from the movie.
Re: It's a liability issue (Score:5, Insightful)
Re: It's a liability issue (Score:5, Interesting)
Ultimately the Three Laws were a literary device. Asimov was tired of stories where robots run amok, so he made up rules that would, on the face of it, make robots running amok seem impossible. He then used these rules to make superior robots-run-amok stories.
What makes those stories interesting is that they're all about how our simplistic reasoning leads us to dismiss real possibilities too quickly. Most people simply assume things work the way they were designed to work, but smart people realize that purposes can be gamed as long as the letter of the rules isn't broken. It is true that Asimov introduced a 0th Law, but the other laws remain in effect; robots in his stories are conflicted. In Jeff Vintar's screenplay the 0th Law simply overrides the other laws; the lower-priority rules are in effect nullified, which doesn't happen in Asimov's stories. The screenplay was a bog-standard robots-run-amok story with a little Asimovian window dressing thrown in, nowhere near as good as anything Asimov did. Because Jeff Vintar isn't anywhere near as smart as Isaac Asimov.
But then again, neither am I, and probably not you either.
I very much doubt Asimov thought that people would ever build something like the Three Laws into technology in such a fundamental way; that was just a literary device that enabled him to display his astounding cleverness. I don't think it'll ever happen either, for the simple reason that killing people will be a driving force in the adoption of autonomous robot technology.
Re: (Score:3)
Ultimately the Three Laws were a literary device. Asimov was tired of stories where robots run amok, so he made up rules that would, on the face of it, make robots running amok seem impossible. He then used these rules to make superior robots-run-amok stories.
[...]
I very much doubt Asimov thought that people would ever build something like the Three Laws into technology in such a fundamental way; that was just a literary device that enabled him to display his astounding cleverness. I don't think it'll ever happen either, for the simple reason that killing people will be a driving force in the adoption of autonomous robot technology.
OTOH, what's really going to be different? When humanity builds something smarter than humans, and it becomes NP-hard to figure out the loopholes in the thicket of governing rules that allow said AI to kill unauthorized people and cause other mischief, there will be a need for failsafe rules that always work. You will end up with something like the Three Laws as a result.
One could also say the same of murder mysteries and crime dramas. In the good old days, primitive man inflicted all sorts of viole
Re: (Score:3)
Well, on some level we have already made these moral decisions as a society. Paramedics and ER docs are trained to triage. Yes, they do address the people who are in the most immediate jeopardy of life first, but of those they focus on the ones they suspect can be saved, and the ones they suspect are too badly injured are moved to the back of the line, at least within the class of people immediately endangered.
The issue when it comes to cars is that we don't usually get to make choices. Once a dangerous
Re: (Score:3)
Well, on some level we have already made these moral decisions as a society. Paramedics and ER docs are trained to triage. Yes, they do address the people who are in the most immediate jeopardy of life first, but of those they focus on the ones they suspect can be saved, and the ones they suspect are too badly injured are moved to the back of the line, at least within the class of people immediately endangered.
That's the point though, isn't it. Paramedics and the like are trained to take the humanity out of it and act on rational judgement. I'm sure some of them have terrible issues over things they've done or not done, but ultimately they have done the right thing. Regular people just flap around and everyone dies. In this instance it's the regular people who now want control over the cold hard calculations that need doing. The person who calls the paramedic doesn't even try to tell them what to do, so why should they now?
Re: (Score:3)
That takes care of the simple problems (Score:4, Informative)
The emergency instruction you have provided, "Stop as quickly as possible to avoid hitting the thing," will work in 70%-80% of all emergencies. In the remaining cases, "Stop as quickly as possible" will not solve the problem.
The next option could be "Swerve around the impediment/obstacle," so the car drives around objects that you can't stop for (moving debris, running people, swerving cars/bikes/motorcycles). The swerve option may be safer for the traffic pattern than a crash stop; you need to prevent being rear-ended as well.
These two directives alone can probably take care of 95% of the issues out there: stop, or go around the problem. The issues arise in the last 5%, where the two directives conflict with each other. Easy scenario: there is traffic in the oncoming lane, and a person/child steps out from behind a visual obstruction (signage, truck, etc.) into your path of travel, inside your braking distance. If you continue straight you will hit the person; if you swerve left you will hit an oncoming car; if you swerve right you will hit an obstruction (car, lamp post, etc.).
I don't expect you have an answer for this no-win scenario. People have to recognize that in that 5%, something bad is going to happen, and there is still some choice/action that can be taken that will affect the outcome (number of people injured, types of injuries, etc.). This is the problem people are trying to wrestle with. You are presented with an ugly no-win scenario. Make the best of it and decide who gets killed, injured, maimed, or saved - yourself, the pedestrian, another driver?
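A minimal sketch of that two-directive hierarchy, in Python (the function, its inputs, and the fallback policy are hypothetical illustrations, not any real autopilot's logic):

def choose_emergency_action(obstacle_dist_m, braking_dist_m,
                            left_clear, right_clear):
    # Directive 1: stop as quickly as possible, if physics allows it.
    if braking_dist_m <= obstacle_dist_m:
        return "BRAKE"
    # Directive 2: swerve around the obstacle, if an adjacent path is clear.
    if right_clear:
        return "SWERVE_RIGHT"
    if left_clear:
        return "SWERVE_LEFT"
    # The contested ~5%: both directives fail. Some policy must apply here;
    # this sketch just brakes to shed as much speed as possible before impact.
    return "BRAKE"

print(choose_emergency_action(10.0, 18.0, False, False))  # -> BRAKE

Everything above the final branch is uncontroversial; the entire "morality brain" debate lives in that last fallback.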
Re: (Score:3)
Seriously?
I want my car saving MY LIFE first, plain and simple.
People pay extra money these days just for cars that are safer and more likely to save their lives in an accident.
Would folks have to pay extra for a car programmed to think of THEIR life ahead of others'?
Would the Volvos of the future be the ones that keep you alive at the cost of others?
Hell, it might be the first time I'd ever consider buying a Volvo.
Re: (Score:3, Interesting)
Sigh... this issue is so bloody simple to resolve.
1. Default to a default set of morals, which include a reasonable (but not excessive) degree of self-sacrifice - based around the sort of decisions a "typical" driver would make.
2. Make a straightforward procedure for people to customize the vehicle's morals. Just run them through a series of scenarios on the screen to see where their cutoff is. Is this a person who would mow through a couple toddlers to avoid having to drive off the road, or a person who
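A toy sketch of how that scenario quiz could pin down a single cutoff parameter (the probing scheme, the 1-10 range, and the function names are invented for illustration):

def elicit_cutoff(would_swerve, max_pedestrians=10):
    # would_swerve(n) is the owner's on-screen answer for a scenario with
    # n pedestrians at risk: True = accept driving off the road to miss them.
    for n in range(1, max_pedestrians + 1):
        if would_swerve(n):
            return n      # the smallest group the owner would sacrifice for
    return None           # owner never swerves, toddlers or not

# A respondent willing to go off-road once three or more lives are at stake:
print(elicit_cutoff(lambda n: n >= 3))  # -> 3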
Re:News at 5... (Score:5, Informative)
That sounds anything but simple.
Re:News at 5... (Score:5, Funny)
Maybe program them to go after smokers and kids skateboarding in skateboarding-free zones... just give them the entire government employee roster of major departments: IRS, EPA, Congress, etc.
Re: (Score:3, Funny)
Is there a scoring system? There has to be a scoring system.
Re: (Score:2)
"You come upon a sweetroll in the middle of the road..."
Re:News at 5... (Score:5, Funny)
1. Default to a default set of morals...
Um, what?
Make a straightforward procedure for people to customize the vehicle's morals.
Okay. Anyone with a goatee dies first. Child molesters and people that talk in the theater are next in line (in homage to Shepherd Book). I'm flexible after that, but the list *will* include people on cell phones who don't pay attention to their surroundings and people who take more than 5s to make a drink order at Starbucks. Any other suggestions?
Re: (Score:2)
Liability... (Score:2)
Sigh... this issue is so bloody simple to resolve.
1. Default to a default set of morals.
2. Make a straightforward procedure for people to customize the vehicle's morals.
Sort of. Realize that your moral choice will affect your insurance rates. Also, most companies (manufacturers, renters, even taxi services) will default to protecting people other than the passenger, because they have an agreement with the passenger that they can use to help limit their liability, but they don't have that agreement with third parties. The only way that changes is if they compete on morality - but that seems unlikely.
Re: (Score:2)
Your solution kills a lot of people, both drivers and bystanders. Just how well-tested will that multitude of settings be?
Silly navel gazing (Score:2)
need4speed (Score:2)
if it was properly programmed, it wouldn't let itself be put into this sort of situation in the first place, slowing down to appropriate speeds around people.
The Need4speed mod was first developed in Central America. A software firm had been hired by a wealthy client to develop the ultimate suite of functions for "emergency kidnap evasion". It took the design limits of the vehicle to the edge, implemented spin and bump tactics for armored cars and the 'bootleg turn', re-ordered the evasion pragma to sideline small object/animal/child avoidance. A complete new class of stratagem for high speed pursuit where pursuing vehicles are recognized and evasion conditio
Re: (Score:3)
What kind of crazy, concocted scenario are you coming up with where the AI controlling the car has to make a Boolean decision that kills people?
My car is driving down a busy road at a safe and steady 30mph. There is traffic in the opposite direction travelling at 40mph. The sidewalk alongside is crowded with people.
A child suddenly runs onto the road 4 feet in front of the car. There is nothing my vehicle can do to stop in that distance. It is mechanically not possible. However, it can swerve left or swerve right. One direction means a head-on collision, the other means mowing down a dozen pedestrians. Or maybe it does nothing and strikes the
Re: (Score:3)
Bad example. Your car is going 44 fps, so it has 0.09 seconds to do anything about this problem. In that time, it can't stop, and it
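A quick check of the parent's numbers in Python:

mph = 30
fps = mph * 5280 / 3600              # 30 mph = 44.0 feet per second
gap_ft = 4                           # the child appears 4 feet ahead
print(fps, round(gap_ft / fps, 2))   # -> 44.0 0.09

At roughly 90 milliseconds, neither a human nor a computer can change the outcome mechanically; the real decision was made earlier, in the choice of speed.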
Re: (Score:3)
Doesn't matter if it's a car, or a crate full of little girls with cute bunnies.
Swerving anywhere is never the right answer.
If you (or the computer) had enough time to consider whether or not to swerve in some direction, all you did was waste time you could have spent slowing down.
Also, in that situation, your car should already be slowing down the moment it detects another car cutting you off.
Even simpler (Score:5, Insightful)
0) We all know that stopping in the middle of the highway is dangerous, BUT the way the laws are written in most countries, it's practically always your fault if you drive into the rear of another vehicle, especially if it didn't swerve into your path and merely braked suddenly, or worse, was stationary for some time.
1) Thus, for legal and liability reasons, the robot cars will strictly obey all convincing posted speed limits (even if they are stupidly slow by some mistake, or by some prankster), and will stick to speeds at which they can brake in time to avoid collisions, or at least fatal collisions - whichever is slower.
2) In most danger situations the robot cars will brake and try to come to a stop ASAP, all while turning on their hazard lights. That shouldn't be too difficult at the speeds described above.
3) If people die because of tailgating, it's the tailgater's fault. Same if the driver behind doesn't stop.
4) If there are hardware/software failures, then it's the vendor's fault.
5) If braking won't avoid the problem even at "tortoise speeds", in most cases fancy moves wouldn't either. In the fringe cases where fancy moves would have helped but braking wouldn't AND it would be the robot car's fault if it braked, the insurance companies would be more than willing to take those bets.
The odds of the car being designed to do fancier moves to save lives are practically zero. If I were designing the car I wouldn't do it - imagine if the car got confused, did some fancy moves to "avoid collision," and killed some little kids. In contrast, if it got confused and came to a stop ASAP, and any little kids were killed, it would more likely be someone else's fault.
If you are a human driver/cyclist/motorcyclist you better not tailgate such cars.
Look at the Google car accident history: most of the accidents were due to other drivers. Perhaps I'm wrong, but my guess is it's because of "tailgating." Those drivers might still believe the AI car was doing it wrong, but the law wouldn't be on their side.
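Point 1 is easy to make concrete. A sketch of a sight-distance speed cap, with invented values for friction and system latency:

import math

MU = 0.7        # assumed tire-road friction coefficient (dry asphalt)
G = 9.81        # m/s^2
LATENCY = 0.2   # assumed sensing + actuation delay, in seconds

def max_safe_speed(sight_distance_m):
    # Largest v satisfying v*LATENCY + v**2 / (2*MU*G) <= sight_distance_m,
    # i.e. the car can always stop within the road it can actually see.
    a = 1 / (2 * MU * G)
    return (-LATENCY + math.sqrt(LATENCY**2 + 4 * a * sight_distance_m)) / (2 * a)

for d in (10, 30, 60):   # metres of clear road ahead
    print(d, round(max_safe_speed(d) * 3.6, 1), "km/h")
# -> roughly 37.5, 68.3, and 98.5 km/h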
Re: (Score:3)
2) In most danger situations the robot cars will brake and try to come to a stop ASAP, all while turning on their hazard lights. That shouldn't be too difficult at the speeds described above.
Turning on your hazard lights while driving is illegal in most states, and for good reason. Did you know that many makes and models use the exact same lights for your hazard lights as the turn and/or brake lights? And guess which behavior wins out? The hazard lights, of course. Do you know when the hazard lights are supposed to be used? When you're stuck on the side of the road or stalled in traffic. Not for "Oh no it's raining hard I want to make sure the people behind me notice the bad weather" or "he
Re: News at 5... (Score:5, Insightful)
The premise that people make split second ethical judgements is delusional. What people do is react. And many times they don't react in time to avoid the worst possible outcome.
And if they survive, their brains spend hours, days, weeks, even years going over and over what happened. The brain, in trying to learn from what happened, adds more processing than was there in the first place.
Our false recollections. All the things you could have, should have, but didn't have time to think about when really all you had time to do was jerk the wheel and BAM!
And now a bunch of delusional people are trying to apply some false notion of ethics to decision making that should be as simple as stop the car before hitting something. There isn't enough time to consider other options. There never was enough time. People just think there was because our brains work that way.
Re: News at 5... (Score:5, Insightful)
The premise that people make split second ethical judgements is delusional. What people do is react. And many times they don't react in time to avoid the worst possible outcome.
Humans can and do make "split second" ethical judgements, based on their own ethics. A mother will likely try and save her children, even at the expense of her own life. A teenager may be more about self-preservation whatever the cost. The future will be an algorithm deciding for you, no matter what your position is in society, and no matter what your beliefs are. Or perhaps your position in society will matter, as the President's vehicle may be programmed for self-preservation no matter what.
And if they survive their brains spend hours, days, weeks and years even going over and over what happened...
Oh yes, that will never happen once the magical machines start taking lives. No mother will do this once their child is gone, wondering how the algorithm got it so wrong. No father will want to punish the machine programmer for taking their child.
If we think the "trial and error" period for IoT will be bad, this will be fucking horrible.
And that's just the shit we have to deal with before the hacking starts.
Re: News at 5... (Score:5, Insightful)
So much this.
I have a really hard time thinking of any realistic situation where killing you will save 10 people.
What? Ten people are just standing out in the street and the only other option is to drive off a cliff?
Fuck them. Why are they in the middle of the road?
Just hit the brakes and hope for the best.
Since it's an automatic car it shouldn't be driving fast in a zone with pedestrians anyway, and people shouldn't be walking on highways.
Re: (Score:3)
Your soft, meat brain can only react, but a computer can make decisions far quicker and more rationally than you can. It has plenty of time to decide what to do.
This assumption is totally false. You assume the scenery is already understood by the computer, which only has to decide basic questions like brake/speed up, right/left, and that kind of stuff. The truth is that the computer has to assess the situation exactly like a human brain, using sensors which have finite accuracy and some uncertainty in the readings. Decode everything, including images from cameras, reconstruct a 3-D representation of the scene, and have some means to make decisions from the scene and th
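The downstream decision is then made on probabilities, not certainties. A toy sketch of an expected-cost brake decision (the cost numbers are invented for illustration):

def should_brake(p_pedestrian, cost_hit=1000.0, cost_false_brake=1.0):
    # Brake whenever the expected cost of continuing exceeds the cost
    # of a possibly unnecessary hard stop.
    return p_pedestrian * cost_hit > cost_false_brake

# Even a 0.2% pedestrian estimate from a noisy classifier triggers braking:
print(should_brake(0.002))  # -> True

The hard part is not this comparison; it is producing a trustworthy p_pedestrian from noisy sensor data in the first place, which is the parent's point.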
Re: (Score:2)
Yes and no. Yes, it has to assess the situation much like a human brain would by taking inputs and choosing a course of action. No, it isn't at all like a human brain. For one thing, an autonomous car has a lot more information at its disposal than a human brain does. In particular, it has multiple cameras, so it doesn't have to look to see if
Re: (Score:2)
Denying a problem exists doesn't actually solve the problem.
Re: (Score:2)
The real elephant in the room is the suggestion that the government get involved in the regulation on how the decision will actually be carried out.
The government will put out a spec to kill the sole occupant if the car plows toward a group of people, by detonating explosives planted in front of the passenger's face.
Oh! Cra *Boom*.
Re: (Score:2)
I realize that the OP's suggestion ("programmed to avoid getting in accidents and nothing else") was ridiculously oversimplified, but... that's no less ridiculous than the "dilemmas" presented in the game.
Scenario 1: Crash directly into a concrete barrier or into a crowd of people and cats. Really, there are no other possible outcomes at all? Not rapid controlled deceleration, not swerving off the road, nothing else comes to mind?
Come back to me when you have *realistic* scenarios.
Re:News at 5... (Score:5, Informative)
Come back to me when you have *realistic* scenarios.
Indeed. One of the things that was covered in my motorcycle safety class was the concept of 'traction management'.
To keep it simple: depending on the type and condition of the road and your tires, you only have so much traction. It takes traction capability to do anything - speed up, slow down, or turn. It was part of them teaching us that you are not to brake in a turn on a motorcycle. Cars can get away with that; bikes (pushed to the limit) can't. You brake, then turn. If you need to stop during a turn, you straighten and brake.
Anyway, to get back to the point: it takes traction to turn. For motorcycles and cars alike, they covered that it's better to brake than to dodge for any substantial obstacle - if you have the luxury of dodging it, you could have braked and avoided hitting it.
So the situations mentioned really only work for trains, which have stopping distances no car maker would be allowed to release a vehicle with. Short of the langoliers being behind you eating everything, braking is pretty much the universal solution.
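The traction budget can be made concrete with the usual friction-circle approximation; a sketch with assumed values (mu, lane width) rather than measured ones:

import math

MU, G = 0.7, 9.81   # assumed friction coefficient; gravity in m/s^2
LANE_SHIFT = 2.0    # metres of sideways displacement needed to dodge

def brake_distance(v):       # v in m/s: distance to a full stop
    return v**2 / (2 * MU * G)

def swerve_distance(v):
    # Time to shift LANE_SHIFT sideways at full lateral grip,
    # times the road covered while doing it.
    t = math.sqrt(2 * LANE_SHIFT / (MU * G))
    return v * t

for kmh in (20, 30):
    v = kmh / 3.6
    print(kmh, round(brake_distance(v), 1), round(swerve_distance(v), 1))
# At 20 km/h: stop in ~2.2 m vs ~4.2 m covered mid-swerve.
# At 30 km/h: stop in ~5.1 m vs ~6.4 m - at city speeds, braking wins.

The gap narrows as speed rises, but the swerve also leaves the vehicle pointed off-line with no grip to spare, which is the class's point.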
Re: (Score:2)
Unless you start with the scenario being 'brakes? what brakes?' this covers the major moral problem here--and even then, the question of why an autonomous vehicle manages to suffer unexpected and sudden brake failure needs to be covered. Also left is the question of why it cannot hit the horn and trust the pedestrians to scatter, since it's reasonable to expect that the car is going to be evidence of some sort and thus it's desirable to preserve it as intact as possible--if it's due to a manufacturing defe
Re: (Score:2)
Scenario 1: Crash directly into a concrete barrier or into a crowd of people and cats. Really, there are no other possible outcomes at all? Not rapid controlled deceleration, not swerving off the road, nothing else comes to mind?
Since the car is already looking ahead and can calculate its reaction to all control inputs based on the road conditions (because, after all, the car is already trying to slow down, and it knows the coefficient of friction with the road), it knows that it can't stop or steer off the road in time. It has already computed that, regardless of what evasive actions it takes, those are the only two options, so it has to decide which is better - veer off to the side and hit the hard concrete barrier and kill th
Re: (Score:2)
Brilliant!
Why didn't anybody else think of this?
But what if the car, through no fault of its own, ends up in a situation where an accident cannot be avoided?
Re: News at 5... (Score:5, Insightful)
The problem is that people replay traumatic accidents over and over in their heads and end up thinking that they had time to consider all the scenarios when they probably barely had time to react in the first place. The brain's attempt to learn from an accident and think of the "what-if" is what creates these embellished recollections.
Now people are applying this delusional thought process to machines and setting unrealistic expectations.
Just need to keep it simple and stop the car as safely as possible.
Re: (Score:2)
Re: (Score:2, Insightful)
Yeah. In fact, SUVs are well known to cause lots of damage to the normal vehicle in SUV vs. non-SUV crashes, while suffering minor damage themselves. The passengers of the normal vehicle are much more likely to die than the SUV passengers. So people already make that choice now.
Re: (Score:2)
People value their own lives..
Yeah... my first reaction was "duh, just look at all the people that buy ginormous SUVs to protect themselves at the expense of everyone they might hit."
It Doesn't Matter; It Won't Ever Happen (Score:2)
No one will ever program an autonomous vehicle to choose one life over another. That's a lawsuit waiting to happen, if not an outright murder charge.
Re: (Score:3)
Re: (Score:2)
And where exactly is this "rule" that tells someone that the life of person A is more important than the life of person B?
Re: It Doesn't Matter; It Won't Ever Happen (Score:2)
It isn't even that. Why would you add processing time? Thinking about all this BS would end up killing more people in that extra 600 milliseconds it takes to think through all these scenarios. Just stop the damn car!
I don't want a car that kills people while it is busy thinking about whether it is ethical to stop.
Re:It Doesn't Matter; It Won't Ever Happen (Score:5, Insightful)
But the car doesn't need to make a psychic "This is the most valuable life" calculation/decision.
It just uses its regular crash-avoidance behavior (say, hitting the brakes), and maybe somebody dies. The cop on the scene decides that the pedestrian probably shouldn't have been trying to cross the freeway, and everyone else moves on with their lives. The end.
Re:It Doesn't Matter; It Won't Ever Happen (Score:4, Insightful)
You apparently have just arrived here from another planet, because that's not how the legal system works here.
Re:It Doesn't Matter; It Won't Ever Happen (Score:4, Interesting)
Except that's an unavoidable situation. Sometimes there is no option where everybody ends up fine.
It's not necessary for the vehicle to make a decision here.
It can simply attempt to avoid a crash by utilizing only normal means which a human would use, such as braking, or safe swerving within the confines of the road.
Not "Erratic behavior" such as attempting to self-run-off-the-road-into-the-ditch or other self-sacrificial strategies.
The car should just take steps to avoid or mitigate crashes, not make tradeoffs that further endanger the driver or passengers.
Extreme avoidance strategies might be considered, but only for the benefit of making sure there is no crash at all, and of making people in the car more likely to survive or escape serious injury.
That's normal (Score:5, Insightful)
Save the environment, reduce carbon emissions, save water, reduce debt... unless it affects me financially.
Re: (Score:2, Funny)
I thought the point of you saving water was so I could use more.
Re: (Score:2)
Exactly.
If I could get back all the money I ever spent on pot, I could buy A LOT of pot!!
No suicide option then. (Score:2)
Nobody is going to choose to pay for a machine that would rather kill them than protect them.
No standards for robot morality? (Score:2)
Participants also balked at the notion of the government stepping in to regulate the "morality brain" of self-driving cars.
This statement makes no sense to me. What do these people want, free market morality? The car should save the richest people? Who the hell else but the government is going to standardize what the right action is for a robot to take in that sort of scenario?
Re: (Score:2)
Re: (Score:2)
"drivers?" (Score:3)
Re: (Score:2)
I'm from Seattle (Score:5, Funny)
If we could get an AI that can kill for a parking space, I'd be fine with that.
Re: (Score:2)
If we could get an AI that can kill for a parking space, I'd be fine with that.
If you go through the quiz.. many of the situations involve people crossing the street against the hand... sometimes this was the only difference between two groups and you had to choose which to mow down. This reminds me a lot of Seattle too. If people on foot knew that a driverless car isn't going to stop if they were crossing illegally they might think twice about stepping off that curb.
Re: (Score:2)
You'll need to program your car to hunt down Murray and the rest of the city council. Only then will their war on cars come to its inevitable conclusion.
But hey, if you want to let them keep replacing parking places with "parklets", it's on you. We're certainly not as hip over here on the Eastside, but at least we can find a parking spot.
Re: (Score:3)
Why does a self-driving car need a parking space?
Re: (Score:3)
Well, realistically, the car does need to park somewhere if the occupant is going to be working for eight hours, or shopping for over ten minutes.
But the autonomous car can drop someone off at work or the store, then drive a couple miles away to a central parking facility and wait to be summoned. The future parking facility could even be mechanized to rack-em-and-stack-em [smartparkingsolution.com] to maximize space.
Still better than the status quo (Score:2)
So most people think that it's good to sacrifice a passenger in order to save many pedestrians, but they wouldn't want the car to sacrifice them. It's clear then that if they were the driver in their own car, they would choose to save themselves rather than the 10 pedestrians they are about to mow down.
There are two future possibilities then:
1. Self-driving cars will sacrifice the driver, which means they will be programmed to be more ethical than they are today.
2. Self-driving cars will sacrifice the pedes
Re: (Score:2)
...Either way we're not any worse off, so what's the problem?...
It gets interesting when insurance is thrown into the mix. Who pays the insurance premiums for autonomous cars? The owner shouldn't have to because the owner is not the driver.
However, if the owner chooses an autonomous car that targets pedestrians, then perhaps the owner should pay at least part of the insurance premiums.
Re: (Score:2)
How often do you drive your HOUSE?
Re: (Score:2)
The exploitability thing is an interesting problem, but that's not what the article is about. It specifically talks about a "no-win situation" where the car has to choose between innocent pedestrians and innocent passengers. It also has a well-defined definition of ethics - utilitarian - which they found matches up well with what most of their survey respondents consider to be ethical.
I never said that nothing could go wrong, I said that the problem raised by the article is a non-problem.
Moral decisions (Score:2)
...Participants also balked at the notion of the government stepping in to regulate the "morality brain" of self-driving cars....
I dislike government regulation as much as (maybe more than) the next person, but....
Should all autonomous cars, regardless of make, have the same morality rules regarding who gets killed in an accident?
Or will I, as a pedestrian, need to be able to recognize the various brands of autonomous cars, know the morality of each, and decide which direction to jump in when one of those things is coming at me....
Hacker car chase scene (Score:3)
I'm just waiting for the next movie where the main character is being chased down either by a draconian government or some super hacker. The main character clearly knows the risk, so he's driving a 1969 Mustang, but suddenly, all the cars on the freeway start chasing him down and trying to run him off the road.
All Respectable Craft Have A Self-Destruct. (Score:2)
Wait what? (Score:2)
Whoa, big surprise (Score:2)
So, given the randomness and unpredictability of any specific situation, and given that any attempt at anything can fail, backfire, or be otherwise incomplete, living individuals prefer that effort be focused on survival rather than altruism.
You know, I don't often get to say that those around me make sensible decisions, but in this case, I'm overjoyed to say that finally, possibly for the first time in human history, there's actually a consensus regarding the one and only sensible choice!
$50 follow-up: w
How does the car know how many people are where (Score:2)
Other than the occupants of the vehicle itself, isn't everything just an obstacle to it?
Seems to me if it used that logic and protected the only known life forms (i.e. the ones in the vehicle), we're fine. Don't give it the information to create the dilemma. Can it be sure that a person is a person 100% of the time? If not, then the only person(s) whose lives it knows are in its hands are the ones inside it.
Red Herring (Score:2)
This debate is a red herring. An automated car would use its software and resources to avoid hitting pedestrians or other cars, but in the event it cannot avoid a collision, the safety of the passengers would come down to the construction and safety features of the car itself.
This is what we have now and it won't change once the driving is automatic. The physical structure of the car and things like seat belts and airbags will be responsible for protecting the occupants as best it can, but of course the
Comment removed (Score:3)
None of the above (Score:2)
Self piloted car less likely to be in that situati (Score:2)
This is a stupid argument (Score:2)
Nearly every case I can see has options something like: avoid the pedestrian by driving into the metal-spear tree artwork. First, the car should see the pedestrian long before and come to a gentle stop. If the pedestrian jumps out from concealment, then they deserve
The Trolley Problem (Score:3)
This is the same as the Trolley Problem, a famous philosophical dilemma, first proposed in 1967: https://en.wikipedia.org/wiki/... [wikipedia.org]
Basically, a runaway trolley is going to kill five people. You can either do nothing and let the trolley kill them, or pull a lever to switch it to another track on which it will kill only one person. There are many variations, including one in which you push a fat man onto the tracks to stop the trolley. Philosophers have written a LOT about it. Here are some humorous variations:
http://existentialcomics.com/c... [existentialcomics.com]
https://xkcd.com/1455/ [xkcd.com]
http://www.mcsweeneys.net/arti... [mcsweeneys.net]
I think this is ethically easy and simple (Score:3)
A computer should serve its owner's interests with absolute priority over the interests of all other parties. Period. If it's my computer -- my agent -- then I am #1. By default (without my interaction) it should allow a million children to slowly burn to death if it means that I get to skip an ad. (That's a ludicrous example, but if people want to explore the edge cases of the policy I'm advocating, then there you go.)
You're going to find that this strongly favors protecting other people anyway. The "someone must die, pick who" scenario is extremely rare to the point of non-existent, compared to the routine "avoid having any collision at all, so that no damage or injury happens" scenario. (Stop smoking before you drive yourself crazy with fear of being hit in the head by a meteorite!)
That's not a global policy; that's just the policy for my computer. I don't mean I'm more important than you; I mean that to my computer I am more important than you. And your computer should serve you, too!
Re:contrived examples (Score:5, Insightful)
These scenarios are just a little bit contrived... I can't fathom any real life scenario where any of these situations would occur with the odds of both options being equal, which is the point where the software would be called upon to exhibit a preference of one option over another.
Exactly. Why don't people discuss the millions of small decisions - "how quickly shall I go through this stop sign?", "should I signal this turn or is it too much hassle?". Those are where the existing human software is causing bad consequences on a daily basis.
No, let's discuss the one in a billion corner case instead.
Re: (Score:2)
Agreed. This ridiculously overplayed scenario is the Y2K scare of our time. Watch... nothing will come of it, because (gasp) computers are so bloody fast, they'll have been slowing down in a dangerous situation long before a human driver was aware of the problem. Or they'll be able to get away with simple braking, again, because they can react instantly, perhaps not avoiding a collision, but avoiding a fatal injury. But one thing a car AI must *never* do is decide to sacrifice the passengers, no matter
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
These scenarios are just a little bit contrived... I can't fathom any real life scenario where any of these situations would occur with the odds of both options being equal, which is the point where the software would be called upon to exhibit a preference of one option over another.
Maybe the autonomous car should not have decided to put itself in the situation where it has to make a last second decision.
If a deer runs in front of a speeding car, that is pretty much an unpredictable incident (although one could argue, let's not). It's unwise to swerve to miss a deer in such a situation; it's better to just drive straight and run it over if need be. But what if the deer is wearing pants?
Re: (Score:2, Interesting)
where everyone's car knows when every toddler on Earth decided to wander onto a road?
No.... if a toddler wanders onto the highway, then that toddler's parents just committed a homicide.
What do you think is going to happen?
Replacing human-driven cars for self-driving ones doesn't change that.
And no, the self-driving car should not facilitate endangerment of more people in the event that a toddler wanders into the road.
Also, in a world where that happens, parents become more careless (If they think
Re: (Score:2)
I think a well-programmed self-driving car knows to slow down in the close proximity of pedestrians.
Re: (Score:2)
It's the fault of multiple people. A. Road maintenance crews, for not keeping the streets safe enough to prevent a blowout and black ice (no salt?). B. The driver of the vehicle, for not making sure their tires were in proper condition and that the roads were in safe driving condition. Also the driver, for going fast enough for a blowout on ice to give them enough momentum to carry them into a playground (what the fuck it's doing directly next to the road for an accident to happen is beyond me; most playgrounds ar
Re: (Score:2)
Meet Christy Sheats - white, Baptist, Republican Trump supporter and Second Amendment activist:
https://s3.amazonaws.com/everi... [amazonaws.com]
This is what she did yesterday:
http://www.nydailynews.com/new... [nydailynews.com]
Re: (Score:2)
Re: (Score:2)
A "family spat" is when you disagree with your uncle about whether Bart Starr or Aaron Rogers is the better quarterback.
When you murder two family members, it's a shooting spree, not a "family spat". The fact that it was perpetrated by a radicalized Second Amendment activist is not just coincidental. She recently complained online about how Obama was going to come and take her eight guns away.
Re: (Score:3)
You'd be idiotic to purchase a car which might sacrifice your life or health in ANY circumstances.
If everyone followed that logic, the only vehicles sold would be SUVs, with a speed limiter to 20mph.
Re: (Score:2)
It's good and very useful to ask such hypothetical questions, but only in certain contexts. It expands people's view on what they are and should be doing, and helps guide distant future decisions.
But these are mostly useful only to those inside the industry. When you start to quiz people on their preferences, as if such an imaginary hypothetical vehicle exists, pretending the answers matter to anyone, anywhere, for anything, you're just doing a lot of mental masturbation.
So tell me: How many angels can d
Re: So Republicanism wins again (Score:2)
Wait a minute... Aren't the republicans the pro-life party?
Federal BS is still BS (Score:2)
They already are "superheroes" (Score:2)
When's the last time you saw a manually operated air bag?
In these high-speed, blink-of-the-eye situations, cars already perform as if they were superheroes, and that is exactly what we want. We are greatly outclassed by machinery in most high-speed tasks, and this will become ever more so, because it is to our advantage.
From the perspective of an automaton, choosing between alternative outcomes in the event of an imminent crash is no hard
Re: (Score:2)