The Problem With Self Driving Cars: Who Controls the Code? (theguardian.com) 235
schwit1 writes with Cory Doctorow's story at the Guardian diving into the questions of applied ethics that autonomous cars raise, especially in a world where avoiding accidents or mitigating their dangers may mean breaking traffic laws. From the article: The issue is with the 'Trolley Problem' as applied to autonomous vehicles, which asks, if your car has to choose between a maneuver that kills you and one that kills other people, which one should it be programmed to do? The problem with this formulation of the problem is that it misses the big question that underpins it: if your car was programmed to kill you under normal circumstances, how would the manufacturer stop you from changing its programming so that your car never killed you?
make it user-selectable (Score:2)
Re: (Score:3)
Would you buy a car that came equipped with an explosive that would, under certain circumstances, explode and kill the driver?
This whole "trolley problem" is bullshit.
From TFA, Doctorow uses the "trolley problem" to get to a different point:
Nice switch there but the basis is still bullshit. No one will buy a machine that has code in it specifically designed to
Re: (Score:2, Insightful)
Not only that, another big problem with the "trolley problem" is that it doesn't pass the "could a human driver do better" test.
It assumes that you have lost control of the vehicle to the extent that you can only select between two choices. While every driver will claim that they are superior, it is all just bullshit and they will be unable to make a choice at all in those situations.
The main point of automatic drivers is to not get into or cause situations where you don't have control of the vehicle and
Re: (Score:2)
Dr. Lotfi Zadeh [wikipedia.org] can help us slay that spherical cow!
Re:make it user-selectable (Score:4, Insightful)
If a child can run into the street from a blind spot faster than you can brake, you're driving too fast. The autonomous car will not drive too fast to brake in such a situation.
If the child is deliberately hiding in a place it shouldn't be, near a higher-speed road, and manages to quickly cross what should usually be a wide clear area around such roads (or in an unpopulated area), then there won't be time for a human to react at all. An autonomous car could probably cut down the speed a bit, but avoiding people who deliberately try to throw themselves in front of traffic simply isn't doable, or even something to care that much about. You're not going to be able to avoid jumpers, or the human cannonball either, nor is an autonomous car. As the other reply pointed out, whether it's wildlife or people, usually the best option is to simply do a predictable controlled brake, unless it's basically a slow-motion situation playing out over many seconds (such as road conditions making braking very ineffective while speed isn't particularly high).
Re: make it user-selectable (Score:5, Insightful)
What you are taught when you learn how to drive is "always brake, never swerve". Which also happens to be exactly what a self-driving car would and should do in an emergency situation.
Reformulated as a trolley problem, this is: do not pull the lever.
Why? Because swerving will almost always put the car in more danger for itself and others, since it may cause the car to spin, or approach cars in incoming lanes or pedestrians on the footpath, or any number of bad things.
Since, by law, other cars must maintain a safe distance behind you, slamming on your brakes is always a safe response to an obstruction. Any other response, such as changing lanes, should be evaluated on whether it is safe or not. If it is unsafe, then you shouldn't do it; things like "buses of schoolchildren" are completely outside the parameters of the problem.
This whole self-driving car issue took a wrong turn the day someone decided that they had the moral imperative to break the law in order to prevent crashes. You do not have that imperative yourself; you have the imperative to follow the law in order to prevent crashes. The reason that two drivers can pass each other at enormous speeds without prior communication is that the law forces them into their lanes. The law is the protocol that all drivers follow to interact, since they are unable to talk to each other, and as soon as you break it, you expose yourself and others to untold danger. If the law says "don't leave your lane" and best driving practice says "don't swerve", then you don't swerve.
Re:make it user-selectable (Score:5, Insightful)
To begin with: if everyone sticks to the rules of the road and drives normally, there is very little chance of an accident occurring. If an exceptional situation occurs, the fault of an ensuing accident primarily lies with whoever caused that exceptional situation (even if it's unintentional). If someone's tyre blows and they swerve into your lane as a result, if a child chases a ball into the road, or if a cyclist runs a red light in front of your car, you'd (probably) do everything to avoid a crash and so should a self-driving car, but you are not under any moral obligation to drive yourself into the side of a building in order to avoid the other car, child or cyclist. Self driving cars should operate under the same premise: it should never be considered necessary to sacrifice the driver.
If something unexpected happens, cars might follow a protocol similar to this one:
1) Stay in your lane and come to a controlled stop.
2) If a controlled stop will not prevent a collision (and this is something that self-driving cars should be able to assess fairly accurately), change to a different lane if there is an unobstructed one.
3) If there are no unobstructed lanes but the road ahead is clear and the local speed limit is below x, change into oncoming traffic.
4) If all else fails, reduce speed as much as possible and allow the collision to happen.
These are not meant to be complete and valid for all situations, it's just to give an idea of how such "laws" could be formulated, in the form of a decision tree that self-driving cars would be able to follow, without having to make complex judgment calls or difficult moral decisions. And I can well imagine that a basic set of such rules will be set into law so that all self-driving cars will follow the same basic protocol. As a driver you'd have little incentive to change the programming in your favour, and if you did, it would become immediately apparent as soon as you're involved in an accident.
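A protocol like steps 1-4 above could be sketched as a short decision tree. This is purely illustrative: the function name and its inputs (`can_stop_in_time`, `clear_lanes`, and so on) are invented for the example, standing in for hypothetical sensor/planner outputs rather than any real vehicle API.

```python
# Illustrative sketch of the four-step emergency protocol above.
# All inputs (can_stop_in_time, clear_lanes, ...) are hypothetical
# sensor/planner outputs, not any real vehicle API.

ONCOMING_MAX_LIMIT = 50  # the "x" in rule 3: highest speed limit (km/h) at which
                         # using the oncoming lane is permitted

def emergency_action(can_stop_in_time, clear_lanes, oncoming_clear, speed_limit):
    # 1) Prefer a controlled stop in the current lane.
    if can_stop_in_time:
        return "brake in lane"
    # 2) Otherwise move to an unobstructed lane, if one exists.
    if clear_lanes:
        return f"change to lane {clear_lanes[0]}"
    # 3) On slow roads, use the oncoming lane if it is clear.
    if oncoming_clear and speed_limit < ONCOMING_MAX_LIMIT:
        return "change into oncoming lane"
    # 4) Last resort: shed as much speed as possible before impact.
    return "brake hard, accept collision"
```

The point of the decision-tree form is that every branch is mechanical: no branch weighs one life against another, which is exactly what would make such a protocol easy to standardise and to audit after a crash.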
Re: (Score:2)
I'm not criticizing your protocol as I know it's merely hypothetical, but even un
Futurama Suicide Booths (Score:2)
That's precisely the point: sideswiping the car next to you is such a risky manoeuvre that neither machine nor man can probably make that judgment call very well, with any vehicle.
That depends - if there is an oncoming lorry in your lane and the only way to avoid it is to sideswipe the vehicle next to you, that's what I would do to avoid what looks like impending death. That's what I would expect a self-driving car to do too: take whatever it perceives to be the lowest risk to the health of the car's occupants, because that is what a human driver would do instinctively. Do otherwise and you are only one step above Futurama's suicide booths. How long will it be before you get some i
Re: (Score:3)
For instance, the proper response to an impending accident might be to sideswipe the car in the adjacent lane, but only if it's actually a vehicle that could take the hit, as opposed to a motorcycle. ...
Driving is a far more complex activity than a lot of people realize,
Which is why we'd drive over a group of nuns right now. Driving may be complex, but human decisions are ultimately entirely based on luck, self-preservation (which is why the seat BEHIND the driver is the safest in the car), and whatever you're able to do with your hands in a sheer moment of thoughtless panic.
The perfect computer will be no different. It won't come down to side swiping or some major calculation onto the future prospects or occupancy of the lane beside you, it won't be a case of can you safely s
Re: (Score:3)
Why is there a car in the adjacent lane in a high-speed situation with objects that can conceivably exhibit behaviour that could cause an impact faster than you can do a controlled brake? Sounds like you're driving too fast and tailgating someone while you're passing someone else. How about, you know, not doing that? Accident avoided.
The trick to avoiding serious accidents is not being able to make complex judgement calls in an emergency; humans suck at that, and life isn't Groundhog Day where you get to pract
Re: (Score:2)
Use your imagination. Some other potential causes of that situation: accidents in other lanes resulting in quick and unexpected vehicles/debris/people in your lane, mechan
Re: (Score:2)
I think it's simpler than this, because the computer will never predict the future with 100% accuracy. For example, if there's an object in the road, is it an inanimate object that dropped off a truck? Is it a child running across the road after a ball? Is it a wild animal that could run scared? And the manufacturer doesn't want any legal liability for making the accident worse. So I think even if it's 99.9% "change into oncoming traffic and it'll be okay" and 0.1% "giant manslaughter fuck-up" they'll default
Re: (Score:2)
That's what I think too. The manufacturers will write code that will keep THEM out of trouble with the local legal system. In most cases it will avoid killing the driver as well, but there's no way that they're going to make the car software swerve off the road and mow down a queue at a bus stop to preserve the life of the driver.
People are saying it's an ethical issue. No, it's primarily a legal issue: the programmers/company execs will keep themselves out of prison, and everything else is secondary.
Re: (Score:3)
but you are not under any moral obligation to drive yourself into the side of a building in order to avoid the other car, child or cyclist. Self driving cars should operate under the same premise: it should never be considered necessary to sacrifice the driver.
While you are correct, don't think for a minute that there won't be a lot of pressure to give a hierarchy weighting to decisions made by the cars.
We make fun all the time by saying "Think of the Children", but I'll bet if you polled a sizable group of people in some hypothetical "you could save one person, an adult or a child" scenario, probably most would save the child and leave the adult to die.
All of which is to say, if there is the possibility to apply weighting to a vehicle's collision avoidance sys
Context Always Required (Score:2)
We make fun all the time by saying "Think of the Children", but I'll bet if you polled a sizable group of people in some hypothetical "you could save one person, an adult or a child" scenario, probably most would save the child and leave the adult to die.
True, but suppose we change the "adult" to your wife/husband who is in the car with you and the "child" to a 12 year-old running away from a policeman and who dashes into the street. Do you swerve into the oncoming lorry and kill the adult you love or hit the child who was probably a delinquent? In that situation I think you'd get far more people saying they would protect the adult they love over the child: the closeness of the relationship with the people affected is a huge factor.
This is the problem w
Re: make it user-selectable (Score:3)
The problem with any of those rules is that they will almost never come into play, and they reflect a human driver's decisions based on human reaction time.
A self-driving vehicle can anticipate an accident much faster than any human driver, and even as a human you can often come to a complete stop, or decelerate enough, in cities before the worst has happened.
Sure, the kid and the cyclist might get hit (or, more likely and in most situations, they hit your car), but they most likely won't die from the impact (and if they do, it's not your fault).
Re: (Score:2)
A bus full of nuns?
I once rescued a bus full of cheerleaders. I rescued them three times, if you know what I mean.
Batmanuel
Re: (Score:2)
And this is a good thing: your property should never choose something else over you.
Re: (Score:2)
No, it cannot be user-selectable unless cars that have chosen the user-first mode are not allowed on public roadways. When you drive on public roadways you accept that you have to obey certain "rules of the road", such as speed limits. I do not want anyone (obviously some hackers are going to hack the cars, but that is a very small percentage) being able to override the safety and rules-of-the-road features built into autonomous cars. That also impacts insurance issues, since someone that chooses unsaf
Re: (Score:3)
Who said selectable? In any event it's safe to assume the fully automated car is obeying all related traffic laws etc. Obeying all the laws does not mean it will never be in a situation where it has to choose. Easy example: a rural road with a 50mph speed limit and guard rails on each side; a kid pops out through a hedge and runs into the road after a ball or whatever, and you have an oncoming truck doing 50. Do you hit the kid, or do a head-on into the truck? You have no time to effectively brake, or any forewarning of the event
Re: (Score:2)
You entirely missed guard rails on each side (no breakdown lane/shoulder) and no distance to react (kid comes through bushes) in my rather contrived scenario. Thus no room to put her sideways and avoid putting the front/back end into the truck. It really does not matter, as even with your 3rd option you're still looking at a head-on with a truck, that's your fault; putting her in the ditch, that's your fault; or hitting the kid, that's not your fault. I'm quite familiar with "advanced" driving, raced quarter mile and
Re: (Score:2)
As I said, not enough time to signal etc; think 20 feet at 50mph. Kids jump guardrails all the time. At 20 feet at 50mph, with normal human reaction (forget processing time), you have already hit them before you realize they are there. Jersey barriers do very well being run into from the side; you do not want to hit wire rails with a modern car, it starts looking like a cheese wire; your fairly standard galvanized metal ones have a lot of give, but I still don't want to hit one head on, again would rather scrape the sid
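As a sanity check on the numbers in this sub-thread, basic kinematics backs up the "20 feet at 50mph" claim. The 1.5 s reaction time and 0.8 g braking deceleration below are rough textbook-style assumptions, not measured values:

```python
# Rough stopping-distance arithmetic for the "20 feet at 50mph" scenario.
# Reaction time and deceleration are generic textbook-style assumptions.

MPH_TO_MS = 0.44704   # miles per hour -> metres per second
FT_TO_M = 0.3048      # feet -> metres
G = 9.81              # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mph, reaction_s=1.5, decel_g=0.8):
    v = speed_mph * MPH_TO_MS
    reaction = v * reaction_s             # distance covered before braking starts
    braking = v ** 2 / (2 * decel_g * G)  # distance covered while braking
    return reaction + braking

v = 50 * MPH_TO_MS      # ~22.4 m/s
gap = 20 * FT_TO_M      # ~6.1 m
print(f"time to cover 20 ft at 50mph: {gap / v:.2f} s")   # far below human reaction time
print(f"full stopping distance:       {stopping_distance_m(50):.0f} m")
```

At roughly 0.27 s to cover the gap, against a total stopping distance on the order of 65 m, neither a human nor a computer avoids that collision; the only question is how much speed gets shed first.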
Re: (Score:2)
You're going down the road, you'll hit the guardrail at an angle and be just fine. Also, you should have noticed the kid climbing up and getting on the guardrail. If you can't see that, you're driving too fast for the conditions.
You might not be familiar with my comments but I've been rather staunch, to the point of offering bets, that claim AV is not going to get here for a very long time - if at all. (Partially autonomous is already here, fully autonomous isn't going to be here on the road, in any great n
Re: (Score:2)
You're absolutely right. Everything that ever happens to anyone is his own fault.
We should all go no more than 1mph ever because a stork carrying a baby might drop it into the path of our cars and we have to be able to avoid it in time. If you hit the stork-baby when it suddenly appears in front of you, you were driving too fast for conditions.
Insurance Companies (Score:3)
A home spun vehicular AI will be considered an unknown risk and will be treated as such by the insurance companies. If you want to roll your own vehicular AI, feel free but you'll be responsible for having the AI rated and
Will void privilege of taking car on public road (Score:2)
(1) The vehicle will no longer be in compliance with regulations that permit the privilege of taking your vehicle onto a public road. Merely driving it on a public road may make one vulnerable to civil charges, loss of driving privilege, confiscation of vehicle, etc. Much like drunk driving regardless of whether an accident occurs or not.
(2) It will void your insurance coverage and make you fully liable for anything that occurs. There will probably be no
So ban human drivers? (Score:2)
No, it cannot be user-selectable unless cars that have chosen the user-first mode are not allowed on public roadways. When you drive on public roadways you accept that you have to obey certain "rules of the road", such as speed limits.
You do realize that this would automatically disqualify all human drivers, right? Humans will always prioritize themselves and will not always obey the rules of the road. A computer which prioritizes its occupants but which always obeys traffic rules would still be a huge improvement.
Re: (Score:2)
You've already lost. The self-driver will identify roaming people and limited visibility, and slow. When the "a kid steps out in front of you and your choice is come to a complete stop before them under control, or accelerate wildly into the child, killi
Re:make it user-selectable (Score:5, Insightful)
I'd also have it sound the horn and flash the lights.
And this is, IMO, exactly what will happen.
Remember, these vehicles have complete records of everything that is happening around them at all times. Everything that can be recorded, that is. So the insurance companies will have exact records of how the robot was 100% within the law AND had taken every REASONABLE response to mitigate the collision.
The robots do not have to be 100% at determining whether your life is worth more/less than someone else's. They just have to be 100% at showing that they were following the law and attempting to avoid the collision.
The legal system and the insurance companies will sort out the rest. And the insurance companies will pay to have the legal system write the new laws to reflect that.
Re: (Score:2)
This response is exactly what will happen.
If there is no way to avoid an accident, the car will attempt to stop in its lane as quickly as possible. There is no other conceivable way this could work due to the extreme liability any other decision would imply.
This will in most cases greatly minimize the forces involved in a collision as well.
Re: (Score:2)
Agreed. Stop in your lane.
I call it the squirrel problem.
Anyone who has tried to avoid running over a squirrel knows that squirrels' panic mode is to dart back and forth, so no matter where you point your car or swerve, you wind up squishing the squirrel anyway. You're better off continuing in a predictable path so the squirrel has a chance of solving the problem with its superior speed and reflexes.
Are pedestrians significantly smarter than squirrels? Perhaps in Manhattan, where everyone is a pedestrian an
Re: (Score:3)
Would you buy a car that came equipped with an explosive that would, under certain circumstances, explode and kill the driver?
You mean like an airbag? https://www.google.co.uk/searc... [google.co.uk]
Re: (Score:2)
FTFY. Prove that there isn't code in all Japanese made vehicles sold in America designed to kill their passengers on a certain day at a certain time.
That would be a Herculean task if the source was Open. With closed source firmware and "Trusted Computing" implemented (i.e. You can trust that the code you are running is the code they want you to run, but not necessarily the code you want to run; it says nothing ab
Re: (Score:2)
FTFY. Prove that there isn't code in all Japanese made vehicles sold in America designed to kill their passengers on a certain day at a certain time.
That would be a Herculean task if the source was Open. With closed source firmware and "Trusted Computing" implemented (i.e. You can trust that the code you are running is the code they want you to run, but not necessarily the code you want to run; it says nothing about trustworthiness of the actual code), it is impossible.
Calling BS on this one. No one has time, and few would have the ability, to meaningfully audit all the code in systems affecting their lives. Thus, "auditing" is only as good as the chain of trust it represents... Open source gets you nothing except better post-mortems (no pun intended).
Given that trade-off, I'd actually prefer trusting that *manufacturer-intended* code is indeed running than trusting that OSS/many-eyes auditing has caught fundamental errors. I can sue a manufacturer and there's process for
There will probably be inspections (Score:2)
Either annual or ongoing; if easily enough done, the police would probably do it. If you change the code so the car doesn't, say, drive off a cliff instead of straight into the middle of a class of school girls (just to make it clear, I'd drive into the kids; it's my car after all, and facing the choice between killing a dozen kids and me, the rugrats croak), then this is an illegal modification of your car, and it is no longer considered safe for traffic and is shut down.
Re: (Score:2)
It will not drive off a cliff if it is aware of the cliff under any circumstances, it will instead come to a stop before the road ends.
If stopping in time is impossible as something was basically dropped into its path, it will end up hitting the object at the lowest speed it can achieve. It will never intentionally hit anything for any reason at all, and my expectation is that they will be very good at this. Accidents so far always involve the automated car being struck by rather than striking an object f
Re: (Score:2)
If my brakes malfunction, it's an accident. Period. I may be charged if it can be shown that I could have avoided it, but in the end, accidents happen. And I don't know about your country, but in mine you do not have to endanger your life to protect or save that of another person. It is considered normal behaviour to avoid damage to yourself. Of course proportionality plays a role (you won't get away with driving through that school class to avoid the deer whose antlers would have caused more damage to your
Re: (Score:2)
but in the end, accidents happen.
Less than 1% of the time, so anyone looking for a mechanical fault is excusing his own known bad driving.
Re: (Score:2)
The meme "Accidents happen," in the driving of automobiles arose from a campaign of collusion between various interests in the motor vehicle, oil, and insurance companies as a strategy to clear the roads of pedestrians, equestrians, and cyclists, so that motorists could live the "dream of freedom" promised to them in automobile advertisements.
There are not nearly as many accidents in the operation of motor vehicles as there are collisions caused by negligence or malice, mislabeled as accidents in order to a
stupid question (Score:2)
Re: (Score:2)
or how does the manufacturer stop me from simply driving too fast? In an age where most cars have country-specific software & hardware modifications, it makes zero sense for a car to be able to go (much) faster than the maximum allowed speed limit.
You are simply applying too much logical thought to the problem. First off, there are some freeways where the top speed is 55 and others where it's 70+. But limit a vehicle to 75-80 and sales will drop dramatically. This is true on economy cars (many of which can't go that much faster anyhow) on up. Simply put, it's not profitable.
Re: (Score:3)
Sorry I don't accept that limitation. If my Corvette could only go 80 or 90 why would I of bought it? At that point there would be nothing to distinguish it from a Prius.
I grew up in a racing family. Horspower and Torque are fun. Cars that are speed limited or drive themselves equal zero fun. Lifes too short to deprive yourself of an flying down the road in an open cockpit car on warm summers day. Damn, I ha
Re: (Score:2)
Why would I HAVE bought it? Where did this (relatively) new bit of illiteracy come from? And when? Are they really teaching this in school, or are more people getting through HS/College without ever having to write anything?
Re: (Score:2)
3 bowls of Purple Kush, 2 bottles of Guiness after an 11 hour day. Hell, it's a wonder that it's written as well as it is.
I will draw the line at whacking me across the knuckles with your ruler however.
It's the weekend, roll with it!
Re: (Score:2)
All cars in Japan are limited to 180kph/114mph by a gentleman's agreement between manufacturers and the government. Since about 2000 sports models have used GPS to detect when the car is at a track and disable the limiter.
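A GPS-gated limiter of that sort could look something like the sketch below. The circuit coordinates and the 3 km geofence radius are made-up illustrative values, not how any manufacturer actually implements it:

```python
# Toy sketch of a GPS-gated speed limiter: the 180 km/h cap is lifted
# only when the car's position falls inside a known race-circuit geofence.
# Coordinates and the geofence radius are invented for illustration.
import math

LIMIT_KPH = 180
KNOWN_TRACKS = [
    (36.5640, 140.2260),  # roughly Twin Ring Motegi (illustrative)
    (34.8431, 136.5410),  # roughly Suzuka Circuit (illustrative)
]
TRACK_RADIUS_KM = 3.0

def distance_km(a, b):
    # Haversine great-circle distance between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def speed_cap(position):
    """Return the enforced cap in km/h, or None when inside a track geofence."""
    at_track = any(distance_km(position, t) <= TRACK_RADIUS_KM for t in KNOWN_TRACKS)
    return None if at_track else LIMIT_KPH
```

Note the obvious weakness: anything keyed off a GPS fix can be spoofed, which is the same tamper-resistance problem the rest of this thread is worrying about.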
Re: (Score:2)
What if you cross a border where the "maximum speed limit" is higher (or lower)?
What if the "maximum speed limit" is changed?
How do you prevent someone from tampering with the setting?
On the other hand, going above the "maximum speed limit" of a country (or state) is not the most unsafe type of speeding. It's much more problematic to speed in locations where the actual speed limit is lower than the "maximum speed limit".
Another problem. Multiple codebases. (Score:2)
Another thing: how are these various autonomous car software platforms going to interact with one another?
It's one thing to build in recognition protocols for your own vehicles, so that multiple vehicles of the same type act in a concerted manner.
But what happens where you have four or five different codebases? How are the notoriously closed car manufacturers going to deal with car behavior from another system?
I can foresee some rather nasty interactions. Head-on collisions where one car tries to avoid by
No Responsibility, No Freedom (Score:2)
This is just sensationalism. The real issue is that, if people are willing to give up their responsibilities to control a vehicle, they necessarily give up their freedom to decide how that vehicle behaves in certain situations. If you want to decide how a vehicle behaves there are probably two options: get a manually operated vehicle, or build your own "automatic" vehicle with your own rules. But good luck on that latter; just as there are regulations on acceptable behavior with manually-operated vehicle
A huge hurdle for autonomy (Score:2)
Re: (Score:2)
It sounds like what you want is a pre-purchase system where you buy life credits. Your daughter is paid up with "insurance" so when the car has the choice of h
It's a dumb question (Score:5, Insightful)
Who controls the code? Maybe you, maybe them. If you tamper with it, you're responsible. Otherwise, they will probably be responsible for its behavior. But the computer is not going to be "programmed to kill you", that is bollocks. The computer is going to be programmed to follow the law. That means that it's going to be less likely to be at fault in an accident to begin with, that it's going to be more likely to successfully mitigate the accidents it does get into, and it means that rather than being programmed to kill you, it's going to be programmed to stay in the lane and hit the truck rather than swerve and hit the pedestrian because to do otherwise would be illegal — not just because of the pedestrian, but because of the lane marking. That is not remotely the same thing.
The car will be programmed to do its best not to kill you, and that's going to take the form of yielding gracefully to fuckheads rather than getting in an accident to begin with.
Re: (Score:2)
Actually that is an interesting situation you outline. In the UK if you did swerve to avoid the truck you wouldn't be liable for the death of the pedestrian. The truck driver would be. They created a situation that caused you to react instinctively to preserve your own life, when you couldn't reasonably be expected to choose suicide.
Such a situation is unlikely because speed limits around pedestrians are low, but the point is that legally speaking acting to save your own life is unlikely to make you liable
These discussions are getting dumber (Score:3)
Who controls the code? They do. Just like they do now. How many people here are actively changing the code in their cars? Legally they are responsible for it.
Oh you chipped your car? Well now you're responsible.
Seriously the anti-car crap is getting ridiculous, as is the question of ethics. Car making a decision to kill the driver? Car breaking the road rules? When every car is driven perfectly according to the rules the death rate will be decimated and bystander accidents will be treated in the same way as any other idiot stepping into a cage with a running robot arm.
I don't understand how people have made this so complicated.
Re: (Score:2)
If it only decimates it I will be sorely disappointed, I expect automated cars to do much better than that.
Incidentally, I have programmed that robot in a cage. Mine stops moving if you trip an optical sensor on the way in (possibly damaging the robot due to the application of too much force in the process).
Re: (Score:2)
It may do slightly better than decimate, but a large portion of road accidents actually have nothing to do with the driver or motor vehicle, e.g. bicycle, pedestrian, kangaroo etc. Actually, for all the near misses I've had over the years, the only things I've actually hit were a pedestrian and a kangaroo. Mind you, by *hit* I mean the pedestrian would have walked into me even if I was parked. There's only so much you can do.
I also used to work in a palletising area. I've seen a robot sensor fail to realise a full pal
Re: (Score:2)
Mine would be easily circumvented if someone wanted to do that intentionally, but I consider my job done if it takes an intentional bypass of two safety systems to mangle yourself.
Re: (Score:3)
The better question is who will QA/QC your car's code. The unintended acceleration episode is a good example of life critical code being poorly implemented, so can we trust the entire code base to the same guys? I am more afraid of coding bugs than moral weightings.
What's the correct answer for human driver? (Score:5, Insightful)
if your car has to choose between a maneuver that kills you and one that kills other people, which one should it be programmed to do?
How about you tell us what should a HUMAN driver choose in a similar situation first, before you ask what should a computer do?
These kinds of stupid questions are, well, stupid. And they come up often simply because there is no real valid worry about autonomous cars. Humans make lots of mistakes, and having a computer drive would remove a whole range of avoidable accidents. Worrying about a few boundary cases is as stupid as all the "what if my car is burning and I need to get out quickly?!" objections to wearing seat belts. It is unfounded fear that is not based on facts.
Re: (Score:2)
Why worry about a correct answer when we haven't even figured out a possible answer?
What does a human do? Slam on the brakes and if they are super alert with above human reflexes they may even decide to turn the wheel in a sensible direction, though chances are if they turn the wheel it will be in a random direction.
Let the computer do the same thing. Hit all pedals at the same time and let physics decide who dies.
Re:What's the correct answer for human driver? (Score:5, Insightful)
How about you tell us what should a HUMAN driver choose in a similar situation first, before you ask what should a computer do?
How about if we ask how often that situation has happened at all? How many drivers have ever been in the situation where their car was definitely going to kill someone, but the driver could decide whether the car would kill someone else or the driver? Now subtract the cases where the situation was created by something stupid that the driver did. Then subtract the cases where the driver has a choice, but no chance to react fast enough to make a conscious choice. I think we will come up with a big fat zero.
c.f. "I, Robot" (Score:2)
if your car has to choose between a maneuver that kills you and one that kills other people, which one should it be programmed to do?
How about you tell us what should a HUMAN driver choose in a similar situation first, before you ask what should a computer do?
^ THIS.
Cory's article completely misses the point. Or rather, he brings up the Trolley Problem and then moves on to his own point. The reason it's an ethical dilemma is that it brings up ethical issues. That dilemma doesn't change just because a computer is involved; it just shifts the burden to the system. An obvious solution that would probably occur for the first 10-15 years is "transfer control back to the human in the event of an emergency", which of course just puts us right back where we started.
My b
Why is this an issue? (Score:2)
In Switzerland, you are required by law to help if you see a person in danger. However, it is understood that you are to make sure that you can operate safely first. It makes no sense to go in with the best intentions only to produce a second victim for the firefighters to rescue.
Thinking this through further, it is clear that your car cannot take responsibility for other participants in traffic, since it cannot control them. It will save your life at all cost. Now if the decision lies between possible injury of you
Much Ado about nothing, considering TRAINS (Score:3, Insightful)
How much sleep have you lost over the engineering decision to make trains so large and heavy that they simply CAN'T stop for pedestrians and other vehicles? Yeah, I thought so. People will kvetch about how self-driving cars are programmed right up until they become everyday objects, and after that they'll be just as accepted (benefits AND dangers) as trains are today.
Red Herring; real threat is detainment (Score:3)
The Trolley Problem is a red herring that distracts from the real danger: government remote-controlled detainment of political opponents, as depicted in Minority Report [youtube.com]. Plus any number of variations: script-kiddie hacking, drug cartel kidnapping, kidnapping/trafficking of women and children, murder-for-hire (drive off a cliff), nation-state espionage and assassination. When major crimes, and not just credit card scams, become available at the push of a button, the risk threshold for heinous crimes is lowered.
What other decisions will be forced ... (Score:2)
If I am forbidden from hacking my car's software will I be unable to stop it when:
Re: (Score:2)
If I am forbidden from hacking my car's software will I be unable to stop it when:
You can stop the adverts by purchasing a "no adverts" upgrade.
Seriously, there probably will be a kindle-like discount if you allow ads.
Re: (Score:2, Funny)
Re: (Score:2)
Even at a cross walk, they often just start walking without any regard for courtesy or the laws of physics.
Sounds like a reckless driver at fault. The crosswalk is a right-of-way marking, and it should be approached in the same way as an intersection with a give-way sign that is frequented by semi-trailers.
The bigger-er question (Score:2)
it misses the big question
A better question for an IT forum would be to ask how the hell you test whether the implementation (of which party to kill) works as designed.
It should be immediately obvious to anyone in a capitalist society that who dies is a cost-option. Let's say that opting to save the car's occupants comes at a $1million price premium on the cost of the vehicle.
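To make the testing question concrete, here is a purely hypothetical sketch. None of these names or numbers come from any real vehicle codebase; the point is only that if the "who bears the risk" decision lives in one isolated function, it can at least be exercised deterministically and audited:

```python
# Hypothetical sketch of testing a collision-mitigation policy in isolation.
# All names, risk numbers, and maneuvers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    occupant_risk: float   # estimated probability of serious harm to occupants
    bystander_risk: float  # estimated probability of serious harm to others

def choose_maneuver(outcomes: dict) -> str:
    """Pick the maneuver minimizing total estimated harm.

    The weighting IS the ethical question: this sketch weighs occupants
    and bystanders equally, which is itself a policy choice a manufacturer
    or regulator would have to defend -- and which testing makes visible.
    """
    return min(outcomes,
               key=lambda m: outcomes[m].occupant_risk + outcomes[m].bystander_risk)

# A deterministic unit test: feed the policy fixed scenarios and assert
# on its choices, so the behavior is inspectable rather than emergent.
scenarios = {
    "brake_straight": Outcome(occupant_risk=0.1, bystander_risk=0.6),  # total 0.7
    "swerve_left":    Outcome(occupant_risk=0.5, bystander_risk=0.1),  # total 0.6
}
assert choose_maneuver(scenarios) == "swerve_left"
```

Of course, the hard part is that a real system's decision is buried in perception and control stacks, not a tidy function like this, which is rather the commenter's point.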
Easy: Destroy the Power Looms (Score:2)
Luddites had it right.
Stick with horses? (Score:2)
Luddites had it right.
Stick with horses? That's also a self driving vehicle that you merely issue driving commands to.
Human.. (Score:2)
A human driver will always choose self preservation even if it means killing others, so why should an autonomous car behave any differently?
Re: (Score:2)
Assuming sufficiently advanced AI, what's to stop the car from choosing itself over the passengers? Asimov's Three Laws?
VW (Score:2)
Hypothetical questions (Score:2)
Why in the world would we require autonomous cars to answer hypothetical questions on morality?
Three Laws of Robotics (Score:2)
The three laws of robotics state:
To me, the interesting ramifications of these laws in many stories, and one movie involving Will Smith, are more than enough to answer all questions reg
Good Question (Score:2)
Will Linux be used in the analysis?
Car DRM = dealer only service (Score:2)
Car DRM = dealer-only service, and depending on how evil they want to get, it could extend all the way down to tires, windshield wipers, oil changes, and lights.
Terrorists (Score:2)
If terrorist activity is detected, should the AI drive the car to the incident so the driver can assist in fighting the terrorists, or should it flee the area?
I'm voting for everyone piling on, and the AI could allow access to a locked weapon compartment.
Re:As someone who doesn't drive anymore... (Score:4, Informative)
...I find it very hard to understand how people drive in modern cars which have so much closed source programming.
How modern are you talking? I have cars which speak directly to this; all I'm lacking is a fully computerized car. I have a W126 300SD, which is all-mechanical down to a vacuum shutoff on the engine. I have a D2 A8 Quattro, which is fully electronically regulated and can't run without the PCM, but which still has a transmission (ZF5HP42) with an actual shift linkage and a limp mode, and hydraulic power steering (which is lovely, but ironically not quite as communicative as the W126, even though that has a recirculating ball box.) A truly modern car with e.g. a ZF9 doesn't have a shift linkage, and if the TCM goes up in smoke, so do your hopes of driving home. It also doesn't have hydraulic power steering, so if the power goes out while you're doing something tricky, it's going to get trickier. At which point do you pucker?
In the D2 you can still conceivably replace literally all of the computers with devices of your own design. In some european markets there is even a heater-only control unit that operates the flaps with bowden cables, as in cars of yore, which puts it ahead of the W126 body. The AT can be replaced with a six-spool which doesn't require any power, and people have megasquirted the ABZ before. You need either the ABS or an adjustable proportioning valve, though. The car doesn't have one because it has EBD, which you will be throwing away. It is possible to source a LSD for the rear, though; you can use your ring and pinion with the guts from the Audi V8's rear diff, which is a LSD (Torsen, IIRC, like the center.)
Re:As someone who doesn't drive anymore... (Score:5, Interesting)
Now I know how users must feel when I try to explain virtualization. I know most of these words, but it is very hard to make sense out of them.
Re: (Score:2)
Now I know how users must feel when I try to explain virtualization. I know most of these words, but it is very hard to make sense out of them.
I am completely guilty of trolling for that response, but not by adding any obfuscation; I only omitted the explanations I usually include for the Slashdot audience because they could have doubled the length of the comment :)
Re: (Score:2)
I have a W126 300SD, which is all-mechanical down to a vacuum shutoff on the engine.
Sounds like a major point of failure: vacuum line leaks, or vacuum pump or belt failure (I assume the vacuum pump is belt-driven), and your vehicle is dead. I'd advise buying a choke kit and hooking it up to the fuel pump's kill point. Makes it fun watching mechanics and such trying to figure out how to turn the engine off.
But yes, I loved having a purely mechanical Nissan diesel truck. Given a hill to start it on, it didn't even need a battery, though it would have been nice if it had a generator instead of an alternat
Re: (Score:2)
I have a W126 300SD, which is all-mechanical down to a vacuum shutoff on the engine.
Sounds like a major point of failure: vacuum line leaks, or vacuum pump or belt failure (I assume the vacuum pump is belt-driven), and your vehicle is dead. I'd advise buying a choke kit and hooking it up to the fuel pump's kill point.
If I were designing the system I would use a bowden cable for the shutoff, and I might well redesign it to do that in the future if I keep the car. But the vacuum system in these cars is actually fairly reliable, and parts are easy to come by. I've taken a whole circuit out of it, in fact.
Re: (Score:2)
My Nissan Leaf is 99% computer controlled. The accelerator is just an input to the ECU. The brake is an input at first, but if you push it far enough it does eventually engage a physical linkage. Without the computers the car is impossible to drive, and good luck replacing them with your own.
That's the future of cars I'm afraid. In many ways it's a good thing. More efficient, lower emissions, better safety. What we need are strong laws regulating it.
Re: (Score:2)
Re: (Score:2)
*snip*
I don't think you should judge people who drive modern cars more than you judge people relying on other tools.
I do not judge them. People may choose to do as they wish. I find it all very hard to understand, that's all.
With all due respect, the poster here has what I'd consider to be a modern form of OCD. Maybe we can call it FSF-OCD.
In theory, any given doorknob can be infected with MRSA or something else that will make you seriously ill, transmission of which could be prevented if you rigorously cleaned it each time before touching it. Also in theory, a device with a microcontroller may have an unknown safety-critical bug that others missed and that you might find if you audited the code.**
In practice, a "normal" per
Re: (Score:2)
I'll give you a hint: we have had fully computer-controlled manual gearboxes for a long time now. Running a clutch and changing gears is not that hard a problem. You get all the efficiency of a stick shift with none of the annoyance factor.
Re: (Score:2)
What ethics underpin the training sets used to develop the cars' rule base? Given a 'Trolley Problem', what scores were assigned to killing one group vs the other? What scores were assigned to killing you (the car's occupant) vs a pedestrian or cyclist?
Re:Of course make it Non-Free Software (Score:5, Interesting)
There's no reason to make the software non-free. It's not like the code will have some SacrificeBusfullOfChildrenToSaveDriver variable that some idiot can change, and even if there were, hardly anyone capable of reprogramming the car would be stupid enough to risk their lives on their own untested edits to the program. It would probably be a good idea to have signed software for security purposes, but that's different, and compatible with free software.
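For what it's worth, "signed but still free" is not a contradiction. Here is a deliberately simplified toy sketch (illustrative only; a real system would use public-key signatures such as Ed25519 and hardware-backed key storage, not bare hashes): the device checks firmware integrity against a trust list the *owner* controls, so verification doesn't require locking the owner out.

```python
# Toy sketch: integrity-checked firmware where the OWNER controls the
# trust list. Invented for illustration -- real secure boot uses
# public-key signatures, not a plain hash allowlist.
import hashlib

def trusted(firmware: bytes, owner_allowlist: set) -> bool:
    """Accept firmware whose SHA-256 digest is on the owner's allowlist."""
    return hashlib.sha256(firmware).hexdigest() in owner_allowlist

factory_image = b"factory firmware v1.0"
custom_image = b"owner-built firmware"

# The owner can enroll their own build alongside the factory one --
# that enrollment step is what keeps the scheme free-software-friendly.
allowlist = {hashlib.sha256(factory_image).hexdigest(),
             hashlib.sha256(custom_image).hexdigest()}

assert trusted(factory_image, allowlist)
assert trusted(custom_image, allowlist)
assert not trusted(b"tampered image", allowlist)
```

The whole DRM fight is over who holds the enrollment keys, not over whether signing happens at all.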
There is no need to worry about the moral dilemma of choosing who to sacrifice. The answer is simple and obvious to everyone -- pick the driver who never drives drunk, never drives sleepy or otherwise impaired, never gets distracted, has lightning reflexes, always drives carefully, and is less likely to kill everyone. Morally speaking, we want to start using self-driving cars, even if they are worse than the average driver, starting first with replacing arthritic old grandma with failing eyesight*.
* Roughly speaking, the morally correct thing to do is replace any driver whose insurance premiums (aka a professional estimate of actual driving ability) are higher than those of a self-driving car.
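The footnote's rule of thumb is simple enough to state as a one-liner (premium figures below are invented for illustration):

```python
# Sketch of the comment's rule of thumb: insurance premiums as a market
# proxy for crash risk. All numbers are made up for illustration.
def should_switch_to_autonomous(driver_premium: float, av_premium: float) -> bool:
    """Switch when insurers price the human as the bigger risk."""
    return driver_premium > av_premium

assert should_switch_to_autonomous(2400.0, 1800.0)      # risky human driver
assert not should_switch_to_autonomous(900.0, 1800.0)   # safe human driver
```

The interesting part is that this delegates the moral judgment to actuaries, which is arguably what we already do with human drivers.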
Re: (Score:2)
Re: (Score:2)
That is true. Also, some car manufacturers (only Honda that I know of for sure) have put a great deal of R&D into making their cars safer on the outside in case they run into pedestrians.
If you want to sell a car in Europe or the USA, you have to consider not only what happens in a crash, but also if you bounce a pedestrian off the front of it, or even off of one of the side mirrors.