Selectable Ethics For Robotic Cars and the Possibility of a Robot Car Bomb 239
Rick Zeman writes: Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability." Meanwhile, others are thinking about the potential large-scale damage a robot car could do.
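The "selectable ethics" settings the article describes amount to different weightings in the car's maneuver-scoring logic. A minimal sketch of that idea, assuming a hypothetical `EthicsMode` setting and illustrative risk weights that come from no real system:

```python
from enum import Enum

class EthicsMode(Enum):
    """Hypothetical owner-selectable settings from the article."""
    PROTECT_OWNER = "protect_owner"            # value the occupant's life over all others
    MINIMIZE_HARM = "minimize_harm"            # value all lives equally
    MINIMIZE_LIABILITY = "minimize_liability"  # minimize legal exposure/costs

def maneuver_cost(mode, occupant_risk, bystander_risk, legal_exposure):
    """Score a candidate maneuver; lower is better. Weights are illustrative."""
    if mode is EthicsMode.PROTECT_OWNER:
        # Weigh harm to the occupant 10x more than harm to others.
        return 10.0 * occupant_risk + bystander_risk
    if mode is EthicsMode.MINIMIZE_HARM:
        # Treat all harm equally.
        return occupant_risk + bystander_risk
    # MINIMIZE_LIABILITY: ignore harm per se, score only legal exposure.
    return legal_exposure

# Two candidate maneuvers in the same emergency:
swerve = dict(occupant_risk=0.6, bystander_risk=0.1, legal_exposure=0.3)
brake = dict(occupant_risk=0.2, bystander_risk=0.7, legal_exposure=0.2)
```

With these numbers, a "selfish" car prefers braking (lower occupant risk) while a harm-minimizing car prefers swerving (lower total harm), which is exactly why the setting is ethically loaded.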
Lasrick writes: Patrick Lin writes about a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." Lin explores the many ways in which robot cars could be exploited for nefarious purposes, including the fear that they could help terrorist organizations based in the Middle East carry out attacks on US soil. "And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real." But Lin isn't too worried about these threats, and points out that there are far easier ways for terrorists to wreak havoc in the US.
Re:Drivers already have variable ethics (Score:5, Insightful)
Because ethicists like making work for themselves -- it's unethical to wait for another disaster or human rights violation just so you can do more work!
Re:Insurance rates (Score:3, Insightful)
Scare of the day (Score:5, Insightful)
Been discussed before (Score:3, Insightful)
1) The cars' behavior will most likely be set by the company that sold them, with few if any modifications legally allowable by the owner.
2) Most likely ALL cars will be told to be mostly selfish, on the principle that they cannot predict what someone else will do, and that a swerve meant to save an innocent pedestrian might in fact end up killing that pedestrian. The article has the gall to believe the cars will have FAR greater predictive power than they will most likely have.
3) A human-drivable car with a bomb and a timer in it is almost as deadly as a car that can drive itself to location X and explode. The capability of moving the car another 10 feet or so into the crowd, as opposed to exploding on the street, is NOT a significant difference, given a large explosion.
4) The cars will be so trackable, and under such active, real-time security monitoring, that we will know who programmed one and when, probably before the bomb goes off. These are expensive, large devices that by their very nature will be wired into a complex network. It is more likely the car will turn around and follow the idiot, its speakers screaming out "ARREST THAT GUY, HE PUT A BOMB IN ME!" the whole time.
Automation, remote controls already exist (Score:5, Insightful)
Let's skip "car," because I can, in theory, attach enough explosives (and shrapnel) to kill a large number of people to a simple homemade quadrotor running open source software, give it a dead-reckoning path, and fire-and-forget from a relatively inconspicuous location. Multiple at once, if I have the resources a car bomb would require.
Automation is here. Being paranoid about one particular application of it won't help anyone.
Re:Insurance rates (Score:5, Insightful)
Hahahahahahahahaha. No, they won't. They will keep themselves around through lobbying efforts.
FBI: 1, Ethics: 0 (Score:5, Insightful)
So, the FBI is already making the case for, "We need full monitoring and control intervention capability for everybody's new cars, because terrorists."
Re:Insurance rates (Score:2, Insightful)
Car insurance companies will die off when car AI becomes mainstream.
Kind of like how representative democracy died off when we all got smart phones, right?
No, dude, sadly middlemen will always exist, adding no value to things but taking your money anyway.
Re:"Philosophically, this opens up an interesting (Score:5, Insightful)
No. To minimize damage, you'd have to brake when approaching a child. To minimize liability, you'd have to accelerate once you notice you can't stop in time to avoid severe injury, i.e. to ensure a death, which is cheaper to settle than a lifetime of disability.
Re:MUCH easier. (Score:5, Insightful)
It doesn't have to identify all the objects in the area, it simply has to not hit them.
Will not matter. (Score:5, Insightful)
That does not matter because it won't be an option.
That is because "A.I." cars will never exist.
They will not exist because they will have to start out less than the 100% perfect that TFA requires. And that imperfection will lead to mistakes.
Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, inc (AIM, inc)" hit you by "choice". That "choice" was programmed into that vehicle at the demand of "AIM, inc" management.
So no. No company would take that risk. And anyone stupid enough to try would not write perfect code, and would be sued out of existence after their first patch.
Re:Insurance rates (Score:4, Insightful)
You will still be required to have car insurance (whether because of some actual need or because of lobbying from the insurance industry). Your rates might drop a bit to give you an incentive to get a car that drives itself, but they won't plummet. Fewer accidents/claims will just mean that the insurance companies wind up with more profits. Which means more money to spend lobbying the government to require auto insurance and robot cars, which means more profits. Rinse. Repeat.
Re:MUCH easier. (Score:4, Insightful)
It doesn't have to identify all the objects in the area, it simply has to not hit them.
Actually, since the whole question of TFA is about ethical choices, it does have to identify them. It can't view a trash can as being equal to a child pedestrian, for example. It will have to see the difference between a dumpster (hit it, nobody dies) and another car (hit it, someone inside may die). It may even need to weigh the potential occupancy of other vehicles...a bus is likely to hold more people than a scooter.
The question at its heart is not about object avoidance in the article...it's about choices between objects. And that requires identification.
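The "choice between objects" argument can be sketched as an expected-harm comparison over classified obstacles. Everything here is illustrative: the class names, occupancy numbers, and weighting factor are assumptions, not values from any real system.

```python
# Assumed average number of people put at risk by hitting each obstacle class.
# These figures are made up for illustration only.
EXPECTED_OCCUPANTS = {
    "trash_can": 0.0,
    "dumpster": 0.0,
    "scooter": 1.0,
    "car": 1.5,
    "bus": 20.0,
    "pedestrian_child": 1.0,
}

def expected_harm(obstacle_class, impact_severity):
    """Crude expected-harm score: people at risk x severity.

    Pedestrians are unshielded, so impacts on them are weighted
    more heavily (the factor of 3 is an arbitrary assumption).
    """
    shield_factor = 3.0 if obstacle_class.startswith("pedestrian") else 1.0
    return EXPECTED_OCCUPANTS[obstacle_class] * impact_severity * shield_factor

def least_harmful(obstacle_classes, impact_severity=1.0):
    """Among unavoidable collision targets, pick the lowest expected harm."""
    return min(obstacle_classes, key=lambda o: expected_harm(o, impact_severity))
```

The point of the comment stands out in the code: `least_harmful` is only meaningful if the perception system can first label each obstacle correctly, so a choice between a dumpster and a child presupposes identification, not just avoidance.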
Re:MUCH easier. (Score:4, Insightful)
Congratulations. Your product just injured Senator Somebody in order to avoid hitting a Betsy-wetsy doll.
Senator Somebody has filed "lawsuit" against your company. It is super-effective. All your assets are belong to him.