
Selectable Ethics For Robotic Cars and the Possibility of a Robot Car Bomb

Rick Zeman writes: Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability." Meanwhile, others are thinking about the potential large-scale damage a robot car could do.

Lasrick writes: Patrick Lin writes about a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." Lin explores the many ways in which robot cars could be exploited for nefarious purposes, including the fear that they could help terrorist organizations based in the Middle East carry out attacks on US soil. "And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real." But Lin isn't too worried about these threats, and points out that there are far easier ways for terrorists to wreak havoc in the US.
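
The selectable-settings idea in the first summary boils down to letting the owner pick which cost the car minimizes. Here is a minimal, hypothetical Python sketch (none of these names come from any real vehicle API, and the risk numbers are invented) of how such a setting might select between candidate maneuvers, assuming the car can estimate outcomes at all:

```python
from dataclasses import dataclass
from enum import Enum, auto

class EthicsMode(Enum):
    """Hypothetical owner-selectable settings described in the article."""
    PROTECT_OCCUPANT = auto()    # jealously value the owner's life over all others
    MINIMIZE_HARM = auto()       # value all lives the same, minimize harm overall
    MINIMIZE_LIABILITY = auto()  # minimize legal liability and costs for the owner

@dataclass
class Outcome:
    """Estimated consequences of one candidate maneuver."""
    occupant_risk: float        # probability of serious harm to the car's occupant
    total_harm: float           # expected harm summed over everyone involved
    expected_liability: float   # estimated legal/financial exposure

def choose_maneuver(mode: EthicsMode, candidates: dict[str, Outcome]) -> str:
    """Pick the maneuver whose estimated outcome best fits the selected mode."""
    if mode is EthicsMode.PROTECT_OCCUPANT:
        key = lambda name: candidates[name].occupant_risk
    elif mode is EthicsMode.MINIMIZE_HARM:
        key = lambda name: candidates[name].total_harm
    else:  # EthicsMode.MINIMIZE_LIABILITY
        key = lambda name: candidates[name].expected_liability
    return min(candidates, key=key)

# Toy example with made-up estimates: the same situation, different settings.
options = {
    "brake_straight": Outcome(occupant_risk=0.30, total_harm=0.4, expected_liability=0.5),
    "swerve_left":    Outcome(occupant_risk=0.05, total_harm=0.7, expected_liability=0.2),
}
print(choose_maneuver(EthicsMode.MINIMIZE_HARM, options))     # -> "brake_straight"
print(choose_maneuver(EthicsMode.PROTECT_OCCUPANT, options))  # -> "swerve_left"
```

In this toy setup the three modes are just three different cost functions applied to the same estimated outcomes; the debate in the article is really about who gets to choose that cost function, and whether the owner should be allowed to choose it at all.
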
  • by kruach aum ( 1934852 ) on Monday August 18, 2014 @12:15PM (#47695885)

    Because ethicists like making work for themselves -- it's unethical to wait for another disaster or human rights violation just so you can do more work!

  • Re:Insurance rates (Score:3, Insightful)

    by Twinbee ( 767046 ) on Monday August 18, 2014 @12:15PM (#47695887)
    Car insurance companies will die off when car AI becomes mainstream.
  • Scare of the day (Score:5, Insightful)

    by Iamthecheese ( 1264298 ) on Monday August 18, 2014 @12:16PM (#47695897)
    Dear government, please shut up about terrorism and get out of the way of innovation. Sincerely, an informed citizen
  • by gurps_npc ( 621217 ) on Monday August 18, 2014 @12:16PM (#47695901) Homepage
    Not news, not interesting.

    1) The cars will most likely be set by the company that sold them, with few if any modifications legally allowable by the owner.

    2) Most likely ALL cars will be set to be mostly selfish, on the principle that they cannot predict what someone else will do, and an attempt to save an innocent pedestrian might in fact end up killing them. The article has the gall to assume the cars will have FAR greater predictive power than they are likely to have.

    3) A human-drivable car with a bomb and a timer in it is almost as deadly as a car that can drive itself to location X and explode. The ability to move the car another 10 feet or so into the crowd, rather than exploding on the street, is NOT a significant difference, given a large explosion.

    4) The cars will be so trackable, and under such active, real-time security monitoring, that we will know who programmed them and when, probably before the bomb goes off. These are expensive, large devices that by their very nature will be wired into a complex network. It is more likely the cars will turn around and follow the idiot, their speakers screaming the whole time, "ARREST THAT GUY, HE PUT A BOMB IN ME!"

  • by i kan reed ( 749298 ) on Monday August 18, 2014 @12:17PM (#47695911) Homepage Journal

    Let's skip "car," because I can, in theory, attach enough explosives (and shrapnel) to kill a large number of people to a simple homemade quadrotor running open source software, give it a dead-reckoning path, and fire and forget from a relatively inconspicuous location. Several of them simultaneously, if I have the resources a car bomb would require.

    Automation is here. Being paranoid about one particular application of it won't help anyone.

  • Re:Insurance rates (Score:5, Insightful)

    by Lunix Nutcase ( 1092239 ) on Monday August 18, 2014 @12:23PM (#47695987)

    Hahahahahahahahaha. No, they won't. They will keep themselves around through lobbying efforts.

  • FBI: 1, Ethics: 0 (Score:5, Insightful)

    by some old guy ( 674482 ) on Monday August 18, 2014 @12:29PM (#47696039)

    So, the FBI is already making the case for, "We need full monitoring and control intervention capability for everybody's new cars, because terrorists."

  • Re:Insurance rates (Score:2, Insightful)

    by CanHasDIY ( 1672858 ) on Monday August 18, 2014 @12:40PM (#47696161) Homepage Journal

    Car insurance companies will die off when car AI becomes mainstream.

    Kind of like how representative democracy died off when we all got smart phones, right?

    No, dude, sadly middlemen will always exist, adding no value to things but taking your money anyway.

  • by Opportunist ( 166417 ) on Monday August 18, 2014 @12:42PM (#47696203)

    No. To minimize damage, you'd have to brake when approaching a child. To minimize liability, you'd have to accelerate when you notice that you can't stop in time to avoid severe injury, i.e. to ensure death, which is cheaper than paying for a cripple for life.

  • Re:MUCH easier. (Score:5, Insightful)

    by Qzukk ( 229616 ) on Monday August 18, 2014 @12:45PM (#47696239) Journal

    You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.

    It doesn't have to identify all the objects in the area, it simply has to not hit them.

  • Will not matter. (Score:5, Insightful)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday August 18, 2014 @12:47PM (#47696265)

    I wonder whether your insurance company would demand to know how you have set your car, and adjust your rates accordingly?

    That does not matter because it won't be an option.

    That is because "A.I." cars will never exist.

    They will not exist because they will have to start out less than 100% perfect, which is what TFA requires. And that imperfection will lead to mistakes.

    Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, inc (AIM, inc)" hit you by "choice". That "choice" was programmed into that vehicle at the demand of "AIM, inc" management.

    So no. No company would take that risk. And anyone stupid enough to try would not write perfect code and would be sued out of existence after their first patch.

  • Re:Insurance rates (Score:4, Insightful)

    by Jason Levine ( 196982 ) on Monday August 18, 2014 @12:50PM (#47696301) Homepage

    You will still be required to have car insurance (whether because of some actual need or because of lobbying from the insurance industry). Your rates might drop a bit to give you an incentive to get a car that drives itself, but they won't plummet. Fewer accidents/claims will just mean that the insurance companies wind up with more profits, which means more money to spend lobbying the government to require auto insurance and robot cars, which means more profits. Rinse. Repeat.

  • Re:MUCH easier. (Score:4, Insightful)

    by Shoten ( 260439 ) on Monday August 18, 2014 @01:23PM (#47696583)

    You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.

    It doesn't have to identify all the objects in the area, it simply has to not hit them.

    Actually, since the whole question of TFA is about ethical choices, it does have to identify them. It can't view a trash can as being equal to a child pedestrian, for example. It will have to see the difference between a dumpster (hit it, nobody inside dies) and another car (hit it, someone inside it may die). It may even need to weigh the potential occupancy of other vehicles...a bus is likely to hold more people than a scooter.

    The question at the heart of the article is not object avoidance...it's choosing between objects. And that requires identification (a rough sketch of that weighing appears after the last comment).

  • Re:MUCH easier. (Score:4, Insightful)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday August 18, 2014 @02:19PM (#47697095)

    Given a choice, I think autonomous cars at some point WILL be programmed with such a choice. For example, hitting an elderly person in order to avoid hitting a small child.

    Congratulations. Your product just injured Senator Somebody in order to avoid hitting a Betsy-wetsy doll.

    Senator Somebody has filed "lawsuit" against your company. It is super-effective. All your assets are belong to him.
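
As a footnote to the object-identification exchange above: a rough, hypothetical Python sketch (the class labels and occupancy weights are made up) of why a "choose between objects" decision layer needs classification rather than just obstacle detection. Without a class label, a dumpster and an occupied car look identical to the chooser.

```python
# Rough, made-up estimates of how many people are likely "behind" each object class.
EXPECTED_OCCUPANTS = {
    "trash_can": 0.0,
    "dumpster": 0.0,
    "scooter": 1.0,
    "car": 1.5,
    "bus": 20.0,
    "pedestrian": 1.0,
}

def expected_human_cost(object_class: str, impact_severity: float) -> float:
    """Expected harm = people likely present x severity of hitting that object."""
    return EXPECTED_OCCUPANTS[object_class] * impact_severity

def least_harmful_target(detections: list[tuple[str, float]]) -> str:
    """Among unavoidable impacts, pick the detection with the lowest expected harm.

    detections: (object_class, impact_severity) pairs from a perception stack.
    """
    return min(detections, key=lambda d: expected_human_cost(*d))[0]

# Pure obstacle avoidance treats these two detections the same; the harm-weighing
# choice only works because the objects have been identified.
print(least_harmful_target([("dumpster", 0.9), ("car", 0.9)]))  # -> "dumpster"
```
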
