Transportation

The Problem With Self Driving Cars: Who Controls the Code? (theguardian.com) 235

schwit1 writes with Cory Doctorow's story at the Guardian diving into the questions of applied ethics that autonomous cars raise, especially in a world where avoiding accidents or mitigating their dangers may mean breaking traffic laws. From the article: The issue is with the 'Trolley Problem' as applied to autonomous vehicles, which asks, if your car has to choose between a maneuver that kills you and one that kills other people, which one should it be programmed to do? The problem with this formulation of the problem is that it misses the big question that underpins it: if your car was programmed to kill you under normal circumstances, how would the manufacturer stop you from changing its programming so that your car never killed you?
This discussion has been archived. No new comments can be posted.

  • nowadays you've got the freedom to drive like an asshole - just give the car a user-selectable setting for whether it should preserve your life at all costs, preserve the lives of others, or make a decision that minimizes overall damage (but may harm you) - if you select the "me first" option, you are responsible for your car mowing down a row of Krishnas. "simple" as that.
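    To make the proposal concrete, here is a minimal Python sketch of such a user-selectable setting; the policy names and the liability mapping are illustrative assumptions, not any manufacturer's API.

        from enum import Enum

        class CollisionPolicy(Enum):
            """Hypothetical user-selectable collision priorities."""
            ME_FIRST = 1        # preserve the occupants at all costs
            OTHERS_FIRST = 2    # preserve others, even at the occupants' expense
            MINIMIZE_HARM = 3   # minimize overall damage, occupants included

        def liable_party(policy: CollisionPolicy) -> str:
            # The comment's premise: selecting "me first" shifts legal
            # responsibility for the outcome onto the owner.
            return "owner" if policy is CollisionPolicy.ME_FIRST else "manufacturer"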
    • by khasim ( 1285 )

      Would you buy a car that came equipped with an explosive that would, under certain circumstances, explode and kill the driver?

      This whole "trolley problem" is bullshit.

      From TFA, Doctorow uses the "trolley problem" to get to a different point:

      Forget trolleys: the destiny of self-driving cars will turn on labour relationships, surveillance capabilities, and the distribution of capital wealth.

      Nice switch there but the basis is still bullshit. No one will buy a machine that has code in it specifically designed to

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Not only that, another big problem with the "trolley problem" is that it doesn't pass the "could a human driver do better" test.
        It assumes that you have lost control of the vehicle to the extent that you can only select between two choices. While every driver will claim that they are superior, it is all just bullshit; they would be unable to make a choice at all in those situations.
        The main point of automatic drivers is to not get into or cause situations where you don't have control of the vehicle and

        • "It assumes that you have lost control of the vehicle to the extent where you can only select between two choices."

          Dr. Lotfi Zadeh [wikipedia.org] can help us slay that spherical cow!

      • by JaredOfEuropa ( 526365 ) on Sunday December 27, 2015 @07:55AM (#51189737) Journal
        Exactly: there is never such a clear-cut decision between "my owner or that bus full of nuns". I can imagine that cars will be programmed to make decisions based on certain basic principles, but those principles will be nothing like the 3 laws of robotics. Taking drastic action to avoid an accident may lead to a worse one, and it'll be a long while before our machines will be anywhere near able to make such complex decisions.

        To begin with: if everyone sticks to the rules of the road and drives normally, there is very little chance of an accident occurring. If an exceptional situation occurs, the fault of an ensuing accident primarily lies with whoever caused that exceptional situation (even if it's unintentional). If someone's tyre blows and they swerve into your lane as a result, if a child chases a ball into the road, or if a cyclist runs a red light in front of your car, you'd (probably) do everything to avoid a crash and so should a self-driving car, but you are not under any moral obligation to drive yourself into the side of a building in order to avoid the other car, child or cyclist. Self-driving cars should operate under the same premise: it should never be considered necessary to sacrifice the driver.

        If something unexpected happens, cars might follow a protocol similar to this one:
        1) Stay in your lane and come to a controlled stop.
        2) If a controlled stop will not prevent a collision (and this is something that self-driving cars should be able to assess fairly accurately), change to a different lane if there is an unobstructed one.
        3) If there are no unobstructed lanes but the road ahead is clear and the local speed limit is below x, change into oncoming traffic.
        4) If all else fails, reduce speed as much as possible and allow the collision to happen.
        These are not meant to be complete and valid for all situations; it's just to give an idea of how such "laws" could be formulated, in the form of a decision tree that self-driving cars would be able to follow without having to make complex judgment calls or difficult moral decisions (a toy sketch of such a tree follows below). And I can well imagine that a basic set of such rules will be set into law so that all self-driving cars will follow the same basic protocol. As a driver you'd have little incentive to change the programming in your favour, and if you did, it would become immediately apparent as soon as you were involved in an accident.
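        A minimal, self-contained Python sketch of the four-step protocol above. The Situation fields, the action names, and the 50 km/h stand-in for "x" are all illustrative assumptions, not anyone's real vehicle interface.

            from dataclasses import dataclass
            from typing import Optional

            @dataclass
            class Situation:
                can_stop_in_lane: bool            # a controlled stop avoids the collision
                unobstructed_lane: Optional[int]  # index of a clear lane, if any
                oncoming_clear: bool              # the road ahead (oncoming lane) is clear
                speed_limit_kph: int              # the local limit ("x" in rule 3)

            ONCOMING_THRESHOLD_KPH = 50           # assumed value for "x"

            def emergency_action(s: Situation) -> str:
                """Toy decision tree: fixed rules, no moral judgment calls."""
                if s.can_stop_in_lane:                     # 1) controlled stop in lane
                    return "controlled_stop"
                if s.unobstructed_lane is not None:        # 2) move to a clear lane
                    return f"change_to_lane_{s.unobstructed_lane}"
                if s.oncoming_clear and s.speed_limit_kph < ONCOMING_THRESHOLD_KPH:
                    return "use_oncoming_lane"             # 3) low-speed roads only
                return "brake_and_accept_collision"        # 4) shed speed, take the hit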
        • These are not meant to be complete and valid for all situations; it's just to give an idea of how such "laws" could be formulated, in the form of a decision tree that self-driving cars would be able to follow without having to make complex judgment calls or difficult moral decisions. And I can well imagine that a basic set of such rules will be set into law so that all self-driving cars will follow the same basic protocol.

          I'm not criticizing your protocol as I know it's merely hypothetical, but even un
          • That's precisely the point: sideswiping the car next to you is such a risky manoeuvre that neither machine nor man can probably make that judgment call very well, with any vehicle. So an automatic car wouldn't. Take evasive action if it's safe (more or less within traffic rules, i.e. without hitting anything else; the car should be able to make that call), stop in your own lane if possible, or slow down as much as you can and crash into the obstacle in your lane. Avoid the impending accident if you can do so safely.
            • That's precisely the point: sideswiping the car next to you is such a risky manoeuvre that neither machine nor man can probably make that judgment call very well, with any vehicle.

              That depends - if there is an oncoming lorry in your lane and the only way to avoid it is to sideswipe the vehicle next to you, that's what I would do to avoid what looks like impending death. That's what I would expect a self-driving car to do too: take whatever it perceives to be the lowest risk to the health of the car's occupants, because that is what a human driver would do instinctively. To do otherwise is only one step above Futurama's suicide booths. How long will it be before you get some i

          • For instance, the proper response to an impending accident might be to sideswipe the car in the adjacent lane, but only if it's actually a vehicle that could take the hit, as opposed to a motorcycle. ...

            Driving is a far more complex activity than a lot of people realize,

            Which is why we drive over a group of nuns right now. Driving may be complex, but human decisions are ultimately entirely based on luck, self-preservation (which is why the seat BEHIND the driver is the safest in the car), and whatever you're able to do with your hands in a sheer moment of thoughtless panic.

            The perfect computer will be no different. It won't come down to side swiping or some major calculation onto the future prospects or occupancy of the lane beside you, it won't be a case of can you safely s

          • by Znork ( 31774 )

            Why is there a car in the adjacent lane in a high-speed situation, with objects that can conceivably exhibit behaviour that could cause an impact faster than you can perform a controlled brake? Sounds like you're driving too fast and tailgating someone while you're passing someone else. How about, you know, not doing that? Accident avoided.

            The trick to avoid serious accidents is not to be able to make complex judgement calls in an emergency, humans suck at that, and life isn't Groundhog Day where you get to pract

              Why is there a car in the adjacent lane in a high-speed situation, with objects that can conceivably exhibit behaviour that could cause an impact faster than you can perform a controlled brake? Sounds like you're driving too fast and tailgating someone while you're passing someone else. How about, you know, not doing that? Accident avoided.

              Use your imagination. Some other potential causes of that situation: accidents in other lanes resulting in sudden, unexpected vehicles/debris/people in your lane, mechan
        • by Kjella ( 173770 )

          I think it's simpler than this, because the computer will never predict the future with 100% accuracy. For example, if there's an object in the road, is it an inanimate object that dropped off a truck? Is it a child running across the road after a ball? Is it a wild animal that could run scared? And the manufacturer doesn't want any legal liability for making the accident worse. So I think even if it's 99.9% "change into oncoming traffic and it'll be okay" and 0.1% "giant manslaughter fuck-up" they'll default

          • That's what I think too. The manufacturers will write code that will keep THEM out of trouble with the local legal system. In most cases it will avoid killing the driver as well, but there's no way that they're going to make the car software swerve off the road and mow down a queue at a bus stop to preserve the life of the driver.

            People are saying it's an ethical issue. No, it's primarily a legal issue: the programmers/company execs will keep themselves out of prison, and everything else is secondary.

        • but you are not under any moral obligation to drive yourself into the side of a building in order to avoid the other car, child or cyclist. Self driving cars should operate under the same premise: it should never be considered necessary to sacrifice the driver.

          While you are correct, don't think for a minute that there won't be a lot of pressure to apply a hierarchy of weightings to decisions made by the cars.

          We make fun all the time by saying "Think of the Children", but I'll bet if you polled a sizable group of people in some hypothetical "you could save one person, an adult or a child" scenario, most would probably save the child and leave the adult to die.

          All of which is to say, if there is the possibility to apply weighting to a vehicle's collision avoidance sys

          • We make fun all the time by saying "Think of the Children", but I'll bet if you polled a sizable group of people in some hypothetical "you could save one person, an adult or a child" scenario, most would probably save the child and leave the adult to die.

            True, but suppose we change the "adult" to your wife/husband who is in the car with you, and the "child" to a 12-year-old running away from a policeman who dashes into the street. Do you swerve into the oncoming lorry and kill the adult you love, or hit the child who was probably a delinquent? In that situation I think you'd get far more people saying they would protect the adult they love over the child: the closeness of the relationship with the people affected is a huge factor.

            This is the problem w

        • The problem with any of those rules is that they will rarely ever be needed: they reflect a human driver's decisions based on human reaction time.

          A self-driving vehicle can anticipate an accident much faster than any human driver, and even as a human you can often come to a complete stop, or decelerate enough, in cities before the worst has happened.

          Sure, the kid and cyclist might get hit (or, more likely in most situations, they hit your car), but they most likely won't die from the impact (and if they do, it's not your fault)

        • by clovis ( 4684 )

          A bus full of nuns?

          I once rescued a bus full of cheerleaders. I rescued them three times, if you know what I mean.
          - Batmanuel

      • And this is a good thing: your property should never choose something else over you.

        • by frnic ( 98517 )

          No, it cannot be user-selectable unless cars that have the user-first mode selected are not allowed on public roadways. When you drive on public roadways you accept that you have to obey certain "rules of the road", such as speed limits. I do not want anyone (obviously some hackers are going to hack the cars, but that is a very small percentage) being able to override the safety and rules-of-the-road features built into autonomous cars. That also impacts insurance issues, since someone who chooses unsaf

          • Who said selectable? In any event, it's safe to assume the fully automated car is obeying all relevant traffic laws etc. Obeying all the laws does not mean it will never be in a situation where it has to choose. An easy example: a rural road with a 50mph speed limit and guard rails on each side; a kid pops out through a hedge and runs into the road after a ball or whatever, and you have an oncoming truck doing 50. Do you hit the kid, or go head-on into the truck? You have no time to effectively brake, and no forewarning of the event.

          • I expect that hacking a car's vehicular AI will change its insurance rates. Insurance companies charge insurance based upon known risk. A vehicular AI that has been rated by an independent rating agency (UL, etc) is a known quantity and an insurance company will be willing to insure it.

            A home-spun vehicular AI will be considered an unknown risk and will be treated as such by the insurance companies. If you want to roll your own vehicular AI, feel free, but you'll be responsible for having the AI rated and
            • I expect it will go far beyond that in two ways.

              (1) The vehicle will no longer be in compliance with regulations that permit the privilege of taking your vehicle onto a public road. Merely driving it on a public road may make one vulnerable to civil charges, loss of driving privilege, confiscation of vehicle, etc. Much like drunk driving regardless of whether an accident occurs or not.

              (2) It will void your insurance coverage and make you fully liable for anything that occurs. There will probably be no
          • No, it cannot be user-selectable unless cars that have the user-first mode selected are not allowed on public roadways. When you drive on public roadways you accept that you have to obey certain "rules of the road", such as speed limits.

            You do realize that this would automatically disqualify all human drivers, right? Humans will always prioritize themselves and will not always obey the rules of the road. A computer which prioritizes its occupants but which always obeys traffic rules would still be a huge improvement.

      • by AK Marc ( 707885 )
        Not to mention that the scenarios are usually invalid. Your car, limited to 30 in a residential area with poor visibility and people around, is somehow going 165 mph in that environment. A boy scout troop steps out and blocks the entire road. What do you do?

        You've already lost. The self-driver will identify roaming people and limited visibility, and slow. When the "a kid steps out in front of you and your choice is come to a complete stop before them under control, or accelerate wildly into the child, killi
        • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Sunday December 27, 2015 @09:19AM (#51189951)

          Anyway, the answer is always: Stop as fast as you can in your own lane. Do not weave.

          I'd also have it sound the horn and flash the lights.

          Code that response into law and indemnify the maker/driver.

          And this is, IMO, exactly what will happen.

          Remember, these vehicles have complete records of everything that is happening around them at all times. Everything that can be recorded, that is. So the insurance companies will have exact records of how the robot was 100% within the law AND had taken every REASONABLE response to mitigate the collision.

          The robots do not have to be 100% accurate at determining whether your life is worth more or less than someone else's. They just have to be 100% able to show that they were following the law and attempting to avoid the collision.

          The legal system and the insurance companies will sort out the rest. And the insurance companies will pay to have the legal system write the new laws to reflect that.
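          A sketch of that "stop, warn, and record" response in Python; the action names, snapshot fields, and log format are illustrative assumptions, not any real vehicle interface.

              import json
              import time

              def mitigate_and_record(snapshot: dict, log_path: str = "incident_log.jsonl") -> list:
                  """Brake hard in-lane, warn others, and append a timestamped
                  record so law-compliance can be demonstrated afterwards."""
                  actions = ["sound_horn", "flash_lights", "max_brake_in_lane"]  # never weave
                  record = {"t": time.time(), "actions": actions, "sensors": snapshot}
                  with open(log_path, "a") as log:  # append-only event record
                      log.write(json.dumps(record) + "\n")
                  return actions

              # Example: log a hypothetical obstacle detection.
              mitigate_and_record({"speed_kph": 48, "obstacle_distance_m": 12.5})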

          • This response is exactly what will happen.

            If there is no way to avoid an accident, the car will attempt to stop in its lane as quickly as possible. There is no other conceivable way this could work due to the extreme liability any other decision would imply.

            This will in most cases greatly minimize the forces involved in a collision as well.

        • by clovis ( 4684 )

          Agreed. Stop in your lane.
          I call it the squirrel problem.

          Anyone who has tried to avoid running over a squirrel knows that squirrels' panic mode is to dart back and forth, so no matter where you point your car or swerve, you wind up squishing the squirrel anyway. You're better off continuing in a predictable path so the squirrel has a chance of solving the problem with its superior speed and reflexes.

          Are pedestrians significantly smarter than squirrels? Perhaps in Manhattan, where everyone is a pedestrian an

      • Would you buy a car that came equipped with an explosive that would, under certain circumstances, explode and kill the driver?

        You mean like an airbag? https://www.google.co.uk/searc... [google.co.uk]

      • "No one will knowingly buy a machine that has code in it specifically designed to kill them."

        FTFY. Prove that there isn't code in all Japanese made vehicles sold in America designed to kill their passengers on a certain day at a certain time.

        That would be a Herculean task if the source was Open. With closed source firmware and "Trusted Computing" implemented (i.e. You can trust that the code you are running is the code they want you to run, but not necessarily the code you want to run; it says nothing about trustworthiness of the actual code), it is impossible.

        • by Etcetera ( 14711 )

          "No one will knowingly buy a machine that has code in it specifically designed to kill them."

          FTFY. Prove that there isn't code in all Japanese made vehicles sold in America designed to kill their passengers on a certain day at a certain time.

          That would be a Herculean task if the source was Open. With closed source firmware and "Trusted Computing" implemented (i.e. You can trust that the code you are running is the code they want you to run, but not necessarily the code you want to run; it says nothing about trustworthiness of the actual code), it is impossible.

          Calling BS on this one. No one has time, and few would have the ability, to meaningfully audit all the code in systems affecting their lives. Thus, "auditing" is only as good as the chain of trust it represents... Open source gets you nothing except better post-mortems (no pun intended).

          Given that trade-off, I'd actually prefer trusting that *manufacturer-intended* code is indeed running than trusting that OSS/many-eyes auditing has caught fundamental errors. I can sue a manufacturer and there's process for

  • Either annual or ongoing; if it's easily enough done, the police would probably do it. If you change the code so the car doesn't, say, drive off a cliff instead of straight into the middle of a class of school girls (just to make it clear, I'd drive into the kids; it's my car, after all, and facing the choice between killing a dozen kids and me, the rugrats croak), then this is an illegal modification of your car, and it is no longer considered safe for traffic and is shut down.

    • If it is aware of the cliff, it will not drive off it under any circumstances; it will instead come to a stop before the road ends.

      If stopping in time is impossible as something was basically dropped into its path, it will end up hitting the object at the lowest speed it can achieve. It will never intentionally hit anything for any reason at all, and my expectation is that they will be very good at this. Accidents so far always involve the automated car being struck by rather than striking an object f

  • - how does the manufacturer stop me today from modifying my car so it endangers others (e.g. mount a flamethrower on it) ?
    • or how does the manufacturer stop me from simply driving too fast? In an age where most cars have country-specific software & hardware modifications, it makes zero sense for a car to be able to go (much) faster than the maximum allowed speed limit.
      • or how does the manufacturer stop me from simply driving too fast? In an age where most cars have country-specific software & hardware modifications, it makes zero sense for a car to be able to go (much) faster than the maximum allowed speed limit.

        You are simply applying too much logical thought to the problem. First off, there are some freeways where the top speed is 55 and others where it's 70+. But limit a vehicle to 75-80 and sales will drop dramatically. This is true from economy cars (many of which can't go that much faster anyhow) on up. Simply put, it's not profitable.

      • " it makes zero sense for a car to be able to go (much) faster than the maximum allowed speed limit."

        Sorry I don't accept that limitation. If my Corvette could only go 80 or 90 why would I of bought it? At that point there would be nothing to distinguish it from a Prius.
        I grew up in a racing family. Horspower and Torque are fun. Cars that are speed limited or drive themselves equal zero fun. Lifes too short to deprive yourself of an flying down the road in an open cockpit car on warm summers day. Damn, I ha
        • why would I of bought it?

          Why would I HAVE bought it? Where did this (relatively) new bit of illiteracy come from? And when? Are they really teaching this in school, or are more people getting through HS/College without ever having to write anything?

          • "Why would I HAVE bought it? Where did this (relatively) new bit of illiteracy come from?"

            3 bowls of Purple Kush, 2 bottles of Guinness after an 11 hour day. Hell, it's a wonder that it's written as well as it is.
            I will draw the line at whacking me across the knuckles with your ruler however.
            It's the weekend, roll with it!
        • by AmiMoJo ( 196126 )

          All cars in Japan are limited to 180 km/h (about 112 mph) by a gentleman's agreement between manufacturers and the government. Since about 2000, sports models have used GPS to detect when the car is at a track and disable the limiter.
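          A rough idea of how such a limiter might be gated on location, sketched in Python. The track coordinates, radius, and function names are assumptions for illustration; real implementations are proprietary.

              import math

              KNOWN_TRACKS_DEG = [(35.371, 138.927)]  # hypothetical example entry (Fuji Speedway area)
              TRACK_RADIUS_KM = 2.0
              LIMIT_KPH = 180  # the gentleman's-agreement ceiling

              def haversine_km(lat1, lon1, lat2, lon2):
                  # Great-circle distance between two lat/lon points, in km.
                  p1, p2 = math.radians(lat1), math.radians(lat2)
                  dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
                  a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
                  return 2 * 6371.0 * math.asin(math.sqrt(a))

              def speed_limiter_kph(lat, lon):
                  """Return None (limiter off) when GPS places the car at a known track."""
                  at_track = any(haversine_km(lat, lon, t_lat, t_lon) <= TRACK_RADIUS_KM
                                 for t_lat, t_lon in KNOWN_TRACKS_DEG)
                  return None if at_track else LIMIT_KPH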

      • by 3247 ( 161794 )

        What if you cross a border where the "maximum speed limit" is higher (or lower)?
        What if the "maximum speed limit" is changed?
        How do you prevent someone from tampering with the setting?

        On the other hand, going above the "maximum speed limit" of a country (or state) is not the most unsafe type of speeding. It's much more problematic to speed in locations where the actual speed limit is lower than the "maximum speed limit".

  • Another thing: how are these various autonomous car software platforms going to interact with one another?

    It's one thing to build in recognition protocols for your own vehicles, so that multiple vehicles of the same type act in a concerted manner.

    But what happens where you have four or five different codebases? How are the notoriously closed car manufacturers going to deal with car behavior from another system?

    I can foresee some rather nasty interactions. Head-on collisions where one car tries to avoid by

  • This is just sensationalism. The real issue is that, if people are willing to give up their responsibility to control a vehicle, they necessarily give up their freedom to decide how that vehicle behaves in certain situations. If you want to decide how a vehicle behaves there are probably two options: get a manually operated vehicle, or build your own "automatic" vehicle with your own rules. But good luck with the latter; just as there are regulations on acceptable behavior with manually-operated vehicle

  • People, in America at least, will absolutely bust a gasket once the first actual deaths roll in and some egghead behind a desk in some remote part of the world "decides" who lives and who dies. Until it happens, people won't care. Americans are very independent, a me-first kind of crowd, so the righ ^h^h^h^h profitable choice for manufacturers may be to simply protect the occupants no matter the collateral damage, as long as they can't be held liable.
    • by AK Marc ( 707885 )
      40,000 deaths a year without it. Why would the first death with it be an issue? It wasn't with airbags, even when their issue was decapitating babies. It wasn't with ABS, even in situations where ABS was provably worse than locked brakes. Your assertion doesn't match history. Why is it different this time?
      • Because none of those things made large-scale decisions. Your car doesn't drive off a bridge to avoid people, or crash head-on into a concrete barrier instead of some pedestrians. You are using logic and not emotion.
        • by AK Marc ( 707885 )
          So we want to make complex buggy code to cater for grey areas, when the best way to minimize loss of life is known and simple? That's not using "emotion", that's just stupid.
          • Again, lol. Try hitting yourself over the head with a cast iron pan a few times while chugging some vodka. That will help you understand that it's the same complexity of code, minimizing risk only to the lives inside the car with a preference for the driver, that matters. Ask consumers if they would rather be dead, or 12 other people they don't know. Or if they would rather have their daughter killed than two street bums. It's simple free-market enterprise.
            • by AK Marc ( 707885 )
              Now you are a lying sack of shit. I never said anything about minimizing only the life inside the car, or preferential treatment of the driver. I stated that the greatest life savings would come from stopping in the shortest distance possible without trying to avoid the obstacle. That you can't understand that doesn't mean it would kill innocents.

              It sounds like what you want is a pre-purchase system where you buy life credits. Your daughter is paid up with "insurance" so when the car has the choice of h
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday December 27, 2015 @07:46AM (#51189711) Homepage Journal

    Who controls the code? Maybe you, maybe them. If you tamper with it, you're responsible. Otherwise, they will probably be responsible for its behavior. But the computer is not going to be "programmed to kill you", that is bollocks. The computer is going to be programmed to follow the law. That means that it's going to be less likely to be at fault in an accident to begin with, that it's going to be more likely to successfully mitigate the accidents it does get into, and it means that rather than being programmed to kill you, it's going to be programmed to stay in the lane and hit the truck rather than swerve and hit the pedestrian because to do otherwise would be illegal — not just because of the pedestrian, but because of the lane marking. That is not remotely the same thing.

    The car will be programmed to do its best not to kill you, and that's going to take the form of yielding gracefully to fuckheads rather than getting in an accident to begin with.

    • by AmiMoJo ( 196126 )

      Actually that is an interesting situation you outline. In the UK if you did swerve to avoid the truck you wouldn't be liable for the death of the pedestrian. The truck driver would be. They created a situation that caused you to react instinctively to preserve your own life, when you couldn't reasonably be expected to choose suicide.

      Such a situation is unlikely because speed limits around pedestrians are low, but the point is that legally speaking acting to save your own life is unlikely to make you liable

  • by thegarbz ( 1787294 ) on Sunday December 27, 2015 @08:11AM (#51189773)

    Who controls the code? They do. Just like they do now. How many people here are actively changing the code in their cars? Legally they are responsible for it.
    Oh you chipped your car? Well now you're responsible.

    Seriously the anti-car crap is getting ridiculous, as is the question of ethics. Car making a decision to kill the driver? Car breaking the road rules? When every car is driven perfectly according to the rules the death rate will be decimated and bystander accidents will be treated in the same way as any other idiot stepping into a cage with a running robot arm.

    I don't understand how people have made this so complicated.

    • If it only decimates it I will be sorely disappointed, I expect automated cars to do much better than that.

      Incidentally, I have programmed that robot in a cage. Mine stops moving if you trip an optical sensor on the way in (possibly damaging the robot due to the application of too much force in the process).

      • It may do slightly better than decimate, but a large portion of road accidents actually have nothing to do with the driver or motor vehicle, e.g. bicycle, pedestrian, kangaroo etc. For all the near misses I've had over the years, the only things I've actually hit were a pedestrian and a kangaroo. Mind you, by *hit* I mean the pedestrian would have walked into me even if I was parked. There's only so much you can do.

        I also used to work in a palletising area. I've seen a robot sensor fail to realise a full pal

        • Mine would be easily circumvented if someone wanted to do that intentionally, but I consider my job done if it takes an intentional bypass of two safety systems to mangle yourself.

    • The better question is who will QA/QC your car's code. The unintended-acceleration episode is a good example of life-critical code being poorly implemented, so can we trust the entire code base to the same guys? I am more afraid of coding bugs than moral weightings.

  • by khchung ( 462899 ) on Sunday December 27, 2015 @08:16AM (#51189793) Journal

    if your car has to choose between a maneuver that kills you and one that kills other people, which one should it be programmed to do?

    How about you tell us what a HUMAN driver should choose in a similar situation first, before you ask what a computer should do?

    These kinds of stupid questions are, well, stupid. And they come up often simply because there is no real valid worry about autonomous cars. Humans make lots of mistakes, and having a computer drive would remove a whole range of avoidable accidents. Worrying about a few boundary cases is as stupid as all the "what if my car is burning and I need to get out quickly?!" objections to wearing seat belts. It is unfounded fear that is not based on facts.

    • Why worry about a correct answer when we haven't even figured out a possible answer?

      What does a human do? Slam on the brakes and if they are super alert with above human reflexes they may even decide to turn the wheel in a sensible direction, though chances are if they turn the wheel it will be in a random direction.

      Let the computer do the same thing. Hit all pedals at the same time and let physics decide who dies.

    • by gnasher719 ( 869701 ) on Sunday December 27, 2015 @12:47PM (#51190653)

      How about you tell us what a HUMAN driver should choose in a similar situation first, before you ask what a computer should do?

      How about if we ask how often that situation has happened at all? How many drivers have ever been in the situation where their car was definitely going to kill someone, but the driver could decide whether the car would kill someone else or the driver? Now subtract the cases where the situation was created by something stupid that the driver did. Then subtract the cases where the driver had a choice, but no chance to react fast enough to make a conscious choice. I think we will come up with a big fat zero.

    • if your car has to choose between a maneuver that kills you and one that kills other people, which one should it be programmed to do?

      How about you tell us what a HUMAN driver should choose in a similar situation first, before you ask what a computer should do?

      ^ THIS.

      Cory's article completely misses the point. Or rather, he brings up the Trolley Problem and then moves on to his own point. The reason it's an ethical dilemma is that it brings up genuinely hard ethical issues. That dilemma doesn't change just because a computer is involved; it just shifts the burden to the system. An obvious solution that will probably be used for the first 10-15 years is "transfer control back to the human in the event of an emergency", which of course puts us right back where we started.

      My b

  • In Switzerland, you are required by law to help if you see a person in danger. However, it is understood that you are to make sure that you can operate safely first. It makes no sense to go in with the best intentions only to produce a second victim for the firefighters to rescue.

    Thinking that further, it is clear that your car cannot take responsibility for other participants in traffic, since it cannot control them. It will save your life at all cost. Now if the decision lies between possible injury of you

  • by DeBattell ( 460265 ) on Sunday December 27, 2015 @08:23AM (#51189811)

    How much sleep have you lost over the engineering decision to make trains so large and heavy that they simply CAN'T stop for pedestrians and other vehicles? Yeah, I thought so. People will kvetch about how self-driving cars are programmed right up until they become everyday objects, and after that they'll be just as accepted (benefits AND dangers) as trains are today.

  • The Trolley Problem is a red herring that distracts from the real danger: government remote-controlled detainment of political opponents, as depicted in Minority Report [youtube.com]. Plus any number of variations: script-kiddie hacking, drug cartel kidnapping, kidnapping/trafficking of women/children, murder-for-hire (drive off a cliff), nation-state espionage and assassination. When major crimes, and not just credit card scams, become available at the push of a button, the risk to the criminal is lowered for heinous crimes.

  • If I am forbidden from hacking my car's software will I be unable to stop it when:

    • * It shows me adverts on the dashboard when it is driving me somewhere
    • * I ask to go to my favourite restaurant but it takes me to McDonald's instead as they paid the car vendor more
    • * When more fuel is needed it drives to the filling station that the car manufacturer has a tie in with
    • If I am forbidden from hacking my car's software will I be unable to stop it when:

      • * It shows me adverts on the dashboard when it is driving me somewhere

      You can stop the adverts by purchasing a "no adverts" upgrade.

      Seriously, there will probably be a Kindle-like discount if you allow ads.

  • Re: (Score:2, Funny)

    Comment removed based on user account deletion
    • Even at a cross walk, they often just start walking without any regard for courtesy or the laws of physics.

      Sounds like a reckless driver at fault. The crosswalk is a right-of-way mark, and it should be approached in the same way as an intersection with a give-way sign that is frequented by semi-trailers.

  • it misses the big question

    A better question for an IT forum would be to ask how the hell you test whether the implementation (of which party to kill) works as designed.

    It should be immediately obvious to anyone in a capitalist society that who dies is a cost option. Let's say that opting to save the car's occupants comes at a $1 million price premium on the cost of the vehicle.
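    One hedged answer to the testing question: pin the planner's chosen action in simulation-based regression tests, rather than arguing about outcomes. The plan() policy below is a Python stand-in written for illustration, not anyone's real code.

        def plan(can_stop: bool, clear_lane: bool) -> str:
            # Assumed policy under test: stop in lane if possible,
            # else change to a clear lane, else brake and accept the hit.
            if can_stop:
                return "stop_in_lane"
            if clear_lane:
                return "change_lane"
            return "brake_and_accept"

        def test_prefers_stopping_in_lane():
            assert plan(can_stop=True, clear_lane=True) == "stop_in_lane"

        def test_never_swerves_without_a_clear_lane():
            assert plan(can_stop=False, clear_lane=False) == "brake_and_accept"

        # Run with pytest, or call directly:
        test_prefers_stopping_in_lane()
        test_never_swerves_without_a_clear_lane()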

  • Luddites had it right.

  • A human driver will always choose self preservation even if it means killing others, so why should an autonomous car behave any differently?

  • VW (and others?) were caught tampering with the engine code to get more performance while cheating on the emissions. It's problems like this that make the question of who owns the code, or who reviews it, very relevant...
  • Why in the world would we require autonomous cars to answer hypothetical questions on morality?

  • The three laws of robotics state:

    • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    To me, the interesting ramifications of these laws in many stories, and one movie involving Will Smith, are more than enough to answer all questions reg

  • I believe that automotive impact testing should be done using this poster, and any supportive souls, as (and pardon the pun) crash dummies. By concentrating only on cost, one easily loses focus on prospective benefits. Cost-only solutions are starting down the long path to obsolescence; it is clear they are already headed in that direction.

    Will Linux be used in the analysis?
  • Car DRM = dealer-only service and, depending on how evil they want to get, everything all the way down to tires, windshield wipers, oil changes, and lights.

  • If terrorist activity is detected, should the AI drive the car to the incident so the driver can assist in fighting the terrorists, or should it flee the area?

    I'm voting for everyone piling on, and the AI could allow access to a locked weapon compartment.
