writes with Cory Doctorow's story at the Guardian, diving into the questions of applied ethics that autonomous cars raise, especially in a world where avoiding accidents or mitigating their dangers may mean breaking traffic laws. From the article: "The issue is with the 'Trolley Problem' as applied to autonomous vehicles, which asks, if your car has to choose between a maneuver that kills you and one that kills other people, which one should it be programmed to do? The problem with this formulation of the problem is that it misses the big question that underpins it: if your car was programmed to kill you under normal circumstances, how would the manufacturer stop you from changing its programming so that your car never killed you?"