Selectable Ethics For Robotic Cars and the Possibility of a Robot Car Bomb 239
Rick Zeman writes: Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability." Meanwhile, others are thinking about the potential large-scale damage a robot car could do.
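The "settings" the article imagines boil down to a cost function the planner minimizes. A toy sketch of how such owner-selectable modes might be wired up — every mode name, weight, and number below is invented purely for illustration, not anything from the article:

```python
from enum import Enum

class EthicsMode(Enum):
    """Hypothetical owner-selectable settings, per the article's scenario."""
    PROTECT_OWNER = "protect_owner"            # value the owner's life over all others
    MINIMIZE_HARM = "minimize_harm"            # value all lives equally
    MINIMIZE_LIABILITY = "minimize_liability"  # minimize the owner's legal exposure

def score_outcome(mode, owner_injury, total_injuries, expected_liability):
    """Return a cost for one candidate maneuver; the car picks the minimum.

    The weights are made up; the point is only that the three modes
    rank the same physical outcomes differently.
    """
    if mode is EthicsMode.PROTECT_OWNER:
        return owner_injury * 1000 + total_injuries   # owner's injury dominates
    if mode is EthicsMode.MINIMIZE_HARM:
        return total_injuries                          # everyone counts equally
    return expected_liability                          # dollars, not lives

# Two candidate maneuvers in a hypothetical unavoidable-crash situation.
swerve = dict(owner_injury=1, total_injuries=1, expected_liability=50_000)
brake  = dict(owner_injury=0, total_injuries=2, expected_liability=200_000)

best_for_owner = min([swerve, brake],
                     key=lambda m: score_outcome(EthicsMode.PROTECT_OWNER, **m))
least_harm     = min([swerve, brake],
                     key=lambda m: score_outcome(EthicsMode.MINIMIZE_HARM, **m))
```

The same two maneuvers come out ranked oppositely under the two settings, which is exactly the morality-vs.-liability clash the summary describes.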
Lasrick writes: Patrick Lin writes about a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." Lin explores the many ways in which robot cars could be exploited for nefarious purposes, including the fear that they could help terrorist organizations based in the Middle East carry out attacks on US soil. "And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real." But Lin isn't too worried about these threats, and points out that there are far easier ways for terrorists to wreak havoc in the US.
Hi, welcome to Johnny Cab (Score:5, Funny)
Hope you enjoyed the ride ha ha
easier ways for terrorists to wreak havoc (Score:2)
Yeah, run for office...
MUCH easier. (Score:4, Interesting)
From TFA:
WTF?!? That makes no sense.
Again, WTF?!? Who would design a machine that would take control away from a person TO HIT AN OBSTACLE? That's a mess of legal responsibility.
No. No they are not. The only "many folks" who are talking about it are people who have no concept of what it takes to program a car.
Or legal liability.
No, it is not "plausible". Not at all. You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.
Wired is being stupid in TFA.
Re:MUCH easier. (Score:5, Insightful)
It doesn't have to identify all the objects in the area, it simply has to not hit them.
Re: (Score:3)
Which is an order of magnitude EASIER TO PROGRAM.
And computers can recognize an obstacle and brake faster than a person can.
And that is why autonomous cars will NEVER be programmed with a "choice" to hit person X in order to avoid hitting person A.
So the premise of TFA is flawed.
Re: (Score:2)
And that is why autonomous cars will NEVER be programmed with a "choice" to hit person X in order to avoid hitting person A.
I completely, totally, utterly, and vehemently disagree with you on that.
Given a choice, I think autonomous cars at some point WILL be programmed with such a choice. For example, hitting an elderly person in order to avoid hitting a small child.
Re:MUCH easier. (Score:4, Insightful)
Congratulations. Your product just injured Senator Somebody in order to avoid hitting a Betsy-wetsy doll.
Senator Somebody has filed "lawsuit" against your company. It is super-effective. All your assets are belong to him.
Re:MUCH easier. (Score:4, Insightful)
It doesn't have to identify all the objects in the area, it simply has to not hit them.
Actually, since the whole question of TFA is about ethical choices, it does have to identify them. It can't view a trash can as being equal to a child pedestrian, for example. It will have to see the difference between a dumpster (hit it, nobody inside dies) and another car (hit it, someone inside it may die). It may even need to weigh the potential occupancy of other vehicles...a bus is likely to hold more people than a scooter.
The question at its heart is not about object avoidance in the article...it's about choices between objects. And that requires identification.
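To illustrate the parent's point: choosing *between* obstacles only works if the perception stack labels them first. A toy sketch with entirely invented object classes and harm weights — a real system would need vastly more than this, but the dependency on identification is the same:

```python
# Hypothetical harm weights per recognized object class. "Not hitting things"
# needs none of this; "choosing what to hit" needs all of it.
HARM_WEIGHT = {
    "trash_can": 0,    # property damage only
    "dumpster": 0,     # hit it, nobody inside dies
    "scooter": 1,      # likely one rider
    "car": 2,          # a couple of occupants, on average
    "pedestrian": 10,  # unprotected human
    "bus": 20,         # potentially many occupants
}

def least_harmful(obstacle_classes):
    """Given the classes of unavoidable obstacles, pick the one to hit."""
    return min(obstacle_classes, key=lambda cls: HARM_WEIGHT[cls])

choice = least_harmful(["dumpster", "car"])
```

Remove the labels and `least_harmful` has nothing to minimize over, which is why the ethical-choice framing presupposes identification rather than mere detection.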
Insurance rates (Score:3, Interesting)
Re: (Score:3, Insightful)
Re:Insurance rates (Score:5, Insightful)
Hahahahahahahahaha. No, they won't. They will keep themselves around through lobbying efforts.
Re: (Score:3)
Re: (Score:2)
Why would finance companies and state governments not still require you to carry insurance? No finance company is going to give you a car loan and not require you to insure it. Your post is hilariously naïve.
Oh and the insurance companies are hugely greater in size than car dealerships. Car dealers are chumps in comparison.
Re: (Score:2)
Re:Insurance rates (Score:5, Informative)
In the past 15 years, I have invoked my car insurance three times, and haven't had a single accident in that time.
Insurance covers more than just liability - It covers a small rock falling from a dump-truck and breaking your windshield; it covers your car getting stolen; some policies even act as a sort of extended warranty, covering repair or replacement costs in the event of a breakdown.
And, even with a hypothetically "perfect" driver, some accidents will still happen - Front tire blowout at 75MPH in dense traffic, deer running from the woods into the road 10ft in front of you, construction debris falling from an overpass, etc. Driverless cars will probably handle these events better than live humans do, but such events will still happen.
All of that said, I would love for you to have it 100% correct, because I fucking loathe insurance companies, and deeply resent the government forcing me to pay them in order to drive. I just don't realistically see it happening.
Re: (Score:2)
All of that said, I would love for you to have it 100% correct, because I fucking loathe insurance companies
The vast majority of accidents are caused by bad judgment from the driver, and to a lesser extent by poorly maintained vehicles (which will be mostly resolved when EVs are mainstream anyway). Insurance was probably originally mandated because you might wreck an innocent party's car (if only your own car were at stake, you could decide for yourself whether to take the risk).
Yes, okay, car insurance will still exist (contrary to my initial post), but it will be like say, buildings insurance - very low, and non-forced (people won't
Re: (Score:2)
Re: (Score:2)
Isn't that much the same as the ISP market then? Lots of choice but lots of consolidation happening behind the scenes.
Re: (Score:2)
Do unicorns and flying pigs exist in that fantasy world, too?
Re: (Score:2)
Re: (Score:2)
Accidents and injuries have been decreasing for more than a decade. And yet you're still required to have insurance both by financing companies and state governments.
There is absolutely zero reason to believe that finance companies and state governments will not still require insurance even when cars are automated.
Re: (Score:2)
Re: (Score:2)
Way to not address my points. You've simply repeated your assertion. Why would any bank finance a car loan without insurance? That would be monumentally stupid.
Re: (Score:2)
Insurance at that point would be seen as pointless, and so the government would not require insurance like the 'good ol days' (read "bad old days").
Re: (Score:2)
Do unicorns and flying pigs exist in that fantasy world, too?
I'm so sorry that there will be hardly any accidents, and so the number of claims will nose-dive. It's tragic, but you can't stop progress.
So... yes, then.
Re:Insurance rates (Score:4, Insightful)
You will still be required to have car insurance (whether because of some actual need or because of lobbying from the insurance industry). Your rates might lower a bit to give you an incentive to get a car that drives itself, but they won't plummet. Fewer accidents/claims will just mean that the insurance companies wind up with more profits. Which means more money to spend lobbying the government to require auto insurance and robot cars, which means more profits. Rinse. Repeat.
Re: (Score:2)
There is more than accident insurance.
Re: (Score:3)
Insurance is a highly competitive industry. If accident rates go down competition will force rates close to $0.
You perhaps might see collision rates go down but there are many other liabilities that one typically insures a vehicle for - weather related damage, medical, liability and others (usually bundled under the rubric of 'comprehensive').
You are also assuming, without any data, that the future Johnny Cab will never get itself into an accident. I'm not so sure I would make such a bold claim.
Re: (Score:2, Insightful)
Car insurance companies will die off when car AI becomes mainstream.
Kind of like how representative democracy died off when we all got smart phones, right?
No, dude, sadly middlemen will always exist, adding no value to things but taking your money anyway.
Re: (Score:2)
If you were financing car loans would you do so without requiring it be insured? That would be an extremely dumb thing not to do.
Re: (Score:2)
Re: (Score:2)
Send my best wishes then to the middlemen who WON'T exist when Tesla Motors and companies like them eventually sell direct.
Equally, send my regards to Carnac the Magnificent, since you seem capable of channeling him.
Re: (Score:2)
And the government requires it to be on the road.
Re:Insurance rates (Score:5, Interesting)
More likely that your insurance company would enforce the settings on your car and require that you pay them extra if you'd like the car to value your life over other lives.
With fast networks it's even possible that the insurance companies could bid on outcomes as the accident was happening. Theoretically my insurer could throw my car into a ditch to avoid damage to a BMW coming the other way.
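A toy sketch of that (rather dystopian) bidding idea, with invented insurers and payout figures: each insurer quotes its expected payout for each candidate outcome, and the car takes whichever outcome is cheapest in total:

```python
# Invented payout estimates (dollars) each insurer submits per candidate
# outcome, milliseconds before impact. Purely illustrative numbers.
bids = {
    "hit_the_bmw":    {"my_insurer": 80_000, "bmw_insurer": 120_000},
    "take_the_ditch": {"my_insurer": 15_000, "bmw_insurer": 0},
}

def settle(bids):
    """Pick the outcome with the lowest total quoted payout."""
    return min(bids, key=lambda outcome: sum(bids[outcome].values()))

outcome = settle(bids)
```

Note that the occupant's preference appears nowhere in `settle` — the ditch wins because it is cheapest for the insurers, which is precisely the parent's worry.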
Re: (Score:2)
With fast networks it's even possible that the insurance companies could bid on outcomes as the accident was happening. Theoretically my insurer could throw my car into a ditch to avoid damage to a BMW coming the other way.
I might get to see the first car get diverted into a schoolbus to avoid a 50-million-dollar superduperhypercar. I'll have to dress for the occasion with my best fingerless gloves and head-worn goggles.
Will not matter. (Score:5, Insightful)
That does not matter because it won't be an option.
That is because "A.I." cars will never exist.
They will not exist because they will have to start out less than the 100% perfect that TFA requires. And that imperfection will lead to mistakes.
Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, inc (AIM, inc)" hit you by "choice". That "choice" was programmed into that vehicle at the demand of "AIM, inc" management.
So no. No company would take that risk. And anyone stupid enough to try would not write perfect code and would be sued out of existence after their first patch.
Re: (Score:3)
Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, inc (AIM, inc)" hit you by "choice". That "choice" was programmed into that vehicle at the demand of "AIM, inc" management.
So no. No company would take that risk. And anyone stupid enough to try would not write perfect code and would be sued out of existence after their first patch.
Considering how bloody obvious that outcome seems to be, it amazes me how some educated people just flat out don't get it.
Or rather, it would amaze me, if I weren't fully aware of the human mind's ability to perform complex mental gymnastics in order to come to a predetermined conclusion, level of education notwithstanding.
Blue Screen of Death... (Score:5, Funny)
BSOD starts to take on a whole new meaning..
As does, crash dump, interrupt trigger, dirty block and System Panic...
Re:Blue Screen of Death... (Score:5, Funny)
Now, what to do when my Explorer crashes...
Click on the Start button, go to "All Programs", then go to "Brakes", right-click on the "Apply Brakes" button, and choose "Run as Administrator". After the 15-second splash screen (now with Ads by Bing), choose "Decelerate Safely".
We will need liability laws before we let them hit (Score:2)
We will need liability laws before we let them hit the road without any human drivers.
We can't let them hide behind EULAs; even if there are some, it will be very hard to argue that a car crash victim agreed to one, much less that one would stand up in a criminal court.
Scare of the day (Score:5, Insightful)
Re: (Score:2)
Exactly,
Technology can be used for good or for evil.
That rock that Ugg used to start a fire to keep his family warm also worked really well when thrown at his rivals to kill them.
Re: (Score:3)
Ffft. Where's the kickback in that?
Re: (Score:2)
ohnonotagain (Score:2)
This exact topic has been on /. several times. I will not be in the least surprised to see the exact same collection of wildass FUD claims in the comments.
Been discussed before (Score:3, Insightful)
1) The cars will most likely be set by the company that sold it - with few if any modifications legally allowable by the owner.
2) Most likely ALL cars will be told to be mostly selfish, on the principle that they cannot predict what someone else will do, and an attempt to save an innocent pedestrian might in fact end up killing them. The article has the gall to believe the cars will have FAR greater predictive power than they most likely will.
3) A human-driven car with a bomb and a timer in it is almost as deadly as one that can drive to location X and explode. The capability of moving the car another 10 feet into the crowd, as opposed to exploding on the street, is NOT a significant difference, given a large explosion.
4) The cars will be so trackable, and under such active, real-time security monitoring, that we will know who programmed it and when, probably before the bomb goes off. These are expensive, large devices that by their very nature will be wired into a complex network. It is more likely the cars will turn around and follow the idiot, all the while screaming from their speakers, "ARREST THAT GUY, HE PUT A BOMB IN ME!"
Re: (Score:2)
Re: (Score:2, Informative)
2) Most likely ALL cars will be told to be mostly selfish, on the principle that they can not predict what someone else will do, and in an attempt to save an innocent pedestrian might in fact end up killing them. The article has the gall to believe the cars will have FAR greater predictive power than they will most likely have.
This is a thing that is starting to irritate me. This is a piece from the director of the "Ethics + Emerging Sciences Group".
Recently we have seen writeups about the ethics of automation from psychologists and philosophers who are completely clueless about what laws are already in place and what the best practices are when it comes to automation.
They go in with the assumption that a machine is conscious and will make conscious decisions, ignoring that it is impossible to get anything remotely resembling an AI throug
Re: (Score:2)
Re: (Score:2)
What? An idiot can always reach around safety gates. A slightly less stupid one can disable the gate switch and get himself killed.
Re: (Score:3)
Point 4 will never happen. A little duct tape over the security sensor. Sealed briefcase bomb.
The rest of this is stupid. We have already put RC receivers into regular cars and used a Radio Shack car controller to drive. They did that on Blues Brothers 2000, and probably The Simpsons. We have real RC car races. You just need a Pringles can, a wire, and a car.
Re: (Score:2)
Nor will it stop a simple chemical sensor designed to detect carbon monoxide, explosive residue, and the absence of a flow of fresh air.
Besides, I really like the idea of some hapless idiot wandering around being followed by a car screaming "HE PUT A BOMB IN ME!" It's enough to make me ROFL.
Re: (Score:2)
Too complicated.
The more the car costs, the more evil it can do... after all, you can afford it.
What about maintenance settings? (Score:2)
What about maintenance settings?
We can't let the car makers set them to only go to the dealer for any and all work.
We can't can't jet jacks low cost auto cars push the limits of maintenance to being unsafe.
Re: (Score:2)
We can't can't jet jacks low cost auto cars push the limits of maintenance to being unsafe.
That made lots of sense.
Automation, remote controls already exist (Score:5, Insightful)
Let's skip "car" because I can, in theory, attach enough explosives (and shrapnel) to kill a large number of people to a simple homemade quadrotor, run it with open source software, give it a dead-reckoning path, and fire and forget from a relatively inconspicuous location. Multiple ones simultaneously, if I have the resources a car bomb would require.
Automation is here. Being paranoid about one particular application of it won't help anyone.
Re: (Score:2)
Automation is here. Being paranoid about one particular application of it won't help anyone.
Yea, what you say is true, but it really doesn't make good news to talk about things that way. At least until somebody actually does it, then we get weeks of wall to wall "breaking news" and "Alert" coverage and the hosts of MSNBC will pontificate about how we should have known this was going to happen and stopped it.
Re: (Score:2)
Yea, what you say is true, but it really doesn't make good news to talk about things that way. At least until somebody actually does it, then we get weeks of wall to wall "breaking news" and "Alert" coverage and the hosts of MSNBC will pontificate about how we should have known this was going to happen and stopped it.
If your point is that the talking heads always talk about everything but the threat which will actually materialize, true. Not a deep insight, but true.
OMG ROBOT BOMB CARZ is what's playing u
Re: (Score:3)
This is your reminder that anyone with a post-highschool grounding in chemistry could make pipebombs with no difficulty. The ingredients for a self-oxidizing agent could be gotten at a hardware store. They aren't common in the US in spite of that.
There won't be an "epidemic" of automated bombings, because being a bomber takes a cause you personally see as being more important than not being a murderer. The right mixture of basically competent, ideologically dedicated, and morally flexible just isn't that
Will a robo car be able to break the law to save (Score:2)
Will a robo car be able to break the law to save someone from death or injury?
Re: (Score:2)
You seem to think that a self-driving car is a self-aware, subjective, thinking thing.
Within this particular field, the application of "AI" algorithms gives fuzzy answers to difficult questions, but only as inputs to boring, more traditionally algorithmic processes. Laws, conveniently, are codified in much the same way as those traditional algorithms(though, again, with fuzzy inputs).
Any company even remotely trying to engage this would encode the laws at that level, not as something some AI tries to reaso
Re: (Score:3)
It will, if it's an Asimov car. Obeying the law would only be the Second Law; no harm to humans is the First.
Not so fast (Score:3)
It sure seems like such selectable ethics concerns are kind of jumping the gun. Regulatory behavior is going to clamp down on such options faster than you can utter "Engage!". Personally I would want my autonomous car to be designed with the most basic "don't get in a crash" goal only, as I suspect regulators will as well.
Far more important is the idea that we will have at least an order of magnitude or two increase in the amount of code running a car. If Toyota had trouble with the darn throttle (replacing the function of a cable with a few sensors and a bunch of code), how can we trust that car companies will be able to manage a code base this big without frequent catastrophe? Adding extra complexity to tweak the "ethics" of the car just sounds like gilding the lily, which increases the opportunities for bugs to creep in.
Re: (Score:2)
Even in a heavily regulated system with the most basic "don't get in a crash" goal, the car may end up having to pick among crash options: say, a maneuver that will do damage but has a low chance of injury, versus a move with maybe a 5% chance of avoiding a crash entirely.
Philosophy Settings (Score:5, Funny)
I, for one, cannot wait for the day when I can set my car's logic system to different ethical settings, sorted by philosopher. For instance, you can set your car to "Jeremy Bentham", which will automatically choose whoever looks less useful to ram into when in a crash situation. You could also set it to "Plato", which will cause the car to ram into whoever appears less educated (just hope it doesn't happen to be you).
Just make sure you don't set the car to "Nietzsche".
Re:Philosophy Settings (Score:4, Funny)
Re: (Score:2)
You will never get these options I'm afraid. All manufacturers will simply program their cars to avoid accidents as far as possible, and in the event that one is unavoidable simply try to stop moving as quickly as possible. No selecting targets, no deciding who to save, just brake as hard as possible if there is no other option.
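If that is really all manufacturers ship, the whole "ethics" layer collapses into a rule this simple. A sketch of such a no-target-selection policy — the inputs and their units are assumptions, not any real vendor's API:

```python
def plan(obstacle_distance_m, stopping_distance_m, clear_lane_available):
    """Minimal 'no ethics' policy: avoid if a clear lane exists, otherwise
    brake as hard as possible. The car never chooses between victims;
    it only chooses between empty space and maximum deceleration."""
    if obstacle_distance_m > stopping_distance_m:
        return "brake_normally"          # can stop in time, no drama
    if clear_lane_available:
        return "swerve_to_clear_lane"    # empty space exists, use it
    return "emergency_brake"             # no choice left but to shed speed
```

Nothing in this decision tree needs to know *what* the obstacle is, which is exactly why a manufacturer's lawyers would prefer it to any target-selection scheme.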
Re:Philosophy Settings (Score:4, Informative)
I just tried the Plato setting and now I'm stuck in a cave. Thanks Joe!
FBI: 1, Ethics: 0 (Score:5, Insightful)
So, the FBI is already making the case for, "We need full monitoring and control intervention capability for everybody's new cars, because terrorists."
Too early for this discussion (Score:2)
Timely discussion. (Score:2)
Until a system to make automated vehicles feasible on public roads en masse is proposed and developed, and the related protocols and legal procedures are released, this is nothing but a scare topic making vague assumptions about things that aren't even under development yet.
Not really. We already have self-driving cars, and we have a lot of data about traffic accidents and mortality. The cars aren't available at retail yet, but they exist. Teaching them to drive in a way that makes the right safety tradeoffs is appropriate. (E.g. driving slowly through a stoplight might cause more accidents and fewer deaths; that's a hunch, but we have lots of data so there's a moral calculation that should be made based on the data and desired outcomes.)
Not that difficult (Score:2)
Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability."
Before we allow AI on the road, we'll need to have some kind of regulation on how the AI works, and who has what level of liability. This is a debate that will need to happen, and laws will need to be made. For example, if an avoidable crash occurs due to a fault in the AI, I would assume that the manufacturer would have some level of liability. It doesn't make sense to put that responsibility on a human passenger who was using the car as directed. On the other hand, if the same crash is caused by tampe
Stupid scaremongering (Score:3)
Only if the potential terrorists have never learned to drive. Because otherwise:
1) A criminal is far better off with a car that does not respect speed limits/red lights/stop signs if he wants to get away.
2) A terrorist can simply drive the bomb somewhere, set it to explode one minute later, and walk away. What is the difference if he drove it himself or not?
Terrorism is the least worry with robot cars.
As for point 1, laws and insurance will set your car's "ethics", not you personally.
Remote Control Cars? (Score:2)
My first thought upon reading this summary? What about the Mythbusters?
In many episodes, they've rigged up a remote control setup to a car. Many times, it has been because testing a particular car myth would be too risky with a person actually inside driving the car. They've even gone so far as to have a camera setup so they could see where they were driving.
I'm sure there's a learning curve here - not everyone could stop by their local hobby shop and remote control enable their car in an afternoon - but
Car bomb? Whatever... (Score:2)
Robbie the Robot (Score:2)
Select OS:
1) Crush!
2) Kill!
3) Destroy!
Ethics implies knowledge of outcome (Score:2)
While it's possible that a computer could be allowed to evaluate ethical limits - to play a version of Lifeboat - the lack of information will doom such optimization. The set of wild or unpredictable maneuvers is likely to be limited, with only simple avoidance options available (stop, avoid within legal lanes of travel). The use of a standard model is preferable; otherwise you would have to know all possible outcomes as well as all possible settings on nearby vehicles.
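The "limited options" point can be sketched as expected-harm minimization over a small, fixed maneuver set. The catch is that the probabilities below are precisely the information the car won't actually have — every number here is invented:

```python
# Candidate maneuvers restricted to simple, legal options, each mapped to
# (probability_of_collision, harm_if_collision). Illustrative values only;
# estimating these reliably in real time is the unsolved part.
MANEUVERS = {
    "full_stop":             (0.30, 2.0),
    "slow_and_stay_in_lane": (0.50, 1.0),
}

def expected_harm(p_collision, harm):
    """Expected value of harm for one maneuver."""
    return p_collision * harm

best = min(MANEUVERS, key=lambda m: expected_harm(*MANEUVERS[m]))
```

The arithmetic is trivial; the comment's objection is that without knowing everyone else's behavior (and settings), the input probabilities are guesses, so a standard conservative model beats "optimization" over fiction.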
The ROAD will make these choices (Score:2)
This is very old news. (Score:2)
To date, there are literally dozens of groups of hobbyists who compete with FPV vehicles (both ground and air) to deliver large pyrotechnical devices to "goals", from over 4 km away. It's not even expensive or difficult...it is off the shelf and an amazon.com click away.
To date, there are at least a dozen people who have equipped a vehicle with FPV transceivers and the simple servos required to navigate through actual city streets while miles away themselves. Latency is not the issue that some people who ha
Mental Masturbation (Score:3)
This discussion is pointless mental masturbation, because none of these things will be real problems with autonomous cars. The people dreaming up these scenarios do not understand the fundamental paradigm shift that comes with autonomous vehicles:
- Firstly, any thoroughfare carrying only autonomous cars should never have pedestrian access, because the cars will all be travelling at the maximum safe speed constantly, 110 km/h or more even on city streets. These streets should be fenced off, with no pedestrians allowed.
- Secondly, in situations where pedestrians are involved, which are inherently unpredictable, the car will never drive faster than a speed from which it can stop without hitting ANY pedestrian... thus, this whole "choose 1 or 5" scenario is not possible.
- Finally, you won't be able to manually point the car at people and then later have the car "take over". You will not have any ability to drive the car manually, period. At least I bloody well hope not... once autonomous cars are standard, people should not be allowed to drive any more.
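The second point has a concrete consequence: given a pedestrian-detection range, there is a maximum speed from which the car can always stop inside it. A back-of-the-envelope sketch — the friction coefficient and reaction time are assumed values, not measured ones:

```python
import math

def max_safe_speed_ms(detection_range_m, mu=0.7, reaction_s=0.2, g=9.81):
    """Highest speed (m/s) from which the car can fully stop within its
    detection range. Solves: range = v*t_react + v^2 / (2*mu*g),
    i.e. reaction distance plus braking distance on dry asphalt."""
    a = 1.0 / (2 * mu * g)      # quadratic coefficient of v
    b = reaction_s              # linear coefficient (reaction distance)
    c = -detection_range_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# With ~30 m of clear sight distance, the cap works out to roughly 19 m/s
# (about 68 km/h) under these assumed parameters.
v = max_safe_speed_ms(30.0)
```

So "never faster than you can stop" is not hand-waving; it is a single quadratic the planner can solve continuously against the current sensor range.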
Re:Drivers already have variable ethics (Score:5, Insightful)
Because ethicists like making work for themselves -- it's unethical to wait for another disaster or human rights violation just so you can do more work!
Re:Drivers already have variable ethics (Score:5, Informative)
I'm not really sure why they call it 'ethics of the car' not ethics of the owner or programmer, or administrator of the car.
If you put a bomb in a robot car and told it to drive to a stadium, the car didn't fail to make an ethical choice. I doubt the car would even be aware of the bomb, or what a bomb is, or why it's bad.
Re: (Score:2)
Re: (Score:2)
Whoops, I should have written "harm". Read damage as "damage to human beings." I can imagine scenarios where those diverge, as can the article's summary's author.
Re:"Philosophically, this opens up an interesting (Score:4, Interesting)
Re: (Score:2)
Can I program mine to always claim to other vehicles that I have 7 babies on board?
Re:"Philosophically, this opens up an interesting (Score:5, Insightful)
No. To minimize damage, you'd have to brake when approaching a child. To minimize liability, you'd have to accelerate when you notice that you can't stop in time to avoid severe injury, i.e. to ensure death, which is cheaper than a lifetime of disability payments.
Re: (Score:2)
Scary thought. What if the liability the car sought to minimize was for the insurance companies?
"Upcoming crash detected. Liability analysis pending. If the crash is fatal, typical payout is $N. If the crash is non-fatal, initial payout will be lower, but long-term repeated payments will increase until they are greater than $N. Minimizing liability demands a fatal crash. Initiating termination of car's occupants."
Re: (Score:2)
Well, it's unlikely that occupants would be killed off (unless so specified by the driver); it would be kinda bad for sales if it got out. And such things have a way of getting out.
Though I could fully see, at the very least in modifications to the software (which will probably be outlawed soon), logic that ensures an unavoidable crash with physical harm to another person is as fatal as possible, while at the same time leaving the proper skid marks to suggest it tried to avoid the crash.
Re: (Score:2)
Re: (Score:2)
The autonomous automobile (the Auto-Auto) will probably be released only once its safety ability exceeds that of a person, and each generation will get better. Whether the algorithm is designed to protect the passenger, minimize insurance liability, or save the most people ultimately doesn't matter much, as they all try to avoid accidents altogether. And these algorithms will only come up in an ever-rarer set of possibilities.
I would expect protect passenger algorithm is the easie
Re: (Score:2)
It's a debate because in the case you stated it impacts people who were not involved in the choice.
Re:Fuck people! (Score:5, Funny)
Judging from Monday morning traffic in my town, a lot of people already set their cars to that setting.
Re:Fuck people! (Score:4, Funny)
First you'd need to root the car and run "echo 1 > /dev/morality/evil"
Re:Fuck people! (Score:4, Funny)
echo "chaotic_evil" >
That's why it hasn't been working for you.
There's also a kernel patch on evil.org to change the default setting. With the standard kernel, it is set to "lawful_neutral". In that mode, it will honk and swerve for a little old lady crossing the street.
lawful_good would stop, and offer her a ride.
chaotic_evil will run her over, back up and do it again, and the lower loot collection hook will deploy to take her purse.
Re: (Score:3)
That wouldn't work either, because you forgot to set the evil bit.
Re: (Score:2)
Right now, impressionable youth from 3rd world countries are cheaper than robots. There won't be much worry about this for a while. A rust-bucket Honda and some dumb kid are going to be a lot cheaper than the latest Google-Tesla joint venture product.
We have plenty of time to think about it before Is-lame-oh terrorists are using them.
Except that one limiting factor in the jihad is the ability to get the starry-eyed idealist soon-to-be-martyr over on this side of the pond. Blowing oneself to tiny bits appears to be a hard sell to westernized folk. The concern here would be that an autonomous vehicle could alleviate that problem.
Of course, it's not a perfect solution. You have to purchase or steal the thing which now are in rather short supply. An autonomous vehicle is going to be fairly tightly regulated once let out into the wild - o
Re: (Score:2)
You get fewer human rights complaints when you let the children (or entire populations) starve by using robots for your cheap labour instead of paying them a pittance. :(
How big is your monkeyspace? (Score:2)
You can die from the impact of the impending head-on collision, or you can veer off and save your life, but in doing so you'll be accelerating out of the way of the oncoming vehicle and into a group of 40 kindergartners (including your twin son and daughter), their 3 pregnant teachers, and 3 elderly chaperons (one of whom is carrying a kitten, another a puppy) who were waiting for a bus after a field trip.
Don't worry, your decision to kill them to save your own life was made months ago, right after you boug
Re: (Score:3)