Philosopher Patrick Lin On the Ethics of Military Robotics
Runaway1956 writes "Last month, philosopher Patrick Lin delivered this briefing on the ethics of drones at an event hosted by In-Q-Tel, the CIA's venture-capital arm. It's a thorough and unnerving survey of what it might mean for the intelligence service to deploy different kinds of robots. This is very definitely not Asimov's Laws of Robotics! As fine a mind as Isaac Asimov had, his Robot stories seem a bit naive in view of where we are headed with robotics."
Not again (Score:2, Interesting)
Regardless of whether the robots are used in ethical ways or not, it is guaranteed that most of the opposition to their use will be from groups who are just looking for a way to oppose either a specific war or all wars the US is involved in. The robots will be a hook for disingenuous anti-war or anti-US activism that would not actually end if the US stopped using robots.
Every single time the headline reads "US uses ___ for military purposes; ethicists weigh in," this is exactly what has happened.
Did you know weapons can be TOO lethal? (Score:5, Interesting)
(From the article) So the International Red Cross "bans weapons that cause more than 25% field mortality and 5% hospital mortality." (I assume these are the same people who came up with the Geneva Conventions, so maybe there is some enforceability, as in a war-crimes trial afterwards.)
Wow, and I thought all's fair (in love) and war. Doesn't this make every nuke illegal? (the article said this is one of the justifications for banning poison gas). So the concern is that as these drones get better, they may have a lethality approaching 100% making them illegal even if there are zero casualties from collateral damage.
I thought the whole point of weapons was 100% lethality. I guess I never thought about how terrifying such a weapon would be (as if war weren't terrifying enough). Weapons have come a long way since the first club wielded by that ape-man in that documentary "2001".
NOT "Robots" (Score:2, Interesting)
The drones are remote-controlled devices, no different from "distance weapons" such as longbows or precision rifles. The discussion of whether such weaponry is morally OK took place hundreds of years ago, and apparently the human race has decided it is permissible. Again, drones are NOT robots, as they have zero scope to decide about weapons engagement. There is always a human making the "kill" decision. It has ZERO to do with Asimov's reasoning.
Whether you think warfare in Afghanistan is good, achieving anything positive, or legal is a wholly different question, though.
Re:Asimov naive? I don't think so. (Score:5, Interesting)
I wonder if this anecdote is true (or based on a true incident involving Asimov):
While watching Clarke's 2001, it soon became obvious that HAL was going to be a killer. Asimov complained to a friend, "They're violating the Three Laws!"
His friend said, "Why don't you smite them with a thunderbolt?"
Re:I don't think Asimov was naive (Score:2, Interesting)
I think people miss the whole point of his writing. It was all about unintended consequences. On the surface the Laws seemed like a good idea, but they led to exactly the problems they were intended to prevent! It's like people saying Darth Vader was "a bad guy." Yes, he did bad things, and for most of his life he was a bad guy, but he didn't start OR end that way. I'm not saying that on the whole his life was balanced, but if you're talking about Vader at the end, he was a good guy at that point.