Robotics The Military United States Technology

Philosopher Patrick Lin On the Ethics of Military Robotics

Runaway1956 writes "Last month, philosopher Patrick Lin delivered this briefing about the ethics of drones at an event hosted by In-Q-Tel, the CIA's venture-capital arm. It's a thorough and unnerving survey of what it might mean for the intelligence service to deploy different kinds of robots. This is very definitely not Asimov's Laws of Robotics! As fine a mind as Isaac Asimov had, his Robot stories seem a bit naive in view of where we are headed with robotics."
  • by Anonymous Coward

    I don't at all think that Asimov was naive! I think he was concerned about what robots could become and was trying to educate people about what was needed.

    For example, look at the too-narrow definition of "human" in some stories, or the weakening of the laws in others, and the trouble they produced.

    • by TheLink ( 130905 ) on Sunday December 18, 2011 @11:09AM (#38416830) Journal

      Asimov would only be naive if he actually believed the laws could be implemented.

      I claim that any entity capable of understanding the Asimov Laws AND _interpreting_ them to apply them in complex and diverse scenarios would also be capable of choosing not to follow them.

      You can program a system not to shoot when some definable condition is met or not met (see the sketch at the end of this comment). But when you need the AI to recognize what is "human", "orders", "action/inaction" and "harm" (and to judge relative harms), you're talking about a completely different thing.

      You can train (and breed) humans and other animals to do what you want, but it's not like your orders are some non-negotiable mathematical law. Same will go for the really autonomous AIs. Anyone trying to get those to strictly follow some Law of Robotics is naive.

      Even humans who intentionally try will have difficulty following the 3 Laws. Through my inaction it is possible that some child in Africa will die, or perhaps not. How many would know or even care? FWIW most humans just do what everyone else around them is doing. Only a minority are good, and another minority are evil (yes, good and evil are subjective, but look up the Milgram experiment and the Stanford prison experiment for what I mean - the good people are those who choose not to do evil even when under pressure).
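
      A minimal sketch, in Python, of the distinction above: the first function is the kind of "definable condition" that is straightforward to program, while the second stands in for the judgments the Three Laws would actually require. Every name here (geofence, iff_response, and so on) is hypothetical, for illustration only.

      ```python
      # Hypothetical sketch: a crisp, checkable engagement rule vs. the
      # open-ended judgment a "Law of Robotics" would need.

      def weapons_release_permitted(target, platform):
          """A rule we CAN program: well-defined inputs, definable conditions."""
          inside_zone = platform.geofence.contains(target.position)
          operator_ok = platform.operator_authorized
          iff_hostile = target.iff_response == "HOSTILE"  # identification friend-or-foe
          return inside_zone and operator_ok and iff_hostile

      def violates_first_law(action, world_state):
          """A rule the Three Laws require. Each step here is an unsolved
          problem: recognizing a 'human' in arbitrary scenes, predicting the
          consequences of inaction, and weighing incommensurable harms."""
          raise NotImplementedError("no known way to compute 'harm to a human'")
      ```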

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        I think people miss the whole point of his writing. It was all about unintended consequences. On the surface the laws seemed like a good idea, but they led to exactly the problems they were intended to prevent! It's like people saying Darth Vader was "a bad guy". Yes, he did bad things, and for most of his life was a bad guy, but he didn't start OR end that way. I'm not trying to say that on the whole his life was balanced, but if you are talking about Vader at the end, he was a good guy at that point.

      • by foobsr ( 693224 )

        I claim that any entity capable of understanding the Asimov Laws AND _interpreting_ them to apply them in complex and diverse scenarios would also be capable of choosing not to follow them.

        It seems like the question is whether it is possible to implement an AI with a restricted 'free will' while it is still not clear whether humans have such a thing.

        CC.

      • by tragedy ( 27079 )

        TheLink wrote:

        I claim that any entity capable of understanding the Asimov Laws AND _interpreting_ them to apply them in complex and diverse scenarios would also be capable of choosing not to follow them.

        Of course, Asimov agreed with you on this. Hence the zeroth law [wikipedia.org]. Now, that "law" of robotics was still in the spirit of the other three laws, but involved robots choosing to violate the other laws. Asimov created the laws as a reasonably consistent guideline for ethical robotic behaviour. He did realize that

      • You can train (and breed) humans and other animals to do what you want, but it's not like your orders are some non-negotiable mathematical law. Same will go for the really autonomous AIs. Anyone trying to get those to strictly follow some Law of Robotics is naive.

        We Cyberneticists actually do train (and breed [wikipedia.org]) neural networks and other cybernetic entities [youtube.com] to do what we want, but it's not like your orders are some non-negotiable mathematical law.

        I've developed a "hive-mind" (Network of Neural Networks). The machine intelligence (MI) can add more brain power on the fly by either distributing load to more CPUs or by increasing its complexity. The new neuron networks take time to be assimilated into the collective, but this is how it does acquire new abilities as I

      • by Kelbear ( 870538 )

        Can we teach a robot to love?

        Kind of but not really trolling.

        Human love may just be an illogical, chemically driven urge to breed, but it's highly empathetic. Smart or clever animals earn some respect; cute and babylike animals are given even greater levels of protection.

        While gorillas are relatively intelligent compared to most animals, they're still just "dumb apes" that are easy to marginalize. But seeing two of them walk all the way across the enclosure to hug and cradle each other generated a lot of empa

        • by TheLink ( 130905 )

          I'm personally not even sure that we can say an amoeba or white blood cell isn't sentient, or is that stupid.

          They might just lack the ability/opportunity to show how smart they are. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3001269/ [nih.gov]

          Unlike neutrophils, macrophages live quite long, so perhaps someone could test and compare their learning abilities :).

    • by gl4ss ( 559668 )

      it's not much about education, they're a story device.

      asimov's robots provided a nice setting for a bunch of stories. you see, the robots acted as actors which had pre-defined rules; the humans, on the other hand, did not. but several stories showed that those rules didn't matter if the robots lacked information about what their actions would lead to (like the one story about robots that formed a cult).

      the stories in I, Robot are almost all detective stories of the sort where they're trying to find the moti

  • by bungo ( 50628 ) on Sunday December 18, 2011 @10:37AM (#38416588)

    Isaac Asimov had, his Robot stories seem a bit naive

    Are you sure you read the same Asimov Robot stories as everyone else? Asimov would set up his laws of robotics, and then go on to show how problems would occur by following those rules.

    Remember when he added the 0th rule in one of his later books? Again, it was because he was NOT naive and knew that the 3 rules were not enough.

    • by Anonymous Coward on Sunday December 18, 2011 @10:43AM (#38416652)

      I think this Patrick Lin is a bit naive if he thinks that Asimov made the 3 rules as some kind of guideline for how to build robots.
      The 3 rules were just a device to explore unintended consequences of these kinds of things.

      • by TheLink ( 130905 ) on Sunday December 18, 2011 @11:15AM (#38416878) Journal

        I wonder if this anecdote is true (or based on a true incident involving Asimov):

        While watching Clarke's 2001, it soon became obvious that Hal was going to be a killer. Asimov complained to a friend, "They're violating the Three Laws!"
        His friend said, "Why don't you smite them with a thunderbolt?"

      • Lots of geek types seem to take them as literal laws of robotics. I've seen people get all worked up because an autonomous military robot would "Violate the three laws of robotics." They liked the stories so much they decided that those laws are real.

        Since they get bandied about like that all the time, I'm not surprised some journalist gets taken in by that.

        • by HiThere ( 15173 )

          It's quite reasonable to get worked up about violating those laws. That's like running high voltage wires without insulation. Even *with* insulation you get into lots of trouble if you aren't careful, but without it...

          (I'm not really talking about transmission lines. I don't know whether those are insulated, or just kept separated. I'm thinking about inside electronic devices, like CRTs.)

          If you'll note, on really old wiring, where the insulation wasn't that good (cloth wrapped around the wires), the wiri

          • "really old wiring", ie, "widow maker" wiring, had no insulation. Just like an electric fence, you drove a nail through a ceramic insulator into a rafter or joist, then wrapped naked copper wire around it, to keep it in place. The two strands of wire were generally spaced about a foot apart, but sometimes they got closer. Add in the fact that a penny would fit into the fuse holder, and it created a LOT of widows. And, killed some widows too!

            I can still find examples of widow maker wiring in the county I

        • They liked the stories so much they decided that those laws are real.

          You mean they thought they were like the laws of thermodynamics? I've seen that too.

          The first time I heard them mentioned I came up with something more or less like the bodyguard paradox.

          • Maybe not quite like the laws of thermodynamics, but at least like federal or constitutional law. Some people may think they really can't be broken, period, as in it is not possible, but more of the ones I've seen seem to think that they are the kind of law that you have to follow or you are in trouble.

            Either way they have trouble understanding the concept that they were just a story point for a particularly good and influential series of short stories.

      • The 3 laws were a device intended to produce science fiction stories. The 4th law was a device to ensure more sequels to the original stories.
    • by koan ( 80826 )

      I agree with you: Asimov wasn't naive, and David wasn't Christian.

    • I always assumed the laws were in place because robotics would have been forbidden otherwise, and every unit had to have these laws burned into its BIOS, or it would be an illegal device.
      Since there hasn't been any mention of forbidding robots yet, I doubt the 3 laws system will ever exist. Part of the CIA's exercise here seems to be to prevent the ethical discussion from halting the development. On the one hand I think it is good to see an Agency start a 'preemptive' discussion
      • As I recall, Asimov's 3 laws were a fundamental design feature of the robot "positronic" brain, and he wrote that it would be a herculean effort to design a brain without them. This was just his way of eliminating the possibility that there were robots that would disobey the laws.

        Isaac Asimov did not write those stories as a philosopher or ethicist. He was writing science fiction stories, or more precisely, detective stories thinly veneered as science fiction. In almost every story, robots were found to be

        • by HiThere ( 15173 )

          He did claim that they were a fundamental feature of the programs designed to operate the robot minds. And, yes, he said that modifying the program to be stable in the absence of those laws would be a herculean effort. But then so was writing the program in the first place.

          There wasn't assumed to be anything in the way of a natural law that made it impossible, but that you'd almost need to start the design from scratch. And it represented years (or, depending on the story, decades) of work.

          Think of tryin

    • The 0th rule is not enough either. The optimal course of action for humanity is arguably to wipe it out completely and to rebuild it from scratch in a controlled environment. I would fully expect a robot obeying the 0th rule to be genocidal.

      Quite frankly, every single "rule" you can think of will have unintended consequences, except for the rule that explicitly states "you shall not act contrary to the expectations of brain-in-a-jar X, to which you shall make periodical reports", for a suitably chosen X. No

    • No, they are completely naive.

      The problems that occur with the robots following Asimov's Laws are completely ridiculous.
      In the real world, programming does not work that way, and even if it did, every robot made would break down within a day's time when it encountered one of the rule paradoxes that characterize his novels.

      • by HiThere ( 15173 )

        You misunderstand them.
        The words were never intended to be the laws. They were intended to be an English translation of the basic principles. The actual implementations were intended to be balancing acts, and only violations of the first law even contained the potential of destroying the robot's mind. Even there some tradeoffs were allowed. Some damages were considered more important than others, etc. (Read "Runaround" again and think about it.)

        I'll agree that the words, as stated, were not implementab

        • It was not the words themselves that I was talking about.
          The basic principles (aka the English words) are pretty good, and probably what you would actually want IRL.
          But the implementation is the stupidest, most useless implementation I have ever seen. Robots operating with systems similar to those in the books could never operate in the real world.

    • the 3 rules were not enough.

      No amount of rules will ever be enough. Rules are about modelling the world, but no model, being a simplification, will ever be able to represent the complexity of the world. No matter the quantity or quality of the rules, the robots will sooner or later arrive at a conflict, an ambiguity, or a plot device. This of course also happens in ethics and philosophy. Thus intelligence is as much about creating rules as it is about breaking them.

      Now the interesting thing in (Asimov's) robot

      • by HiThere ( 15173 )

        As a social animal, it is NECESSARY that we "outsource" some of our decisions to a common-to-our-group rule-system. Every social animal does it, whether wasps, wolves, or humans. Humans are unique in the detailed amount of decisions that they outsource, and in the variance among groups in what the rules are.

        In my opinion we (currently in the US) have too many rules, and they aren't a fair trade off. I don't think this is an evolutionarily stable situation. But that gets resolved over the long term (prob

    • by nine-times ( 778537 ) <nine.times@gmail.com> on Sunday December 18, 2011 @12:24PM (#38417420) Homepage

      Remember when he added the 0th rule in one of his later books? Again, it was because he was NOT naive and knew that the 3 rules were not enough.

      Maybe I'm crazy, but I never thought the 3 rules were even the point. I didn't even think it was about robots per se. Asimov's interest seemed to me to be more directed at the difficulties with systematizing morality into a set of logical rules. Robots are a handy symbolic tool for systemizing human behavior in thought experiments or fiction.

      I guess I could be reading too much into things, but really arguing about the 3 rules seems to me a bit like arguing about the proper arrangement of dilithium crystals in the Star Trek universe-- it may be fun or interesting for the sake of a discussion, but it's kind of not that important.

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Sunday December 18, 2011 @10:41AM (#38416628)
    Comment removed based on user account deletion
    • by Anonymous Coward

      You've missed why he was naive. He was naive because the three laws can't be implemented to begin with.

  • by Anonymous Coward

    He knew exactly what uses humans could and would put robots to. That was the whole point of his laws: to show that it didn't have to be that way, and that we could build robots with safeguards built in. Even then he went to lengths to show that his laws were also not sufficient in every case to prevent harm.

    • by Broolucks ( 1978922 ) on Sunday December 18, 2011 @11:41AM (#38417076)

      He starts from the assumption that strong safeguards are needed, because robots will be like humans and will try to circumvent them. In practice, robots will circumvent their imperatives about as much as humans commit suicide - at the very worst - because obviously we will set things up so that only obedient units ever get to transmit their "genes" to the next robot generation, so to speak. Making robots with human-like minds and then giving them rules, as Asimov seems to suggest, is a recipe for disaster regardless of the rules you give them. It's good literature, but we're not heading that way.

      • I think Fred Saberhagen wrote the books about where we are headed, not ACC.

        http://www.berserker.com/FredsBerserkers.html [berserker.com]

        Skynet was an amateur compared to these guys. :)

      • by HiThere ( 15173 )

        Sorry, but when robots *DO* become intelligent they will be problem solving devices operating within constraints. When given a problem, they will attempt to solve it within those constraints. If you state the constraints incorrectly, then that's too bad. They won't even know that the solution they came up with is "cheating" instead of being creative. *You* know your intentions, but they can't read your mind, so they aren't intentionally "breaking your intentions".

        That said, they will only act within the

        • On the contrary, when robots are intelligent they will essentially be reading your mind, because that's precisely what we *have* to train them to do. We can't encode any "constraints", much less "intentions", as general laws because it's too difficult. Instead, what we can do is encode them as a massive, crowdsourced set of (order in plain English, intended behavior) pairs and train machines to behave correctly in all the virtual situations listed. Provided we hold off a sizable set of these input/output
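
          A rough sketch of the training scheme the parent describes, assuming the (plain-English order, intended behaviour) pairs can be collected as text. The corpus, the placeholder "model", and all names are invented for illustration; the held-out split is the check the comment alludes to.

          ```python
          import random

          # Hypothetical crowdsourced corpus of (order, intended behaviour) pairs.
          corpus = [
              ("fetch the red ball", "locate(red ball); grasp; return to operator"),
              ("do not enter the kitchen", "remove(kitchen) from navigation graph"),
              ("wait for the operator", "hold position; poll for operator input"),
              ("stop immediately", "halt all actuators"),
          ]

          random.shuffle(corpus)
          split = int(0.75 * len(corpus))
          train_pairs, held_out = corpus[:split], corpus[split:]

          def train(pairs):
              # Placeholder for whatever learning machinery maps orders to behaviour;
              # a lookup table obviously cannot generalize, which is the hard part.
              return dict(pairs)

          def held_out_agreement(model, pairs):
              # Does the machine behave as intended on orders it never saw?
              hits = sum(model.get(order) == behaviour for order, behaviour in pairs)
              return hits / len(pairs)

          model = train(train_pairs)
          print("held-out agreement:", held_out_agreement(model, held_out))
          ```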

  • by erroneus ( 253617 ) on Sunday December 18, 2011 @10:44AM (#38416660) Homepage

    We are already seeing this happen and have been seeing it for hundreds of years... thousands even. The problem with people is that there are too many of them and that they often disagree with their leaders as to what is best for them. So when disagreements happen, there has to be a way to manage them. There are lots of ways... it's just that some would prefer to have machines go out and 'control' those who disagree. Getting other people to do your dirty work for you is often fraught with complications like conscience and morality.

    • Getting other people to do your dirty work for you is often fraught with complications like conscience and morality.

      I think the Afghanistan and Iraq wars have proven that when the cost of a war, in domestic lives, is relatively low, wars are more palatable to the populace.

      Look at the Republican debates - all but one of the candidates is itching to start some more wars.

      Robotic soldiers will just allow the politicians to go kill brown people without relenting, so they should be opposed on that basis. Pound the

  • hmm (Score:5, Insightful)

    by buddyglass ( 925859 ) on Sunday December 18, 2011 @10:46AM (#38416672)
    Much would seem to hinge on whether you view drones as making independent "decisions", like a human does, or whether you view them as simply reacting to stimuli in a fairly predetermined way. In the former case they're autonomous agents. Maybe something so "new" that it might cause us to think differently about the ethics of warfare. In the latter case they're just another man-made tool to maximize killing ability and minimize risk. Other than that they have some (apparently pretty simplistic) AI baked in, from the perspective of "killing without risk to one's self or even having to experience the horrors of war", how are drones that different from cruise missiles?
    • by Beryllium Sphere(tm) ( 193358 ) on Sunday December 18, 2011 @11:16AM (#38416890) Journal

      That's the key difference between Asimov's robots and ours, and the reason the Three Laws were needed.

      Susan Calvin explained once that robots knew at some level that they were superior to humans, and that without the First Law, the first time a human gave a robot an order, the robot would kill out of resentment.

      • by HiThere ( 15173 )

        I think you are correct, and that she did say that.

        This is one place where Asimov got robots wrong. They won't work that way. Mind you, it doesn't help things all that much, because it's just the motivation that's wrong, not the result. If there isn't a constraint that you mustn't kill people, the robot will do it, but not because it's angry, just because it's the easiest way to solve the problem. (Well, of course this would mean that it wouldn't necessarily kill the person that gave the order, dependin

    • Who is responsible for the AI's actions? Is it the machine? Is it the person who set up and turned on the machine, or the person who designed the machine?

      As far as the law goes, isn't it important that the accused understand their own actions? I.e., the insanity defense allows you to avoid taking responsibility for your actions. So if the machine does not understand anything, then how can it be held responsible?

      By this test, the responsibility for t

      • by HiThere ( 15173 )

        I think it quite possible that the people in charge will not accept that the intelligent robots have any feelings. Read some Victorian descriptions of how the non-white races had lesser feelings; the feelings of robots are pretty much guaranteed to not be commensurate with ours. Not necessarily lesser, but definitely different.

    • Much would seem to hinge on whether you view drones as making independent "decisions", like a human does, or whether you view them as simply reacting to stimuli in a fairly predetermined way. In the former case they're autonomous agents. Maybe something so "new" that it might cause us to think differently about the ethics of warfare. In the latter case they're just another man-made tool to maximize killing ability and minimize risk. Other than that they have some (apparently pretty simplistic) AI baked in, from the perspective of "killing without risk to one's self or even having to experience the horrors of war", how are drones that different from cruise missiles?

      The point I was going to make. Our drones are nothing like Asimov's robots. Asimov envisioned robots that could think, learn and adapt on their own, almost as well as humans. The three laws were created to give that robot morals and ethics. I'm not saying that we won't get to that point, but we're still a long way off from robots that would need the three laws. What we have now are simply autonomous spying and killing machines, that also can be overridden and controlled by a human remotely. Definitely not

      • by Anonymous Coward

        I still don't see that the author (Lin) has mentioned Asimov at all. So who's the moron now?

  • If anything Asimov saw the potential for where we are going, and suggested an alternative.
  • Not again (Score:2, Interesting)

    by Jiro ( 131519 )

    Regardless of whether the robots are used in ethical ways or not, it is guaranteed that most of the opposition to their use will be from groups who are just looking for a way to oppose either a specific war or all wars the US is involved in. The robots will be a hook for disingenuous anti-war or anti-US activism that would not actually end if the US stopped using robots.

    Every single time the headlines read "US uses ___ for military purposes, ethicists are talking about it" this has always been what has hap

    • Regardless of whether the robots are used in ethical ways or not, it is guaranteed that most of the opposition to their use will be from groups who are just looking for a way to oppose either a specific war or all wars the US is involved in. The robots will be a hook for disingenuous anti-war or anti-US activism that would not actually end if the US stopped using robots.

      Every single time the headlines read "US uses ___ for military purposes, ethicists are talking about it" this has always been what has happened.

      You're talking politics [wikipedia.org], not ethics [wikipedia.org]. Big difference.

  • And Stalin said: "As fine a mind as Karl Marx had, his ideas seem a bit naive, in view of where we are heading with communism."
  • by Hentes ( 2461350 ) on Sunday December 18, 2011 @11:11AM (#38416850)

    Military drones are not autonomous, but controlled by humans. Killing with drones is unethical the same way killing with a gun or with your bare hands is.

    • Ethics is hard (Score:5, Insightful)

      by Okian Warrior ( 537106 ) on Sunday December 18, 2011 @12:16PM (#38417366) Homepage Journal

      This is a subtle point with ethics, so I'm not surprised that you don't get it.

      Killing is not unethical per se.

      We kill people all the time and consider it ethical because of justifications behind the killing. Police can kill in the line of duty, soldiers can kill in duty of war, doctors can administer mercy killings to comatose patients, and so on.

      Killing becomes unethical not because it is killing, but because it is unjust. When the killing goes outside of the bounds of what we consider justified and reasonable, then and only then does it become unethical.

      Drone killings are not unethical in and of themselves, but using drones removes most of the social restraint we have against unethical killing. Unlike using a gun, no human "feels" the killing, there are no witnesses, and there is a diluted sense of responsibility.

      This makes drones easier to use and as a result, they will be used frequently for unethical killings.

      • You know...

        I'm sure that same argument has been made by some pundit about just about EVERY advance in military technology that served to keep one side's troops somewhat less in harm's way than the other's.

        When the U-Boat and torpedoes came about, the Admiralty condemned them as cowardly, illegal, and: "A damn un-English way to fight a war." But now just about every navy includes extensive submarine capabilities.

        Firing an artillery shell at a target that's beyond your horizon also removes one side from a certa

      • by Hentes ( 2461350 )

        Killing is not unethical per se.

        Western ethics is mostly based on the Bible which clearly states that "Thou shalt not kill.".

        Drone killings are not unethical in and of themselves, but using drones removes most of the social restraint we have against unethical killing. Unlike using a gun, no human "feels" the killing, there are no witnesses, and there is a diluted sense of responsibility.

        There IS a human controlling the drone, pushing the button, and seeing the kill through a camera. This would be somewhat different with autonomous robots, but there will always be a human giving the command to kill or "go out hunting". An army can't function efficiently when responsibilities are unclear; there will always be someone responsible for the drones. Also, how is this different from ordering your dog to kil

        • by tsotha ( 720379 )

          Western ethics is mostly based on the Bible which clearly states that "Thou shalt not kill.".

          Except that it really doesn't say that. The original wording was much closer to "you shall not murder", which brings in a lot of contextual baggage. The ancient Hebrews had the death penalty and applied it much more liberally than we do.

          • Thank you, tsotha. I was about to post the same "not murder". It's amazing that people so blindly ramble on and on with that "thou shalt not kill" nonsense, when the Bible is filled with killing, in one form or another.

      • Killing with drones is unethical the same way killing with a gun or with your bare hands is.

        This is a subtle point with ethics, so I'm not surprised that you don't get it.

        Killing is not unethical per se.

        The crux of the GP's post was that, when it comes to remote-controlled drones, the moral responsibility hasn't moved one bit - like with a gun, it still rests with the soldier who wields it.

        Truly novel questions of morality only come into play when the robots autonomously make the decision to kill (or destroy, or do anything, really)

        using drones removes most of the social restraint we have against unethical killing. Unlike using a gun, no human "feels" the killing, there are no witnesses, and there is a diluted sense of responsibility.

        I think you're somewhat right here, but recall that the people on the ground being bombed do see what happens, and they're pretty clear on what they think of the morality of it

      • by prefec2 ( 875483 )

        Killing is unethical per se. It will stay that way even if you find a justification to shoot someone. From a rational point of view it is logical to defend yourself. As a Christian, Muslim, or Jew it is unethical to shoot someone or kill someone. But, as an argument, I accept that self-defense is the only allowed case for killing someone else.

        Drones are not used for self-defense nowadays. They are used to spy on people in Iran. They are used to murder people abroad. And it makes no difference if they committed a cr

    • I disagree.

      1st Claim: The US military has a number of autonomous drones; currently unarmed examples include Global Hawk, X-37, and RQ-3. There are certainly others, and there may be armed examples.

      2nd claim: It is easily argued that remote-killing does not fulfill the proportionality argument of just war (bellum iustum). The very fact that the US is so heavily investing in them, indicates that the loss of a UCAV is considered less costly than the loss of the crew, thus, we as a combatant are not subject to the s

      • by Kjella ( 173770 )

        2nd claim: It is easily argued that remote-killing does not fulfill the proportionality argument of just war (bellum iustum).

        Why? Trying to get in a position where you can kill the enemy, but the enemy can't kill you has been the fundamental essence of warfare since forever. Getting air superiority to enable free bombing, artillery with longer range than the opposition, stealth so you can see them but they can't see you. It makes very little real difference if the US had nuked Hiroshima from a computer terminal back home or a pilot high in the skies above as long as the asymmetry is there. That part isn't fundamentally new.

        I gues

    • And autonomous killing robots would receive instructions from humans. Even when remotely controlled, drones reduce the difficulty and consequences of killing, and risk bypassing ethics entirely.
      • I think that you are missing at least a portion of one important fact. That "remotely controlled" thing. The robots are reaching the point where programming can and does kick in, when remote control is lost.

        To the point: that drone that Iran captured recently was not under "remote control". Communications were jammed, and the GPS was spoofed to confuse the drone about its position. So, it wasn't "remotely controlled" when it decided to land. In effect, the drone "decided" that it should land, and it

  • by wisebabo ( 638845 ) on Sunday December 18, 2011 @11:13AM (#38416864) Journal

    (From the article) So the Intl. Red Cross "bans weapons that cause more than 25% field mortality and 5% hospital mortality". (I assume these are the same guys who came up with the Geneva conventions so maybe there is some enforceability as in a war crimes trial afterwards).

    Wow, and I thought all's fair (in love) and war. Doesn't this make every nuke illegal? (the article said this is one of the justifications for banning poison gas). So the concern is that as these drones get better, they may have a lethality approaching 100% making them illegal even if there are zero casualties from collateral damage.

    I thought the whole point of weapons was 100% lethality. I guess I never thought about how terrifying such a weapon would be (as if war wasn't terrifying enough). Weapons have come a long way since the first club wielded by that ape-man in that documentary "2001".

    • by pla ( 258480 ) on Sunday December 18, 2011 @11:44AM (#38417108) Journal
      I thought the whole point of weapons was 100% lethality.

      The ethics of killing aside, the "best" weapon for strategic (as opposed to personal self-defense) purposes doesn't kill, but rather, maximizes the resource drain required to deal with the damage. Ideally, a "perfect" weapon would leave your enemy's troops all alive, all severely crippled, and all not quite damaged enough to consider letting them die a mercy, yet requiring some fabulously expensive perpetual treatment.

      Some of the greatest victories in human history came down to such trivial nuisances as dysentery or the flu.
      • or just down to not having enough to eat and the weather being too cold

        the famous graphic:

        http://www.edwardtufte.com/tufte/posters [edwardtufte.com]

        logistics (getting supplies to the front line) is more of a deciding factor in any war than how lethal your armament is

        and the wise defender does not fight the front lines, they fight the supply lines

      • Much as I agree with your premise, I have to argue with your examples.
        Dysentery and the flu were "lucky" accidents for British and Spanish empire builders. Maybe yellow fever was intentionally distributed via donated blankets, but these biological weapons are really hard to aim, and the bad hygiene of Europeans was never INTENTIONALLY a military strategy.

        A case could be made, however, for microbes deciding human dominance, over actual militaries. Even in the US Civil War, more troops died of poor footwea

    • by Anonymous Coward

      Better to maim your opponent than to kill him; this is the first thing to learn.

      When you kill an opposing fighter, he's just dead.

      If you wound him badly, not only is he out of the fight, but his wounded status puts additional material and morale strain on the opposition, who must then evac and care for that person.

      • That isn't necessarily true. As with most things, the situation dictates, but there are certainly a lot of situations where you want the enemy dead rather than wounded. Most situations in fact.

        -Former Marine.

        • A former marine may or may not have made that statement. However, what AC stated is official military doctrine. I first read the statements in a "Military Requirements for Petty Officers First and Second Class" manual. A dead enemy soldier removes one enemy soldier from a conflict. A wounded enemy soldier removes as many as seven enemy soldiers from the conflict.

          As has already been stated above, logistics win wars, not armament. And wounded soldiers seriously impair logistics.

          • As has already been stated above, logistics win wars, not armament. And wounded soldiers seriously impair logistics.

              And no military strategist ever figured out that the response to this is: treat the wounded as dead and carry on until the battle is over? I know this runs headlong into ethics questions, but on a pure strategy level it's a win. Then again, apparently military strategists prefer losing X% of their force in an attack where they don't know which men will die, vs. losing far fewer men in a guarante

            • A statistician should attempt to take into account something called "morale". Troops who know they are doomed, and that their leaders will not assist any survivors for hours, or days, or possibly even weeks, simply won't fight as well as troops who KNOW that the leadership will do all in their power to tend to the sick and wounded.

              An ounce of morale is worth at least a couple pounds of weaponry.

    • They'd like to, but the Red Cross doesn't get to make those kinds of determinations. That all comes from the SIrUS Project, as near as I can tell. The Red Cross thinks it would be a great idea, but it has no force of law or treaty that I can see.

      The actual Geneva Convention rule is more along the lines of weapons that aren't lethal enough. You can't use weapons that cause superfluous injury or unnecessary suffering. For example you couldn't design a weapon that would, say, go in and destroy someone's liver but l

  • NOT "Robots" (Score:2, Interesting)

    by Anonymous Coward

    The drones are remote-controlled devices and no different from "distance weapons" such as longbows or precision rifles. There was a discussion hundreds of years ago about whether such weaponry is morally OK or not, and apparently the human race has decided it is permissible. Again, drones are NOT robots, as they have 0% scope to decide about weapons engagement. There is always a human making the "kill" decision. It has ZERO to do with Asimov's reasoning.

    Whether you think warfare in Afghanistan is good| achi

  • by nurb432 ( 527695 ) on Sunday December 18, 2011 @11:19AM (#38416912) Homepage Journal

    And the standards are written by the victors.

    • there is a corollary to that observation:

      the victors are the ones with the better ethics

      such as happiness of the societies fighting, economic capacity, as determined by cultural proclivities, etc. you can't win a war if it is at the expense of making your society miserable, for example, or destroying your economy

      everyone knows the cynical observation "might makes right"

      but few appreciate the subtle prologue: "right makes might"

      wars are constant. battles are won and lost. but the victor in the long term is e

      • "Right makes Might" is a good principle, but doesn't always work out. Consider, for example, the Peloponnesian War.
  • Vini: We have a lot of work to do. You shouldn't be wasting time reading stories on the computer. Waterproof heart and brain monitor?
    Guido: It's a good story about ethical type stuff. Uh... check.
    Vini: I'm just sayin. If the boss catches ya you're screwed. Robotic dunking arm?
    Guido: Uh... check. It aint like we gots the eyes or ears set up yet.

  • From TFA:

    Robots can monitor vital signs of interrogated suspects, as well as a human doctor can. They could also administer injections and even inflict pain in a more controlled way, free from malice and prejudices

    This is a terrible (human) atrocity.
    This is humans renouncing their humanity, by trying to get as far as possible from the victims of their actions through robots and drones, thus avoiding the moral responsibility. Horror.

  • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Sunday December 18, 2011 @12:34PM (#38417484) Journal

    We lose touch with the real cost of war... and with the importance of what, in the end, might be attained by it.

    In the end, I believe that the only things that justify going to war against another are things that one is prepared to sacrifice one's life for, so that future generations might be able to have them. And in the end, our appreciation for whatever might be gained because of a past war is only amplified by the value of the sacrifice that went along with it.

  • Surprised that in an article that long, this wasn't mentioned:

    The ability to wage war without relying on the morality of individual soldiers. While soldiers are certainly capable of immoral actions, like rape and indiscriminate slaughter, that a machine is not, it is also that humanity that can lead them to disobey orders, stop fighting...

    Today, this is probably much more important in domestic issues. Imagine the recent Arab Spring if the Arab dictators had access to such robots. They could effectively control their pop

    • by Grog6 ( 85859 )

      Imagine a psychopath like Hitler in charge of such an army. Not having to care about defections, unwilling troops...

      Give it a few years; we won't have to imagine anything.

  • From TFA:

    While robots can be seen as replacements for humans, in most situations, humans will still be in the loop, or at least on the loop--either in significant control of the robot, or able to veto a robot's course of action. And robots will likely be interacting with humans. This points to a possible weak link in applications: the human factor.

    Which will be addressed by SkyNet.

  • http://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
    "There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us

  • The headline talks about military robots, but in his introduction Mr. Lin talks about torture. Torture is unethical. And it stays unethical when it is performed by a machine. The only difference is that a person is not actually performing it. But the issue remains: the victim is still a victim, and the person ordering the torture is still a scumbag. Similar logic applies to any machinery which is used to spy on people or bomb them. It makes no difference if the machine is more automated or not. A human pul

  • Currently, the Asimov Laws of Robotics come off as rather naive. That's because we're just getting started, and the utility of said Laws isn't generally recognized. It won't be until we've been hoist by our own petard a few times, including the power elite. Only when a large and broad enough sample of people have been killed, without a sustained advantage accruing to any one nation, will Asimov be seen as prophetic.

  • ... of their creations.

    That should fix it.

    If you're working on a general purpose algorithm to (say, recent news) improve the efficiency of multiplication of large sparse matrices, which will have multiple non-murderous uses, then you're probably going to be able to talk your way off the gallows by pointing to how widely your application is used outside the kill-bot industry.

    OTOH, your script for tracking moving humanoid targets and calculating whether it's better in a fiscal (and re-supply logistics) sen

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...