US Killer Robot Policy: Full Speed Ahead

Lasrick writes "Princeton's Mark Gubrud has an excellent piece on the United States killer robot policy. In 2012, without much fanfare, the U.S. announced the world's first openly declared national policy for killer robots. That policy has been widely misperceived as one of caution, according to Gubrud: 'A careful reading of the directive finds that it lists some broad and imprecise criteria and requires senior officials to certify that these criteria have been met if systems are intended to target and kill people by machine decision alone. But it fully supports developing, testing, and using the technology, without delay. Far from applying the brakes, the policy in effect overrides longstanding resistance within the military, establishes a framework for managing legal, ethical, and technical concerns, and signals to developers and vendors that the Pentagon is serious about autonomous weapons.'"
  • Cylons
    by jfdavis668 ( 1414919 ) on Friday September 20, 2013 @02:36PM (#44905691)
    or other movie equivalents.
    • Exterminate! Exterminate!
    • Re:Cylons (Score:4, Funny)

      by smooth wombat ( 796938 ) on Friday September 20, 2013 @03:34PM (#44906457) Journal

      I was thinking more along the lines of Bender.

    • "All of this has happened before, and will happen again."
  • by Anonymous Coward

    "Damn the Asimovs, full speed ahead!"

  • by GuardianBob420 ( 309353 ) on Friday September 20, 2013 @02:37PM (#44905707) Homepage

    I'm not even sure what else to say here... so much for the Three Laws ;-)

    • In the US, the Three Laws of Robotics would be as complicated as copyright laws.
      • Law 1, article 4, section 53, subsection 12
        • by Anonymous Coward on Friday September 20, 2013 @03:11PM (#44906205)

          Law 1, article 4, section 53, subsection 12

          You shall not do any harm to a human being unless they are attacking or trying to arrest a CEO/Billionaire.

          Law 1, article 4, section 53, subsection 12 a. If CEO is being arrested by armed humans of any sort (including police), you are free to cause said humans harm and even death.

          Law 1, article 4, section 53, subsection 12 subsection b. Goldman Sachs, Bank of America, and every other Wall street banker shall be protected even if it means killing every other human in the vicinity.

          Law 1, article 4, section 53, subsection 12 subsection c. : folks who can pay 100 million dollars per year shall have unlimited protection by said robots.

          Law 1, article 4, section 53, subsection 12 subsection d. : Members of Congress are immune forever from actions of said robots.

      • Directive 4 []

    • by spire3661 ( 1038968 ) on Friday September 20, 2013 @02:47PM (#44905859) Journal
      The Three Laws were EXPRESSLY invented to show why such a simple system will not work. Asimov spent extensive amounts of time pointing this out to us. It is frustrating people think '3 laws safe'.
      • by vux984 ( 928602 ) on Friday September 20, 2013 @02:59PM (#44906059)

        The Three Laws were EXPRESSLY invented to show why such a simple system will not work.

        The three laws were expressly invented to make a system that works.

        He then spent extensive amounts of time exploring them for unintended consequences and corner cases where they did not work.

        It is frustrating people think '3 laws safe'.

        It's FAR more frustrating that, rather than trying to -fix- the edge cases Asimov uncovered with the 3 laws (later 4 laws), we've decided to just go full steam ahead without any laws at all, with robots designed for the sole purpose of killing us.

        • by JWW ( 79176 ) on Friday September 20, 2013 @03:08PM (#44906163)

          Ah, but one law is just waaaay simpler than three.

          It's just unfortunate that law is -- Kill humans.

        • by Charliemopps ( 1157495 ) on Friday September 20, 2013 @04:13PM (#44906821)

          These are AI laws. They have nothing to do with what we have now, as we don't have any real AIs. Our governments use of these will basically be machine guns with servos that shoot anything that moves or possibly drones that are authorized to shoot anyone identified with a weapon in hand in a given area. If we actually had real AIs I'd be less concerned, but having a computer that's likely not any more intelligent than my smartphone or, at best, my desktop PC decide if someone should be killed or not is terrifying.

        • by Anonymous Coward on Friday September 20, 2013 @04:24PM (#44906951)

          > Its FAR more frustrating that rather than trying to -fix- the edge cases Asimov uncovered with the 3 laws (later 4 laws), we've decided to just go full steam ahead without any laws at all with robots designed for the sole purpose of killing us.

          It's simple. Robots will follow orders. They will have no qualms about executing illegal orders. No issues with killing civilians. No hesitation about killing the very population that they were supposedly built to protect.

          Army and police might actually not gun down a mob of 1000s just to protect a couple of corrupt and powerful fucks. They didn't in Egypt. The elite know this... so they need robots who will protect them unconditionally.

        • It seems like the most efficient way to kill all humans without the side-effects of nuclear war, especially if we can build killer robots that can make more killer robots.
          What do you have against efficiency?
        • Did you even RTFA? (Score:5, Interesting)

          by Anonymous Coward on Friday September 20, 2013 @05:45PM (#44907773)

          So, I'm working in the field, for the DoD. We're actually putting more work into getting them to autonomously prevent a shot that humans commanded than we are into getting them to take a shot uncommanded. The only weapons that are anywhere close to usable are anti-aircraft systems (starting in the 1970's) that have a self defense mode. Given that we're currently so cowardly in the ROE that more civilians have died in Afghanistan from our unwillingness to shoot than have died from our decisions to shoot, I'd say we're nowhere close to letting a robot make a decision, and won't be in the next decade.

      • The Three Laws were EXPRESSLY invented to show why such a simple system will not work. Asimov spent extensive amounts of time pointing this out to us. It is frustrating people think '3 laws safe'.

        The *early works* show how it wouldn't work in simple robots, while the later works show how advanced humaniform robots such as R. Daneel Olivaw ought to be able to resolve difficult situations with an increasingly narrow gray area that would prevent them from resolving only the most complex situations (they actually only had problems with resolving the zeroth law issues - I don't recall a humaniform robot getting conflicted over the implications of laws 1, 2, and 3). Reading that sort of like refining a fracta

      • by squiggleslash ( 241428 ) on Friday September 20, 2013 @03:49PM (#44906589) Homepage Journal

        That, perhaps, might be Will Smith's interpretation of Asimov's novels, but it's not any sane reader's interpretation.

        Asimov's novels are pretty clear, the three laws do, in fact, restrict the robots from being OUT OF CONTROL KILLING MACHINES!!1!. There are only two appearances of such robots (and then, they're hardly described by such a term) - one short story, whose name I forget, has a deliberately weakened set of laws in it. The other is, of course, Giskard and Daneel's formulation of the zeroth law, where, again, the robots are no longer obeying, exactly, the three laws.

        What Asimov does do is describe the consequences of the three laws, showing them to be imperfect in terms of creating universal machines, but effective at preventing the robots from going out and killing everyone.

        Asimov's motivation for creating the three laws was to deal with the plethora of inane "Scientist builds perfect universal machine, doesn't realize that a perfect machine will kill maker until it's too late" stories that started entirely legitimately with Frankenstein but then descended into cliche hell, as story teller after story teller thought it was wildly original to pretend that scientists are dumb and would build destroyers of the universe to prove how clever they were. Annoyed, he wrote a set of rules and then wrote story after story explaining them. The stories didn't debunk the rules, or show they wouldn't work. The stories usually showed that they did, and worked in unexpected ways.

        • by Kjella ( 173770 )

          Oh please, there's so much "Star Trek computer" in it. Hand a robot a gun, put it in front of another guy with a gun about to kill someone. By the first law of robotics, do you by action kill the would-be killer or through inaction let the victim be killed? It always picks the "right" solutions as if by magic. By the way, wasn't there also a story where the term "human" became corrupted so the robots didn't violate their laws by attacking "sub-humans" or something to that effect? It's been a long time sin

    • You can keep the laws as is, you just have to redefine 'human.'
    • I'm not even sure what else to say here... so much for the Three Laws ;-)

      The three laws are the Robotic Constitution. And now witness all the exceptions granted by the KillerRobotic Congress of the federal robotic government in Machinegton A.C.

    • Let's balance this out on our options. Option one is three laws from a science fiction author who wrote the three laws and then turned around and wrote multiple books about how they inevitably just couldn't work. Option two involves the real world and keeps our service members out of harm's way from real missiles and bullets.

      Let me think real hard about this: obey three fictional laws parodied by the very person who came up with them, or protect real human beings from harm? I know that might sound like a tough choice

    • If you were going to bomb a building, and thus kill everything in a kill radius, is it significantly different if you send in a kill bot with no off switch? Maybe the AI on the robot makes mistakes and shoots some non-combatants. The bomb doesn't even have AI.

      Mission planners have the trade-off of: do I send in very selective humans to avoid casualties, but put those humans at risk, or do I drop a bomb and not put my soldiers at risk but have the potential for indiscriminate killing. With unclear info it's

    • Nice ones like Hal or the Terminator.

      Fred Saberhagen's Berserkers series is much more terrifying. []

      "What do you mean you killed the guy that knew the override password? He knew too much, Sir." :)

    • by MrBigInThePants ( 624986 ) on Friday September 20, 2013 @08:53PM (#44909017)


      The three laws are a bunch of overhyped rubbish - quadruply so in the case of robots that are not fully AI and could not even interpret them. Also due to the extremely naive assumption that the people in charge of making robots will want them to respect human life and implement these anyway - I mean, they don't currently?!

      Militaries? FoxConn? NSA? CIA? etc? Their track record is CURRENTLY completely abysmal.

      Yes I know you were joking. I just don't see any of this as a joke long term.

      This is not conspiracy - I look at the CURRENT state of affairs and add killer robots and feel sick to my stomach. This really is no joke and it is about time the plebs woke the fuck up already.

      You (and many other nations) have currently lost control of your government and if anything it currently works against you wherever it can for the benefit of the ruling class. (NB: Not you) This much is patently obvious and if you disagree I suggest you research the current value of the shadow banking system (100 trillion), what the rich pay in REAL taxes, and where 99% of the additional wealth generated since the crash has gone (i.e. to 1% of the population) while huge numbers of your fellow citizens (remember them?!) are either the "working poor" or unemployed while nothing is done to help.
      None of this is conspiracy - this is well documented fact from this very website no less.

      Soon efficient robots will exist that will be able to vanquish/suppress/intimidate/murder people without those pesky human traits that exist in many of us such as empathy, compassion, guilt, regret etc. Also they will not innately have the danger of leaks, spies, whistle blowers, etc.
      You see, it is very hard to find/train/indoctrinate/trick enough psychopaths to carry out these functions on an ever-increasing number of disgruntled citizens and foreigners.

      And your governments have ALREADY shown they are MORE than willing to use those (albeit very flawed) "human resources" for many, many nefarious purposes/war crimes/assassinations/etc. And this is just from what we know from the facts (documents, reports, leaks etc), let alone anything that might have gone on behind the scenes, yet (if ever) to be uncovered.

      If I was you I would be very, very afraid. But not for the science fiction reasons given in most of these vapid articles written this way because people loved the terminator movie and a book. For the simple fact that robots could now carry out much the same sort of behaviours we already see with the restriction of having to use human beings.
      Finding/training/indoctrinating/tricking an army of psychos is very, very hard. Finding a handful of people whose moral compass is naturally broken is not that hard at all. (just ask the CIA)

      I would be far more afraid of a robot controlled by a psychopathic human than by another robot ANY DAY OF THE WEEK.

      And look around: many of those psychopaths are holding the keys to the kingdom already.

  • Yeah... (Score:4, Funny)

    by roc97007 ( 608802 ) on Friday September 20, 2013 @02:39PM (#44905729) Journal

    > A careful reading of the directive finds that it lists some broad and imprecise criteria and requires senior officials to certify that these criteria have been met if systems are intended to target and kill people by machine decision alone. [emphasis mine]

    (I think I've seen that movie...) What could possibly go wrong?

    I wonder if they'd be running Windows for Killer Robots?

    • by Forbo ( 3035827 )
      ED-209, anyone? "You now have five seconds to comply."
      • Seriously, cop robots are what, 20 years out? Less?
        • Cop robots? You mean like speeding cameras?

          Remember when everyone thought that humanoid robots were the wave of the future and would be grandpa's maid and take over the job of factory workers? It turns out that humanoid robots are bloody hard, and it's a lot easier to automate the task at hand with more conventional tools and machines that look nothing like people.

    • (I think I've seen that movie...) What could possibly go wrong?

      As long as the killbots have a preset kill limit, I think we'll be okay.

      • Will work as well as cell phone data caps.
      • (I think I've seen that movie...) What could possibly go wrong?

        As long as the killbots have a preset kill limit, I think we'll be okay.

        Unless there's a rounding error, or another FPU issue, or a counter rolls over.

        • Yep. Intel FDIV bug, 1994; some prime number calculations are "inaccurate".

          Intel Logical Human Interaction Processor anomaly, 2023; a town of 500 is wiped out.
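          (The FDIV flaw mentioned above is easy to check numerically on a correct FPU. A minimal Python sketch; the operand pair is the canonical one from the 1994 reports, and the flawed quotient is hard-coded here from those contemporary accounts:)

```python
# Canonical operand pair that exposed the 1994 Pentium FDIV bug.
# A correct FPU returns ~1.333820449; affected Pentiums returned
# ~1.333739069, an error of roughly 8e-5 in the quotient.
x, y = 4195835.0, 3145727.0

correct = x / y
flawed = 1.333739068902037589  # result reported for affected chips

error = abs(correct - flawed)
print(f"correct quotient: {correct:.9f}")
print(f"flawed quotient:  {flawed:.9f}")
print(f"absolute error:   {error:.2e}")
```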

    • by gmuslera ( 3436 )
      In movies you should take care if the robot's eyes glow red, or maybe yellow. But for these robots you should take cover if they show the Blue Eyes Of Death.
    • dieOS 8 is my guess
  • Nothing can go wrong, go wrong, go wrong, go wrong, go wrong...

  • by Lucas123 ( 935744 ) on Friday September 20, 2013 @02:40PM (#44905755) Homepage
    It would be pretty darned hypocritical of us to indiscriminately bomb people and then say that you shouldn't use A.I. driven robots because it's too impersonal a way to kill people.
    • by gmuslera ( 3436 )
      For they will be operating inside normal parameters to kill 50 civilians [] for each terrorist, enemy soldier or robber in US soil. Because police (or at least, swat teams) WILL use them.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Yeah, no. Most people object to it because fully autonomous killing machines cannot be more reliable than our most advanced fully autonomous non-killing machines are now. The objection is less about indiscriminately bombing people within the designated target area than it is about a glitch causing the "designated target area" to change from the actual target to...oh, I dunno, a lone seagull on the beach? The moon? A city 5000km away from the original target? The base which initially launched the drone and c

    • It would be pretty darned hypocritical of us to indiscriminately bomb people and then say that you shouldn't use A.I.

      Now that it's becoming well-known that drone operators get severe PTSD [] (an injury to the moral reasoning part of the brain), the USG is going to need some H-K drones [] to carry on their murderous adventures.

      A.I.'s don't balk at attacking civilians either - they'll never be told that their maintenance budgets are funded by their targets' paycheck withholdings.

  • by Dunbal ( 464142 ) * on Friday September 20, 2013 @02:42PM (#44905783)

    It's a chilling thought that the world's largest state sponsor of terrorism is also pioneering drone and robotics technology. I guess some bright spark somewhere decided he can get around international law by just having the machines do the killing, because "there's no law against machines doing it, right?"

    Most Americans turn around and say "what law are we breaking?". How about this one, from the UN General Assembly: "No State may use or encourage the use of economic, political or any other type of measures to coerce another State in order to obtain from it the subordination of the exercise of its sovereign rights or to secure from it advantages of any kind. Also, no State shall organize, assist, foment, finance, incite or tolerate subversive, terrorist, or armed activities directed towards the violent overthrow of the regime of another State, or interfere in civil strife in another State." Don't even get me started on "targeted killing" or "extrajudicial killing", which is just summary execution without trial. Even Goering got a trial. Even Eichmann got a trial. And then you have the nerve to call yourselves a "free country"? Wake up, people, put your shiny iPhones down.

  • by metrix007 ( 200091 ) on Friday September 20, 2013 @02:44PM (#44905825)

    Asimov argued against the Frankenstein complex as it applies to robots, and indeed many people have made the point, asking how something like Skynet could happen.

    Would we really be stupid enough to build something that is smarter and stronger than us, and designed to kill us without safeguards?

    Apparently, yes.

    • by Xyrus ( 755017 )

      ...Would we really be stupid enough to build something that is smarter and stronger than us, and designed to kill us without safeguards?...

      In order to guarantee the American way of life? I'm willing to take that risk!

      10 points and a doughnut if you can guess the movie. :)

  • by freeze128 ( 544774 ) on Friday September 20, 2013 @02:45PM (#44905845)
    Go ahead, make killer robots. I'm pretty sure I can outwit the current state of the art killbots by:
    • - Hiding behind a blanket.
    • - Running around behind the robot to its blind spot.
    • - Pushing it over on its side.
    • - Ascending some stairs.
    • by Kjella ( 173770 )

      Because what, you expect the killbot to come running after you? It'll have an IR camera and assault rifle, point-and-shoot style. Let's call it more of a moving turret, less of a "robot".

      • Actually, this form of robot is probably a good thing to build, as it would be employed by a military in the same way that land mines were traditionally used. With the added benefit that it would cover a much larger area, and be easier to decommission or move when the fighting is over so you don't have children blowing their hands off twenty years later.
    • by Nadaka ( 224565 )

      1: you still show up on thermal.
      2: you can't outrun an aimbot.
      3: it won't be top heavy biped.
      4: it calls in air support and levels the building.

    • - Shooting a blue portal beneath it, and an orange one above it.
      - Shooting an orange portal beneath it, and a blue one above it.

    • by invid ( 163714 )
      You forgot "hit weak point for massive damage!"
    • by invid ( 163714 )
      Fortunately killer robots don't just grab people by the head and crush them, they like to throw them around first, and they usually throw people near some conveniently discarded weapon.
    • Killbots will probably have a preset kill limit that we can just send wave after wave of infantry at so we can render the killbots useless until reset.
    • "Ascending some stairs," you say? E-LE-VATE!
    • Hiding behind a blanket.

      Thermal camera? Motion sensor?

      Running around behind the robot to its blind spot.

      It's a robot. Who says it doesn't have "eyes in the back of its head"?

      Pushing it over on its side.

      It's probably both very heavy and on a tank-style turret. Good luck.

      Ascending some stairs.

      This has already been done; you just make the turret rotatable such that it can also climb stairs. Or, y'know, you just give the thing a helicopter.

    • by dissy ( 172727 )

      Wow, 14 replies and no one actually provided you links to "current state of the art robots". I do hope you find these at least entertaining, if not informative.

      Check out the DARPA Atlas robot, preferably the videos from the past 3-4 months. []

      There is also the DARPA PetMan []

      Good luck pushing over one of these beasts []

      Components such as the high speed hand based o

  • "Hey, baby; wanna kill all humans?"

    Bad humor is how I deal with horrifying realities I really don't want to face; the worse the situation, the more bad jokes I want to make.

    Now, where did I put that 50,000 page volume of stupid puns?

  • Read Kill Decision (Score:4, Insightful)

    by timdearborn ( 645863 ) on Friday September 20, 2013 @02:54PM (#44905983)
    If you have not read Kill Decision by Daniel Suarez, you should. This fictional thriller, written last year, unfortunately seems more like reality than fiction. It portrays a vivid, all-too-real picture of what could be the outcome of these policies. Wikipedia link to book: []
  • . . . the targeting algorithms will be vetted by legal teams every bit as diligent and committed to human rights and Constitutional law as the people in FISA courts who have helped keep the NSA from misusing their powers.

    In related news, if you have legitimate business in areas of cities frequented by anti-war protestors, you can purchase a RapidPass Trusted Citizen(tm) badge which will eliminate time-consuming drop-and-freeze inspections by SecuriCorps (tm) PeacePal(tm) hover-drones. F%$ing hippies need no

  • Make sure to make your killbots with a pre-set kill limit. That way, they can be defeated by Zapp Brannigan by sending wave after wave of his own men at them.

  • Changing my name to 'Philip J. Fry'

  • It's ultimately the same guy pulling the trigger, regardless of whether the weapon is a youth fresh out of boot camp, a remotely-operated weapon, or a drone.

    People should be looking harder at the people agitating for higher military spending and starting the foreign wars.

  • Go ahead . . . just try to steal that Robot Killer Car.

    The car says that it doesn't want to be stolen . . . and who is going to do it . . . "you, and what army?"

  • ...after I read some Neal Asher [] books. Truly and utterly horrifying, and very believable.
  • Auto kill bots are an open invitation to other countries to develop nukes to ensure that they can enforce a demand to keep these things away from their country. Or else.
  • welcome our new murderous robot overlords
  • Fortunately, Star Trek shows us exactly how to defeat killer robots or computers of any kind. You simply make them explode using illogical arguments.

    Captain Kirk: "Everything Harry Mudd says is a lie"
    Mudd : "Now listen carefully. I'm lying"

    Captain Kirk: "You've murdered hundreds of men"
    M5 : "Murder is contrary to the laws of man and God."

    And if that doesn't work, try the Chewbacca Defense!

  • Will they be manufactured by Cyberdyne Systems, or OCP (OmniCorp)?
  • by DigitalSorceress ( 156609 ) on Friday September 20, 2013 @03:50PM (#44906601)

    This will not end well.

    At first, the military will be all like "YAY, autonomous killer robots!"

    Then someone will hack some of them... and they'll be all like "boo, they're using it against us and we never saw that coming"

    And then Politicians will be all "we gotta pass laws against being smart n stuff, because Turrhurrerristz"

    And then a few years later, we're all just banging rocks together.

    ok so my real answer is basically just /facelamp (for when facepalm is not sufficient)

  • they already exist (Score:5, Insightful)

    by bitt3n ( 941736 ) on Friday September 20, 2013 @03:51PM (#44906619)
    Killer robots have been used in combat for a long time. Their logic consists of "if pressure applied to unit, explode." Presumably these new models will be at least somewhat less likely to kill the wrong target.
  • 99.99% of the work of automatically killing enemy targets is properly detecting enemy targets and assessing their value. Whether the robot makes the decision to kill with or without human intervention is of great importance philosophically and practically, but is trivial in terms of engineering.

    The reason why the military would want automated target detection and assessment is obvious. This just happens to also be the same prerequisite for completely automated military strikes. Given the relatively mini

  • While I strongly prefer requiring humans to make the life-and-death decisions, I see a serious problem with that:

    If robot A can make the decision itself, and robot B has to wait for a person to evaluate the information and decide, robot A will act much more quickly and be much more effective. The country with robot A will win the wars, and we'll be living with robot A anyway.

    • Both robots A and B will be used by some enormously wealthy country to put under the boot some relatively poor country whose actions it doesn't like. This allows expansion of global power without friendly casualties, media attention, or voter push-back. Drones don't fight other drones.

  • by RandCraw ( 1047302 ) on Friday September 20, 2013 @04:41PM (#44907155)

    Why start small?

  • by dltaylor ( 7510 ) on Friday September 20, 2013 @06:39PM (#44908163)

    Anyone else remember the pictures of the Russian women standing in front of the tanks during the Soviet collapse? The soldiers were generally willing to kill, and die, to protect the Soviet Union from US, but NOT from their mothers and grandmothers.

    These robots will have no qualms about massacring children and women (even the non-revolutionaries) if so ordered to protect the 1%. Plus, a fat lot of good your .308 hunting rifle will do against an armor-plated killing machine.

    • Probably spot on unfortunately. Considering how militarized the police have become it's only a matter of time before they are deployed "for everyone's safety." Guessing the results will be something like the ED-209 [].
  • Until they get replaced by said killer robots. These things will be used to eliminate human jobs. Not just humans.
