
UN Debates Rules Surrounding Killer Robots

First time accepted submitter khb writes "It seems the UN has started a debate on whether to place limits or bans on robots that can kill without human supervision. Apparently bombs are viewed as 'kinder' than robots that might be programmed to achieve specific ends (e.g. destroy that bridge, or kill anyone carrying a gun)."
This discussion has been archived. No new comments can be posted.

  • by WillgasM ( 1646719 ) on Thursday May 30, 2013 @05:05PM (#43866561) Homepage
    it's in the rules.
    • by Impy the Impiuos Imp ( 442658 ) on Thursday May 30, 2013 @05:08PM (#43866615) Journal

      It's also against the rules to assassinate enemy leaders outside war, but OK to initiate a war with a full frontal assault killing hundreds of thousands on both sides.

      Ironically, that's less upsetting -- getting your nation's ass whooped -- than getting your Fearless Leader killed.

      • by JustOK ( 667959 )

        wrong rules.

      • Ironically, that's less upsetting -- getting your nation's ass whooped -- than getting your Fearless Leader killed.

        The rules being those proposed and ratified by members/servants of the Fearless Leader caste?

      • by couchslug ( 175151 ) on Thursday May 30, 2013 @07:23PM (#43867803)

        "Ironically, that's less upsetting -- getting your nation's ass whooped -- than getting your Fearless Leader killed."

        Fearless Leaders write the rules.

    • I thought the rules were:
      1. Serve the public trust.
      2. Protect the innocent.
      3. Uphold the law.
      4. (classified)

      • by DudemanX ( 44606 )

        Technically, those are cyborg (not robot) rules.

      • I thought the rules were:

        Rules are a big issue.

        I think that, at the moment, we obviously have the upper hand. We can in concept kill anyone we want to from the comfort of our living rooms. It's sort of like Ender's Game, without the zero gravity. But what happens down the road, when other folks have caught up? Will artists who have the nerve to disrespect some religious figure end up being nailed by a holy drone?

        This is truly disruptive technology, and we do need to at least make the attempt to set some rules for its use.

    • you mean the Laws, not the rules... right?

      As in Asimov's 3 Laws.

    • The laws have never been implemented, and it is not established that it is even possible to implement them.
      See here:
      https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Ambiguities_and_loopholes [wikipedia.org]
      https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Applications_to_future_technology [wikipedia.org]
      https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence [wikipedia.org]

      If robot rules say robots aren't allowed to do X, doesn't that imply robots that do X are not allowed? So people thinking…

      • by lgw ( 121541 ) on Thursday May 30, 2013 @05:43PM (#43866967) Journal

        It's worth noting that the premise for almost every Robot story Asimov wrote was "something unacceptable despite the robot following the Three Laws". That's really what I liked about those stories: by extension to human moral codes, they're exploring how you can't prevent problems/evil with a simple set of rules.

        • by GrpA ( 691294 )

          Robotics laws work just fine until the AI following them is sufficiently advanced. Once an AI is sufficiently advanced, the rules become about as effective as they are when applied to humans.

          If you are interested in a different perspective, consider "Turing Evolved", which examines this in detail. It is currently one of the top science fiction books available on Amazon (top as in "Best Products By Rating").

          In a nutshell, it's about what kind of test is necessary to determine whether an AI/robot can safely be armed and provided…

          • Humans can choose to simply ignore rules because for humans, rules are not built in. If machines are built with rules at a low level, those rules may be much more constraining on their behavior, even if the machines are very intelligent.
        • It's worth noting that the premise for almost every Robot story Asimov wrote was "something unacceptable despite the robot following the Three Laws". That's really what I liked about those stories: by extension to human moral codes, they're exploring how you can't prevent problems/evil with a simple set of rules.

          The Three Laws are completely logical and boolean.

          • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
          • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
          • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

          They govern and define physical boundaries with regard to specific behavior. There's a lot of physical behavior and metaphysics…
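
          As an aside, that boolean, priority-ordered reading can be sketched in a few lines. This is a toy encoding, not from the thread; every predicate name is a hypothetical stand-in, the "through inaction" clause is deliberately ignored, and deciding the predicates themselves is where all the real difficulty hides:

            from dataclasses import dataclass

            @dataclass
            class Assessment:
                """Hypothetical per-action judgments a robot would have to supply."""
                harms_human: bool                # First Law predicate
                violates_order: bool             # Second Law predicate
                order_conflicts_first_law: bool  # the Second Law's escape clause
                endangers_self: bool             # Third Law predicate

            def permitted(a: Assessment) -> bool:
                if a.harms_human:                # First Law: absolute
                    return False
                if a.violates_order and not a.order_conflicts_first_law:
                    return False                 # Second Law, with its exception
                if a.endangers_self:             # Third Law: subordinate to both
                    return False
                return True

            # Refusing an order is permitted only when the order itself
            # would violate the First Law:
            print(permitted(Assessment(False, True, True, False)))   # True
            print(permitted(Assessment(False, True, False, False)))  # False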

    • The biggest reason to ban robotic soldiers is that people (humans) get sick of war. They (eventually) make it stop. Robots think what they are programmed to think, and wage war for as long as they are ordered to fight. In a country with a robotic army, the only thing preventing war is the morals of the government. History has shown that trusting your government's morals is a bad bet.
      • The biggest reason to ban robotic soldiers is that people (humans) get sick of war.

        The biggest reason to ban robotic soldiers is that without killing people there is no point to war. Before deciding that I am a lunatic, parse that for a while.

        • People cost money just like robots.

          It's a money pissing match, plain and simple.
          (and a tech pissing match)

        • "Death, destruction, disease, horror...that's what war is all about, Anan, that's what makes it a thing to be avoided. You've made it so neat and painless you've had no reason to stop it."
    • by Evtim ( 1022085 )

      A Golem May Not Harm A Human Being Or Allow A Human Being To Come To Harm Except When Ordered To By Duly Constituted Authority

  • by Anonymous Coward on Thursday May 30, 2013 @05:06PM (#43866591)

    That'll limit the damage they can do.

  • by icebike ( 68054 ) on Thursday May 30, 2013 @05:08PM (#43866605)

    For once I agree with the UN.

    I don't think it should ever get so easy as to allow machines to make the kill decision
    without a human in the loop.

    • by chris_mahan ( 256577 ) <chris.mahan@gmail.com> on Thursday May 30, 2013 @05:15PM (#43866707) Homepage

      I don't really want humans making kill decisions either.

    • On the one hand, I would prefer if wars were always soldier-versus-soldier. On the other hand, I would rather see a robot on the battlefield making automatic decisions about what to attack than a bomb dropped from an airplane -- at least a robot can be programmed not to kill civilians or needlessly destroy civilian infrastructure (e.g. schools, hospitals).

      Where I see a problem is with robots being programmed to recklessly kill -- a genocide could be committed rapidly by robots, which would require no in…
      • Re:Eh... (Score:5, Insightful)

        by cusco ( 717999 ) <brian DOT bixby AT gmail DOT com> on Thursday May 30, 2013 @06:12PM (#43867201)
        I'd prefer to see wars always fought general against general, preferably armed with hand axes and stone-tipped spears.
        • It's the guy who determines who counts as a general that you really want.

        • I'd prefer to see wars always fought general against general, preferably armed with hand axes and stone-tipped spears.

          How about the assholes who started the mess duke it out mano a mano and leave the rest of us out of it. No, I don't want to watch, either.

      • You can commit genocide rapidly with artillery and airstrikes too - that's not really the issue.

        At its core this is really a debate over liability and perception. If you set up a perimeter gun, who's liable when it kills someone? If it's supposed to have IFF and it fails, then who's liable? The guy who set it up? The manufacturer? Etc.

        But more important than that is perception: the laws of armed conflict exist because war is not eternal; it has to end someday, and we'd like that to be sooner. Where r…

        • by Qzukk ( 229616 )

          My question is: who is responsible for picking them back up when the war is over? Or will it be Land Mine 2: Electric Boogaloo, complete with the killbot wandering out of the forest 20 years later and wiping out an entire elementary school?

      • I don't think someone who is willing and able to commit genocide is going to cancel the plans because the UN bans genocide-bots.

    • For once I agree with the UN.

      I don't think it should ever get so easy as to allow machines to make the kill decision without a human in the loop.

      The target is in the loop. Those targets were asking for it if you ask me. Beep Boop Beep

    • by Lumpio- ( 986581 )
      But humans are more easily swayed by fear, emotion and political pressure. Let's just remove the human factor already and welcome our new electromechanical overlords.
    • by mrmeval ( 662166 )

      UN: Your robot killed people autonomously.
      Some_country: We're so sorry, it was a glitch in our code.
      UN: We're going to issue a formal letter excoriating you as punishment!
      SC: Okay Dokay

      No one's going to allow the buffoons the least amount of power to enforce that. All such devices will have a happy kill switch. It will be the big red happy-fun-ball-shaped button. A small note by it will say "Press this and run like hell." It will be a classified button, but it will only be classified a little. It won't be classified…

    • I have more faith in a machine than most people in power.

    • by cusco ( 717999 )
      It's not likely to ever be implemented, and if it is, it's not likely to make any difference. The country most likely to build autonomous killer robots sits on the Security Council. If by accident they don't veto the rule, the US has abundantly demonstrated over the past dozen years that the UN Charter, the Geneva Conventions, and pretty much every other treaty the US has ever signed, such as the nuclear anti-proliferation, anti-chemical-warfare, anti-biowarfare, and anti-money-laundering treaties, are o…
    • For once I agree with the UN.

      I don't think it should ever get so easy as to allow machines to make the kill decision
      without a human in the loop.

      What if it were limited to enforcement of a "no fly zone"? It gives "land or exit" warnings, and if they are not obeyed, shoots down the aircraft? This is exactly what NATO aircraft with human pilots did in Operation Deny Flight in Bosnia and Herzegovina in 1993, under authority of United Nations Security Council Resolution 816.

      I think by setting the rules of engagement, a human was in the loop.
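
      For illustration only (none of this is in the comment), that escalation ladder is a tiny state machine: warn first, engage only if the warning goes unheeded, and stand down the moment the aircraft complies. A sketch, with hypothetical names:

        from enum import Enum, auto

        class ROEState(Enum):
            MONITOR = auto()  # aircraft outside the zone, or compliant
            WARNED = auto()   # "land or exit" warning has been issued
            ENGAGE = auto()   # warning ignored

        def step(state: ROEState, in_zone: bool, complied: bool) -> ROEState:
            """One tick of a toy no-fly-zone escalation ladder."""
            if not in_zone or complied:
                return ROEState.MONITOR   # de-escalate immediately
            if state is ROEState.MONITOR:
                return ROEState.WARNED    # always warn before engaging
            return ROEState.ENGAGE        # still in the zone, warning unheeded

      The reply below points out exactly where this breaks down: the machine has no branch for an aircraft that never heard the warning.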

      • by icebike ( 68054 )

        For once I agree with the UN.

        I don't think it should ever get so easy as to allow machines to make the kill decision
        without a human in the loop.

        What if it were limited to enforcement of a "no fly zone"? It gives "land or exit" warnings, and if they are not obeyed, shoots down the aircraft? This is exactly what NATO aircraft with human pilots did in Operation Deny Flight in Bosnia and Herzegovina in 1993, under authority of United Nations Security Council Resolution 816.

        I think by setting the rules of engagement, a human was in the loop.

        Fine, but suppose an unscheduled air ambulance enters the zone, or a passenger liner with communication problems doesn't happen to be monitoring that particular frequency and strays into the wrong airspace. Then what?

        The human pilot identifies his target, attempts to give visual signals (wheels down, finger pointing to ground, etc.), and under just about no circumstances shoots down a commercial airliner (unless he is a Russian pilot).

        Your drone scores another kill, and to hell with anybody who protests. Hey, we…

    • For once I agree with the UN.

      I don't think it should ever get so easy as to allow machines to make the kill decision without a human in the loop.

      You realize that such racist thinking is exactly what causes the Cyborg wars, right?

      • by icebike ( 68054 )

        For once I agree with the UN.

        I don't think it should ever get so easy as to allow machines to make the kill decision
        without a human in the loop.

        You realize that such racist thinking is exactly what causes the Cyborg wars, right?

        Well if we are going to build Skynet, we probably don't want to START with weapon systems.

    • Sorry to burst your bubble, but even back in the late '70s there was a red tape that could be loaded into the system to program the wire-wrapped CPU to shoot on its own until it ran out of missiles or was destroyed; this genie has been out of the bottle a long time.

    • --BY--YOUR--COMMAND--
  • by Tablizer ( 95088 ) on Thursday May 30, 2013 @05:10PM (#43866635) Journal

    10 Find Human
    20 Eat It
    30 GoTo 10

    • Sounds great. Make sure you turn that robot on manually, so you're, you know, the first human it sees.
  • One suggestion (Score:5, Insightful)

    by betterunixthanunix ( 980855 ) on Thursday May 30, 2013 @05:11PM (#43866645)
    Robots should find an empty field somewhere and self-destruct after some period of time without receiving commands. We do not want to wind up with the same situation we have with land mines -- dangerous leftovers from wars that ended decades ago. Imagine an autonomous robot getting lost during a war, only to get uncovered 10 years after the war ends and going on a rampage (say, killing every armed police officer it finds)...
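    The suggestion amounts to a dead-man's switch. A minimal sketch, assuming authenticated commands and an arbitrary timeout (all names hypothetical):

      import time

      class DeadMansSwitch:
          """Stand down if no authenticated command arrives in time."""

          def __init__(self, timeout_seconds: float):
              self.timeout = timeout_seconds
              self.last_command = time.monotonic()

          def on_command(self) -> None:
              # Call this on every authenticated command received.
              self.last_command = time.monotonic()

          def expired(self) -> bool:
              return time.monotonic() - self.last_command > self.timeout

      # Main-loop sketch: check the switch before any engagement logic.
      switch = DeadMansSwitch(timeout_seconds=30 * 24 * 3600)  # 30 days, arbitrary
      if switch.expired():
          pass  # head for an empty field and disarm/self-destruct, per the parent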
    • The manufacturers probably already make them do this, though perhaps less to reduce future harm and more to keep secrets [go.com].

      • The manufacturers probably already make them do this, though perhaps less to reduce future harm and more to keep secrets [go.com].

        Nope, they self-destruct so that more robots need to be manufactured and sold.

      • 1. There are no autonomous killer robots.
        2. No drone self-destructs if it loses contact.

        That helicopter was destroyed by people.
    • by kwerle ( 39371 )

      Mines are generally mechanical and hidden. Which means they can remain functional for as long as the environment doesn't destroy them.

      Killer robots are generally powered and function where they can be seen. They are not particularly hard to find. They will wind down all on their own.

      Killer robots that use nuclear energy have a pretty obvious signature and are easy to find because of it.

      So I'm thinking this isn't even an imaginary problem, let alone a real one.

      • Well if you had a nuclear powered robot, you might want to make it a stealth nuclear powered robot so that the enemy do not know where to aim the rockets.

        And it could just as easily be a marine robot, and I am pretty certain that stealth nuclear subs exist.

        • by kwerle ( 39371 )

          I'll grant you that nuclear powered marine robots could become a real problem.

          But a robot on land that is nuclear powered, shielded, and subtle (hard to find) is too much of a stretch for me to imagine.

    • Where's the fun in that?! Shit, robots should have a timeout function called runAmok(). The function should execute a random low-level BIOS call with a weighted probability of calling pullTriggerFinger(). queezinartMode() might be fun to call often, too.
    • by Nyder ( 754090 )

      Robots should find an empty field somewhere and self-destruct after some period of time without receiving commands. We do not want to wind up with the same situation we have with land mines -- dangerous leftovers from wars that ended decades ago. Imagine an autonomous robot getting lost during a war, only to get uncovered 10 years after the war ends and going on a rampage (say, killing every armed police officer it finds)...

      If it means we actually get batteries that can power a robot for 10 years, I'm sort of down for that battery tech. So we might have killer robots on the loose, worth it for those batteries, imo.

    • Re:One suggestion (Score:4, Interesting)

      by Kaenneth ( 82978 ) on Thursday May 30, 2013 @07:31PM (#43867893) Journal

      Obsolete robots should be programmed to pace suspected minefields until their mechanisms wear out.

      2 birds, 1 stone.

      Add live streaming and betting pools, and it might even be profitable.

    • Imagine an autonomous robot getting lost during a war, only to get uncovered 10 years after the war ends and going on a rampage (say, killing every armed police officer it finds)...

      A whole series. [amazon.com]

  • by Hartree ( 191324 ) on Thursday May 30, 2013 @05:13PM (#43866663)

    So, tell me how a cruise missile that's autonomously guiding itself via GPS or TERCOM toward a target after being launched isn't already a "killer robot"?

    It was commanded to launch, yes, but isn't a robot that's being commanded to head out on a mission where it could kill just being given a longer lifetime to act?

    You can bring up the choices robots make about whether to attack based on what targets they see, but how is this different from existing CAPTOR mines that can ignore one type of ship and go after another?

    I think this Pandora's box has already been open for a long time.

    • Or, for that matter, mines are preprogrammed robots securing an area. Sentry guns have also been around for a while, although they usually try to shoot down missiles.

    • by vux984 ( 928602 )

      So, tell me how a cruise missile that's autonomously guiding itself via GPS or TERCOM toward a target after being launched isn't already a "killer robot"?

      The cruise missile is fired by a human. The cruise missile's target is set at launch. The cruise missile did not choose to launch, nor did it choose its target.

      It was commanded to launch, yes, but isn't a robot that's being commanded to head out on a mission where it could kill just being given a longer lifetime to act?

      It's being given decision-making capabilities…

      • The cruise missile is fired by a human.

        The robot's program is initiated by a human.

        The cruise missile's target is set at launch.

        A human programs the mission parameters into the robot.

        The cruise missile did not choose to launch, nor did it choose its target.

        Neither did the robot. The missile recognized that its current location matched its programmed parameters and decided to explode. How different is that from shooting at something that matches the robot's programmed parameters? The missile did not check the local conditions to see if it should explode.

        The duration argument also has issues as a Tomahawk missile [wikipedia.org] can stay aloft for almost 5 hours.
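
        Stripped to pseudocode, both devices run the same sense-match-act loop; only the predicate differs, which is the point above. A hypothetical sketch:

          def autonomous_weapon(sense, matches_parameters, act):
              """A cruise missile's predicate is 'position == programmed target';
              a killer robot's is 'observed object fits programmed signature'.
              Structurally they are the same loop."""
              while True:
                  observation = sense()
                  if matches_parameters(observation):
                      act(observation)
                      break

          # Toy demo: 'sense' yields readings; the 'target' is 3.
          readings = iter([1, 2, 3, 4])
          autonomous_weapon(lambda: next(readings),
                            lambda x: x == 3,
                            lambda x: print("engage", x))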

    • And they seem to be arguing that a smart robot-guided bomb, say one hunting MBTs, is worse than Dresden?
  • by Impy the Impiuos Imp ( 442658 ) on Thursday May 30, 2013 @05:13PM (#43866669) Journal

    Make love, not war. Where are the sex bots that will roam around and make you orgasm unsupervised? Let's get some other automaton out of control kthxbie.

    And now, the punch line.

    .

    Wait for it...

    .

    If you build it, they will...

  • It's hard for me to see how we will allow technologies like self-driving cars to go forward while still holding back the war machines. I mean, I want to hold back the war machines, but writing a law that keeps those two use cases separate will be tricky. A child runs out into the street... does the self-driving car hit the child, or swerve and possibly hit some other car? Does the car evaluate the people in the other vehicle? Whatever logic we put into the cars, that's the same logic -- inverted -- that would run the war machines.

    I hope we have high-wisdom politicians writing that particular body of law. I know... improbable... but hope springs eternal.

    • by TheCarp ( 96830 )

      > does the self-driving car hit the child, or swerve and possibly hit some other car?

      That's an interesting question, but fundamentally it's a question of how a robot handles an extraordinary situation where it detects a potential harm to life and reacts. I don't see how this is even related to the question of using lethal force or not. Even if the decision is made (regardless of whether it's the car or the driver) that hitting the child is the least bad choice, it is not really the same as a decision to use lethal force…

  • http://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
    "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead? ... There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarc

  • Clearly, they need to be designed with a pre-set kill limit.

  • Make war too easy (Score:4, Interesting)

    by EmperorOfCanada ( 1332175 ) on Thursday May 30, 2013 @05:35PM (#43866891)
    One of the problems I have long had with the idea of robot soldiers is that they make war too easy. When war carries huge emotional and financial costs, your government will think twice about getting involved, or at least be pressured into "bringing the boys home." But if you are sending robot planes with loads of robot warriors, why not have a war, or two, or five? A bunch of dead, dehumanized "others" is not so bad, especially seeing that it generates jobs at home and pork spending for politicians.

    War is rarely the correct solution. In fact it is usually a clear sign of a long series of failures, or the sign of a madman.

    Plus, robotic warriors are, for the next short while, going to be the plaything of Western countries. But how long before some tin-pot nutjob flies the same machines into NYC or LA? Or even a homebrew nutjob? Again, the key is that the consequences are potentially far less for the perpetrator. You can't usefully arrest the bot. You mightn't even end up with the slightest clue who sent it. Again the same problem: this tool makes waging whatever stupid war pops into your head too easy.

    Robots have the potential to turn this planet into Utopia or into Dystopia. I suspect that some governments are philosophically predisposed toward Utopia and others toward Dystopia when it comes to using robots wisely. A simple question: if your country could, using robots, vastly decrease the cost of running prisons, would your country increase its incarceration rate?
    • I share your concerns. In a little while, though, armies on both sides will be robotic. Then it will just be a financial war of attrition. We could just compare GDP and say who wins... After this is realized, both parties could basically just go to a casino, and whoever loses the least would be the winner, same difference. Soon thereafter the robotics races may cause the emergence of independent sentience... Then we'll need all the resources we can get to fight the robotic civil war -- which the…

    • This supposes that country leaderships care about their populations. Look at the Korean War. Mao is reported to have replied, when told he would lose a million soldiers, with "I have three million more". Look at the Iran/Iraq war, which nearly wiped out a generation of males on both sides.

      The other issue is that it gives the advantage to the country with the larger military. So if a larger country with a larger military wants to take over a smaller country, they just overwhelm them with numbers…

      • Korean War. Mao is reported to have replied, when told he would lose a million soldiers, with "I have three million more"

        Do you think it possible that Mao might be considered a madman?

    • War is rarely the correct solution. In fact it is usually a clear sign of a long series of failures, or the sign of a madman.

      The causes of war, since the times of Thucydides and earlier, are three: wars of fear, wars of ideology, and wars of greed.

      The root cause is that humans are all too willing to kill each other. You might call such a person a madman, but if so, madness is very common. If you want to solve the problem of war, you need to come up with a solution for all three causes.

    • by rachit ( 163465 )

      One of the problems I have long had with the idea of robot soldiers is that they make war too easy

      More importantly, it allows a tiny group of people to rule over a country, or the world, by force. Today (with super-advanced weaponry and even nuclear weapons), if a dictator or "really bad group of people" took over a mostly homogeneous society (e.g. the US), they wouldn't be able to maintain power, because the military would simply not follow orders to kill other civilians. Nuclear weapons are too blunt an instrument to control a population.

      With fully autonomous robots, this becomes possible.

  • There's a famous Alexis Gilliland cartoon of a cruise missile thinking "They've got me aimed at a computer center! I'll just fly a bit farther and hit the maternity ward."

  • On the other hand, robots don't go on murderous rampages out of anger:

    http://news.yahoo.com/lawyer-soldier-plead-guilty-afghan-massacre-140057614.html [yahoo.com]

  • people do. So, who will be put in jail if one of those droids kills someone? Or is this just a way to legalize killing with impunity?

    I wonder how many people will think it is a necessary security measure, right up until someone they care about gets killed.

  • I suppose autonomous drones could be viewed as landmines that happen to move and make decisions about their targets. So, if banning landmines makes sense, maybe so would banning autonomous drones.

  • The robot can actively look for you. A weapon is a weapon; I don't think there should be any special considerations. There should be a ban on all weapons. But like anything else, the automated technology will get abused, and just like with normal weapons, it will be hard to determine the maker, origins, etc. Let's face it: the world changes, but humanity is just as stupid.
  • ... "Klaatu barada nikto".

  • The argument is NOT generalized-demolition vs specific targets, but WHAT you could be specific for (insert politico-religious/racial warcrimes here).
  • Is it really all that wrong to program a robot to go blow up a bridge when there's a war on?
  • "There!"
    "What, behind the robot?"
    "It is the robot!"
    "You silly sod!"
    "What?"
    "You got us all worked up!"
    "That's no ordinary robot! That's the most foul, cruel, and bad-tempered mechanism you ever set eyes on!"
    "You git! I soiled my armor, I was so scared!"
    "Look, that robot's got a vicious streak a mile wide! It's a killer!"
