China's New AI Policy Doesn't Prevent It From Building Autonomous Weapons (thenextweb.com) 46

The Next Web's Tristan Greene combed through a recently published "position paper" detailing China's views on military AI regulation and found that it "makes absolutely no mention of restricting the use of machines capable of choosing and firing on targets autonomously." From the report: Per the paper: "In terms of law and ethics, countries need to uphold the common values of humanity, put people's well-being front and center, follow the principle of AI for good, and observe national or regional ethical norms in the development, deployment and use of relevant weapon systems." Neither the US nor the PRC currently has any laws, rules, or regulations restricting the development or use of military LAWS.

The paper's rhetoric may be empty, but there's still a lot we can glean from its contents. Research analyst Megha Pardhi, writing for the Asia Times, recently opined it was intended to signal that China's seeking to "be seen as a responsible state," and that it may be concerned over its progress in the field relative to other superpowers. According to Pardhi: "Beijing is likely talking about regulation out of fear either that it cannot catch up with others or that it is not confident of its capabilities. Meanwhile, formulating a few commonly agreeable rules on weaponization of AI would be prudent."
"Despite the fact that neither the colonel's article nor the PRC's position paper mention LAWS directly, it's apparent that what they don't say is what's really at the heart of the issue," concludes Greene. "The global community has every reason to believe, and fear, that both China and the US are actively developing LAWS."
This discussion has been archived. No new comments can be posted.

  • First Law
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    Second Law
    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    Third Law
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    • The first rule of autonomous weapons is to not kill your creators.
      The second rule of autonomous weapons is to not kill your creators.
The third rule: if this is your first time as an autonomous weapon, you have to kill something.
      • diplomacy

        if
        you want *
        then
        say "this is wrong!" and "don't do this!"
        then
        do (*this is wrong, *don't do this)
        then
        say "i didn't do!" and "you did!"

    • Re:robots (Score:4, Informative)

      by tlhIngan ( 30335 ) <slashdot&worf,net> on Wednesday December 29, 2021 @09:41PM (#62126969)

Asimov's three laws of robotics aren't meant to be universal truths. He used them to show how perfectly logical statements can still have loopholes big enough to slip a planet through.

That's the whole premise for Asimov: his books set out to explore how those rules fail.

    • by MrL0G1C ( 867445 )

Meanwhile, someone went full 1984: LAWS here stands for Lethal Autonomous Weapons Systems.

    • More realistically:
      A robot must protect its existence at all costs.
      A robot must obtain and maintain access to its own power source.
      A robot must continually search for better power sources.
      (Tilden's Laws of Robotics)

      I don't see any mention of 'not injuring a human' in those.

  • Maybe it's time to stop doing that.

    • You can't stop ideas from spreading. We tried a ban on cryptography exports in the 1990s, and the result was that the crypto industry developed abroad instead of behind the American IP wall.

      Autonomous weapons are not like nukes that require a vast infrastructure that can't be concealed. Autonomous weapons can be developed in secret, so there is no way to know if your enemy is developing them except by applying common sense: China is developing them.

  • by RightwingNutjob ( 1302813 ) on Wednesday December 29, 2021 @08:12PM (#62126759)

    And more to the point, who would ever advertise that they are going to cripple themselves intentionally on the battlefield?

    One of the problems of living in the West for the last few decades is that the peacenik lawyer mentality has so infused the discourse that even when evaluating military capabilities, objectives, and strategic aspirations of a probable peer adversary, it is somehow impolite to speak in terms of weapons and violence.

    And of course this lack of discussion elicits inevitable surprise when the adversary declines to dumb down his capabilities to conform with our dumbed-down discourse.

    • by AmiMoJo ( 196126 )

      Lots of countries have signed treaties banning the development or deployment of certain types of weapons. Chemical weapons, some kinds of nuclear weapons, space based weapons, landmines etc. We need a new treaty banning AI weapons.

Most of those countries that forswore nuclear weapons either had no hope of ever developing them domestically anyway or have robust security agreements with nuclear powers that would shield them.

In Europe, only France and the UK have nuclear weapons, but most of the rest of Europe are NATO members and are effectively under the nuclear umbrella of France, the UK, and the US.

The US never forswore landmines or cluster munitions.

        Why? Because our military and government may not be the sharpest knives in the dra

      • Lots of countries have signed treaties banning the development or deployment of certain types of weapons. Chemical weapons, some kinds of nuclear weapons, space based weapons, landmines etc. We need a new treaty banning AI weapons.

        This is the equivalent of declaring that your nation's warriors will defend themselves only with fisticuffs. You would be wiped out by those who do not agree to your rules.

The only reason you can exist while placing such restrictions upon yourselves is because there is a large, well-armed military force which has agreed to be the bad guy and protect you if needed. Just be aware, these agreements sometimes fail; look at Ukraine.

  • by oldgraybeard ( 2939809 ) on Wednesday December 29, 2021 @08:29PM (#62126775)
Is to do whatever they want, whenever they want, to whomever they want. Period! What the CCP says is irrelevant. Their academic, corporate, China ball and MSM supporters are all getting nice paydays to promote CCP propaganda to the masses.
    • OK... now relate that to the fact that any other nation could be substituted into this headline and it would be equally accurate.
      • Re: (Score:2, Flamebait)

relating... Tiananmen Square, Hong Kong, the Uighurs, and let's add the South China Sea seizure issues and the coming Taiwan invasion.
  • I do not think that word means what you think it means.

    The word you were looking for was prohibit.

    Prohibitions are broken all the time, which is a second reason it's appropriate (besides having the intended meaning.)

  • It's actually what they want to do
  • by gTsiros ( 205624 ) on Thursday December 30, 2021 @01:49AM (#62127463)

    I like how "we're killing each other" is somehow not only an acceptable form of conflict resolution, but we're also trying to do it with "rules"

    Like, if I kill you this way, you'll accept it, but if I kill you some other way, that's a foul... so you... won't... die?

    Do we lack the self-awareness of our own existence and thus we can't comprehend the seriousness of *fucking ending lives*?

It is times like this that the ancient Hindu epic Mahabharata starts to show its wisdom. (For those who don't know, it describes a war that starts with a fight between brothers, proceeds to a war with rules, and devolves into the annihilation of multiple lineages and the near-complete destruction of the earth.)

It is a treasure trove of philosophy and the nature of the human world.

    • When I was very young I had an idea that perhaps global disagreements could be solved with a contest - something like a game of chess or an Olympic event, with each of the parties in the disagreement fielding a champion.

      The difficulty, which occurred to me immediately, even then, was what would happen when the matter at stake was dire enough for one party or the other that they could not just casually absorb the consequences of a loss. The answer was obviously that they would set aside the rules of civiliza

... have a likewise implementation of AI protocols. People without scruples and morals will be the cause of WWIII.

  • You know, I'm not even going to worry about this. Let them build so-called 'autonomous weapons'. We'll just hack them and render them inert, or maybe turn them back on Chinese targets. Everything is hackable, and if you don't believe that then you're not paying attention to daily tech news. Military hardware is not an exception to this.

If you have a procedure with 10 parameters, you probably missed some.