China's New AI Policy Doesn't Prevent It From Building Autonomous Weapons (thenextweb.com) 46
The Next Web's Tristan Greene combed through a recently published "position paper" detailing China's views on military AI regulation and found that it "makes absolutely no mention of restricting the use of machines capable of choosing and firing on targets autonomously." From the report: Per the paper: "In terms of law and ethics, countries need to uphold the common values of humanity, put people's well-being front and center, follow the principle of AI for good, and observe national or regional ethical norms in the development, deployment and use of relevant weapon systems." Neither the US nor the PRC has any laws, rules, or regulations currently restricting the development or use of military LAWS (lethal autonomous weapon systems).
The paper's rhetoric may be empty, but there's still a lot we can glean from its contents. Research analyst Megha Pardhi, writing for the Asia Times, recently opined that it was intended to signal that China is seeking to "be seen as a responsible state," and that it may be concerned about its progress in the field relative to other superpowers. According to Pardhi: "Beijing is likely talking about regulation out of fear either that it cannot catch up with others or that it is not confident of its capabilities. Meanwhile, formulating a few commonly agreeable rules on weaponization of AI would be prudent." "Despite the fact that neither the colonel's article nor the PRC's position paper mentions LAWS directly, it's apparent that what they don't say is what's really at the heart of the issue," concludes Greene. "The global community has every reason to believe, and fear, that both China and the US are actively developing LAWS."
Re: Project Veritas (Score:1)
Why is Trump not being investigated for hanging out with Epstein?
Re: (Score:2)
Why is Bill Clinton not in jail for sexually molesting a stewardess on an airplane?
robots (Score:2)
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Re: (Score:2)
The second rule of autonomous weapons is to not kill your creators.
The third rule. If this is your first time as an autonomous weapon, you have to kill something.
Re: (Score:1)
diplomacy
if you want *
then say "this is wrong!" and "don't do this!"
then do (* this is wrong, * don't do this)
then say "I didn't do it!" and "you did!"
Re:robots (Score:4, Informative)
Asimov's three laws of robotics aren't meant to be universal truths. Asimov used them to show how perfectly logical statements can still have loopholes big enough to slip a planet through.
In fact, that's the whole premise for Asimov - his books set out to explore how those rules fail.
Re: (Score:2)
Meanwhile, someone went full 1984: LAWS here stands for Lethal Autonomous Weapons Systems.
Re: (Score:2)
More realistically:
A robot must protect its existence at all costs.
A robot must obtain and maintain access to its own power source.
A robot must continually search for better power sources.
(Tilden's Laws of Robotics)
I don't see any mention of 'not injuring a human' in those.
Re: (Score:2)
Exporting IP to China (Score:2)
Maybe it's time to stop doing that.
Re: (Score:2)
I'll let you in on a little secret. There are smart people in China too. They even invent things.
I have no doubt about that, but it doesn't change the fact that it happens.
Re: (Score:3)
Wouldn't it be wonderful if those bright, smart Chinese scientists developed some bots that got it into their electric brains that Xi Jinping is an enemy of the people, just as he is an enemy of the Hong Kong natives, the Tibetans, the Uyghurs, and the Taiwanese? They'd merely recognize a fellow kill-bot and wouldn't like the competition.
Re: (Score:2)
The Hong Kong natives are all but gone now. During the colonial period, the British and Chinese didn't recognise the rights of the Hong Kong natives, who were limited to undesirable jobs like prostitution and garbage collection. Likewise, the Taiwanese natives were marginalised by the Dutch colonists, and then further marginalised by the Chinese who followed them.
Re: Exporting IP to China (Score:1)
Meh. Smart people in the US don't profit much from inventing things any more, either. Capitalism turns out to reward exploitation more than innovation.
Re: (Score:3)
You can't stop ideas from spreading. We tried a ban on cryptography exports in the 1990s, and the result was that the crypto industry developed abroad instead of behind the American IP wall.
Autonomous weapons are not like nukes that require a vast infrastructure that can't be concealed. Autonomous weapons can be developed in secret, so there is no way to know if your enemy is developing them except by applying common sense: China is developing them.
Re:Don't forget the payoff for landmines (Score:2)
Autonomous weapons have a giant payoff as well: ...very useful long-term area-denial weapons, ... They are the ultimate in fire-and-forget .... Defensive/solo/formation? Let the AI take care of that.... They are relatively cheap to make, and they improve the morale of the state that has them, because machine casualties mean nothing compared to caskets coming off the planes. On the other side, it destroys morale, because live people are being cooked by robots.
This is the future of warfare,
And the history. Landmines, the original autonomous weapon!
And the parent is right: an off switch would be a nice development.
Re: (Score:1)
Most people here will just see: "China is doing something bad again"
(Alternate headline: "The USA's AI Policy Doesn't Prevent It From Building Autonomous Weapons")
Re: (Score:2)
as if they were weapons of mass destruction, like biological warfare or nukes.
A pickup truck stops on Fifth Avenue. The driver hops out, pulls a tarp off the back of his truck, and pushes the button on his remote. 500 small drones launch over the course of 30 seconds, each with a camera and a small explosive surrounded by ball bearings. Hell, imagine this at a high school football game in small town America. Actually, imagine it in Adelaide, or Leicester, or Mumbai. Imagine it at a political rally, a protest, an inauguration, a concert in the park.
A cheap drone is ~$25, a camera
Who would ever cripple themselves (Score:3, Insightful)
And more to the point, who would ever advertise that they are going to cripple themselves intentionally on the battlefield?
One of the problems of living in the West for the last few decades is that the peacenik lawyer mentality has so infused the discourse that even when evaluating military capabilities, objectives, and strategic aspirations of a probable peer adversary, it is somehow impolite to speak in terms of weapons and violence.
And of course this lack of discussion elicits inevitable surprise when the adversary declines to dumb down his capabilities to conform with our dumbed-down discourse.
Re: (Score:2)
Lots of countries have signed treaties banning the development or deployment of certain types of weapons. Chemical weapons, some kinds of nuclear weapons, space based weapons, landmines etc. We need a new treaty banning AI weapons.
Re: Who would ever cripple themselves (Score:1)
Most of those countries that forswore nuclear weapons either had no hope of ever developing them domestically anyway or have robust security agreements with nuclear powers that would shield them.
In Europe, only France and the UK have nuclear weapons but most of the rest of Europe are NATO members and are effectively under the nuclear umbrella of France, the UK, and the US.
The US never forswore landmines or cluster munitions.
Why? Because our military and government may not be the sharpest knives in the drawer
Re: (Score:2)
Lots of countries have signed treaties banning the development or deployment of certain types of weapons. Chemical weapons, some kinds of nuclear weapons, space based weapons, landmines etc. We need a new treaty banning AI weapons.
This is the equivalent of declaring that your nation's warriors will defend themselves only with fisticuffs. You would be wiped out by those who do not agree to your rules.
The only reason you can exist while placing such restrictions upon yourselves is that there is a large, well-armed military force which has agreed to be the bad guy and protect you if needed. Just be aware, these agreements sometimes fail - look at Ukraine.
The policy of the CCP? (Score:3)
Re: (Score:1)
Re: (Score:2, Flamebait)
Prevent? (Score:2)
I do not think that word means what you think it means.
The word you were looking for was prohibit.
Prohibitions are broken all the time, which is a second reason it's appropriate (besides having the intended meaning).
Of course not (Score:2)
The hypocrisy is blinding (Score:5, Insightful)
I like how "we're killing each other" is somehow not only an acceptable form of conflict resolution, but also something we're trying to do with "rules".
Like, if I kill you this way, you'll accept it, but if I kill you some other way, that's a foul... so you... won't... die?
Do we lack the self-awareness of our own existence and thus we can't comprehend the seriousness of *fucking ending lives*?
Re: (Score:2)
It is times like this that the ancient Hindu epic Mahabharata starts to show its wisdom. (For those who don't know, it describes a war that starts as a fight between brothers, proceeds to a war with rules, and devolves into the annihilation of multiple lineages and the almost complete destruction of the earth.)
It is a treasure trove of philosophy on the nature of the human world.
Re: (Score:2)
When I was very young I had an idea that perhaps global disagreements could be solved with a contest - something like a game of chess or an Olympic event, with each of the parties in the disagreement fielding a champion.
The difficulty, which occurred to me immediately, even then, was what would happen when the matter at stake was dire enough for one party or the other that they could not just casually absorb the consequences of a loss. The answer was obviously that they would set aside the rules of civilization
I reckon Russia will ... (Score:1)
... have a similar implementation of AI protocols. People without scruples or morals will be the cause of WWIII.
They hack us, we hack them right back (Score:2)