AI China Technology

Chinese and Western Scientists Identify 'Red Lines' on AI Risks (ft.com) 28

Leading western and Chinese AI scientists have issued a stark warning that tackling risks around the powerful technology requires global co-operation similar to the cold war effort to avoid nuclear conflict. From a report: A group of renowned international experts met in Beijing last week, where they identified "red lines" on the development of AI, including around the making of bioweapons and launching cyber attacks. In a statement seen by the Financial Times, issued in the days after the meeting, the academics warned that a joint approach to AI safety was needed to stop "catastrophic or even existential risks to humanity within our lifetimes."

"In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology," the statement said. Signatories include Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are often described as "godfathers" of AI; Stuart Russell, a professor of computer science at the University of California, Berkeley; and Andrew Yao, one of China's most prominent computer scientists. The statement followed the International Dialogue on AI Safety in Beijing last week, a meeting that included officials from the Chinese government in a signal of tacit official endorsement for the forum and its outcomes.



Comments Filter:
  • Sigh (Score:2, Insightful)

    by HBI ( 10338492 )

    The thing that compelled 'international cooperation' on nuclear matters was national power considerations - preventing nuclear technological proliferation to maintain nuclear monopolies by the first movers. The IAEA was a chief agent of this. The rest was called Mutual Assured Destruction (MAD), which means that any nation that uses nuclear weapons opens itself up to reprisal from other nuclear powers. That's assured that none of the initial nuclear powers or those who have sneaked into the club have used a weapon since Nagasaki.

    • by neoRUR ( 674398 )

      Don't worry, once we let the AIs control the nuclear arsenal we won't have to worry. Colossus will tell us what to do.
      But in all seriousness, there will always be hidden secret government, corporate, and hacker groups that will exploit these red lines. It's a lot easier to make AIs than it is to make nukes.

    • by mjwx ( 966435 )

      The thing that compelled 'international cooperation' on nuclear matters was national power considerations - preventing nuclear technological proliferation to maintain nuclear monopolies by the first movers. The IAEA was a chief agent of this. The rest was called Mutual Assured Destruction (MAD), which means that any nation that uses nuclear weapons opens itself up to reprisal from other nuclear powers. That's assured that none of the initial nuclear powers or those who have sneaked into the club have used a weapon since Nagasaki.

      So since they misstate history at the outset, probably purposefully, I'm not taking the rest of what they say very seriously at all.

      Sigh, yet another person who thinks MAD actually worked.

      First clue: plenty of nuclear weapons have been used since Nagasaki. Loads of them have been detonated; every nation that has them has detonated them.

      What's stopped them from being used in anger is sensible leadership, faster communications, more diplomacy and one Russian who refused to push the button on a false alarm.

      MAD only kind of worked when it was just the US and Soviets, even then it was just the honour system mixed with detente. Sur

      • by HBI ( 10338492 )

        Of course it works. There have been no serious attempts to use a nuclear weapon since Dien Bien Phu in 1954. There _were_ attempts to use them to settle the Korean War and in response to the Berlin Blockade in the 1940s. What changed? MAD. The Soviets gained a credible deterrent.

        Your belief that Russia is unhinged suggests you aren't a serious student of geopolitics or history, so I can excuse you for the silly belief that MAD doesn't work.

  • Wrong way. (Score:4, Insightful)

    by Brain-Fu ( 1274756 ) on Monday March 18, 2024 @05:32PM (#64326033) Homepage Journal

    The right way to prevent AI cyberattacks from causing harm is to build systems that are strong against cyberattacks. No amount of global pinky-swearing will prevent malicious actors from getting their hands on AI bots and using them to perpetrate cyberattacks.

    This means that our intelligence agencies need to stop sitting on and weaponizing exploits when they find them, and instead responsibly report them.

    I personally think this also means we need some sort of official service for performing security auditing and penetration testing, and for holding companies accountable to good security practices. I generally oppose this level of government intervention, but in the case of mission-critical cyber infrastructure it's justified. Maybe the accountability level is tiered based on how critical the service is. Steam and Reddit are not so critical; the world doesn't end if they go down. Sites that have lives on the line - military bases with any kind of internet-facing content, the power grid, hospitals, or Slashdot - would need to be in the top tier and undergo the most scrutiny.

    If we don't do this, we will get taken down by AI cyberattacks, no matter how many countries promise not to.

    • It’s been shown that companies will bury knowledge of a critical vulnerability to put off the cost of fixing it.

      That knowledge is valuable in a hundred different ways. Why should the US intelligence agencies just give it up? The most likely outcome would be the companies dragging their feet, and meanwhile the knowledge would just leak out to malicious actors even faster, because these companies leak info like a sieve.
    • Wasn't that the argument around the massive build-up of arms in Europe just before WWI?
    • by gweihir ( 88907 )

      Indeed. Or doing it in other ways. I mean, MS Azure + o365 got fully compromised last year. If the (supposedly Chinese) attackers had wanted to, they could have burned a major part of the western world down. Fortunately, they only wanted to spy, and that means low key and no damage. We are also seeing more and more "security elements" that are vulnerable themselves. And we still have way too many things relying on an OS with really crappy security.

      Currently, any attacker that does not rely on cloud infrastruct

  • Does NOT signal tacit endorsement by the Chinese government. It means “we’re watching”.

    This played out during the cold war. US and Russian physicists insisted on communicating with each other, and both governments were fairly unhappy about it. It’s much harder to convince the general population that the other side is totally evil and eats babies, when everyone knows the scientists are still collaborating with each other. So both governments just watched.
    • The failure of many in the West to accept, despite the overwhelming evidence, that the Soviet Union was, indeed, an evil empire, is depressing. The repeat of this error with China - not engaging with their suppression of the Tibetans, Uighurs, the colonisation of their provinces with Han Chinese settlers - is thus even more culpable. Anyone who starts muttering 'but the West was / is just as bad' has no idea what was actually going on.

      • > Anyone who starts muttering 'but the West was / is just as bad' has no idea what was actually going on.

        But maybe some of those people know both, how bad what was actually going on in China was, and how bad what America was actually involved in too?

          • The worst claims for political deaths against Western-allied countries are in the tens of thousands. By contrast the USSR and China saw millions dead, and they both operated and continue to operate massively oppressive regimes. I suggest a trip to the Baltic republics. In each one you will find a 'Museum of the Occupation' which details the techniques used by the KGB. Vilnius is especially impressive as it is housed in the former HQ of the KGB, and thus includes the cells and the execution chamber.

          • > The worse claims for political deaths against Western ...

            I was referring to more or less the past 40-50 years as you mentioned "their suppression of the Tibetans, Uighurs..." and I assumed you wanted to include events like Tiananmen square.

            In that time frame the US has openly killed hundreds of thousands of civilians worldwide, injured in countless numbers, invaded countries and then left them worse off, tortured people, and so on.

            Further back, the USSR and China were definitively massively oppressive

            • 'In that time frame the US has openly killed hundreds of thousands of civilians worldwide'

                Seriously? Want to offer a source for that?

              • > Seriously? want to offer a source for that?

                  I purposely said hundreds of thousands because that could mean as little as 200,000 violent civilian deaths, a number that even the US government doesn't dispute.

                But the actual numbers appear to be much higher.

                https://en.wikipedia.org/wiki/... [wikipedia.org]

                https://watson.brown.edu/costs... [brown.edu]

                I'm still surprised a lot of people don't realize this.

                  • I'm not disputing the figures; I would want to challenge the intent.

                  'In that time frame the US has openly killed hundreds of thousands of civilians worldwide'

                  implies that US forces sought to kill these people, rather than their being caught in a battle zone - or at least it's trying to suggest that, although that interpretation can be retreated from if challenged.

                  https://en.wikipedia.org/wiki/... [wikipedia.org]

                    By contrast Stalin's purges murdered people in order to fill the quotas that Uncle Joe set. It is interesting to compa

                  • > implies that US forces sought to kill these people rather than their being caught in a battle zone - or at least it's trying to suggest that, although that interpretation can be retreated from if challenged.

                      No it doesn't imply that, and it's not what I'm suggesting or meant. Killed is a pretty neutral term, and what these deaths are called doesn't take away from their atrocity.

                    You've been using comparatively pretty tendentious language, but your point is noted and I can better appreciate Brown study's ch

  • Never gonna happen (Score:4, Insightful)

    by Pollux ( 102520 ) <`speter' `at' `tedata.net.eg'> on Monday March 18, 2024 @05:47PM (#64326093) Journal

    Imagine if Hitler's scientists and Roosevelt's scientists both spoke out to the world in a radio broadcast back in 1942 about "red lines" in the development of the Atomic Bomb. What would Hitler and Roosevelt say to that?

    Something along the lines of "I don't give a damn", I would imagine.

    Because they were building it to have the most powerful weapon of warfare, that's why.

    And today, now that the most powerful nations on this planet can't use nukes against one another without screwing themselves over in return, we recognize that the next most powerful weapon we can develop is AI.

    So I'm pretty sure that Biden, Trump, and Papa Pooh are all saying the same thing: "I still don't give a damn." Because the moment one side gives up on the pursuit, the other side will persist and win in its development.

    • by AmiMoJo ( 196126 )

      They would probably have agreed if they thought that the other side already had or was close to having nuclear weapons.

      That's why the US dropped them without first trying to use them to negotiate a surrender. There was a very good chance that if they did, they would never get to test them on cities and civilians, because everyone would agree they needed to be banned and strictly regulated.

  • So, from the perspective of the common man, global stability and relationships between powers seem to be on a general downward trend. Right at the same time, AI seems to be... I hesitate to say developing, because in all reality we're just busy tossing more and more power at the same form of "AI" we've been capable of for quite a while now in an attempt to create computer god, or at least computer capital gainer god, but at least marketing and buzz have pushed it into the forefront of our collective consciou

  • by OneOfMany07 ( 4921667 ) on Monday March 18, 2024 @06:03PM (#64326153)

    Don't think this can work. One reason the cold war treaties worked was we could detect if someone was cheating. I'm not an expert in the field, but some form of radiation or particles could be seen by the rest of the globe if anyone tried to detonate in secret. There is no such measurement for AI after it's used (currently hardware and power requirements might hint at use, but data is portable).

    And limiting the goals of AI seems pointless. Meaning, again, you can't guarantee an otherwise useful AI tool wouldn't be used for those goals. All you'd need to do is reformulate the problem into a new domain, then translate back. Or claim to be working toward one goal and extract the opposite by lying to the AI about the initial conditions. Or take part of the work early (it finds bioweapons to then guard against, and you extract the partial answers about the weapons).

    Any effective tool can be misused with enough effort. My best alternative solutions involve radical transparency. It's harder to be sneaky if everyone knows what everyone is doing. If hardware can perform AI, then share what will run on it openly, and prove in a secure way that that's what ran or is running.
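
    A minimal sketch of that "prove what ran" idea, in Python: hash the exact artifact that will run and check the digest against an openly published registry. The registry name, its contents, and the file paths here are hypothetical illustrations, not any real scheme.

    import hashlib
    from pathlib import Path

    # Hypothetical openly published registry of approved artifact digests.
    # In a real scheme this would be signed and distributed by a third party.
    PUBLISHED_DIGESTS = {
        "example-model-v1": "replace-with-the-published-sha256-digest",
    }

    def sha256_of(path: Path) -> str:
        """Stream the file through SHA-256 so large weight files need not fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_attested(name: str, artifact: Path) -> bool:
        """True only if the artifact's digest matches the openly published one."""
        return PUBLISHED_DIGESTS.get(name) == sha256_of(artifact)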

    • Satellites used radiation sensors with a time gate to verify the double flash characteristic of an atmospheric detonation. See "bhangmeter" and "Vela satellite". I recall seeing a packaged device advertised for this application, in openly published catalogs.
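
      A minimal sketch of such a time-gated check, in Python: flag a trace only when two above-threshold rises are separated by a gap inside the gate. The sample data, sample period, threshold, and gate timings below are hypothetical illustrations, not real instrument parameters.

      from typing import Sequence

      def has_double_flash(samples: Sequence[float], sample_period_ms: float,
                           threshold: float, min_gap_ms: float, max_gap_ms: float) -> bool:
          """Return True if two above-threshold rising edges fall within the time gate."""
          rise_times_ms = [i * sample_period_ms
                           for i, v in enumerate(samples)
                           if v > threshold and (i == 0 or samples[i - 1] <= threshold)]
          return any(min_gap_ms <= later - earlier <= max_gap_ms
                     for i, earlier in enumerate(rise_times_ms)
                     for later in rise_times_ms[i + 1:])

      # Synthetic trace: a sharp first flash, a dim interlude, then a second rise.
      trace = [0.1, 0.9, 0.2, 0.1, 0.1, 0.1, 0.1, 0.4, 0.7, 0.8]
      print(has_double_flash(trace, sample_period_ms=50, threshold=0.5,
                             min_gap_ms=100, max_gap_ms=600))  # True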
  • Sounds like these "experts" are pretty much hallucinating. Not that this is anything new with AI "experts".

  • The attackers are sensitive about being attacked?

    Sounds like the "cooperation" they want is us giving them more trade secrets.

    "A wolf ! A wolf indeed ! But no one paid him any heed."
