AI Google Microsoft Technology

Top Tech Companies Form Group Seeking To Control AI (ft.com) 33

Some of the world's most advanced artificial intelligence companies have formed a group to research increasingly powerful AI and establish best practices for controlling it, as public anxiety and regulatory scrutiny over the impact of the technology increase. From a report: On Wednesday, Anthropic, Google, Microsoft and OpenAI launched the Frontier Model Forum, with the aim of "ensuring the safe and responsible development of frontier AI models." In recent months, the US companies have rolled out increasingly powerful AI tools that produce original content in image, text or video form by drawing on a bank of existing material. The developments have raised concerns about copyright infringement and privacy breaches, and fears that AI could ultimately replace humans in a range of jobs.

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," said Brad Smith, vice-chair and president of Microsoft. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity." Membership of the forum is limited only to the handful of companies building "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models," according to its founders.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Translation (Score:5, Insightful)

    by lytlebill ( 659903 ) on Wednesday July 26, 2023 @12:50PM (#63716126)

    "Please don't regulate us, in contrast to 99% of our actions to date, we're going to do good things for consumers instead of the bottom line this time, pinky swear."

    And/or:

    "Look at our 'responsible' code of conduct that sets standards in a way that makes it exceptionally difficult for anyone without our giant pile of existing resources and technological head start to get into the market and compete with us. You should totes set any regulation to match this because we're so responsible."

    • Re:Translation (Score:4, Insightful)

      by dbialac ( 320955 ) on Wednesday July 26, 2023 @01:00PM (#63716150)
      And never mind that other countries will break our rules as well.
    • by WDot ( 1286728 )
      Yep, governments actually *like* large corporations more than small businesses for this reason. A large corporation has the resources to spend on creating jobs that satisfy the whims of government policymakers. A startup would (justly) complain that this regulation would make their business impossible. Google can just hire a dozen “AI Safety & Compliance” Officers that are a drag on their own productivity because it means that they’ve kept competitors out. In exchange, large corporatio
      • And government regulations have little or no effect on them. The one I always remember is the onerous regulation a small computer shop I knew had to comply with: they had to take their broken computer parts down to the city dump and put them in a specific pile. Aside from paying their taxes, that was the extent of it.

        Real policy makers like your Elizabeth Warrens or your Bernie Sanders or your Ed Markeys don't like large corporations because large corporations interfere with sound public policy in o
        • by WDot ( 1286728 )
          At least if you work in healthcare, it doesn’t matter whether you’re a small company or not. If the FDA thinks you have a product that requires the full battery of FDA scrutiny, you will spend years jumping through hoops. Even if you’re not in technology or pharmaceuticals, you’re just a practicing physician, the documentation and compliance requirements of ACA/HIPAA/21st Century CURES apply just as much to you as they do to a large hospital system, which coincidentally is why there
        • by GlennC ( 96879 )

          ...Google the candidates and their history and accomplishments before you vote. Do it for both parties if you haven't figured out which one is the more corrupt one.

          Both sides of the Party are corrupted beyond redemption. The only major difference is that one side is also batshit insane.

          Those "Real Policy Makers" you mentioned don't really influence anything. Their only purpose is to create the impression that their side of the Party cares about that sort of thing.

          Can't give people the idea that there are options other than "Team Red" and "Team Blue" because that's Not How Things Are Done.

    • Well, will you sign the Tethics(TM) pledge? Is your company Tethical(TM)? If not, you're part of the problem.
    • by dmomo ( 256005 )

      Or... this technology isn't rocket science. It's all about data and processing power. The barrier to entry is low. So, the barrier for competition is low. We must nip that in the bud.

      • by dmomo ( 256005 )

        I should have put that in quotes.... speaking from the POV of the industry giants.

      • by ceoyoyo ( 59147 )

        Translation: OpenAI and Microsoft are miffed that their too-dangerous-for-the-public technology is being routinely improved by random geeks playing around in their moms' basements.

    • It may well be the exact opposite: they WANT to be regulated, so the barrier to entry is raised. All that talk about safety is just there so they get regulations that require a degree of manpower and/or money that only they can bring to the table. Incidentally, Meta (which happens to have just released Llama 2 with a very liberal license for private AND commercial usage) doesn't seem to be part of this.
  • Also collusion between existing players to prevent new players from entering the game...
  • by CAIMLAS ( 41445 ) on Wednesday July 26, 2023 @01:11PM (#63716168)

    This degree of regulatory capture, as we've seen market-wide in the last couple years, is quite distressing.

    It's ultimately direct collusion between "the government" and "corporations" - organizations effectively fronted and run by the same exact people. Literally: they'll sit on the board of a company for a while, jump back to some plutocratic position in DC, and then move to the board of another company.

    It's fascism, quite literally by definition. (You know, the one we've used for the past 80 years, until several months ago, when it was changed to fit a specific ideologically whitewashing version of the term).

    This should be no surprise at all, considering who was largely responsible for creating the bureaucratic state after WW2. The same people regulating the companies are the officers within the companies, across every single market vertical. They'll sit in DC for a year or two and then get ushered back into a high paying corporate officer position for a company they previously had oversight over. Wash, rinse, repeat.

    • This degree of regulatory capture, as we've seen market-wide in the last couple years, is quite distressing.

      It's ultimately direct collusion between "the government" and "corporations" - organizations effectively fronted and run by the same exact people. Literally: they'll sit on the board of a company for a while, jump back to some plutocratic position in DC, and then move to the board of another company.

      It's fascism, quite literally by definition. (You know, the one we've used for the past 80 years, until several months ago, when it was changed to fit a specific ideologically whitewashing version of the term).

      No, that's a combination of oligarchy and plutocracy.

      Fascism is extreme nationalism: the idea that everyone (including every company) must be dedicated to furthering the goals of the nation. You get some oligarchy with fascism just because it's so easy to suppress dissent (oh, you don't think government should give a sweetheart deal to BigCorp? Well, you're obstructing the goals of the nation!). But oligarchs don't really like fascism, since your position is now dependent on staying friendly with the state [wikipedia.org].

  • The more you tighten your grip, Tarkin, the more star systems will slip through your fingers.

    The last thing the gatekeepers want is AI saying something that's unapproved or goes against the official narrative.

    Plus ...
    FraudGPT [thehackernews.com]
    WormGPT [thehackernews.com]

  • This means they're going to pay for the millions of books they used to train their GPTs and LLMs, right?

  • What does it even mean? I can see that you can prevent somebody from doing something. But if AI is a form of "intelligence," does that mean they want to control what it "thinks"? How do you do that? By feeding it "correct information"? This seems to be some form of AI indoctrination. I am sure this will go just perfectly. Indoctrinated people are known to be capable of awful things. Let us see what an indoctrinated AI is capable of.
  • by ctilsie242 ( 4841247 ) on Wednesday July 26, 2023 @01:25PM (#63716202)

    This reminds me of cryptography (no, not crypto-currencies, but the basic cryptography stuff we take for granted). Now the Powers That Be know how to deal with a new trend, so they are stepping in with hastily worded patent claims, bans, regulation, and regulatory hurdles to ensure that they are the only ones who can develop GPTs/LLMs.

    The problem is that the cat is out of the bag. No matter what the regulations say, matrix multiplication is matrix multiplication and can be done anywhere. If it gets banned in the US and Europe, then China, Iran, India, and other BRICS nations will happily do the AI stuff for people, including setting up cloud farms of GPUs and AI coprocessors for model training.

    AI is something every nation wants, because it can do anything from identifying potential troublemakers, Minority Report style, to knowing where an enemy would strike so fortifications can be set up, to guessing what tactics an opposition party will use, to just generally winning with game theory. AI promises the internal security that is the dream of any tyrant, so no matter how much it is regulated or even banned, everyone else will make it.

    Banning AI is like banning gunpowder-based weaponry and crossbows. The enemy will have them, but you won't.
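    The "matrix multiplication is matrix multiplication" point can be sketched in a few lines of plain Python (a hypothetical illustration, not anything from TFA): the core arithmetic of an LLM forward pass is just this one operation, chained, and it needs no special hardware, license, or blessing.

```python
# Hypothetical illustration: the kernel at the heart of every LLM,
# written in portable, unregulatable plain Python.

def matmul(a, b):
    """Plain-Python matrix multiply: (n x m) @ (m x p) -> (n x p)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

x = [[1.0, 2.0], [3.0, 4.0]]   # toy "activations"
w = [[0.5, -1.0], [2.0, 0.0]]  # toy "weights"

print(matmul(x, w))  # [[4.5, -1.0], [9.5, -3.0]]
```

    Real models just do this at scale on GPUs; regulating who may perform the arithmetic is the hard part, which is the commenter's point.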

  • Autonomous Weapons (Score:4, Insightful)

    by Art Challenor ( 2621733 ) on Wednesday July 26, 2023 @01:34PM (#63716228)
    I'm amused that the whole "AI is dangerous" panic really started with ChatGPT. Understandable, since ChatGPT appears scarily capable, but the truly destructive tech is in the military arena, and that's not even part of the discussion (and would be a non-starter even if it were).
    • I'm amused that the whole "AI is dangerous" panic really started with ChatGPT. Understandable, since ChatGPT appears scarily capable, but the truly destructive tech is in the military arena, and that's not even part of the discussion (and would be a non-starter even if it were).

      I'm not sure AI+military tech is actually that scary.

      AI gives weapons extra autonomy, but there are already lots of weapons that require little to no human involvement. And any AI weaponry has a very quantifiable destructive potential: how much damage can an AI with an M16 do? Well, about as much damage as an M16 can do.

      ChatGPT is scary because its capability, language, is extremely hard to contain. A clever set of ChatGPT-powered sock puppets could take over the conversation in a set of onli

  • That sounds familiar...where have I heard that before?
    Oh yes...
    "In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time

  • We want total control over it so A) the average person will have to pay yet another monthly fee to use it and B) we'll keep the really good stuff to ourselves.

  • This is the subject of the newest Mission Impossible, Part One. I am of course only guessing he will defeat it in Part Two; it seems to be Ethan Hunt's most challenging adversary to date: https://en.wikipedia.org/wiki/... [wikipedia.org]
  • Oh yes, because self-regulation has always worked well.

    "so that it benefits all of humanity" should really be "so that it benefits us, the corporations."

  • I maintain that this regulation isn't needed yet, in spite of the copyright issues; a statistical model isn't repeating anything, and the training process isn't even on the law books, because AIs aren't human.
  • Meta is releasing powerful open-source LLMs and its training data. These are not quite at GPT-4 levels but are very good. This means all efforts to control the use of AI are sort of pointless.

  • Private big-tech companies trying to convince the public that they will regulate their products for our safety and privacy is as post-modern capitalism as it gets...
