Google AI

Eric Schmidt Rejects AI Research Pause Over China Fears (bloomberg.com)

Putting a temporary pause on artificial intelligence development would only hand an advantage to competitors in China, former Google CEO Eric Schmidt said, after more than 1,000 researchers signed a letter warning of the consequences of moving too quickly on AI research. From a report: Speaking to the Australian Financial Review in an interview published Friday, Schmidt said there were legitimate concerns about the speed of research into AI but they should be mitigated by tech companies working together to set standards. In the past week, more than 1,000 researchers and executives, including Tesla Chief Executive Officer Elon Musk, signed an open letter published by the Future of Life Institute, which called for an AI research pause of "at least six months," warning of "potentially catastrophic effects" on society if appropriate governance wasn't put in place. But Schmidt said he wasn't in favor of the six-month pause as it would "simply benefit China. What I am in favor of is getting everyone together ASAP to discuss what are the appropriate guardrails."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by david.emery ( 127135 ) on Friday April 07, 2023 @11:37AM (#63432940)

    Better to continue research and learn from it than to pretend that everyone will obey some sort of "appeal to bury your head in the sand for a year."

    Now I still think the core concern is -verification of AI-, and that requires a better understanding of -specification of AI-, because you can't really verify something unless you know what it's supposed to do and not do.

    • Re: (Score:2, Interesting)

      by postbigbang ( 761081 )

      Schmidt, who devastated your privacy and became a tax-haven citizen to move his money out of the US, now claims China might get a lead on a technology he's invested in.

      Go ahead and agree with Eric, who is a schmuck of the highest order.

      It's all about Eric-- make no mistake about that.

      • by jm007 ( 746228 )

        you're right, the messenger is likely a biased PoS

        his message, however, is realistic and pragmatic

        • As AI improves -- especially AI embodied in robotics -- it makes it cheaper to produce goods and services. AI and robotics ultimately make most human labor relatively less valuable in a competitive marketplace.

          More on that collected by me: https://www.pdfernhout.net/bey... [pdfernhout.net]
          "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary s

      • technology he's invested in ... It's all about Eric-- make no mistake about that.

        Actually I don't agree with that. He owns Alphabet / Google stock, and is also invested heavily in the biotech industry. Google is currently way, way behind OpenAI's ChatGPT 4 (in fact they're far behind ChatGPT 3.5, for that matter). A moratorium only hurts OpenAI, since they have the lead, and would allow companies like Google to catch up. So I disagree with your analysis that it benefits Schmidt personally NOT to force ChatGPT to pause at its current 4.0-level AI.

      • When has allowing an industry to manage itself ever not worked out well, anyway? I for one absolutely trust tech companies to properly regulate themselves and keep the interests of society as a whole as their primary concern.

  • by necro81 ( 917438 ) on Friday April 07, 2023 @11:43AM (#63432948) Journal
    "Mr. President, we must not allow a mine-shaft gap!"

    [ref [youtube.com]]
  • 1,000 researchers: "Hi Google, please pause your AI work for 6 months or society will collapse."

    Schmidt: "What?! But then China is going to have a competitive advantage! We won't be able to make profits!!!"

    1,000 researchers: "Yeah, but, uh, if society collapses, you won't have anything to buy with your profits."

    Schmidt: "ME WANT MAKE AI. MAKE MONEY GOOD. FREE MARKET BLARGHGJAKVKGJVKFAGVKODFJ(@&T^(T(((&^GD*&Y(*&(*&*&^!@)(*#)(*(7573260982302309890287582734098233-4-023984-02398-298490ER

    • by narcc ( 412956 )

      Society isn't going to collapse. It's a marketing gimmick. What good would a six-month "pause" do anyway but delay the collapse of society by six months?

      No, all this nonsense is about nothing more than making people think the tech can do way more than it actually can.

      • > What good would a six-month "pause" do

        Would give people 6 months to try to figure out how to prevent AI from collapsing society. Which .... is.... what the story says.

        • Please explain how a weak-AI large language model system like ChatGPT, or others in that family of programs, is going to destroy society.

          There are a lot of things going on that might destroy society or greatly weaken it. ChatGPT is not one of them.

        • by narcc ( 412956 )

          Ignoring the absurdity of it all for a moment, the imagined harms and ethical implications of AI have been explored to death since the 1960s from every conceivable angle. What possible progress do you think is going to be made in a few months?

          to figure out how to prevent AI from collapsing society.

          What do you think a solution would look like? Some new technology? A set of rules that we'll all agree to follow? A disaster response plan? What would a disaster even look like? It's all very silly.

          There is no danger here. It's just a stupid marketing gimmick.

  • The issue is deployment - i.e., companies and organizations deciding to offload things that require human judgment to what is, despite the constant hype, still basically a chatbot with web search capabilities.

    • This is kinda like "outsourcing on steroids" - my experience with outsourced services is the companies that outsource then get to disclaim any responsibility for what their contractors do/fail to do. When customer service, etc, is handled by ChatGPT, and the AI gives you wrong data (or even tells you to divorce your spouse), companies will disclaim any responsibility for that, "We don't know and can't control what ChatGPT or Bing or Bard says."

      Example: My home email account, which I've had since 1986, is

      • That won't fly in a court.

        "Yeah we put this computer in charge of your surgery so hey sorry you died horribly on the table and didn't even get anesthesia but the computer was in charge so sucks to be you but not our fault!" is not a winning legal argument.

        If a doctor at a hospital kills someone, the hospital gets sued.

      • Btw, your email problem is probably your IP address, not your domain, assuming you're not spamming. I gave up on home-hosted email because every residential IP block ended up on some big blacklists for spam years ago, which included me even though I never spammed anyone.

        If you get a commercial IP or have some other non-residential IP, your domain's spam issues will likely clear up in time.

        Good luck on that, though, I moved my mail to gmail business and pay a few bucks a month. I get pretty solid anti-spam bui
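        For anyone wanting to check whether their own address is the issue, here is a minimal sketch of a DNSBL lookup (assuming Python and the Spamhaus ZEN zone purely as an example; the sample address below is from a documentation range, and results can vary with the DNS resolver handling the query):

        import socket

        def is_listed(ip, dnsbl="zen.spamhaus.org"):
            # DNSBLs are queried by reversing the IPv4 octets and prepending
            # them to the blacklist zone; if the resulting name resolves to
            # an A record, the address is listed.
            query = ".".join(reversed(ip.split("."))) + "." + dnsbl
            try:
                socket.gethostbyname(query)  # resolves only when listed
                return True
            except socket.gaierror:
                return False  # NXDOMAIN: not listed

        # Hypothetical residential address, purely for illustration:
        print(is_listed("203.0.113.45"))

        The same reversed-octet trick works against other common blacklists; if several of them list your whole block, the advice above about moving to a non-residential IP (or a hosted mail provider) is about the only fix.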

  • China is the bogeyman of late. While it's all well and good to identify them as behaving in a way we'd rather not emulate, I don't think it's justified or prudent to sanction them/cut them off/trade embargo etc. Using them as a demon against which we supposedly struggle isn't helpful to us, nor to de-escalating future conflicts. So if he's really concerned about China doing something bad to us that he's missing an opportunity to prevent, he should encourage our contemporaries to turn down the pol
    • by rogoshen1 ( 2922505 ) on Friday April 07, 2023 @01:12PM (#63433168)

      Why are progressives only concerned with ultra-nationalist elements in western societies? If you can't see that we are on a collision course with them, I don't know what to tell you.

      Look at it like this: in the days of Mao, farmers were coerced into smelting down farming implements so the CCP could match Britain in terms of steel production, due to a resentment held over from Britain's adventures decades earlier. There's a pretty strong urge to prove themselves on the world stage as a great power, which, yeah, will inevitably lead to conflict with the US (who doesn't want to see themselves knocked down a peg or two).

    • I think I disagree. On the interpersonal level, if you have moral objections to what a person does, you stop doing business with them, don't you? I don't see why the same principle doesn't apply to international relations. The usual stated justification for maintaining normal trade with China for so long, despite the atrocious (and increasingly unapologetic) behavior of the Chinese government, was the naive conviction that, if only China followed the same economic path as modern democracies, that it woul
    • This whole thing reeks of counting chickens before they hatch.

      We can't even get PEOPLE to do a 'good' job most of the time, and now we're going to get something made by those same people to suddenly replace those people AND somehow do at least as good a job as before, if not (most likely, lol sure) better?

      It's going to start raining pie soon I think.

  • by Tablizer ( 95088 ) on Friday April 07, 2023 @01:31PM (#63433228) Journal

    Sorry, the genie is out of the bottle. Even if democracies agree to safeguards, dictators and rogue players will test the limits.

    Mass fake news and social-engineering-based business hacks are probably the two biggest risks.

    The GOP was not in favor of Biden's plan to monitor and issue alerts over fake news, saying it had too much potential for political abuse. However, if there is an "information pandemic" that creates mass chaos, there's no central org to coordinate the cleanup.

    Find a way to compromise on a bipartisan working group to at least monitor potential problems and draft contingency plans. Think of it as a kind of Information Federal Reserve. Members should have long-term assignments so that no one petulant President can stuff it with their toadies.

    • We're already suffering an information pandemic (nice phrase btw) right now.

      There's been tons of complete shit out there on countless topics pushed by people we're supposed to trust that we later find out are complete lies. And it gets worse every day.

      We are there right now.

      • by Tablizer ( 95088 )

        Perhaps "misinformation pandemic" is more fitting. Polarization has definitely been increasing over the last decade or two. It might be the "boiling frog" problem, where we don't notice we're drifting into hot water because the change is relatively gradual.

  • Comment removed based on user account deletion
  • it hurts and it's not our fault a mistake was made and our money is in your pocket!
  • I'll do a side track, and let's discuss lethal AI.

    Yes, ChatGPT could be considered "lethal" to some obsolete jobs, like copy editing, but that has been part of life since the industrial revolution.

    I am more worried about the military uses of the technology, and the genie is already out of the bottle. Having multiple competing superpowers means none of them can trust each other with a moratorium. Think about it... How can you distinguish work done in a regular datacenter with AI, let's say on image tagging, vers

    • It depends on how it's used in combat.

      If given free rein to fire weapons at whatever it feels is a valid enemy target, then no, that's crazy.

      If it pops up on a screen to tell a trained human that it thinks the target is an XYZ with a probability of ABC percent, then that is a possible improvement on the battlefield that could reduce friendly fire and dead innocent civilians.

      • by stikves ( 127823 )

        The free rein one will eventually happen, won't it?

        First in geo-fenced "off limits" areas like the Korean DMZ, or other military exclusion zones. Those will be perfect for "patrolling robots with a shotgun" that will kill anything on sight. (No need for an IFF to distinguish between your guys and theirs.)

        And then, at a desperate time, like "cyber rattling" (pun intended), a general losing a battle will deploy them while retreating.

        And then, we would incrementally increase their use since they are "safer" now wit

        • Yes, it absolutely will, by some country, and there's nothing to be done about it. AI with self-directed shoot-to-kill capability will wipe out a few villages or something, and there will be an outcry, and everyone will keep using them because "the enemy" is. But just because we know the stupid evil use will eventually happen doesn't mean we shouldn't implement the other version that reduces random deaths. Either way, the cat is out of the bag, the horse is not coming back to the barn, and you can't unburn a piece o

  • My first thought was this would be a favor to China or his companies - He's got a trip to China coming up.
