Facebook AI Open Source

Meta Says There's Been No Downside To Sharing AI Technology (bloomberg.com)

Meta executives said there have been no major drawbacks to openly sharing the company's AI technology, even as many peers take the opposite approach. From a report: Over the past few months, Meta has been releasing open-source versions of its large language models -- the technology behind AI chatbots like ChatGPT. The idea is to keep those models free and then gain an advantage by building products and services on top of them, executives said at an event for the company's AI research lab, FAIR. "There is really no commercial downside to also making it available to other people," said Yann LeCun, Meta's chief AI scientist. Meta has joined most of the world's biggest technology companies in embracing generative AI, which can create text, images and even video from simple prompts. But those companies aren't all taking the same path.

Many of the top AI developers, including OpenAI and Google's DeepMind, don't currently open-source their large language models. Companies are often fearful of opening up their work because competitors could steal it, said Mike Schroepfer, Meta's senior fellow and former chief technology officer. "I feel like we're approaching this world where everyone is closing down as it becomes competitively important," he said. But staying open has its advantages. Meta can rely on thousands of developers across the world to help enhance its AI models.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday December 01, 2023 @10:36AM (#64046539) Homepage Journal

    They get the value of OSS. They have been major OSS contributors for many years. There are many sleazy things about them, but I cannot recall one time they tried to shit all over a FOSS license like IBM/Red Hat has done.

    • Yep, I guess they've learnt from Apple that you make far more from clever marketing than you do from scientific discovery or innovation.
  • Good for LeCun (Score:4, Interesting)

    by WDot ( 1286728 ) on Friday December 01, 2023 @10:54AM (#64046589)
    This AI safety drama has been so odd, from a rhetorical standpoint. LeCun almost seems like an island, cheering on open-source models, which have already been a massive boon to both scientists and engineers for the past decade. On the other hand, OpenAI and Google and Geoffrey Hinton are going around saying “ooga booga, if you don’t watch out Skynet will become real! The only solution is to give us a regulated monopoly!” When pressed on how these statistical models will become Skynet, they never point to any mainstream AI literature, but only hint at the obscure research of a bunch of nuts who have not built actual models but have made sophisticated mathematical doodles out of their imaginary doomsday scenarios.

    Even reasonable questions like “if the AI really does start acting maliciously, how about we just unplug it like we do any other piece of misbehaving machinery” are scoffed at, but no actual realistic response is provided. It seems to be an assumption that these models will magically discover new laws of physics that will allow them to transcend the heavy infrastructure requirements (electricity, communications, teams of competent engineers) that allow them to run at all, and then use these new laws of physics to kill everyone on earth on purpose.
    • Re:Good for LeCun (Score:5, Insightful)

      by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Friday December 01, 2023 @11:37AM (#64046679) Homepage Journal

      IMHO, the threat comes not from AI (which is just an overblown stats package), but from AI's users, some of whom honestly believe "AI" is sentient (and therefore capable of following standards), whilst others ascribe to it a level of accuracy that isn't there.

      We know, for example, that Israel is picking targets based on the output of generative AI, because they've said so. This does not seem safe. The statistical odds of words being linked are not a reliable indicator of threat status.

      It is entirely possible that intelligence services will consider AI the next step from the big-data meta-analysis that Snowden reported on. Again, word association is not an indicator of threat, and if anyone is indeed employing AI this way, then our security apparatus will be largely useless against any actual threats.

      Likewise, if police forces try to use AI to anticipate crime, we could see absolutely disastrous results, with resources being distributed according to word association rather than risk association.

      These aren't the fault of the AI system. The AI would be working perfectly, telling people what words are commonly associated with other words. The problem would be with the people drawing mystical conclusions from this. And, as noted, we're already seeing at least one government agency, the IDF, do this.

      I don't fear Skynet, but I am terrified of those who treat algorithms as holy and divine.

      • The danger is people dodging their responsibilities. If the answer to "why did you bomb that hospital" is "because the computer told me so", and that's acceptable, it becomes a problem.
        • by elcor ( 4519045 )
          Very good point; humanity disengaging itself even more is the risk. When I used it heavily during the lawsuit, I stopped reading documents. Only because I'm slightly more intelligent and motivated than the average did I take the reins back - these fucking things are so unreliable.
      • I don't fear Skynet, and I only fear the algorithm cultists a little bit. I do fear that criminals are having a field day with ultra-cheap scamming tools for blackmail, money-laundering scams, kidnapping, phishing, denials of service, sabotage, character assassination, terror, hate speech, race targeting, insurrection, disinformation, propaganda, etc.

        Not criminal masterminds, mind you, just regular unemployed joes who don't know the first thing about JavaScript but can say a few words into a phone and have…

    • how about we just unplug it like we do any other piece of misbehaving machinery

      Yeah, like we do with botnets and the machines of malicious adversarial state actors. We just unplug them. Easy-peasy.

      that these models will magically discover new laws of physics that will allow them to transcend the heavy infrastructure requirements (electricity, communications, teams of competent engineers) that allow them to run at all

      What? Many of these models can run on consumer hardware. Sure, they are not as advanced as the ChatGPTs of this world, but a human brain runs on about 20 watts, so no laws of physics need to be broken for some future highly efficient and intelligent model to greatly surpass all humans on this planet with regard to intelligence and the capability to survive in a distributed and covert manner…
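      [Ed.: The consumer-hardware point is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch follows; the numbers are illustrative and not from the comment. Weight memory is roughly parameter count times bits per weight.]

      ```python
      # Back-of-envelope memory needed just to hold an LLM's weights.
      # Illustrative only: real inference adds KV-cache and runtime overhead.

      def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
          """Approximate gigabytes required to store the weights alone."""
          return params_billion * 1e9 * bits_per_weight / 8 / 1e9

      for params in (7, 34, 70):
          for bits in (16, 8, 4):
              gb = weight_memory_gb(params, bits)
              print(f"{params:>2}B params @ {bits:>2}-bit: ~{gb:5.1f} GB")
      ```

      [At 4-bit quantization a 7B model needs about 3.5 GB and a 34B model about 17 GB before overhead, which is why the smaller ones already fit on ordinary gaming GPUs.]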

  • You can trust Zuck et al. to be the arbiter of your Reality.

    LMAO

    • by jd ( 1658 )

      I wouldn't trust Zuck to be the arbiter of which type of tea I drank.

      And, yet, I'd trust him a thousand times more than Elon Musk.

      Still, a thousand times Planck's Constant isn't very much.

  • The site [i3investor.com] is blessed by Web of Trust. I don't know how complete it is, but it credits Bloomberg.

  • ... it is not. Meta wants me to sign up for Instagram, Messenger, or WhatsApp. It's like, "Be sure to pack a lot of stuff you will not be using so you can get to the small world you will be using after you ride the learning curve of the stuff you're not going to use."

  • China (Score:4, Insightful)

    by groobly ( 6155920 ) on Friday December 01, 2023 @12:32PM (#64046773)

    Open sourcing AI, or anything else for that matter, makes it available to China, Russia, and North Korea. There are tradeoffs, and that includes downsides.

    • Open sourcing AI, or anything else for that matter, makes it available to China, Russia, and North Korea. There are tradeoffs, and that includes downsides.

      Most days I keep a copy of a chat-tuned Yi-34B (Chinese) model running in the background so that I can ask it questions from time to time. In my opinion it is currently the best "fast" model that fits entirely in VRAM.
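      [Ed.: For readers wondering what running such a model in the background looks like, here is a minimal sketch using llama-cpp-python, assuming a 4-bit GGUF quantization of a Yi-34B chat model was downloaded beforehand; the file name and prompt are hypothetical.]

      ```python
      # Minimal sketch: run a local chat-tuned model with llama-cpp-python.
      # Assumes a GGUF quantization of Yi-34B-Chat was downloaded beforehand;
      # the file name below is hypothetical.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./yi-34b-chat.Q4_K_M.gguf",  # hypothetical local file
          n_gpu_layers=-1,  # offload every layer to the GPU so it stays in VRAM
          n_ctx=4096,       # context window size
      )

      reply = llm.create_chat_completion(
          messages=[{"role": "user", "content": "What is the OSI definition of open source?"}],
          max_tokens=256,
      )
      print(reply["choices"][0]["message"]["content"])
      ```

      [A 4-bit quant of a 34B model occupies roughly 20 GB, which is presumably why it counts as "fast" on a single 24 GB consumer card.]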

    • What, like helping all of humanity and giving ordinary people the world over the tools to fight their overlords and masters?

  • Facebook comes so close to its models actually having an open license. They just need to make a small adjustment to get there. Hoping they will.

  • The Meta models are not open source. I'm fed up with this bullshit. I'm ready to put down about $5k of my personal cash to make Meta shut up about this lie. If about 4-5 other people in California are willing to put down a similar amount, I think around $30k should be enough to bankroll a California False Advertising and Unfair Competition lawsuit based on what I've heard.
    • Excuse the typos, calling these open source makes my blood boil...
      • For those who are not aware, please read the OSI definition of Open Source. Open Source forbids usage restrictions and discrimination, both of which Meta's license imposes. Llama is NOT Open Source. Let me repeat that. Llama is NOT open source. The license isn't even arguably open source; it falls squarely in the "source available" category. Open source has a definition, and the only aspect of that definition that Meta meets is 1 of the required pillars of open source, namely that the source code is available to read.
        • Llama is NOT Open Source. Let me repeat that. Llama is NOT open source. The license isn't even arguably open source; it falls squarely in the "source available" category. Open source has a definition, and the only aspect of that definition that Meta meets is 1 of the required pillars of open source, namely that the source code is available to read.

          What does open source even mean in terms of AI models? Models are "compiled" bags of weights from millions of dollars' worth of computation. The training data and pipeline can perhaps be considered "source code"; certainly not the resulting bag of weights. When people refer to open source AI models, it is more of a figurative analogue.

          In terms of license restrictions, it is anyone's guess whether IPR claims to a bag of weights are even enforceable, and at the moment it doesn't look like anyone really cares. There are numerous frankenmerges…

        • For those who are not aware, please read the OSI definition of Open Source.

          For those who are aware, the OSI doesn't have the right to define what Open Source is. They never tried to trademark the words Open Source, only Open Source Initiative. Christine Peterson is lying when she claims to have coined the term, which some of us were using a decade before she says she thought of it.

          Open source has a definition

          No, it does not. It absolutely does not have one definition. But if it did have only one definition, it would be "you can see the source [archive.org]".

          We have the concept of Free Software specifically because Open Source…

    • by Dwedit ( 232252 )

      There is a true open-source recreation of LLaMA called "OpenLLaMA", which is under the Apache 2.0 license. But for some reason, its benchmark numbers aren't measuring up to the actual LLaMA numbers.
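      [Ed.: For anyone who wants to try it, a minimal sketch of loading OpenLLaMA with Hugging Face transformers. The openlm-research model id and the 7B size are assumptions; the slow tokenizer is used because the auto-converted fast one has been reported to mis-tokenize.]

      ```python
      # Minimal sketch: generate text with the Apache-2.0 OpenLLaMA weights
      # via Hugging Face transformers. Model id assumed: openlm-research repo.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "openlm-research/open_llama_7b"  # assumption: the 7B variant

      # use_fast=False: the converted fast tokenizer has known issues here.
      tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, torch_dtype=torch.float16, device_map="auto"
      )

      inputs = tokenizer("Open source means", return_tensors="pt").to(model.device)
      output = model.generate(**inputs, max_new_tokens=32)
      print(tokenizer.decode(output[0], skip_special_tokens=True))
      ```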

  • Give us your code!

  • 1. Do you believe them?
    2. Does Meta have a complete strategy for the $20B+ it's spending on datacenter gear per year?
    3. Does Meta even have a strategy to monetize anything besides the private data and mental health of individuals?
