Meta Says There's Been No Downside To Sharing AI Technology (bloomberg.com)
Meta executives said there have been no major drawbacks to openly sharing its AI technology, even as many peers take the opposite approach. From a report: Over the past few months, Meta has been releasing open-source versions of its large language models -- the technology behind AI chatbots like ChatGPT. The idea is to keep those models free and then gain an advantage by building products and services on top of them, executives said at an event for the company's AI research lab FAIR. "There is really no commercial downside to also making it available to other people," said Yann LeCun, Meta's chief AI scientist. Meta has joined most of the world's biggest technology companies in embracing generative AI, which can create text, images and even video based on simple prompts. But they aren't all taking the same path.
Many of the top AI developers, including OpenAI and Google's DeepMind, don't currently open-source their large language models. Companies are often fearful of opening up their work because competitors could steal it, said Mike Schroepfer, Meta's senior fellow and former chief technology officer. "I feel like we're approaching this world where everyone is closing down as it becomes competitively important," he said. But staying open has its advantages. Meta can rely on thousands of developers across the world to help enhance its AI models.
One thing about Metaboot (Score:4, Insightful)
They get the value of OSS. They have been major OSS contributors for many years. There are many sleazy things about them, but I cannot recall one time they tried to shit all over a FOSS license like IBM/Redhate has done.
Re: (Score:2)
Good for Lecun (Score:4, Interesting)
Even reasonable questions like “if the AI really does start acting maliciously, how about we just unplug it like we do any other piece of misbehaving machinery” are scoffed at, but no actual realistic response is provided. The assumption seems to be that these models will magically discover new laws of physics that will allow them to transcend the heavy infrastructure requirements (electricity, communications, teams of competent engineers) that allow them to run at all, and then use these new laws of physics to kill everyone on Earth on purpose.
Re:Good for Lecun (Score:5, Insightful)
IMHO, the threat comes not from AI (which is just an overblown stats package), but from AI's users, some of whom honestly believe "AI" is sentient (and therefore capable of following standards), whilst others ascribe to it a level of accuracy that isn't there.
We know, for example, that Israel is picking targets based on the output of generative AI because they've said so. This does not seem safe. The statistical odds of words being linked are not a reliable indicator of threat status.
It is entirely possible that intelligence services will consider AI the next step from the big data meta-analysis that Snowden reported on. Again, word association is not an indicator of threat, and if indeed anyone is employing AI this way then our security apparatus will be largely useless against any actual threats.
Likewise, if police forces try to use AI to anticipate crime, we could see absolutely disastrous results, with resources being distributed according to word association rather than risk association.
These aren't the fault of the AI system. The AI would be working perfectly, telling people what words are commonly associated with other words. The problem would be with the people making mystical assumptions from this. And, as noted, we're already seeing at least one government agency, the IDF, do this.
I don't fear Skynet, but I am terrified of those who treat algorithms as holy and divine.
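The "word association is not risk association" point can be made concrete with a toy sketch (the corpus and roles below are invented for illustration): a simple co-occurrence counter assigns the same association scores to a journalist and a suspect who happen to share vocabulary.

```python
from collections import Counter
from itertools import combinations

# Invented toy corpus: three one-line "intercepts".
docs = [
    "suspect met a contact at the border market",
    "journalist met a contact at the border market",
    "farmer sold peaches at the border market",
]

pairs = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc.split())), 2):
        pairs[(a, b)] += 1

# "met"/"contact"/"border" co-occur for the journalist exactly as they do
# for the suspect, so ranking people by association score flags both equally.
print(pairs.most_common(8))
```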
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Not criminal masterminds, mind you, just regular unemployed joes who don't know the first thing about javascript but can say a few words into a phone and have
Re: (Score:3)
how about we just unplug it like we do any other piece of misbehaving machinery
Yeah, like we do with botnets and the machines of malicious adversarial state actors. We just unplug them. Easy-peasy.
that these models will magically discover new laws of physics that will allow them to transcend the heavy infrastructure requirements (electricity, communications, teams of competent engineers) that allow them to run at all
What? Many of these models can run on consumer hardware. Sure, they are not as advanced as the ChatGPTs of this world, but a human brain runs on about 20 watts, so no laws of physics need to be broken for some future highly efficient and intelligent model to greatly surpass all humans on this planet with regard to intelligence and capability to survive in a distributed and covert manner. The
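A back-of-envelope check on the consumer-hardware point, as a sketch (the parameter counts and quantization widths are illustrative assumptions, not measurements of any specific model):

```python
# Rough VRAM needed just to hold a model's weights at a given quantization width.
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB, ignoring KV cache and runtime overhead

for params, bits in [(7, 4), (34, 4), (70, 4), (70, 16)]:
    print(f"{params}B @ {bits}-bit ~= {weight_vram_gb(params, bits):.1f} GB")

# A 7B model at 4-bit (~3.5 GB) fits on modest GPUs; a 34B model (~17 GB)
# fits on a single 24 GB consumer card; 70B at 16-bit (~140 GB) does not.
```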
Meta says (Score:2)
you can trust Zuck et al. to be the arbiter of your Reality.
LMAO
Re: (Score:1)
I wouldn't trust Zuck to be the arbiter of which type of tea I drank.
And, yet, I'd trust him a thousand times more than Elon Musk.
Still, a thousand times Planck's Constant isn't very much.
Alternate site ... (Score:1)
The site [i3investor.com] is blessed by Web of Trust. I don't know how complete it is, but it credits Bloomberg.
Gentle user-facing ... (Score:2)
... it is not. Meta wants me to sign up for Instagram, Messenger, or WhatsApp. It's like, "Be sure to pack a lot of stuff you will not be using so you can get to the small world you will be using after you ride the learning curve of the stuff you're not going to use."
China (Score:4, Insightful)
Open sourcing AI or anything else for that matter, makes it available to China, Russia, and North Korea. There are tradeoffs, and that includes downsides.
Re: (Score:2)
Open sourcing AI or anything else for that matter, makes it available to China, Russia, and North Korea. There are tradeoffs, and that includes downsides.
Most days I keep a copy of a chat-tuned Yi-34B (Chinese) model running in the background so that I can ask it questions from time to time. In my opinion it is currently the best "fast" model that fits entirely in VRAM.
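For context, a minimal sketch of the kind of setup the parent describes, using the llama-cpp-python bindings (the GGUF filename and quantization level are hypothetical placeholders; any chat-tuned model file would do):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical quantized Yi-34B chat checkpoint; substitute a real GGUF file.
llm = Llama(
    model_path="yi-34b-chat.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers so the weights sit in VRAM
)

resp = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Summarize the trade-offs of releasing LLM weights."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```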
Re: (Score:1)
What, like helping all of humanity, giving ordinary people the world over the tools to fight against their overlords and masters?
Not open enough (Score:2)
Facebook comes so close to actually giving its models an open license. They just need to make a small adjustment to get there. Hoping they will.
Not open source (Score:2)
Re: Not open source (Score:2)
Re: Not open source (Score:3)
Re: (Score:2)
Or OSI. The OSI notably failed to get a trademark on "open source."
Re: (Score:2)
Re: (Score:2)
The reason the US trademark office didn't issue a trademark was that they judged "open source" not to be sufficiently distinctive. It's a non-specific description.
"Red herring." Lol.
Re: (Score:2)
Llama is NOT Open Source. Let me repeat that. Llama is NOT open source. The license isn't even arguably open source; it falls squarely in the "source available" category. Open source has a definition, and the only aspect of that definition that Meta meets is just one of the required pillars of open source, namely that the source code is available to read.
What does open source even mean in terms of AI models? Models are "compiled" bags of weights produced by millions of dollars' worth of computation. The training data and pipeline can perhaps be considered "source code"; the resulting bag of weights certainly cannot. When people refer to open-source AI models it is more of a figurative analogue.
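To make the "bag of weights" point concrete, a sketch (the checkpoint filename is a hypothetical placeholder; assumes PyTorch): a released model artifact is just named tensors, with none of the training data or pipeline that produced it.

```python
import torch

# "pytorch_model.bin" is a hypothetical checkpoint file on disk.
state = torch.load("pytorch_model.bin", map_location="cpu")

# Everything in the artifact is an opaque tensor of numbers; there is no
# "source" here in the sense the Open Source Definition means.
for name, tensor in list(state.items())[:5]:
    print(f"{name}: shape={tuple(tensor.shape)}, dtype={tensor.dtype}")
```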
In terms of license restrictions, it's anyone's guess whether IPR claims to a bag of weights are even enforceable, and at the moment it doesn't look like anyone really cares. There are numerous franken me
Re: (Score:2)
For those who are not aware, please read the OSI definition of Open Source.
For those who are aware, the OSI doesn't have the right to define what Open Source is. They never tried to trademark the words Open Source, only Open Source Initiative. Christine Peterson is lying when she claims to have coined the term, which some of us were using a decade before she says she thought of it.
Open source has a definition
No, it does not. It absolutely does not have one definition. But if it did have only one definition, it would be "you can see the source [archive.org]".
We have the concept of Free Software specifically because Open So
Re: (Score:2)
There was a true open-source recreation of LLaMA called "OpenLLaMA", which is under the Apache 2.0 license. But for some reason, its benchmark numbers aren't measuring up to the actual LLaMA numbers.
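For anyone who wants to reproduce that comparison, a sketch of loading OpenLLaMA with Hugging Face transformers (the `openlm-research/open_llama_7b` repo ID and tokenizer setting are assumptions to verify against the project's own docs):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "openlm-research/open_llama_7b"  # Apache-2.0 OpenLLaMA weights
# The slow (SentencePiece) tokenizer is reportedly safer for OpenLLaMA.
tok = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tok("Q: What is the capital of France?\nA:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```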
we need help (Score:2)
Give us your code!
Fox says no downside to guarding henhouse (Score:2)