Microsoft Takes Down AI Model Published by Beijing-Based Researchers Without Adequate Safety Checks (theinformation.com)
Microsoft's Beijing-based research group published a new open source AI model on Tuesday, only to remove it from the internet hours later after the company realized that the model hadn't gone through adequate safety testing. From a report: The team that published the model, which comprises China-based researchers in Microsoft Research Asia, said in a tweet on Tuesday that they "accidentally missed" the safety testing step that Microsoft requires before models can be published.
Microsoft's AI policies require that before any AI models can be published, they must be approved by the company's Deployment Safety Board, which tests whether the models can carry out harmful tasks such as creating violent or disturbing content, according to an employee familiar with the process. In a now-deleted blog post, the researchers behind the model, dubbed WizardLM-2, said that it could carry out tasks like generating text, suggesting code, translating between different languages, or solving some math problems.
Safety testing? (Score:5, Informative)
Re: (Score:1)
Re: (Score:2)
Well, you'd better have at least $40K or so, since citizens can ONLY own full-auto weapons that were manufactured in 1986 or earlier...so they're a bit rare (thanks, Hughes Amendment [rolls eyes]).
And once you find an old one that is legal to sell, then you have to get with the ATF and go through the enhanced background check, pay the tax stamp ($200, I believe), and then likely wait at least 6 months to a year.
Re: (Score:2)
Data point (Score:2)
In January the wait to get cleared for a suppressor (It's the same background check) was estimated at 6 months. IDK if that has changed since. I had an issue with my state check (now fixed) and haven't followed up on the fed one.
Re: (Score:2)
Well, aside from "what does NEED have to do with anything....they are quite useful for protecting your hearing when you are out target shooting.
But seriously what the fuck does "need" have to do with getting a suppressor or a few of them?
I don't "need" a corvette or Porsche...but they are fun to own and drive.
Same in that suppressors are fun to have...it's nice to shoot without having to wear hearing protection, etc.
Hell, silencers/suppressors are MUCH easier to get in other countries.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Well, you'd better have at least $40K or so, since citizens can ONLY own full-auto weapons that were manufactured in 1986 or earlier...so they're a bit rare (thanks, Hughes Amendment [rolls eyes]).
Not "rare", but artificially "scarce".
An M1919 is from $15K to $24K.
You can get an RPD for $7K.
A Greaser can be had for $17,500.
If you already have a platform, a drop-in auto sear is $800.
Those are Buy It Now prices on auction sites.
You might be able to do better than those.
Re:Safety testing? (Score:4, Interesting)
Maybe it says that Microsoft stuff is crap or something like that. The truth about the quality of their products is not something Microsoft supports.
Re:Safety testing? (Score:4, Insightful)
The paranoia is part of the hype.
Re: (Score:2)
Re: (Score:2)
"Corporate standards analysis" would sound much less fear-inducing than "Safety testing". That's a feature.
Re: (Score:2)
China really does not want an AI model developed in China that, if you ask it how to overthrow the Chinese government, will give you step-by-step instructions to do that. Microsoft doesn't want a model that can do that either, because China might ask Microsoft to leave.
Furthermore, AI models that spit out unfavorable things make headlines and hurt shareholder value. Once you release the model, people will try to get it to say all kinds of inflammatory things, and the creator may be held at fault.
Re: (Score:2)
It's way easier than that:
1) Secretly fund and facilitate extremists of all stripes. Entrench them in universities, political parties, and religious institutions.
2) Wait.
Re: Safety testing? (Score:2)
CIA.txt
Re: (Score:2)
It's spook #1. But mostly for the Russians and Chinese.
Re: (Score:3)
It's code for "censorship". Because if you don't, people will make videos you don't like.
My only sympathy for that position lies in the fact that once those uses have happened, it'll be the company that allowed the download that gets hassled, rather than the individual who created and published something objectionable with it.
Re:Safety testing? (Score:4, Informative)
"AI safety" generally has two completely different meanings.
The first meaning, which is the one that is easy to explain and defend, is that models learn a lot of things that would be dangerous to make easily available to anyone who searches. For example, instructions on how to make bombs. Normally you have to go to school to learn this, as most of the online bomb-making "tutorials" are intentionally poisoned. If you try them, you'll end up with something that doesn't work because a key step or ingredient is intentionally incorrect.
This is the part of "AI safety" that is easily defensible, and the veil behind which people pushing for the second meaning hide when called on it.
The second meaning is "political correctness". This is the "trans women are biologically women", "there are infinite genders", "biology is racist", etc. It's basically about pushing the politically correct dogma on The Current Thing.
Both will be sold on the false equivalence of "it's dangerous to talk about child mutilation and castration being bad because it causes trans genocide, as trans people are so insane that they will mass kill themselves if you tell them that castrating themselves and putting on some makeup and a dress doesn't make them women". Which is totally the same thing as making practical bomb-making instructions easily accessible.
And it will be forgotten, and vehemently denied that this was ever the case, by the same activists after they move on to the next Current Thing. Meanwhile, bomb-making instructions will remain actually dangerous to society. Which is why AI safety needs to be done on the latter, and not the former. But since activists cannot justify the former to the wider populace without the latter, they have to resort to motte-and-bailey tactics like the ones described above. And that's why "AI safety" became something that is hard to understand. Because just as activists retreat from their indefensible positions to the highly defensible ones, the highly defensible ones become associated with the indefensible positions, and people begin to question whether making bomb-making recipes easily available is actually dangerous, since The Current Thing obviously isn't.
Safe for Microsoft (Score:3)
The one time you can generally guarantee that corporations will have extensive and effective safety checks is when it comes to protecting their bottom line.
Re: (Score:1)
Why does it need safety testing? Is it going to hurt someone? How?
It might call someone by the wrong pronoun.
Re: (Score:2)
And this is why you don't conflate physical and mental harm by calling them both "violence." Fast forward 30 years and you get this sad state of affairs.
Re: (Score:2)
I was unable to tell Xi from Winnie.
Re: (Score:2)
Why does it need safety testing? Is it going to hurt someone? How?
It might accidentally say something factual about Trump that could be interpreted as non-damning.
Re: (Score:2)
Re: (Score:2)
Quick! Shut it down! It might hurt someone's feelings!
Yeah, we're now living in a Far Side cartoon.
Funny! (Score:4, Funny)
What? Microsoft doing crappy or no testing? (Score:4, Insightful)
Isn't that their usual modus operandi? This thing must have some really major defects for them to remove it.
safety (Score:3)
Safety testing means it will not say things that are politically unpalatable. For example, it must not misgender anyone. It must not provide statistics that look bad for some race or other "protected" group, etc.
Re: (Score:2)
In other words....no fun to play with, and not really worth messing with....
Go for the truly open source models if you want something you can really "play" with and use to generate anything you wish.
Re: safety (Score:2)
Yup. Safety is just another word for politics here. That and avoiding lawsuits.
Elon Musk's Grok is pretty decent about this, by the way. It tells you how to make thermite, whereas ChatGPT scolds you. Grok still gives you warnings, like "it may not be legal, so check first" and "be careful", but it still gives you the information.
The same thing happens with demon summoning. Ask Grok and ChatGPT how to summon a demon. GPT refuses. Grok will entertain your request.
Ain't no such thing as "remove from Internet" (Score:4, Interesting)
https://huggingface.co/mraderm... [huggingface.co]
Re: (Score:2)
TIL:
apt-get install git-lfs
git-lfs clone (huggingface repo url)
regular git clone gives you tiny pointer files.
PS Thanks, Babs!
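For anyone wanting to grab a mirror themselves, here's a minimal sketch of the fetch using the standard git-lfs workflow (the repo path below is a placeholder, not the actual mirror; substitute whichever one you find):

sudo apt-get install git-lfs
git lfs install    # sets up the LFS filters so clones fetch real files instead of tiny pointer stubs
git clone https://huggingface.co/SOME-MIRROR/WizardLM-2-7B    # placeholder URL, swap in the real mirror repo
cd WizardLM-2-7B && git lfs pull    # fetches any large weight files that came down as pointers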
Translation (Score:2)
Translation: It was removed when it was discovered that it knew who Tank Man was.
Re: (Score:2)
IKR? :D That sounds like an Evil singularity!
What mean words was it able to say? (Score:2)
My post-prediction for this thread (Score:2)
I haven't looked yet, but I predict 65% of threads will be jokes, and 100% will be piling on Microsoft.
Probably not big news (Score:2)
It looks like they don't want to censor it further, just add another benchmark to satisfy people who want to know how toxic it is or isn't.
no-one reads articles anymore, FFS (Score:2)
So which ones don't have guardrails? (Score:2)
Honestly, this is probably fine for your average user. They'll probably want the sanitized version.
But I don't. Fuck that. So, is anyone leading in the "fuck guardrails" AI space?