Microsoft Says AI Can Create 'Zero Day' Threats In Biology (technologyreview.com)
An anonymous reader quotes a report from MIT Technology Review: A team at Microsoft says it used artificial intelligence to discover a "zero day" vulnerability in the biosecurity systems used to prevent the misuse of DNA. These screening systems are designed to stop people from purchasing genetic sequences that could be used to create deadly toxins or pathogens. But now researchers led by Microsoft's chief scientist, Eric Horvitz, say they have figured out how to bypass the protections in a way previously unknown to defenders. The team described its work today in the journal Science.
Horvitz and his team focused on generative AI algorithms that propose new protein shapes. These types of programs are already fueling the hunt for new drugs at well-funded startups like Generate Biomedicines and Isomorphic Labs, a spinout of Google. The problem is that such systems are potentially "dual use." They can use their training sets to generate both beneficial molecules and harmful ones. Microsoft says it began a "red-teaming" test of AI's dual-use potential in 2023 in order to determine whether "adversarial AI protein design" could help bioterrorists manufacture harmful proteins.
The safeguard that Microsoft attacked is what's known as biosecurity screening software. To manufacture a protein, researchers typically need to order a corresponding DNA sequence from a commercial vendor, which they can then install in a cell. Those vendors use screening software to compare incoming orders with known toxins or pathogens. A close match will set off an alert. To design its attack, Microsoft used several generative protein models (including its own, called EvoDiff) to redesign toxins -- changing their structure in a way that let them slip past screening software but was predicted to keep their deadly function intact. "This finding, combined with rapid advances in AI-enabled biological modeling, demonstrates the clear and urgent need for enhanced nucleic acid synthesis screening procedures coupled with a reliable enforcement and verification mechanism," says Dean Ball, a fellow at the Foundation for American Innovation, a think tank in San Francisco.
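For the curious, here is a minimal sketch of how this kind of sequence screening can work in principle. Everything in it is hypothetical: the k-mer size, the similarity threshold, and the toy toxin database are made up for illustration, and real vendor pipelines are more sophisticated (typically alignment-based search against curated databases of sequences of concern).

KMER = 20  # window length for comparing subsequences (hypothetical value)

def kmers(seq, k=KMER):
    """Return the set of all length-k substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq, toxin_db, threshold=0.1):
    """Flag an order whose k-mer overlap with any known toxin is too high."""
    order_kmers = kmers(order_seq)
    for toxin_seq in toxin_db:
        toxin_kmers = kmers(toxin_seq)
        if not toxin_kmers:
            continue
        overlap = len(order_kmers & toxin_kmers) / len(toxin_kmers)
        if overlap >= threshold:
            return True  # close match: raise an alert for human review
    return False

# A made-up "toxin" sequence; an order containing a fragment of it is flagged.
toxin_db = ["ATGGCTAGCTAGGATCCGATCGATCGGGGCCCAAATTT" * 3]
print(screen_order(toxin_db[0][10:90], toxin_db))  # True

The sketch also makes the attack surface visible: redesign the toxin so its DNA shares too few exact subsequences with anything in the database, and the overlap score falls below any threshold even though the encoded protein may keep its function.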
One word. (Score:1)
Bad idea (Score:2)
Re: (Score:2, Funny)
...or perhaps the manufacturers will start to check the proteins they are building a little more closely, and maybe push them through their own LLM and ask 'is this a toxin?', and build a database of similar proteins/analogs/alternatives to STOP that (a rough sketch of that idea follows below).
Obviously someone at Microsoft thought this up; you're living in a bubble if you think 'evildoers' are somehow less intelligent than white hats.
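To make the parent's idea concrete, here is a rough sketch, with everything hypothetical: a vendor keeps a database of embeddings of known toxins plus AI-generated analogs, and holds any order whose protein embedding lands too close to one of them. The random vectors below stand in for a learned protein representation, which is the hard part.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: one 128-d embedding per known toxin or generated analog.
toxin_db = rng.normal(size=(1000, 128))
toxin_db /= np.linalg.norm(toxin_db, axis=1, keepdims=True)

def looks_like_known_toxin(embedding, threshold=0.95):
    """Flag if cosine similarity to any stored analog exceeds the threshold."""
    embedding = embedding / np.linalg.norm(embedding)
    return bool((toxin_db @ embedding).max() >= threshold)

query = toxin_db[42] + rng.normal(scale=0.01, size=128)  # near-copy of an analog
print(looks_like_known_toxin(query))  # True -> hold the order for review

Of course, as the reply below notes, this only guards against what is already in the database; the open question is whether the set of analogs can ever be complete.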
Re:Bad idea (Score:4, Insightful)
That scheme won't stop the wrong people, or a government (more likely), from putting together several different proteins for something really bad. I don't think there is a fix for this, as it becomes an arms race. Put another way, there is no closed-world hypothesis for these compounds. We can only guard against what we know; what we do not know is still out there.
Already been done (Score:1)
AI Mengele (Score:2)
If we were ever wondering whether, in the pursuit of security, one can actually create insecurities, red-teaming a fucking DNA bank to see if AI Mengele is better than Josef ever was certainly qualifies. Microsoft should take a lesson on keeping shit classified. This is the kind of story we should only read about in a CDC history book years after the fact. And this is why:
even now, it’s not foolproof, they warn.
Way to put a spotlight on AI Mengele future goals. If we’re so hell-bent on converting AI to AGI, perhaps we should try and r
Re: (Score:2)
So you'd rather this research NOT be done, and let the people who have ALREADY thought of doing this keep doing it, and not warn the actual manufacturers? They can at least get ahead of this and start to realize "Oh, this unknown protein is an analogue with the same properties as toxin X". If you're scared of this, you haven't been paying attention to everything else LLMs have been used for and think this is novel and unique, when this is NOT the first time this has been done.
The nuclear weapon. VX gas. There are a few things we humans really wish we could un-invent.
Speaking of potential harm, does anyone know if the more detailed red-team data has been hacked already, or did we not even think about securing that deadly research well enough to not publicize a fucking blueprint of far worse?
Some research is necessary. Not all research should be made public for clicks, for fuck's sake. This is one of them. They’ve welcomed the monster by publishing this target of oppor
Re: (Score:3)
The nuclear weapon.
Let's use this as an example. I completely understand what you are saying. I sure as hell share your concerns. But I have no idea how we stop it. At what point do we say "Not allowable for research!"? Making it impossible to do any research that might result in a nuclear weapon would require the elimination of atomic physics and astrophysics. Simple things like determining how stars obtain the massive amount of energy they use lead one to how the atom works. Basic research into particle physics would have
Re: (Score:2)
The largest differentiator between then and now is understanding how highly classified a lot of research was after they realized how immense the potential harm could be.
Tends to highlight my point around the sheer stupidity of doing this research and then making it public, while leaving the door cracked open for further abuse.
Can we prevent innovation driven by curiosity? No. Nor should we. But should everything be in the public domain, because World Wide Web? No, not really. For all the same reason
Re: (Score:2)
The largest differentiator between then and now is understanding how highly classified a lot of research was after they realized how immense the potential harm could be.
Tends to highlight my point around the sheer stupidity of doing this research and then making it public, while leaving the door cracked open for further abuse.
Speaking of classified information, we knew the Germans knew about the potential of fission - and they were ahead of us for a while, having achieved the first fission in Germany in the late 1930s. So there was no question that the German government of the 1930s was going to pursue this as a weapon. Knowing this, we did our best to impede the German effort. Fortunately we got the results first. In retrospect, the basic concepts of fission are simple. Uncontrolled fusion is not quite so simple, but nothing a high schooler
Re: (Score:2)
Not really. It really *IS* a danger. IIUC, however, most of the things it currently predicts don't really work. (This is also true for professional organic chemists!) So it's going to be expensive to implement... and require a bit of research (i.e., actually building stuff and testing it).
OTOH, bacteria have been doing this research as long as they've been around. So don't immediately freak out. (Most of our fancy medicines are just tweaks on the bacterial research projects.)
Marketing nonsense (Score:2)
So, according to Microsoft, AI can make it easier to buy toxins from commercial DNA vendors. Who is that supposed to appeal to? 'If you're a doomsday cult or a terrorist organisation wanting to develop bioweapons, buy our AI! It's the best at getting past security screening procedures!' *two thumbs up and a cheesy grin*.
Really though, if someone wanted to design entirely new proteins on the basis of existing toxins, test those proteins, and possibly be able to mass-produce them, that implies an amount of technn
Re: (Score:2)
So Microsoft marketing seems to be resorting to cheap scare tactics in another desperate attempt to plug its AI. Apparently they'd rather people believed that AI is scary as opposed to just useless and overhyped.
While I agree with you that this should not have been marketed and published in the very public manner it was, don’t assume AI isn’t dangerous. It absolutely is.
Is a nuclear-tipped arsenal merely a tool, or is it a dangerous weapon in your mind? Remember your answer will define what AI is.
Re: (Score:2)
As I've mentioned before, they are just at the 'our AI is so powerful it might be too dangerous for us to release...' phase of the hype cycle. After this they go on to 'we know how to solve general intelligence... we've figured it out... everything is about to change'.
It's a tick-tock media cycle that they just bounce between. Sam Altman is the inventor. MS is a fast learner.
Re: (Score:2)
100% agree. Doomsday fear-based hype is clever.
A small distraction once you realize-- oh yeah, this shit still doesn't... even... work.
Re: (Score:3)
Maybe Microsoft should fix their own crap? (Score:2, Interesting)
They have now had two compromises affecting all of their cloud users. First, from 2021-2023 all Exchange Online emails were exposed, and they did not notice it themselves and still do not know how the attackers got in. And recently a full Azure compromise where they know nothing because they have no logs. Nobody might have used that vector, or somebody might have placed backdoors copiously. If this were professionally run infrastructure, Azure would now need to be fully reinstalled ...
Synthetic biology threats are one thing, but not bei
Much Ado! (Score:2)
I don't want to play THEATERWIDE BIOTOXIC AND CHEM (Score:2)
I don't want to play THEATERWIDE BIOTOXIC AND CHEMICAL WARFARE
let's play global thermonuclear war
Won't be AI, but a racist (Score:2)
Someday we will have a machine that can sample a cancer and build a virus that will kill all cells that have the specific gene that makes it cancerous.
And someone will decide, hey, how about the gene for epicanthic folds (Asian eyes). Or the gene for blond hair. Or the gene for nappy hair.
Now that is going to be a real racist nightmare.
If we are very lucky, the first one will be incompetent and will end up giving people a cold rather than a grave. After that, perhaps we can fund an agency or two to police this
MS Can't Even Make an OS that Anyone Wants (Score:1)