AI Google Security

Google Says Hackers Used AI To Create Zero Day Security Flaw For the First Time (politico.com) 24

Google says it has seen the first evidence of cybercriminals using AI to create a zero-day vulnerability. "Google reported its findings to the unnamed firm affected by the vulnerability before releasing its report," reports Politico. "The company then issued a patch to fix the issue." From the report: Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they are not detected by security companies and have no known fixes. The report noted that this was the first time Google had seen evidence of AI being used to develop these vulnerabilities -- marking a major change in the cybersecurity landscape, as it suggests newer AI models could be used to create major exploits, not just find them.

Google concluded that Anthropic's Claude Mythos model -- which has already found thousands of vulnerabilities across every major operating system and web browser -- was most likely not used to create the zero-day exploit. [...] The Google Threat Intelligence Group report also details efforts by Russia-linked hacking groups to use AI models to target Ukrainian networks with malware, while North Korean government hacking group APT45 used AI technologies to refine and scale up its cyber methods.
John Hultquist, chief analyst at Google Threat Intelligence Group, said the findings made clear that the race to use AI to find network vulnerabilities has "already begun."

"For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks."


  • by karmawarrior ( 311177 ) on Monday May 11, 2026 @01:01PM (#66138558) Journal

    ...if the AI hadn't hallucinated the functions "strcpy_unsafe" and "setuid_evenwhennotroot".

    • Okay, I think your FP is sort of funny and deserves the mod you were going for, but I was looking for the other joke of the revised Subject.

      Not laughing, but I think we are living in the biggest house of cards ever. So much awful software and we are so dependent on it. If anyone did have an ASI that was capable of finding every bug, then that person could pwn the world faster than any human-mediated responses.

      Pretty sure it hasn't happened yet, but if the ASI was sufficiently "super", then how would I (or y

      • The ASI-angle is actually very interesting here: One of the ways it could dominate mankind is via a stranglehold on key infrastructure through exploiting vulnerabilities in software. We seem to be moving into a state where all software will have been quite thoroughly vetted by ever more capable AI models.

        It might be that all the exploitable bugs will have been removed by the time ASI arrives. It's a pretty big hypothetical (ASI being ASI), but it will certainly not be ASI against the Swiss cheese that curre

        • by shanen ( 462549 )

          Basically the ACK, but I'm also concerned with the kinetic threats from the control of robots by ASI. You must have seen some of the recent videos of the Chinese humanoid robots dancing? Also the bipedal robot that set the new record for the half-marathon?

          • by dinfinity ( 2300094 ) on Monday May 11, 2026 @02:45PM (#66138758)

            Yes, there are definitely many potential issues still left and some of them are moving faster than I previously expected.

            The efforts of Ukraine in their defense against Russia's aggression are valiant and heartening in the context of that war, but with it they are also stepping very hard on the dystopian pedal of creating autonomous, effective, easily and cheaply produced, killer aerial and ground drones. We're not quite at full robot wars yet, but we are close, with UGVs with guns mounted on them performing assaults without any human supporting units. UAVs with shotguns are also a thing (although mainly used air-to-air). Last-mile AI targeting is under heavy development. Full robot on robot wars are years, not decades away.

            The bipedal bots are also advancing way faster than I thought they would. The Unitree bots are simply incredible when it comes to agility, easily surpassing humans in a bunch of disciplines where manual dexterity isn't required. Having said that, those currently can't deal with much payload. I believe it's well below 10 kg, and even then the durability of the joints is quite questionable. The capabilities of the quadruped bots like the Unitree B2 and the wheeled B2-W are really scary though: fast, agile (even on very rough terrain), and capable of carrying serious payloads.

            Imagining an ASI being able to gain control of armies of bots is harrowing. The main blocker currently would be the lack of automation of the production of more of the bots, which is undoubtedly not going to be around for a long time, alas.

      • A house of cards is robust. Uptime is four nines.
  • VERY sloppy (Score:5, Insightful)

    by XanC ( 644172 ) on Monday May 11, 2026 @01:08PM (#66138568)

    They aren't creating the vulnerabilities, they're finding them and creating exploits.

    • by 0123456 ( 636235 )

      Are you sure? AI coding is creating plenty of security vulnerabilities to find in the future. Like all those websites with no authentication that were mentioned here a few days back.

      • I'm pretty sure vibe coding is going to create any number of new vulnerabilities, but it's not quite clear to me from the summary what was the role of AI in this case. Was the vulnerability found in code developed by AI or was it found by AI in code developed in an unspecified manner?

        Both of those issues are concerning, on different time scales. On a medium to long term, I expect we'll be flooded with vibe-developed apps, with all kinds of issues. The second issue is more concerning for the short term. I'm

    • by Monoman ( 8745 )

      This... did @BeauHD even bother to read the submission?

  • On the contrary, evidence seems to suggest that AI has long been very good at generating zero-day vulnerabilities. It has a little more trouble with identifying and avoiding them.

  • by MpVpRb ( 1423381 ) on Monday May 11, 2026 @01:34PM (#66138616)

    This is common in "vibe coded" slop produced by the clueless.
    It should have been written, "Discover Zero Day Security Flaw"
    I wonder if the headline was written by AI?

    • Not every instance of slop is AI.

      Just like not every instance of fraud is a Ponzi scheme.

      Know your taxonomies!

  • by noshellswill ( 598066 ) on Monday May 11, 2026 @01:36PM (#66138620) Homepage
    Pardon my English ... please ... but cyber-criminals do not create vulnerabilities ... they discover them. You know, like discovering a camel-turd in a basket of dates. However, you create a stuffed date by inserting a dried apricot.
  • They're creating security flaws just like the Starship Enterprise goes around creating new worlds... which I guess could be accurate if they have a Genesis Device.

    • They're creating security flaws just like the Starship Enterprise goes around creating new worlds... which I guess could be accurate if they have a Genesis Device.

      So... creating Strange New Worlds. :-)

  • ... how long it would be until someone decided to do this.
    Now that everyone uses some AI for stuff, this'll just end up pitting AI against AI; one finds an exploit and uses it, the other patches that exploit, the attack AI finds another, the patch AI patches it, and so on, faster than you can blink.
    The nice, new version of whatever it was (say Chrome) that works totally fine is now so patched up that it's impossible to uninstall and on opening it (the 'close' button is gone), it won't let you do anything on

  • They used AI to, at most, identify and exploit the flaw.

    In any event, is it still possible, in 2026, to exfiltrate passwords and other secret data as ping response packet padding like we used to do with AT&T Unix systems back in 1995?
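    The covert channel the comment alludes to still works in principle: the data bytes after an ICMP echo header are carried (and echoed back) verbatim, so anything can ride in that "padding." A minimal sketch of building such a packet in Python, assuming a hypothetical payload; actually sending it needs a raw socket and elevated privileges, which this sketch deliberately omits:

    ```python
    import struct

    def inet_checksum(data: bytes) -> int:
        # RFC 1071 Internet checksum: sum 16-bit words, fold carries, complement.
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack(f"!{len(data) // 2}H", data))
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def icmp_echo(ident: int, seq: int, padding: bytes) -> bytes:
        # Type 8 (echo request), code 0. Everything after the 8-byte header
        # is arbitrary payload that the remote host echoes back unchanged,
        # which is what makes it usable as a covert channel.
        header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
        csum = inet_checksum(header + padding)
        return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + padding

    # Hypothetical smuggled secret, null-padded to a ping-like payload size.
    pkt = icmp_echo(0x1234, 1, b"hunter2\x00" + b"\x00" * 24)
    ```

    A receiver on the wire only has to strip the first 8 bytes of the ICMP message to recover the data, which is why modern IDS rules flag echo payloads that aren't the stock ping pattern.
    
    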

  • I've only seen Anthropic's Mythos or Claude result in a total of THREE actual AI exploits for issues. "Thousands" is the claim by Anthropic and it's nearly completely unsubstantiated despite what they *say*. They've released three actual exploitable issues (all LPEs, other than the FreeBSD NFS issue). So, color me completely unimpressed and assuming they are lying their asses off. If they truly have found that much, why haven't they disclosed them to the dev teams for fixes? I assert that it's because they
    • by HiThere ( 15173 )

      You might ask Firefox. They've reported a few. I think most companies aren't reporting them...which is what should be expected.

      • They claim to have found like 178 bugs with Mythos, but they don't say how many of them were security-related or how many were exploitable. When it comes to OS bugs, they have found THREE. Exactly fucking "3". Two LPEs and one RCE. Not great for something claiming to be earth-shattering and monumental with "thousands" of bugs discovered. They cannot even substantiate over 250, even including the nebulous Mozilla "bugs" they found. So... again... I call bullshit. Show your cards or be called a lying fraud, An
  • then Bash and cURL are the most intelligent AI tools in modern times.

