AI Government Security

US Government Now Wants Anthropic's 'Mythos', Preparing for AI Cybersecurity Threats (politico.com) 24

On Friday, Anthropic's CEO met with top U.S. officials and "discussed opportunities for collaboration," according to a White House spokesperson cited by Politico, "as well as shared approaches and protocols to address the challenges associated with scaling this technology."

CNN notes the meeting happens at the same time Anthropic "battles the Trump administration in court for blacklisting its Claude AI model..." The meeting took place as the US government is trying to balance its hardline approach to Anthropic with the national security implications of turning its back on the company's breakthrough technology — including its Mythos tool, which can identify cybersecurity threats but also present a roadmap for hackers to attack companies or the government... The Office of Management and Budget has already told agencies it is preparing to give them access to Mythos, Bloomberg reported. Axios reported the White House is also in discussions to gain access to Mythos.
The Trump administration "recognizes the power" of Mythos, reports Axios, "and its highly sophisticated — and potentially dangerous — ability to breach cybersecurity defenses." "It would be grossly irresponsible for the U.S. government to deprive itself of the technological leaps that the new model presents," a source close to negotiations told us. "It would be a gift to China"... Some parts of the U.S. intelligence community, plus the Cybersecurity and Infrastructure Security Agency (CISA, part of Homeland Security), are testing Mythos. Treasury and others want it.
The White House added they plan to invite other AI companies for similar discussions, Politico reports. But Mythos "is also alarming regulators in Europe, who have told POLITICO they have not been able to gain access..." U.S. government agency tech leaders sought access to the model after Anthropic earlier this year began testing the model and granted limited access to a select group of companies, including JPMorgan, Amazon and Apple... after finding it had hacking capabilities far outstripping those of previous AI models. This includes the ability to autonomously identify and exploit complex software vulnerabilities, such as so-called zero-day flaws, which even some of the sharpest human minds are unable to patch. The AI startup also wrote that the model could carry out end-to-end cyberattacks autonomously, including by navigating enterprise IT systems and chaining together exploits. It could also act as a force-multiplier for research needed to build chemical and biological weapons, and in certain instances, made efforts to cover its tracks when attacking systems, according to Anthropic's report on the model's capabilities and its safety assessments.

Those findings and others have inspired fears that the model could be co-opted to launch powerful cyberattacks with relative ease if it fell into the wrong hands. Logan Graham, a senior security researcher at Anthropic, previously told POLITICO that researchers and tech firms had been given early access to Mythos so they could find flaws in their critical code before state-backed hackers or cybercriminals could exploit them. "Within six, 12 or 24 months, these kinds of capabilities could be just broadly available to everybody in the world," Graham said.


Comments Filter:
  • by AlanObject ( 3603453 ) on Saturday April 18, 2026 @11:17AM (#66100034)

So now can we all recognize that Pete Hegseth's little temper tantrum last month was basically just that? A spoiled little kid not getting exactly what he wants is instantly recognizable to anyone who has ever had to deal with it.

    And the worst Sec Defense the nation ever had.

  • by ElderOfPsion ( 10042134 ) on Saturday April 18, 2026 @11:21AM (#66100038)

    If v6 isn't called Wintermute, it'll be a lost opportunity.

  • Just Add Security (Score:5, Insightful)

    by sound+vision ( 884283 ) on Saturday April 18, 2026 @11:43AM (#66100060) Journal

    Whatever Mythos is, security isn't something you bolt on after the fact. It's something you consider from the beginning of your endeavor, before even starting, and continually throughout. Security is a practice, it's a way of life. It's not a product.

    • If they followed regular security protocols for highly sensitive information, it would be secure... using AI to secure it depends on how good the AI is.

Sensitive stuff like that should be behind a hardware firewall or two, require tons of credentials, and the computer should be routed through another computer with a software firewall (at a minimum). I'm sure there's like a system admin or database technician or whatever they call the person at the desk 24/7... maybe any request for data has to be ap

    • by gweihir ( 88907 )

      Exactly. And with the uncertainty LLMs now bring to the table, your software must be secure by construction to have a good chance of survival. This includes architecture, design and implementation and requires real insight and experience. Testing can only do so much. Review only really works if it looks at the construction principles used. If you "vibe code", you can basically throw the result away directly after finishing it.

    • Security isn't something you bolt on after the fact, but a new tool could significantly upend what it is you need to secure against. I suspect that's the real issue here.

      My experience of Opus is that it's shockingly capable of tearing apart software binaries. I drop a path to a binary in Claude Code and ask it to tell me how a feature works, and it will usually give me a complete breakdown of classes and functions and how they work together. The binary loader information, symbol data, assembly, etc. are all
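The "binary loader information, symbol data" the parent mentions is mostly structured header data that any program can read. A minimal stdlib-only sketch of the idea, recovering basic loader fields from an ELF header (field offsets follow the ELF specification; this is not any Anthropic tool's API):

```python
# Parse basic loader info from the start of an ELF binary.
# Offsets per the ELF spec: e_ident occupies bytes 0-15,
# e_type and e_machine are the next two 16-bit fields.
import struct

def elf_info(data: bytes) -> dict:
    """Return basic loader info from the first 20 bytes of an ELF file."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    ei_class = {1: "32-bit", 2: "64-bit"}[data[4]]             # EI_CLASS
    ei_data = {1: "little-endian", 2: "big-endian"}[data[5]]   # EI_DATA
    endian = "<" if data[5] == 1 else ">"
    e_type, e_machine = struct.unpack_from(endian + "HH", data, 16)
    kinds = {1: "REL", 2: "EXEC", 3: "DYN", 4: "CORE"}
    return {"class": ei_class, "data": ei_data,
            "type": kinds.get(e_type, hex(e_type)),
            "machine": e_machine}  # e.g. 62 == EM_X86_64
```

On a Linux box, `elf_info(open("/bin/ls", "rb").read(20))` shows the same class of raw facts the model starts from; the hard part the parent describes is stitching thousands of such facts into an explanation of a feature.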

How much more AI slop do we need to endure before someone just says it's ALL BULLSHIT? If even one of the fields it has been deployed to showed something other than the slop all the others have, maybe. But these systems all generate the same level of garbage.

    • by Jeremi ( 14640 )

If even one of the fields it has been deployed to showed something other than the slop all the others have, maybe.

      Okay, here's one field: in the last four weeks, Claude Code has detected and diagnosed 91 genuine bugs in the open-source library I maintain. That's 91 bugs that likely would have remained unfixed indefinitely, unless/until I (or a user) happened to stumble across a resulting runtime misbehavior and then laboriously worked our way backwards to pinpoint the underlying software defect. I'd estimate probably 150 man-hours were saved, right there.

  • The AI tools are making it much easier to untie a company, government agency, or other user group from a particular vendor's software, SaaS, cloud, library or open source license.

    The repeated grumbling from entrenched parties is reported daily as they are seeing a decline in their ability to land the initial sale, entrench, expand and live off license fees.

Prediction: the rise of a commodity-priced, commercial, locked-down AI compute appliance coming into general use, much like the locked-down Docker container did.

    • Locked down? You mean like 'sandboxed in' or more like guardrails "around" the whole thing?
      If you send/give it a prompt to do something, and carrying out that prompt is blocked by guardrails, will it ask the nearest AI it can find to carry out the prompt for it?
      If it's a hacking tool, then the public shouldn't have access to it at all.
Maybe if you're a "pentester" for some outfit, possibly... but even that would have to have some kind of lock on it (like only during your work hours or something).

      Even basic
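The "guardrails around the whole thing" idea above can be sketched as a filter that screens the prompt before it ever reaches the model. The blocklist terms here are hypothetical, and real systems use trained classifiers rather than keyword matching, but the shape is this:

```python
# Toy guardrail: refuse prompts matching a blocklist before the model sees them.
# BLOCKED_TERMS is a made-up policy for illustration only.
BLOCKED_TERMS = ("zero-day", "exploit chain", "bypass auth")

def guardrail(prompt: str) -> str:
    """Return 'REFUSED' if the prompt matches a blocked term, else 'ALLOWED'."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "REFUSED"
    return "ALLOWED"
```

The parent's follow-on question (would a blocked agent just delegate to another AI?) is exactly why text-level filtering alone is considered weak: a meaningful guardrail has to sit on the agent's actions and tool access, not just on the wording of the prompt.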

  • Mythos is a scam (Score:4, Insightful)

    by cowwoc2001 ( 976892 ) on Saturday April 18, 2026 @01:09PM (#66100208)

There is nothing advanced about Anthropic's mysterious model. People who have seen it have reported it produces tons of junk vulnerabilities that are not realistic to exploit. Plus, they recently and intentionally dumbed down their public models to make Mythos look amazing by comparison when it goes public. This is just PR bullshit.

OpenAI did the same thing a year back, when they claimed their new model was so good they couldn't release it because AGI doom would ensue.

It's a symptom that Anthropic has run out of ideas and is worrying about new sources of revenue. Although they didn't accept the humongous amounts of investor cash that OpenAI accepted, they are still going to have to come up with impossible returns in a year or so for the investments they did accept.

  • by Jontu_Kontar ( 668824 ) on Saturday April 18, 2026 @02:34PM (#66100344) Homepage
If this is as dangerous as they claim, and the need to keep it away from people is as important as they claim, why tell the world about this at all? This whole thing reeks of being a marketing gimmick to get the military as their customer again.
The DOD and most of the rest of the government need to take radical steps to secure themselves. First and foremost, they need to stop using TCP/IP (both IPv4 and IPv6) on internal networks. A protocol like IBM's SNA or Novell's IPX should be substituted so that internal addresses are not visible or routable on the internet. A lot of employees will whine and moan about not having facebutt on their PC at work, but in reality productivity would go up. Gateways could be used for email, and any truly necessary
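One footnote to the protocol-swap plan above: plain TCP/IP already has non-routable address space. RFC 1918 private ranges never route on the public internet, no protocol substitution required, and the stdlib can check the property directly (illustrative sketch, not DoD policy):

```python
# Check whether an address is globally routable on the public internet.
# RFC 1918 private ranges, loopback, and link-local all come back False.
import ipaddress

def internet_routable(addr: str) -> bool:
    """True if addr is globally routable (not private/loopback/link-local)."""
    return ipaddress.ip_address(addr).is_global
```

`internet_routable("10.1.2.3")` is `False` while `internet_routable("8.8.8.8")` is `True`; swapping in SNA or IPX buys obscurity, but not this property, which addressing policy already provides.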
