
US Cyber Defense Chief Uploaded Sensitive Files Into a Public Version of ChatGPT (politico.com)

An anonymous reader quotes a report from Politico: The interim head of the country's cyber defense agency uploaded sensitive contracting documents into a public version of ChatGPT last summer, triggering multiple automated security warnings that are meant to stop the theft or unintentional disclosure of government material from federal networks, according to four Department of Homeland Security officials with knowledge of the incident. The apparent misstep from Madhu Gottumukkala was especially noteworthy because the acting director of the Cybersecurity and Infrastructure Security Agency had requested special permission from CISA's Office of the Chief Information Officer to use the popular AI tool soon after arriving at the agency this May, three of the officials said. The app was blocked for other DHS employees at the time.

None of the files Gottumukkala plugged into ChatGPT were classified, according to the four officials, each of whom was granted anonymity for fear of retribution. But the material included CISA contracting documents (PDF) marked "for official use only," a government designation for information that is considered sensitive and not for public release. Cybersecurity sensors at CISA flagged the uploads this past August, said the four officials. One official specified there were multiple such warnings in the first week of August alone. Senior officials at DHS subsequently led an internal review to assess if there had been any harm to government security from the exposures, according to two of the four officials. It is not clear what the review concluded.


  • by gijoel ( 628142 ) on Wednesday January 28, 2026 @06:13PM (#65955320)
    There's now at least one person in the world who's going to lose their job due to AI.
    • by dskoll ( 99328 )

      LOL. I don't have mod points right now, but for the love of the Internet, mod parent up!!!

    • by 93 Escort Wagon ( 326346 ) on Wednesday January 28, 2026 @06:29PM (#65955354)

      Don't worry, he'll be replaced by an equally-unqualified sycophant.

      • by ffkom ( 3519199 )
        The CEO of the corporation I work for would certainly qualify as successor, with regards to being clueless about both AI and security.
    • by cusco ( 717999 ) <brian.bixby@gm a i l.com> on Wednesday January 28, 2026 @07:16PM (#65955460)

      Depends on how good he is at brown-nosing. If he's good enough he'll end up at the White House.

      • by gtall ( 79522 )

        If he's in that position, we already know he's good at brown-nosing. And getting "caught" means nothing, it is only the alleged administration moving the goalposts further down the Fascist road of Project 2025. Even if their useful idiot goes 'round the bend with Alzheimers, they won't easily give up their cash cow. They'll just AI the hell out of his alleged administration and the hallucinations will be indistinguishable from his regular behavior.

  • by JoshuaZ ( 1134087 ) on Wednesday January 28, 2026 @06:14PM (#65955324) Homepage
    ChatGPT has a version which is FERPA compliant, and high school and university teachers are told explicitly not to put any student names or anything else sensitive into personal ChatGPT or other AI accounts. I don't use the teacher version of ChatGPT, in part because I've never had need of any interaction where an AI would be useful and where a student's name or other identifying info would show up; I honestly struggle to see what the reasonable use cases are in that intersection. (Also, I'm slightly cynical/skeptical about their promises not to use anything from those accounts for new training data.) But the bottom line is that this sort of thing would get stern rebukes, up to being fired, if it happened in a high school environment. And this is the nominal head of cybersecurity for the US government. So the question is: how much of this is part of the Trump administration's general tendency not to hire competent people, how much is the tendency of powerful people (in any administration) to just ignore the rules, and how much is that some people are really into credulously using LLMs and are just idiots?
  • Of course (Score:5, Insightful)

    by Bahbus ( 1180627 ) on Wednesday January 28, 2026 @06:19PM (#65955334) Homepage

    Because despite the fact that this man holds a Bachelor's in engineering, a Master's in Comp Sci, a useless MBA, and a PhD in information systems... he has no clue what the fuck he is doing. He probably never actually earned those degrees legitimately. He certainly hasn't done any work that proves his knowledge.

  • by taustin ( 171655 ) on Wednesday January 28, 2026 @06:23PM (#65955338) Homepage Journal

    I know a lawyer whose firm has just updated their formal policy on the use of AI. It used to be "Don't." Now, they're allowed to use one, but only if the firm higher ups have specifically approved it. There are none approved, and apparently none available that meet their requirements.

    Interestingly, their issue isn't that AIs make up case law [calmatters.org] (they're not overworked, and aren't idiots, so they know they'd have to carefully vet anything produced), but that, to be useful, a query would have to submit to the AI engine client information they are legally required to keep confidential, and that information would be recorded and used to generate future answers for other users.

    • by 2TecTom ( 311314 )

      real firms have private models

      • by taustin ( 171655 )

        I suspect there's more to their requirements than just that, but that was the biggest concern, apparently.

        • by 2TecTom ( 311314 )

          My point remains valid, most transnational corporations have their own private AIs.

          AI isn't the problem, all the classism and corruption are our real problems.

          All the unethical upper class people are wrecking everything for everybody and soon they'll destroy this civilization as they have many others. Greed is often our downfall.

    • How is this any different from lawyers using Microsoft Office 365, or similar?
      • How is this any different from lawyers using Microsoft Office 365, or similar?

        THIS ALL DAY. Any software which makes AI recommendations is simply not safe to use with confidential information. And even if you don't suspect malice on the publisher's part (as one reasonably does with Microsoft because they have constructed the world's worst spyware and a license agreement to match which says they can exfiltrate any of your data for basically any purpose, on a whim, without permission) you have to be concerned about incompetence.

        Microsoft has had multiple breakins to Azure where THEY HA

      • by taustin ( 171655 )

        They've had a policy (the same, I'm sure) on that for much, much, much longer.

        Other than that, it really isn't.

        (A lot of lawyers don't use Office at all, because there are other word processors that got into the business of legal templates way back, and lawyers, like most users, don't like change, and the templates they have aren't for Office.)

  • Very rarely do you see the people at the top having real knowledge of their field. Of the people I know in a CTO or CISO role, I actually can't name any who are qualified or educated in security matters, though I'm sure it's not 0. I've been in meetings where a CTO-level person will complain that 2FA is slowing down the login process, so we need to remove it. I've been in a meeting with a CISO where I was told (paraphrased): “Don't send PGP keys with your emails, they're scaring the clients.”
    • “Remove all IP based filtering on the RDP connections on the firewall, it's too difficult to update that stupid field.”,

      IP filtering on RDP saved me, and a business system I look after, when CVE-2019-0708 (BlueKeep) arrived. I love filtering by IP.

      That guy who wanted it removed seems like a knobhead.
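      For anyone curious what that kind of filtering looks like in practice, here is a minimal iptables sketch of an IP allowlist for RDP. The 203.0.113.0/24 range is a placeholder (a documentation network), not anything from the story; substitute your actual management subnet.

```shell
# Allow RDP (TCP 3389) only from a trusted management range.
# 203.0.113.0/24 is a placeholder documentation network -- use your own subnet.
iptables -A INPUT -p tcp --dport 3389 -s 203.0.113.0/24 -j ACCEPT

# Drop RDP from everywhere else, so an exposed service (e.g. one
# vulnerable to BlueKeep, CVE-2019-0708) is unreachable to Internet scanners.
iptables -A INPUT -p tcp --dport 3389 -j DROP
```

      The same allow-then-drop pattern works for any admin-only service; the key design choice is that the default is DROP, so forgetting to add a rule fails closed rather than open.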

  • by dskoll ( 99328 ) on Wednesday January 28, 2026 @06:50PM (#65955392) Homepage
    We have the best incompetence, frankly. Powered by AI... good old American AI. ChatGPT is not Chinese, you know... it's American. Madhu had some terrific secrets. Really terrific. And he put them on an American AI. Not a Chinese AI. Talking of China, I think I need to put tariffs on China. I buy American dishes and plates... terrific quality. Don't need any "China". Only those leftist lunatics use "China". Paper Big Mac containers for me. They're American. Terrific American food in American-made containers. I'm not mad at Madhu. I should be, you know. Mad-who? It's even in his name. We'll call him Messed-up Madhu... [etc etc etc]
  • What else is new?

    Interestingly, copy & paste is now considered the most problematic data leakage vector by many security experts.

  • What is this bullshit. The article can't even get the source material correct.

  • by ledow ( 319597 )

    It's an increasing trend that people "just don't care" about data security.

    Whether that's simple things like copyright, or things like GDPR, or things like trusting third parties on the other side of the world that knowingly misuse data while operating in foreign legal jurisdictions.

    It's laziness, not a protest. They just don't care about people's data because they've grown up in a world where nobody seems to care about their data.

    You see it in new-hires all the time. Just copy-pasting shite o

  • by DarkOx ( 621550 )

    He
    1) Asked for and received special permission
    2) Did not upload anything classified to a non-classified system
    3) Did upload materials not meant for public distribution, but did not distribute them publicly; he simply used an unclassified system to handle unclassified data. It is not like he posted it on reddit. I don't see what would be different than if he'd pasted the text into Google Docs or Word 365 to make some edits.

    I don't really like it, I don't think it was great judgment but I also don't see what exactly w

    • by pavon ( 30274 )

      > I don't see what would be different than if he'd pasted the text into Google Docs or Word 365 to make some edits.
      Government employees are prohibited from using those public cloud services for OUO as well. There are separate instances of some of these services, like Office 365, which can be used for OUO, but they are kept separate for defense in depth, since these services can have bugs that allow people to access documents they should not be allowed to.

      Furthermore, it is worse because the TOS for ChatGPT

  • Nobody ever reads the whole document anyway, and ChatGPT is probably the one making the choice of who gets the contracts, so it's just an example of someone passing the information along to someone else (ChatGPT) of MUCH higher intelligence.
  • Storage in the public toilets at Mar-a-Lago.

"The trouble with doing something right the first time is that nobody appreciates how difficult it was." -- Walt West
