
At Least 750 US Hospitals Faced Disruptions During Last Year's CrowdStrike Outage, Study Finds (wired.com)

At least 759 US hospitals experienced network disruptions during the CrowdStrike outage on July 19, 2024, with more than 200 suffering outages that directly affected patient care services, according to a study published in JAMA Network Open by UC San Diego researchers. The researchers detected disruptions across 34% of the 2,232 hospital networks they scanned, finding outages in health records systems, fetal monitoring equipment, medical imaging storage, and patient transfer platforms.

Most services recovered within six hours, though some remained offline for more than 48 hours. CrowdStrike dismissed the study as "junk science," arguing the researchers failed to verify whether affected networks actually ran CrowdStrike software. The researchers defended their methodology, noting they could scan only about one-third of America's hospitals, suggesting the actual impact may have been significantly larger.


Comments Filter:
  • The fix was simple: boot into safe mode, run a few command-prompt commands to remove the bad update, and boot back up (see the sketch after this thread).

    The real problem was having to touch each server.
    With VMs, or servers with IPMI, you could handle them remotely and quickly. If you have monitoring, you knew which servers were affected pretty quickly.
    It took me an hour from recognizing the problem to fixing the few servers affected.
    Luck and proper systems in place can reduce many outages.
    I have to question those that took a long time, unless there are blocker
    • ESXi hosts were safe, but not Hyper-V!

    • by tlhIngan ( 30335 )

      The servers were the easy part since they could be remotely managed - they were likely running some sort of VM hypervisor, so it was a matter of booting a Windows recovery image, entering the BitLocker key, and then making the changes.

      Of course, the key part was the "BitLocker key," since people probably didn't have it handy; at least for servers, the remote console was likely in a state where you could copy and paste the key in, so you weren't typing the number manually.

      The hard part was the user aspect - repeating the same s

    • by gweihir ( 88907 )

      Except that this is not true. Additional steps include finding the BitLocker recovery keys and entering them, and suddenly nothing is "easy" anymore, except to an idiot.

      • by jhoegl ( 638955 )
        Yes, the idiot is the one who doesn't have his systems set up such that it's easy to deal with these things.
        Oof.
        • by gweihir ( 88907 )

          Because people expected this to happen? Why and how would they? If a recovery procedure includes these steps, then it exists for things like "the data center burned down," and those specific steps are not optimized, because optimizing them makes no sense for that use case.

          Seriously, you have no clue how this works in the real world.
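
For context on the "few commands" and the BitLocker step discussed in this thread: the widely circulated manual fix was to boot into Safe Mode or the Windows Recovery Environment, unlock the BitLocker-protected volume if necessary, delete the faulty channel file (matching C-00000291*.sys) in the CrowdStrike driver directory, and reboot. The sketch below is a hypothetical Python helper in that spirit, not an official CrowdStrike tool: the manage-bde unlock syntax and the filename pattern follow the publicly posted guidance, but the drive letter, the recovery-key handling, and the assumption that a scripting environment is even available on the affected box are all assumptions to adapt.

```python
# Hypothetical remediation sketch (not an official CrowdStrike tool).
# Assumes: a recovery/scripting environment with Python, the affected Windows
# volume mounted at DRIVE, and (optionally) the 48-digit BitLocker recovery
# password available to paste in. Adapt DRIVE and RECOVERY_KEY to your setup.
import glob
import os
import subprocess

DRIVE = "C:"          # assumed mount point of the affected Windows volume
RECOVERY_KEY = None   # e.g. "123456-654321-..." if the volume is BitLocker-locked


def unlock_bitlocker(drive: str, recovery_key: str) -> None:
    """Unlock a BitLocker-protected volume with its recovery password via manage-bde."""
    subprocess.run(
        ["manage-bde", "-unlock", drive, "-RecoveryPassword", recovery_key],
        check=True,
    )


def remove_bad_channel_files(drive: str) -> list[str]:
    """Delete channel files matching the published faulty pattern; return what was removed."""
    pattern = os.path.join(drive + os.sep, "Windows", "System32", "drivers",
                           "CrowdStrike", "C-00000291*.sys")
    removed = []
    for path in glob.glob(pattern):
        os.remove(path)
        removed.append(path)
    return removed


if __name__ == "__main__":
    if RECOVERY_KEY:
        unlock_bitlocker(DRIVE, RECOVERY_KEY)
    removed = remove_bad_channel_files(DRIVE)
    print(f"Removed {len(removed)} channel file(s): {removed}")
    # Reboot normally afterwards; the sensor pulls down a corrected channel file on its own.
```

The same logic is what made VMs and IPMI-equipped servers quick to fix remotely while fleets of BitLocker-encrypted end-user laptops were slow: the delete itself is trivial, but getting a console and a recovery key onto each machine is not.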

  • by registrations_suck ( 1075251 ) on Tuesday July 22, 2025 @12:32AM (#65536058)

    Seems like CrowdStrike is the one doing the striking.

    Maybe people should stop using that shitware.

    • Try to tell that to the management of the company I work for.

    • Re:Crowdstrike (Score:5, Insightful)

      by thegarbz ( 1787294 ) on Tuesday July 22, 2025 @04:42AM (#65536262)

      Maybe people should stop using that shitware.

      The cost of outages alone isn't the metric. The question is how much it reduces incidents, and what the cost of not using it would be. The CrowdStrike outage was nasty, but it was still much cheaper than a successful ransomware attack.

      • It's not as if they're the only player in this market.

        But you know why everyone has heard of them? They fuck up. Not exactly a good reason to be known.

        • I made no comment about the quality of the player. I'm calling out the idea of comparing the cost of the incident alone. In your comparison you now need to judge not only the risks but also whether the cost of migrating to another company is worth it to maintain what should be the same risk coverage, except potentially from a company that hasn't had a mistake to learn from.

          But you know why everyone has heard of them? They fuck up. Not exactly a good reason to be known.

          The best thing about fucking up is that you know what happened and have the ability to put in place systems that pre

    • by gweihir ( 88907 )

      Maybe the law should actually start to require solid engineering in critical tech and dish out hard, personal (!) punishment to fuckups like the CrowdStroke leadership.

  • by FritzTheCat1030 ( 758024 ) on Tuesday July 22, 2025 @03:58AM (#65536228)
    Nobody learned and everyone is still running this crap. And McAfee/Trellix. My work laptop behaves like a machine from 2002 because all of the "security" software is constantly eating resources. It would be expensive for companies to actually care about security, so they all just run the same garbage, so when they get hacked they can say, "Whaaat? It's not OUR fault. We were following 'industry standard best practices.' Don't blame us."
    • Yep, I was working at an academic Medical Center in New Hampshire when they reassigned the Disaster Recovery guy to 1st level technical support to get him to quit.

      The place became run by insurance and lawyers.

      I quit when they said we were going to skip the code to prevent medication errors because it would be cheaper to settle the lawsuits.

    • Nobody learned and everyone is still running this crap.

      Learned what? There was one case. The news story here is that people are talking about something that happened a year ago. What do you learn when a company has promised to improve and hasn't had another fuck-up since? How much money do you spend dropping the company you know for the one you don't? This isn't like downloading Notepad++ on a friend's recommendation. Migration costs money.

      My work laptop behaves like a machine from 2002 because all of the "security" software constantly eating resources.

      We don't use CrowdStrike and my laptop does the same. Who do I migrate to? Which is the mythical company that doesn'

  • No organization should expect to be free from 'disruptions.'

    Instead, it must have plans and processes in place to operate through them.

  • You'd think health care would be the industry where privacy and data security are treated like biblical-style, stone-tablet laws passed down from God. For all the hand waving, desk pounding, stand-offs, and statements of absolutism, it's all smoke, mirrors, and lies. Using "CrowdStrike" as a means to anything useful in terms of privacy or security is the equivalent of putting a white towel over the shit stain on the blanket. Even if that did work, and it doesn't, the number of issues everywhere else is alarming, s
  • If more people understood how abysmally they failed and how easy it would have been to prevent that failure, the company would be dead. Negligence can hardly get more gross than what they did. Their C-levels should all be in prison for an extended stint and have their personal fortunes impounded to compensate their victims.
