


At Least 750 US Hospitals Faced Disruptions During Last Year's CrowdStrike Outage, Study Finds (wired.com)
At least 759 US hospitals experienced network disruptions during the CrowdStrike outage on July 19, 2024, with more than 200 suffering outages that directly affected patient care services, according to a study published in JAMA Network Open by UC San Diego researchers. The researchers detected disruptions across 34% of the 2,232 hospital networks they scanned, finding outages in health records systems, fetal monitoring equipment, medical imaging storage, and patient transfer platforms.
Most services recovered within six hours, though some remained offline for more than 48 hours. CrowdStrike dismissed the study as "junk science," arguing the researchers failed to verify whether affected networks actually ran CrowdStrike software. The researchers defended their methodology, noting they could scan only about one-third of America's hospitals, suggesting the actual impact may have been significantly larger.
Fix was simple (Score:2)
The problem was related to having to touch each server.
If you were running VMs, or servers with IPMI, you could handle them remotely and quickly. If you have monitoring, you know which servers were affected almost immediately.
Took me an hour from recognizing the problem to fixing the few servers affected.
Luck and proper systems in place can reduce many outages.
I have to question those that took a long time, unless there were blockers.
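As a rough illustration of the "monitoring tells you which servers are affected" step, here is a minimal Python sketch; the host names and list are hypothetical and the Windows-style ping flags are an assumption, and the script only identifies unreachable machines - the actual fix still happens over IPMI or the hypervisor console.

    import subprocess

    # Hypothetical host list, e.g. exported from the monitoring system.
    HOSTS = ["srv-app01.example.net", "srv-db02.example.net", "srv-hv03.example.net"]

    def is_up(host: str, timeout_ms: int = 2000) -> bool:
        """Return True if the host answers one ICMP echo (Windows 'ping' flags)."""
        result = subprocess.run(
            ["ping", "-n", "1", "-w", str(timeout_ms), host],
            capture_output=True,
        )
        return result.returncode == 0

    down = [h for h in HOSTS if not is_up(h)]
    print("Hosts needing an IPMI/console visit:", down)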
ESXi hosts were safe, but not Hyper-V! (Score:2)
ESXi hosts were safe, but not Hyper-V!
Re: (Score:3)
The servers were the easy part since they could be remotely managed - they were likely running some sort of VM hypervisor, so it was a matter of booting a Windows recovery image, entering the BitLocker key, and then making the changes.
Of course, the key part was "BitLocker key," since people probably didn't have it handy, and at least for servers the machine was likely in a state where you could copy and paste the key in, so you weren't typing the number manually.
The hard part was the user aspect - repeating the same s
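For the server-side procedure described above, here is a minimal Python sketch of the publicly documented workaround (unlock the volume, then delete the bad channel file). It assumes the affected OS volume is reachable from an environment where Python and manage-bde are available - for example, the VM's disk attached to a helper machine - and OS_DRIVE and RECOVERY_PASSWORD are placeholders for the real drive letter and the 48-digit recovery password pulled from AD or your key escrow.

    import subprocess
    from pathlib import Path

    # Placeholder values -- substitute the real drive letter and the 48-digit
    # BitLocker recovery password retrieved from AD / Entra ID / your key escrow.
    OS_DRIVE = "D:"
    RECOVERY_PASSWORD = "111111-222222-333333-444444-555555-666666-777777-888888"

    def unlock_bitlocker(drive: str, password: str) -> None:
        """Unlock a BitLocker-protected volume with its recovery password."""
        subprocess.run(
            ["manage-bde", "-unlock", drive, "-RecoveryPassword", password],
            check=True,
        )

    def remove_bad_channel_files(drive: str) -> None:
        """Delete the faulty channel file(s) matching C-00000291*.sys,
        per the published remediation steps."""
        target = Path(rf"{drive}\Windows\System32\drivers\CrowdStrike")
        for f in target.glob("C-00000291*.sys"):
            print(f"Removing {f}")
            f.unlink()

    if __name__ == "__main__":
        unlock_bitlocker(OS_DRIVE, RECOVERY_PASSWORD)
        remove_bad_channel_files(OS_DRIVE)
        print("Done - reboot the host normally.")

In practice most admins did these same two steps by hand in a recovery console with manage-bde and del; the script just mirrors them.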
Re: (Score:2)
Except that this is not true. Additional steps include finding the BitLocker recovery keys and inputting them, and suddenly nothing is "easy" anymore, except to an idiot.
Re: (Score:2)
Oof.
Re: (Score:2)
Because people expected this to happen? Why and how would they? If a recovery procedure includes these steps, then it exists for things like "the data center burned down," and those specific steps are not optimized, because that makes no sense for that use case.
Seriously, you have no clue how this works in the real world.
Crowdstrike (Score:3)
Seems like Crowdstrike is the one doing the striking.
Maybe people should stop using that shitware.
Re: Crowdstrike (Score:2)
Try to tell that to the management of the company I work for.
Re:Crowdstrike (Score:5, Insightful)
Maybe people should stop using that shitware.
The cost of outages alone isn't the metric. The question is how much it reduces incidents, and what the cost of not using it would be. The CrowdStrike outage was nasty, but it's much cheaper than a successful ransomware attack.
Re: (Score:2)
It's not as if they're the only player in this market.
But you know why people hear of them? They fuck up. Not exactly a good reason to be known.
Re: (Score:2)
I made no comment about the quality of the player. I'm pointing out that the cost of the incident alone isn't the issue. In your comparison you now need to judge not only the risks, but also whether the cost of migrating to another company is worth it to maintain what should be the same risk coverage, except potentially from a company that hasn't made a mistake to learn from.
But you know why people hear of them? They fuck up. Not exactly a good reason to be known.
The best thing about fucking up is that you know what happened and have the ability to put in place systems that prevent it from happening again.
Re: (Score:2)
Maybe the law should actually start to require solid engineering in critical tech and dish out hard, personal (!) punishment to fuckups like the CrowdStroke leadership.
Re: (Score:2)
“What you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul.”
Re: (Score:2)
Thanks. I almost clicked on that link.
Re: (Score:1)
Because they generally are? 87% of hospitals in the US are nonfederal, i.e. private hospitals run as nonprofits or for-profits.
Re: (Score:2)
That's the problem.
Re: (Score:2)
Suburban hospitals don't have a big enough tax base to survive on their own because of the low population density, and rural hospitals have it even worse. So you can expect most of those to close down over the next few years as the government money gets pulled.
One of the major problems our country has is we heavily subsidize things like you kind of have to but
Re: (Score:2)
Europe treats hospitals as critical infrastructure, and regulation is slowly catching up to that.
And nobody learned... (Score:3, Interesting)
Re: (Score:2)
Yep, I was working at an academic medical center in New Hampshire when they reassigned the disaster recovery guy to first-level technical support to get him to quit.
The place became run by insurance and lawyers.
I quit when they said we were going to skip the code to prevent medication errors because it would be cheaper to settle the lawsuits.
Re: (Score:2)
Evil and greed at work.
Re: (Score:2)
Nobody learned and everyone is still running this crap.
Learned what? There was one case. The news story here is that people are talking about something that happened a year ago. What do you learn when a company has promised to improve and demonstrated no further fuck up? How much money do you spend dropping the company you know for the one you don't? This isn't like downloading Notepad++ because of a friend's recommendation. Migration costs money.
My work laptop behaves like a machine from 2002 because of all the "security" software constantly eating resources.
We don't use CrowdStrike, and my laptop does the same. Who do I migrate to? Which is the mythical company that doesn't?
Ok, but take the right lesson. (Score:2)
No organization should expect to be free from 'disruptions.'
Instead, it must have plans and processes in place to operate through them.
Health Care infrastructure is terrifying! (Score:2)
Of course CrowdStroke will lie (Score:2)
If too many people understood how abysmally they failed and how easy it would have been to prevent that failure, the company would be dead. Negligence can hardly get more gross than what they did. Their C-levels should all be in prison for an extended stint and have their personal fortunes impounded to compensate their victims.