CrowdStrike To Delta: Stop Pointing the Finger at Us

CrowdStrike says that it isn't to blame for Delta Air Lines' dayslong meltdown following the tech outage caused by the cybersecurity company, and that it isn't responsible for all of the money that the carrier says it lost. From a report: In a letter responding to the airline's recent public comments and hiring of a prominent lawyer, CrowdStrike said Delta's threats of a lawsuit have contributed to a "misleading narrative" that the cybersecurity company was responsible for the airline's tech decisions and response to the outage. "Should Delta pursue this path, Delta will have to explain to the public, its shareholders, and ultimately a jury why CrowdStrike took responsibility for its actions -- swiftly, transparently, and constructively -- while Delta did not," wrote Michael Carlinsky, an attorney at law firm Quinn Emanuel Urquhart & Sullivan.

The letter to Delta's legal team Sunday evening is the latest move in a growing conflict between the cybersecurity firm and the airline, which was thrown into several days of disarray following the outage. Delta Chief Executive Ed Bastian said in an interview on CNBC last week that the outage cost the airline about $500 million, including lost revenue and compensation costs. The airline has alerted CrowdStrike and Microsoft that it is planning to pursue legal claims to recover its losses, and has hired litigation firm Boies Schiller Flexner to assist, according to a memo Bastian sent to Delta employees last week. CrowdStrike said Sunday that its liability is contractually capped at an amount in the "single-digit millions."
  • by dbialac ( 320955 ) on Monday August 05, 2024 @11:36AM (#64682086)
    Why did Delta have so much more trouble recovering from the outage than everyone else? They clearly do have problems they need to work out. Still, CrowdStrike is ultimately at fault.
    • by TechyImmigrant ( 175943 ) on Monday August 05, 2024 @11:41AM (#64682116) Homepage Journal

      Yep. If you kicked over someone's delicate vase and broke it, that doesn't make it the fault of the vase for being delicate.

      Delta showed themselves to be technically incompetent by choosing CrowdStrike and not having a survivability system in place. Not a good look for an airline, but CrowdStrike was negligent.

      • Yep. If you kicked over someone's delicate vase and broke it, that doesn't make it the fault of the vase for being delicate.

        Congratulations for twisting the scenario to your advantage.
        One could always take steps to ensure the vase's safety.

        • by ls671 ( 1122017 )

          In other words, Delta would probably have had exactly the same problems if they simply had to reboot all their systems at once; CrowdStrike just forced that reboot.

          The CrowdStrike episode sparked all kinds of interesting discussions about systems with a chicken-and-egg problem.

          Example: VMware ESX using AD authentication, with the AD instances all running in VMs on ESX servers somewhere; NFS servers using AD or other external auth servers, with the auth servers all running in VMs and the VMs needing NFS server access to run.
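That chicken-and-egg situation can be made concrete by modeling the services as a directed dependency graph and looking for a cycle. A minimal sketch, with hypothetical service names standing in for ESX, AD, and NFS (not any real inventory):

```python
# Hypothetical service dependency graph for illustration only.
deps = {
    "esx": ["ad"],             # ESX hosts authenticate against Active Directory
    "ad": ["esx"],             # ...but the AD domain controllers run as VMs on ESX
    "nfs": ["ad"],             # NFS servers use AD for auth
    "app_vm": ["nfs", "esx"],  # app VMs need NFS storage and an ESX host
}

def find_cycle(graph):
    """Return one dependency cycle as a list of nodes (first == last), or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for dep in graph.get(node, []):
            if color[dep] == GRAY:                      # back edge -> cycle
                return stack[stack.index(dep):] + [dep]
            if color[dep] == WHITE:
                found = dfs(dep)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for n in graph:
        if color[n] == WHITE:
            found = dfs(n)
            if found:
                return found
    return None

print(find_cycle(deps))  # e.g. ['esx', 'ad', 'esx']
```

If every service's boot path is in the graph, a reported cycle is exactly the set of things you cannot cold-start without an out-of-band bootstrap.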

          • VMware can crash hard if iSCSI drops out, possibly just on CHAP password issues.

            Also, the storage itself needs to be direct-to-disk, and it can be on storage-only servers, likely not running Windows.

            • Using CHAP is a mortal sin, making this a weak argument. The stability of the chosen platform is a separate issue.

              Delta's choices, post-trauma, are a valid argument, but only a small one. Cratering thousands of systems still carries a lot of culpability, no matter how easy the subsequent fix (which it wasn't).

              Delta will argue that Crowdstrike's undeniably negligent QA made them suffer beyond the contractual obligation of their contract with Crowdstrike, a negligence ploy to make Delta stakeholders happy. Only fi

        • by thegarbz ( 1787294 ) on Monday August 05, 2024 @12:18PM (#64682294)

          Congratulations for twisting the scenario to your advantage.
          One could always take steps to ensure the vase's safety.

          Indeed they can. Delta didn't. Here's the thing: all airlines were affected. Sure, Delta is a big airline; they carry twice as many passengers as AirFrance/KLM. Twice. Double. Yet Delta claimed $500m in losses and AirFrance/KLM claimed $11m. I'll leave it as an exercise for a second-grader to prove that the math doesn't add up.

          • Re: (Score:2, Insightful)

            by cstacy ( 534252 )

            Congratulations for twisting the scenario to your advantage.
            One could always take steps to ensure the vase's safety.

            Indeed they can. Delta didn't. Here's the thing: all airlines were affected. Sure, Delta is a big airline; they carry twice as many passengers as AirFrance/KLM. Twice. Double. Yet Delta claimed $500m in losses and AirFrance/KLM claimed $11m. I'll leave it as an exercise for a second-grader to prove that the math doesn't add up.

            Systems issues (not IT, I mean operational issues such as relocating pilots) do not scale linearly in an airline operation. It makes perfect sense that an airline doing twice the flights would have at least one order of magnitude more problems.
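A back-of-the-envelope check of the figures quoted upthread (claimed numbers, not an analysis) shows just how far the loss ratio departs from the size ratio:

```python
# Figures as claimed in the thread: Delta ~2x AirFrance/KLM's passenger
# volume, $500m claimed loss vs. $11m.
delta_loss, afklm_loss = 500e6, 11e6
size_ratio = 2.0

loss_ratio = delta_loss / afklm_loss   # ~45.5x the loss...
excess = loss_ratio / size_ratio       # ...or ~22.7x per unit of airline size
print(round(loss_ratio, 1), round(excess, 1))
```

Whether that ~23x excess is nonlinear operational scaling or poor continuity planning is exactly what the two posts above disagree on.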

        • by tbords ( 9006337 )

          Yep. If you kicked over someone's delicate vase and broke it, that doesn't make it the fault of the vase for being delicate.

          Congratulations for twisting the scenario to your advantage. One could always take steps to ensure the vase's safety.

          Would hiring security to watch out for your delicate vase count as taking steps? In this case, Crowdstrike would be the security hired specifically for this use case.

          • No.
            Having backups and a way to quickly restore the VM image from yesterday's backup would count, though.
            I know a small company which uses Crowdstrike. Guess what they did to be up and running in about 1 hour.

        • That would fall over in court.

          When a contractor enters your house and knocks over the vase, you are not held responsible, they are. It is not you who is responsible for making the vase safe in your own home, it was perfectly safe standing on the table till the contractor knocked it off. They are liable.

          Same with cars. When someone hits your car when parked on the public highway (legally) or on your drive you are not expected to have erected a barrier to prevent a collision nor are you expected to have pu

      • by TWX ( 665546 ) on Monday August 05, 2024 @11:51AM (#64682182)

        On the other hand if someone left their delicate vase out where it could be very easily kicked without actually trying to do so then there would be a rather dim view taken of their actions to protect themselves and their property.

        Do I believe that Crowdstrike could justifiably be taken to the cleaners over this? Absolutely. I could see them being found liable for about five days' worth of loss of service, so Friday through the following Tuesday. Beyond that though, any large organization not structured to take care of itself in such circumstances seems to be on that org itself rather than on the vendor of the faulty software.

        • Even if they have the "Not responsible" weasel clause in their license agreement, CrowdStrike is now a product that won't have the same level of trust from now on.
          CrowdStrike Falcon was an innovative tool, and their failure has done more damage to them than any hacker team could have.

          • by TWX ( 665546 )

            Even if they have the "Not responsible" weasel clause in their license agreement, CrowdStrike is now a product that won't have the same level of trust from now on.
            CrowdStrike Falcon was an innovative tool, and their failure has done more damage to them than any hacker team could have.

            I figured that goes without saying. If I was in the decision-making loop for this sort of product, I'd take a hard look at how my organization's systems are structured for resiliency and I'd look at what it would take to diversify the products providing this function to avoid having complete outages, and I'd also look at fundamental system architecture to further diversify the platforms to again make it harder for this sort of thing to happen.

        • On the other hand if someone left their delicate vase out where it could be very easily kicked....

          As I read this analogy everyone has their delicate vase where it could easily be kicked and Crowdstrike was the company hired to prevent anyone kicking it. They then accidentally kicked it themselves. The difference between Delta and other companies happened after they kicked it. Other companies had teams of cleaners and replacement vases who quickly swept in, cleaned up the mess and put another vase on display.

          Delta's response was somewhat less competent since they seemed to be entirely reliant on Crow

      • What if you told that vase to pull auto updates immediately, despite common knowledge that doing so is ultimately going to end in a broken vase? Any sysadmin worth his salt is going to have a company schedule for updates, and push to an internal testing environment first.

        • by Junta ( 36770 )

          Of the criticisms, this probably doesn't apply.

          From what I read, Crowdstrike didn't have a mechanism to opt-out of this automatic update. It was classed as more like 'definitions' rather than a 'code' update, and only 'code' updates had a way to defer/carefully manage the deployment.

          Also, kind of the whole point of their promise of value is you pay *them* to test and manage the update process. So the fact they failed to properly test *and* did not do a phased rollout is on them.

          The argument comes in as to

          • by Slayer ( 6656 )

            From what I read, Crowdstrike didn't have a mechanism to opt-out of this automatic update. It was classed as more like 'definitions' rather than a 'code' update, and only 'code' updates had a way to defer/carefully manage the deployment.

            Assuming that CrowdStrike software continues running even if it can't reach an update server, it would only take a simple entry in the firewall - and competent admins - to time-manage all updates.

            Also, kind of the whole point of their promise of value is you pay *them* to test and manage the update process. So the fact they failed to properly test *and* did not do a phased rollout is on them.

            Yes, Crowdstrike failed massively here. If they won't accept that, and/or their contract gives them a free pass, how would they ever be motivated to become better? Looking at Solarwinds, which is still alive and kicking, I have little hope for improvement any time soon.

            This gets complicated because they might have, for example, opted for a particularly draconian 'rescue boot' situation to harden security, never expecting *every Windows instance they have* to need such an activity all at the exact same time.

            In times when ransomware attacks hit MS Windo

            • by cstacy ( 534252 )

              Assuming that CrowdStrike software continues running even if it can't reach an update server, it would only take a simple entry in the firewall - and competent admins - to time-manage all updates.

              Crowdstrike opens an encrypted channel to their client, and sends a specific update for that machine. The entire operation is encrypted, and inside that stream, they are validating client certs that you don't have, and doing God Knows what else. The user does not have any kind of distribution server; it's all coming only directly from Crowdstrike. The entire thing is opaque and out of your control.

              Could you explain how you are going to trivially (as you say) implement your own patch distribution mechanism

              • by samdu ( 114873 )

                Not to mention that, in our case, the issue presented itself overnight. While the vast bulk of our IT department was sleeping. Had our SecOps guy not woken up to pee and decided to take a look at work stuff, we wouldn't have known until business hours started. As it was, we managed to get most things up by midday (servers were up pretty quickly - workstations had to wait).

              • AFAICS all that you can do is disable Crowdstrike at the firewall. In which case, why did you buy it?

                You can disable it in the firewall on a schedule. You'd have the protection but you could delay updates on specific machines in a network segment by not allowing access during certain hours.

                Then the ones that do maintain access would fail out before the others get hit. The big side effect is that real time protection updates would no longer be real time and it might be a bigger risk to not get the updates even if it sometimes breaks everything.
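The schedule-based gating described above can be sketched as a small policy check: per network segment, decide whether the firewall should currently allow outbound traffic to the vendor's update hosts. Segment names and hours are invented for illustration; the actual firewall automation (iptables rules, security groups, etc.) is left out:

```python
from datetime import datetime

# Hypothetical staggered update windows per network segment (UTC hours).
# A firewall automation job could consult this to open/close outbound
# access to the vendor's update endpoints.
UPDATE_WINDOWS = {
    "canary":  range(2, 4),    # 02:00-03:59 - hits a bad update first
    "staging": range(4, 6),
    "prod-a":  range(10, 12),  # prod only after the canaries survived
    "prod-b":  range(14, 16),
}

def updates_allowed(segment, now=None):
    """True if this segment's firewall should currently allow update traffic."""
    now = now or datetime.utcnow()
    window = UPDATE_WINDOWS.get(segment)
    return window is not None and now.hour in window

print(updates_allowed("canary", datetime(2024, 8, 5, 2, 30)))  # True
print(updates_allowed("prod-a", datetime(2024, 8, 5, 2, 30)))  # False
```

As the parent notes, the trade-off is real: production segments run hours behind on "real-time" protection content in exchange for a blast-radius limit.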

          • by cstacy ( 534252 )

            Of the criticisms, this probably doesn't apply.

            From what I read, Crowdstrike didn't have a mechanism to opt-out of this automatic update. It was classed as more like 'definitions' rather than a 'code' update, and only 'code' updates had a way to defer/carefully manage the deployment.

            Microsoft has a system for vetting driver code updates which would have caught this kind of bug. But that process takes at least weeks to get through, and the whole point of CrowdStrike is that it detects threats (that is, new OS and network activity patterns) in real time, through their AI in the cloud hooked to all their customers' machines (sensors), and they push out "updates" in real time.

            The problem is that these "updates" are not like anti-virus signatures. That's not what CrowdStrike does. The updates, which they call "definitions", are in fact code updates. They are modifying code that is hooked/patched deeply into the core of Windows functionality. In this case, altering the code that makes Named Pipes work. It's a low-level OS code update right into the most critical parts of Windows.

            Why Microsoft lets them do this, call it "definitions" and claim it's not code (when it is patching Windows itself), and not have to go through the driver vetting process...that's the question.

            I have an answer for you, and I think you already know it. The answer is: $$$$$

          • None of that would have stopped a sufficiently paranoid admin.

            It's pretty simple to set up a trusted man-in-the-middle proxy on your own network, when you already have a trusted CA on the network (Active Directory). The agents cannot download a shitty update from a URL you block at your proxy.

            If you are still allowing this to happen or have not come up with a prevention strategy as an IT director after seeing what has happened to Delta, you are a failure of an IT director.

            Always be the gatekeeper of what c

        • by rahmrh ( 939610 )

          Or even do the updates spread out over a few days, such that you can abort the update and only lose some percentage of your systems.

          Where I used to work, the updates were spread across an entire week simply to limit what happens WHEN there is a bad update.
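The week-long stagger described above can be done with a deterministic hash, so each host always lands in the same wave and a bad update can be aborted after wave 0. Hostnames here are hypothetical:

```python
import hashlib

def rollout_wave(hostname, waves=7):
    """Map a hostname to a stable wave number in [0, waves) - one wave per day."""
    digest = hashlib.sha256(hostname.encode()).hexdigest()
    return int(digest, 16) % waves

# Hypothetical fleet of 100 hosts, grouped by rollout day.
hosts = [f"host-{i:03d}" for i in range(100)]
schedule = {}
for h in hosts:
    schedule.setdefault(rollout_wave(h), []).append(h)

for wave in sorted(schedule):
    print(wave, len(schedule[wave]))
```

Hashing rather than random assignment means the schedule needs no stored state and survives inventory churn, at the cost of slightly uneven wave sizes.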

          • by cstacy ( 534252 )

            Or even do the updates spread out over a few days, such that you can abort the update and only lose some percentage of your systems.

            Where I used to work, the updates were spread across an entire week simply to limit what happens WHEN there is a bad update.

            These are nothing like Windows Updates, and it is impossible to implement a distribution mechanism like you are suggesting.

            Crowdstrike could implement it for you, but the entire point of their product is instant updates to all the machines within minutes of their AI having detected a threat from any customer machine ("sensor") in the global base of all customers planet-wide.

            I wouldn't trust such a system, but that's their business model, that's what's implemented, and that's what all these chumps installed.

        • by cstacy ( 534252 )

          What if you told that vase to pull auto updates immediately, despite common knowledge that doing so is ultimately going to end in a broken vase? Any sysadmin worth his salt is going to have a company schedule for updates, and push to an internal testing environment first.

          Crowdstrike does not offer that feature.
          You cannot delay or refuse real-time push updates.
          It's not like Windows Updates.

          Would *I* ever allow that on my systems?
          No.
          But that is what everyone is doing.

          • There are companies that use CrowdStrike that caught the update failure before it hit their production servers. They figured out a way to do this, maybe by managing the outbound firewall, but they did it so there must be at least one way to accomplish this.
            • I've never heard of this, and a quick search gave me nothing useful. Who managed to do this? Do you have a link or article?

      • If that vase was sold as being indestructible, or at least very robust, and its purpose was presented as being a way to ensure that customers would never have another broken vase, then yes, it would be the seller's problem if the vase turned out to be "delicate" and fragile.

      • "You fucked up - you trusted us!" - Crowdstrike
    • Disclaimer - a statement made to save one's ass (or something like that).

      Crowdstrike even says in their TOS their software shouldn't be used in anything like a mission-critical system. Their wording is considerably less clear than what I'm putting here and IANAL, but their disclaimer essentially says "we're not responsible if our stuff breaks your stuff, runs off with your wife or makes your hair fall out". As long as their product remains "fit for intended use" - whatever that means - this is just an un

      • by Junta ( 36770 )

        A disclaimer can only go so far, I'd think.

        Notably, CrowdStrike's general marketing material explicitly claims that CrowdStrike protects your mission-critical systems. I assume you can't disclaim your way out of something you explicitly promise elsewhere.

        • by mmell ( 832646 )
          You're obviously new to this. Oh, yeah, I've seen variations on "we don't promise our software does what we say it does" in more than one TOS. I've also seen variations on "even if it is our fault it's broken, whatever it does to you is not our fault." As I said, a bunch of lawyers will go argue about this and get rich doing it. What the TOS says doesn't matter, it's what the JUDGE says that counts. X^D
          • by Luckyo ( 1726890 )

            The judge will consider the TOS, so it will matter. How much it matters is the question.

            Factors like advertisement and marketing will play into it. How bad the negligence was on the part of each party will also play a role. We shall see.

      • by cstacy ( 534252 )

        Disclaimer - a statement made to save one's ass (or something like that).

        Crowdstrike even says in their TOS their software shouldn't be used in anything like a mission-critical system.

        Everything in the universe of computers has disclaimers like that, but that doesn't mean things will go that way in front of a judge.

        And fine-print aside, the entire marketing pitch and point of Crowdstrike is that it protects your mission-critical systems. That's why everyone had it installed on their mission-critical systems.

      • by Luckyo ( 1726890 )

        In general, disclaimers may absolve negligence. They do not absolve gross negligence.

        I suspect this will be a legal fight over whether this negligence was considered legally gross negligence or not.

      • I'm sure Crowdstrike had no problems writing specific contract language for a large "logo" customer like Delta.

        That would supersede whatever we see in the general EULA. Delta may have had some additional liability language there that is opaque to everyone else until legal discovery.

      • Then you sue them for lying.

        The sales pitch said the exact opposite.

        I was there.

    • by chill ( 34294 )

      If you get into a car accident that was the fault of the other driver, they're at fault. HOWEVER, if you are injured and were found not to have kept your vehicle in safe condition, disabled the airbags, and were not wearing a seat belt, then YOU share responsibility and damages will be greatly reduced.

      Responsibility is a shared model and "ultimate fault" is only one component.

    • Because their people weren't properly trained in what to do if the system shut down. Basically, Delta had the niftiest airline scheduling system going by far, and so long as the entire system didn't go down at once it had all sorts of features significantly better than those of their competitors that could reroute planes where they were needed most in case of weather or other issues that would cause planes to not make it to their intended destination, all with minimal interference by humans. However this me

    • CrowdStrike may have taken ownership and been transparent about their corporate culture and repeated history of not properly testing. My organization uses this garbage rootkit, and less than a week after the outage they decided to double down on CrowdStrike. I think idiots are running the show at most companies. So good luck to Delta suing CrowdStrike. Lol
  • by mmell ( 832646 ) on Monday August 05, 2024 @11:37AM (#64682092)
    You're the idiots who thought relying absolutely and without reservation on the correct and highly available operation of CrowdStrike's software on Microsoft Windows was a good idea. What were you thinking?
    • It's been unreal seeing people attack this position when we have ample real-world and immediately relevant experience demonstrating its veracity.

      Glad to see everyone is slowly coming around to it, but I got so much hate in the days after the initial incident for suggesting corporations not blindly trust security vendors to the point of auto-implementing their patches.

    • Thinking? I do not believe that was part of the decision process.

    • Oh the irony of an Airline doing anything without reservation..
  • by Fly Swatter ( 30498 ) on Monday August 05, 2024 @11:39AM (#64682102) Homepage
    It's always someone else's fault: where's our money?
  • Let them fight (Score:5, Interesting)

    by jacks smirking reven ( 909048 ) on Monday August 05, 2024 @11:39AM (#64682104)

    I pray this goes to some sort of trial; the discovery of how bad the situation is for both companies would be most enlightening, I imagine.

    Of course it's in the interest of both companies to keep that under wraps, so this is just public angling for a behind-closed-doors negotiation.

  • Blame the victim (Score:4, Insightful)

    by mspohr ( 589790 ) on Monday August 05, 2024 @11:40AM (#64682114)

    You didn't fix it fast enough when we disabled all your software.
    Must be your fault.
    Delta's only fault is relying on the flaky Microsoft/CrowdStrike software in the first place.

    • CrowdStrike can say they have a way to delay updates, but you need to turn it on, and that setting is very hidden and not in the docs.

      • by cstacy ( 534252 )

        CrowdStrike can say they have a way to delay updates, but you need to turn it on, and that setting is very hidden and not in the docs.

        Are you making that up?

    • Re: (Score:2, Insightful)

      by thegarbz ( 1787294 )

      You didn't fix it fast enough when we disabled all your software.
      Must be your fault.
      Delta's only fault is relying on the flaky Microsoft/CrowdStrike software in the first place.

      No. All evidence points to it being Delta's fault. Here's the thing: if I shoot you in the leg, you have two options: 1. Run to the hospital and get it fixed, or 2. Ignore it, plod about for days until it's gangrenous, then have it amputated and spend weeks in intensive care on drugs. The thing is, for number 2, your medical bills are really your own fault.

      Delta is double the size of AirFrance in terms of passenger volume. Both were affected by Crowdstrike. Delta claims $500m loss, AirFrance claims $11m loss.

      • by brunes69 ( 86786 ) <slashdot@keirs[ ]d.org ['tea' in gap]> on Monday August 05, 2024 @02:53PM (#64682936)

        You don't understand enough about how this software works and the deployment and the issues at hand.

        If Delta had installed CrowdStrike on their Domain Controllers and they also relied on Windows BitLocker, it would have made the problem an order of magnitude worse, because it caused a cascading failure network-wide: CrowdStrike took the DCs offline while all of the other machines could not validate their BitLocker keys.

        If you didn't protect your DCs, or if you didn't use BitLocker, you escaped this.

        How was it deployed at Air France vs how was it deployed at Delta? Who knows. So you can't just sit back and do an armchair analysis without the detail.

        Critically, protecting DCs with Crowdstrike, as well as using BitLocker, *WAS* considered a best practice. So it is not like Delta would have been doing something wrong if they had it set up this way.

        • by thegarbz ( 1787294 ) on Monday August 05, 2024 @06:27PM (#64683616)

          I don't need to. The only relevant part of this is a business continuity plan. Delta and everyone else need to have a system and plan in place for how to deal with precisely the situation that came up. Delta's version of that plan was poor (or non-existent); other companies were better prepared.

          The way you implement your software and how you run your business are not the fault of the vendor. Fundamentally it doesn't matter who did what; the point is that a significant portion of the blame lies in your own implementation. Delta's impact here was extraordinary across the entire industry, i.e. something (whatever that may be) was their own fault.

          I'm reminded of NotPetya, which was spread throughout the shipping world via Maersk seven or so years ago. It was interesting to see how Europe's biggest port coped with complete system loss and the impact it had on other companies. DHL was down completely in Europe. They ended up renting warehouses just to throw boxes into because they had no ability to deal with the incoming mail. But the question was, why was mail coming in at all? Well, it seems the Port of Rotterdam had a now famously efficient business continuity plan. Complete computer failure, and they picked up pen and paper and kept the port running at almost full capacity throughout, even as all their partners crumbled.

          It ultimately doesn't matter how Crowdstrike was installed. The issue here was a business continuity plan failure. I actually had a friend who was flying KLM on that Friday. She posted a nice picture on Instagram of her hand written ticket and hand written baggage tag, along with a complaint about how long it took to board because they were manually ticking people off on paper lists as they got on the plane. I guess this is why KLM resumed flights on the same day while Delta were standing and pouting.

  • by Viol8 ( 599362 ) on Monday August 05, 2024 @11:46AM (#64682150) Homepage

    CrowdStrike, for its piss-poor release testing, and all the companies affected, for allowing ad hoc updates to a critical system driver on critical 24/7 machines without installing the driver on some test systems first, Just In Case. That has been standard practice since God was a boy, though it's clearly forgotten by the current generation of "admins" (aka point-and-click monkeys, a lot of them).

    • CrowdStrike, for its piss-poor release testing, and all the companies affected, for allowing ad hoc updates to a critical system driver on critical 24/7 machines without installing the driver on some test systems first, Just In Case. That has been standard practice since God was a boy, though it's clearly forgotten by the current generation of "admins" (aka point-and-click monkeys, a lot of them).

      CrowdStrike seemed to have little to no update control for the end users. So the admins running CrowdStrike may not really be to blame for not having a way to test all updates.

      • No, they're still to blame.

        If you are running a fucking airline and can't think to write the ability to schedule security updates to critical systems into the RFP, then everyone in your IT management at the director level and above along with the management of your procurement department need to be fucking fired.

        If your company has a policy of managing update deployments, and a vendor isn't able to deliver the controls necessary to meet that policy; then a decision maker who chooses that vendor is either cr

    • Exactly. Choose 10% random suckers in your company. These should be live tests, not lab tests. Push the update to them first. Easier to explain to the CIO/CEO a little oops than a "we were idiots like somehow most other sysadmins and had auto-updates". Why not fire all incompetent sysadmins instead?
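Picking that 10% canary group can be as simple as a seeded random sample, so the group stays stable between runs. Hostnames and the seed string are made up for illustration:

```python
import random

def canary_group(hosts, fraction=0.10, seed="canary-2024"):
    """Return a stable ~10% sample of hosts to receive updates first."""
    rng = random.Random(seed)          # fixed seed -> same group every run
    k = max(1, int(len(hosts) * fraction))
    return sorted(rng.sample(hosts, k))

# Hypothetical fleet of 500 workstations.
hosts = [f"ws-{i:04d}" for i in range(500)]
canaries = canary_group(hosts)
print(len(canaries))  # 50
```

A stable group makes the "oops" explainable: the same known machines always break first, and the rollout halts before the other 90% pull the update.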

    • by Slayer ( 6656 )

      Crowdstrike for its piss poor release testing

      I keep hearing that statement about "inadequate release testing" and "they failed to test a specific type of target platform", but from all I know, neither Delta nor these millions of other affected companies used an obscure platform for their Windows servers. This was not Windows XP 64-bit running on Loongson servers.

      My impression is that some last-minute, completely untested hotfix was ramrodded into the release, which then blew up Windows shops all over the world.

    • Crowdstrike for its piss poor release testing and all the companies affected by allowing ad hoc updates to a critical system driver on critical 24/7 machines without installing the driver on some test systems first Just In Case which has been standard practice since god was a boy, though clearly forgotten by the current generation of "admins" (aka point and click monkeys a lot of them).

      An update to configuration files triggered a preexisting bug in their kernel driver. People were confused because these files are located in the driver folder and have .sys extensions, yet are not actually kernel drivers.

  • Just saying... (Score:5, Insightful)

    by theendlessnow ( 516149 ) * on Monday August 05, 2024 @11:50AM (#64682172)
    Dear Crowdstrike, you insisted on software with "god level" privs. Sadly, you do also need to take responsibility when abusing (even accidentally) your powers.

    Take your dings.... and yes, some might seem very very painful.
    • ...this was a zero ring circus?
    • Dear Crowdstrike, you insisted on software with "god level" privs. Sadly, you do also need to take responsibility when abusing (even accidentally) your powers.

      Indeed they should, and they will. That doesn't mean they need to also take responsibility for other people's incompetence. Europe's most affected airline was the AirFrance/KLM group. They had a complete stop of operations. Yet they reported an $11m loss, compared to Delta's claimed $500m, while normally handling only half the flight movements.

      You are not responsible for your customer's business continuity strategy. There was no reason Delta should have been down for days while the rest of the entire indus

    • Delta probably pays a few thousand per month to Crowdstrike for the use of their software. In exchange, there's probably a contract in place severely limiting Crowdstrike's liability to some relatively small number.

      A court would think extremely hard before overturning a contract like that. Both sides went into it with their eyes open and got something out of the deal.

      You get what you pay for. If you want to buy my product but you insist that I assume unlimited liability, I'm gonna charge you 100 times
    • Dear Crowdstrike, you insisted on software with "god level" privs.

      It’s not as if Microsoft leaves them a whole lot of choice. Since Windows NT 3.1, Windows has only ever supported two of the four Intel rings of execution — Ring 0 (kernel mode) and Ring 3 (user mode). If drivers had the option of running in Ring 1 they could potentially be isolated when they misbehave without risking corrupting kernel structures — but that option doesn’t exist. The only place where CrowdStrike Falcon Sensor can functionally run on Windows is in Ring 0. That

  • This could be a very entertaining trial.

  • My employer and clients use Crowdstrike's endpoint protection on Windows systems. I think well under 10% of our Windows machines were impacted, and none of our Linux systems were impacted.

    If our experience is any indication, Crowdstrike's update just seemed to be like a trace round fired through the Fortune 5000 to find orgs w/ crappy IT teams.

    That doesn't excuse them putting out a bad update, but this wailing and gnashing of teeth is a bit much.

    Novel idea: stop relying on modern slave labor aka H1b and third

    • Recovery (Score:4, Interesting)

      by JBMcB ( 73720 ) on Monday August 05, 2024 @12:35PM (#64682370)

      The first day of Delta's outage was Crowdstrike's fault. Every day after that was Delta's fault.

      My friend runs virtual servers and desktops for a good-sized university. They knew something was wrong pretty quickly, as enough VMs failed to boot that it triggered some alarms. Once they figured out it wasn't their fault, they pushed the last known-good image to all the servers and desktops, disabled Crowdstrike's auto-update via policy, and were back up that afternoon. He managed most of this while on vacation in Europe.

      Delta's disaster recovery plan was walking around to every machine and manually repairing them.

      • by brunes69 ( 86786 )

        The problem was highly exacerbated if you used BitLocker everywhere and had Crowdstrike on the domain controller of such a large environment.

        I assume neither of those was true at your friend's little university.

        • by JBMcB ( 73720 )

          Bitlocker and Crowdstrike were deployed everywhere. The thing that saved them was vSphere wasn't tied to the domain, so they could log in directly and redeploy.
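
      For reference, the per-machine fix that admins without a redeploy option were walking around applying was, per CrowdStrike's published remediation guidance, booting each box into Safe Mode or WinRE (supplying the BitLocker recovery key where needed) and deleting the faulty channel file, C-00000291*.sys, from the CrowdStrike driver folder. A sketch of just the file-deletion step in Python, run against a throwaway stand-in directory rather than the real C:\Windows\System32\drivers\CrowdStrike path:

```python
import glob
import os
import tempfile

def remove_bad_channel_files(driver_dir):
    """Delete channel files matching the faulty update's published
    pattern (C-00000291*.sys) and return the names removed."""
    removed = []
    for path in glob.glob(os.path.join(driver_dir, "C-00000291*.sys")):
        os.remove(path)
        removed.append(os.path.basename(path))
    return sorted(removed)

# Throwaway stand-in for the CrowdStrike driver folder.
demo = tempfile.mkdtemp()
for name in ("C-00000291-00000000-00000032.sys",   # the faulty channel file
             "C-00000300-00000000-00000001.sys"):  # an unrelated one, kept
    open(os.path.join(demo, name), "w").close()

print(remove_bad_channel_files(demo))
# → ['C-00000291-00000000-00000032.sys']
```

      The trailing version numbers in the file names here are illustrative; the point is the glob pattern, which is what CrowdStrike's workaround keyed on.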

  • Ultimately Delta owns its infrastructure and the failures that occur within it. Yes, relying on Windows for the bulk of your systems is questionable but not unreasonable. What Delta seemed to forget was proper testing, traffic analysis, and due diligence overall. Delta's systems are linked to the Internet where the Crowdstrike update could have occurred to production systems without validating everything in a -1 / pre-production environment where things can break and you validate what can be deployed withou

  • "misleading narrative" that the cybersecurity company [Crowdstrike] was responsible for the airline's [Delta] tech decisions

    Hey, *you* decided to install our software -- that crashed all your systems.

  • To cut costs, the airlines run a system with zero backup slack. Flights depend on other flights: if the crews don't get there, it's game over for that flight, and if other flights depend on the lost flight, those are delayed or canceled too. The problem with this is that you get cascading failures once you lose X percent of flights. I was flying when Southwest had their cascading failures two years ago. As I listened to the sob stories of th
    • I was on Delta and I thought, glad I'm not on a budget carrier that can't handle a system failure

      Bigger airlines that have been around for longer typically have older backend systems and have been cutting costs as much as they can for longer. Deferred maintenance will catch up with you eventually.

      • I'm not so worried about that; the FAA does a lot of enforcement there. I'm more worried about canceled flights.
  • by rastos1 ( 601318 ) on Monday August 05, 2024 @04:21PM (#64683228)

    Delta will have to explain to ... why CrowdStrike took responsibility for its actions -- swiftly, transparently, and constructively -- while Delta did not

    Could it be ... I mean, it may sound stupid, but hear me out ... could it be because CrowdStrike just had to copy a new, fixed version of some file to one download server (*) that was working, while Delta had to deal with thousands of computers that were unable to boot and were not reachable over the network?

    *) I'm willing to admit that it is not one download server but perhaps a dozen and it could be not one file but perhaps two or five. The point still stands.

  • CrowdStrike's attempt to paint itself as the victim while companies deal with the fallout of its inadequacies is misleading, irresponsible, and insulting. I spent 24 of 28 hours in a data center walking from rack to rack fixing servers. There is nothing I could have done to prevent this; Crowdstrike was pretty much required for our cyber insurance policy. We spend millions of dollars on their software, and it nearly ended up costing me my job.

    It is preposterous for CrowdStrike to imply that Delta is responsib

  • ... "the finger" is actually pointing straight up in the air. It's the back of the hand pointing at CrowdStrike.

The intelligence of any discussion diminishes with the square of the number of participants. -- Adam Walinsky