Oracle IT

Oracle Engineers Caused Days-Long Software Outage at US Hospitals (cnbc.com) 56

Oracle engineers mistakenly triggered a five-day software outage at a number of Community Health Systems hospitals, causing the facilities to temporarily return to paper-based patient records. From a report: CHS told CNBC that the outage involving Oracle Health, the company's electronic health record (EHR) system, affected "several" hospitals, leading them to activate "downtime procedures." Trade publication Becker's Hospital Review reported that 45 hospitals were hit.

The outage began on April 23, after engineers conducting maintenance work mistakenly deleted critical storage connected to a key database, a CHS spokesperson said in a statement. The outage was resolved on Monday, and was not related to a cyberattack or other security incident. CHS is based in Tennessee and includes 72 hospitals in 14 states, according to the medical system's website.

  • The Oracle Guarantee (Score:4, Informative)

    by locater16 ( 2326718 ) on Monday April 28, 2025 @11:09PM (#65338745)
    People will die, your business will fail, and you will pay way more than you thought you were contractually obligated to for the privilege.
    • by Anonymous Coward

      Not to worry, that H1B is going home for now; there is an endless number lined up.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Probably not an H1B; likely remote support in India. And probably terminated: the underlying process has a lot of safeties in it, and to LOSE the storage requires a violation of that process.

        SAN storage presentation/removal was typically the riskiest step, and there were a lot of safeties in the process, but it was pretty common for people to get in a hurry and cut a corner and/or ignore something that looked wrong (with an "It will be OK"). If the process was followed as written it was pret

    • by MrKaos ( 858439 ) on Tuesday April 29, 2025 @12:47AM (#65338829) Journal

      People will die

      Now we can say: "People are dying to see Oracle's software working properly."

    • by gweihir ( 88907 )

      And while things work somewhat, the tech will stand in your way and make everybody with a clue angry. I wonder how much extra they will charge their customers for this. (Well, they will not, but it is Oracle. If they could, they would...)

    • Having seen the process from the internal side a little bit, there's a reason Oracle spends so much on swag and business acquisition - their entire model is to entrap their customers and then start wedging their wallets open.

      Every hospital system that decided that Oracle was a better choice than Epic is going to find out why most hospital systems have gone with Epic.

  • Oracle (Score:5, Insightful)

    by phantomfive ( 622387 ) on Monday April 28, 2025 @11:10PM (#65338747) Journal
    It's actually amazing it worked at all, if it was Oracle. The only time to use Oracle is if you don't know any better.
    • by Anonymous Coward

      Might be hard to switch hospital systems after Oracle buys the one you are using.

      • Re: (Score:3, Interesting)

        by Anonymous Coward
        My wife works in IT in a hospital system in Texas. As soon as Oracle announced the acquisition of Cerner, they started plans to move off Cerner to another product...
    • The only time to use Oracle is if you don't know any better.

      The problem with this is that you can substitute Oracle for literally any other company of that scale with that kind of product. Oracle, SAP, IBM: debating who has the best enterprise-scale database solution is like debating which pig is the cleanest in the mud pit.

      • The scale isn't very large. That is, it's not "Webscale." There are 7000 hospitals in the US, and there are plenty of companies that deal with that kind of scale. I personally have dealt with that kind of scale.

        So in terms of database, Postgres works better. If you use AWS RDS, it's HIPAA compliant and you get backups with zero effort, which is better than Oracle, apparently.

        There are other problems, like a ton of edge cases. But we've seen that startups can grow to cover those kinds of things (ie, Peopleso
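        As a rough sketch of the "zero effort" claim (instance name made up, boto3 credentials assumed to be configured already) - this only checks that automated backups and encryption are actually on; it is not a substitute for restore testing:

        # Sketch: confirm automated backups exist on a hypothetical RDS Postgres instance.
        import boto3

        rds = boto3.client("rds")

        def backup_status(instance_id: str = "hospital-ehr-db") -> dict:
            db = rds.describe_db_instances(DBInstanceIdentifier=instance_id)["DBInstances"][0]
            return {
                "retention_days": db["BackupRetentionPeriod"],             # 0 means automated backups are OFF
                "latest_restorable_time": db.get("LatestRestorableTime"),  # point-in-time recovery window
                "storage_encrypted": db["StorageEncrypted"],               # needed for a HIPAA-eligible setup
            }

        if __name__ == "__main__":
            print(backup_status())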
    • Re:Oracle (Score:4, Informative)

      by cmseagle ( 1195671 ) on Tuesday April 29, 2025 @09:16AM (#65339519)

      It's actually amazing it worked at all, if it was Oracle. The only time to use Oracle is if you don't know any better.

      Pretty much nobody who is using this software bought Oracle. They bought Cerner, who was acquired by Oracle in 2021 at which point it was renamed Oracle Health [wikipedia.org].

  • by Entrope ( 68843 ) on Monday April 28, 2025 @11:26PM (#65338761) Homepage

    The outage was resolved on Monday, and was not related to a cyberattack or other security incident.

    The cybersecurity people who give me headaches would definitely count unintended downtime as a security incident -- a loss of the availability part of the security triad [nist.gov]. (They give me headaches because they act like they are convinced nobody else can pay attention to concerns like that.)

    The really cynical among us would consider the use of Oracle in the first place to be a kind of cyberattack....

    • by kaur ( 1948056 )

      A five-day outage is in the domain of business continuity.

      Deleting data - happens.
      Did they have backups?
      Did they have recovery plans and procedures?
      Had they tested them?
      What failed?

      • A five-day outage is in the domain of business continuity.

        And in this case they reverted to paper documentation to maintain some level of service.

        Unfortunately, in the context of healthcare even a successfully executed business continuity plan will probably result in some amount of patient harm. I don't intend that as a knock on the plan - when you lose critical IT systems in any industry you're going to see some degradation of performance/efficiency - but in healthcare the failure mode means people get hurt and/or die.

      • Deleting data - happens.
        Did they have backups?
        Did they have recovery plans and procedures?
        Had they tested them?

        Of course.

        What failed?

        It took them 5 days to figure out how to use RMAN.
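        For the curious, the restore itself is usually a handful of RMAN commands; the five days go into finding a usable backup and sequencing everything around it. A hedged sketch of what a wrapper might look like (the connect method and the existence of a valid backup are assumptions, not details from the article):

        # Sketch only: drive an Oracle RMAN restore from a script.
        # Assumes the rman CLI is on PATH, ORACLE_SID points at the (shut down) local instance,
        # and a valid backup exists -- none of which is known about the CHS outage.
        import subprocess

        RMAN_SCRIPT = """
        STARTUP MOUNT;
        RESTORE DATABASE;
        RECOVER DATABASE;
        ALTER DATABASE OPEN;
        """

        def restore_database() -> None:
            # "rman target /" uses OS authentication against the local instance.
            result = subprocess.run(
                ["rman", "target", "/"],
                input=RMAN_SCRIPT,
                capture_output=True,
                text=True,
            )
            print(result.stdout)
            if result.returncode != 0:
                raise RuntimeError(f"RMAN failed ({result.returncode}):\n{result.stderr}")

        if __name__ == "__main__":
            restore_database()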

        • Or using a backup solution that alters RMAN info

          Wanna bet it doesn't exist? It does; I worked on it, and engineering support couldn't be convinced it was an issue... "why would you restore to the same DB instance". I won't say where they were.

    • by gweihir ( 88907 )

      While availability is a security property, it is not only that; whether an outage counts as a security issue depends on the presence of an attacker causing the threat to availability. More generally, this falls under resilience, which covers both intentional malicious action and other sources.

      Now, Oracle is certainly a repeat offender in being "non intentionally" malicious. Do not use them for anything. It is not worth it...

    • Well, inasmuch as letting Oracle run things for you is a cyberattack, in any case.
    • You might want to go to the hospital and have them check on that headache.
    • Even alleged 'accidental' (I prefer to categorize this as incompetent) deletion of critical data should be considered a security issue. Your security systems should protect against internal threats, and even those from 'official' support or maintenance.

      This sort of incident demonstrates the value of some relatively obscure file system permissions that operating systems had back in the day, such as delete or rename inhibit (really the same thing, somewhat). Some of these Microsoft came to relatively late in th
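      The closest everyday Linux equivalent is probably the immutable attribute; a tiny sketch (paths invented, needs root, ext4-style filesystem assumed):

      # Sketch: make a datafile undeletable/unrenamable until the flag is cleared.
      # chattr +i blocks deletes and renames even for root; the path below is illustrative only.
      import subprocess

      def set_immutable(path: str, enable: bool = True) -> None:
          flag = "+i" if enable else "-i"
          subprocess.run(["chattr", flag, path], check=True)

      # e.g. protect a critical datafile before a maintenance window, release it afterwards:
      # set_immutable("/u02/oradata/PROD/system01.dbf")
      # ... maintenance ...
      # set_immutable("/u02/oradata/PROD/system01.dbf", False)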

  • by MrKaos ( 858439 ) on Tuesday April 29, 2025 @12:29AM (#65338821) Journal

    L A R R Y ! ! !

  • It happens (Score:5, Interesting)

    by jlowery ( 47102 ) on Tuesday April 29, 2025 @01:35AM (#65338851)

    A coworker of mine once absentmindedly blew away a critical database table at a large east coast hospital as I sat one desk over. It was then that the hospital found out their backups were broken.

    • Re:It happens (Score:5, Informative)

      by Anonymous Coward on Tuesday April 29, 2025 @02:17AM (#65338887)

      Backups should be reviewed and tested at least once a year. It only takes a few things like renaming files, changing folder locations, or storage filling up to break a backup plan.

      If you don't periodically test your backups, all you have is a wish.
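      A restore test doesn't have to be elaborate; here is a sketch of the minimum useful version, restoring the newest dump into a throwaway database and asking it one question (the tools, paths, and table name are assumptions for illustration, not anything from this story):

      # Sketch: exercise the actual restore path, not just the backup job.
      import glob
      import os
      import subprocess

      BACKUP_DIR = "/backups/nightly"   # hypothetical location
      SCRATCH_DB = "restore_test"       # throwaway database, recreated each run

      def latest_dump() -> str:
          dumps = sorted(glob.glob(os.path.join(BACKUP_DIR, "*.dump")))
          if not dumps:
              raise RuntimeError("no dump files found -- the backup job itself is broken")
          return dumps[-1]

      def restore_test() -> None:
          subprocess.run(["dropdb", "--if-exists", SCRATCH_DB], check=True)
          subprocess.run(["createdb", SCRATCH_DB], check=True)
          subprocess.run(["pg_restore", "--dbname", SCRATCH_DB, latest_dump()], check=True)
          # One sanity query distinguishes a usable restore from an empty shell;
          # "patients" is a made-up table name.
          out = subprocess.run(
              ["psql", "-At", SCRATCH_DB, "-c", "SELECT count(*) FROM patients;"],
              capture_output=True, text=True, check=True,
          )
          assert int(out.stdout.strip()) > 0, "restore produced zero patient rows"

      if __name__ == "__main__":
          restore_test()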

      • Yeah, coulda, shoulda, woulda... it's all nice in theory, but in practice it's a bit different. Staff shortage, staff knowledge (or rather lack thereof), cost, and experience are all factors in getting it up and running.
        • by gweihir ( 88907 )

          Staff shortage, skill, experience and cost: this one is on the bean-counters, and they should be personally responsible for the damage they did. A broken backup due to lack of testing, including regular restore tests, in critical infrastructure (which a hospital is) is on the level of criminal negligence.

          • Staff shortage, skill, experience and cost: this one is on the bean-counters, and they should be personally responsible for the damage they did. A broken backup due to lack of testing, including regular restore tests, in critical infrastructure (which a hospital is) is on the level of criminal negligence.

            Or damn-well should be!

        • Excuses for not following generally accepted guidance are for people who are fuckups and don't follow generally accepted guidance.

          If you bother to back your shit up, but make excuses for why you haven't tested that backup, you're incompetent. Period.

          • Yep, but as you know, the world of IT is full of incompetent people, and you find a lot of them at those large IT outsourcing companies that get hired by those other large institutions.
      • by Dan667 ( 564390 )
        your only good backup is the one you last tested ... successfully. I worked for a company where a critical database was deleted. We had backups, but management thought testing them was a waste of time, so we never did. Well, the incremental backup was bad. Daily backups were bad. The monthly backup was bad. And an off-site backup from several months ago was finally found to be good. We tested backups once a quarter after that. It took all of an hour to do it.
    • by gweihir ( 88907 )

      Amateurs at work. This is on a level that should not only get those responsible fired, but charged for criminal negligence. Every major IT security catalog requires regular restore tests on top of integrity tests of the backups. There is no excuse for a failure this gross today.

    • It amazes me that today people still haven't learned that an untested backup is no backup at all.

    • A coworker of mine once absentmindedly blew away a critical database table at a large east coast hospital as I sat one desk over. It was then that the hospital found out their backups were broken.

      That's ALWAYS how you find out. . . If you're an idiot!

      I have said for decades: It's not the Backup; it's The Restore!

  • It's so much safer.

    Computers are dangerous...

    Computers are very sharp tools, and like any sharp tool, when used badly they will do a lot of damage very fast. However, the benefits are enormous; there just needs to be a recognition of the danger, and therefore enough training and time for IT staff to know what they are doing. Unfortunately this costs MONEY... CEOs and boards need to recognise the need to spend a lot of money to be safe; this will be hard for them!

    • It's so much safer.

      That's like saying you only eat raw, unwashed food because you don't trust that newfangled "fire" or "water". In case you're not a moronic troll, I'll explain it.

      Paper records can't be transferred quickly or easily. They require dedicated staff to locate, curate, and move the records. If the medical records staff are busy - or it's not a 24x7 site - your medical records are completely offline. In the Bad Old Days, getting one's records sent to a new provider could take weeks.

      The scale for even a small medic

      • I didn't add a 'sarcasm end' statement. Of course computers are better in lots of ways - but my point was that enough money has got to be spent to ensure that when things do go wrong there are reasonable alternatives. And of course to ensure that things don't go wrong... THAT'S the point I was trying - obviously not clearly enough - to make!!

        • No worries! My biases are showing as well; I've encountered too much debate that comes from the "If someone doesn't like something, no one should have it" attitude.

          Your point is true for all IT, not just healthcare IT. I've seen large companies effectively cease operations for days while major systems were down. Most times it was planned (major upgrades or transitions), but a few were unplanned and even more painful.

        • but my point was that enough money has got to be spent to ensure that when things do go wrong there are reasonable alternatives

          To be fair (to Oracle *gasp*), we don't know that there weren't reasonable alternatives. In this case, "downtime procedures" likely meant reverting to offline copies of records that could be printed out, written on, and transcribed/scanned back into the system when it was available.

          Without knowing more, the fact that it took 5 days to fix this "oopsie" does seem inexcusable, though.

    • by Anonymous Coward

      Yes, because the paper record keeping of the 1950s at places like DOE Hanford has worked out so well for us, having hundreds of underground waste tanks where nobody has any idea what is in them due to decades of lost paper records. Now we have to spend billions and billions of dollars to treat each and every one of them as if they contain the worst possible thing in them, because it just might.

  • by gweihir ( 88907 ) on Tuesday April 29, 2025 @04:54AM (#65339027)

    How much will they charge their customers for this extra service?

  • What else do you expect when you use a database named after a lady who gets blitzed out of her gourd on swamp gas and then says whatever comes to mind?

    • by Anonymous Coward
      Dennis the Peasant: Listen. Strange women lying in ponds distributing swords is no basis for a system of government. Supreme executive power derives from a mandate from the masses, not from some farcical aquatic ceremony.
      Arthur: Be quiet!
      Dennis the Peasant: You can't expect to wield supreme power just 'cause some watery tart threw a sword at you!
  • For something as mission critical as the IT systems of an effing hospital you must have tried and tested disaster recovery systems and setups in place. Anything else is flat-out irresponsible and a legal liability AFAICT/IMHO.

    If the fecal matter hits the rotary air impeller in such a facility you should be back up and running after 3 hours at worst.
    This is basic professional IT 101. You'd expect someone dabbling with Oracle databases to be aware of this.

    I guess they learned their lesson.

  • by Somervillain ( 4719341 ) on Tuesday April 29, 2025 @11:10AM (#65339865)
    For those outside the industry, Oracle is largely regarded as one of the worst Silicon Valley businesses to work for. As of about 3 or 4 years ago (before the mass layoff wave started in the industry), the top tier was FAANG (Netflix, Google, Facebook, Apple... cool or advertising companies), then the business-oriented ones like MS/SalesForce/etc, then internal positions (working for a consumer of technology, like a top retailer or bank), and at the bottom it was always IBM and Oracle.

    Their Java team, while tiny, is mighty, and I assume other individual groups have some talent, but the company as a whole has a very bad reputation. People that work there hate it. When someone gets hired there, the response is rarely "congratulations", but more "I'm glad you finally found a job....especially given your criminal record and history of criminal activity at your place of work."

    Oracle and IBM are largely considered employers of last resort. I am sure individual teams may be excellent, but no product has ever risen in status after being acquired by Oracle. Java is actually innovating quite well, but....Oracle shit the bed with the licensing fiasco...for reasons no one can explain. If they had just not been idiots, Java would be the undisputed champion of platforms and probably well-loved by everyone but hipsters for business programming. Oracle's RDBMS actually works well, but is HORRIBLE to install and has terrible tools. Every Oracle Dev tool is probably the worst in the industry. No one "likes" Oracle. No one thinks they're run well.

    Larry Ellison has always had a reputation of being both an idiot and an asshole...and that was LONG before he went all in on Trump.

    The surprise isn't "holy shit that was a long outage on a mission critical system!" The surprise is that it doesn't happen more often given what everyone in the industry knows about Oracle.
    • by King_TJ ( 85913 )

      My last real experience interacting with Oracle was probably back in the early 2000s. I worked for a place that ran an Oracle DB on a DEC Alpha machine as their primary/mission-critical database server on-site.

      At some point, they purchased the JD Edwards OneWorld ERP package and started deploying it, and it was using Oracle as the back-end. I remember they had numerous bugs reported by the users that OneWorld support was clueless at fixing. Most of it was stuff my boss eventually just solved himself and it

      • by sodul ( 833177 )

        I did not have to deal with Oracle DBs too many times. One of the reasons some (mostly Java) developers want to use Oracle is that it is capable of handling more complex queries, which reduces the amount of code required on the service side. This offloads computing complexity to the database, and the IT/DBA team is then footing the bill in server resources as well as human resources to keep the DB working. In a way it makes sense for them; they consider themselves and their code very efficient, their java app i
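        A toy contrast of the trade-off being described (schema invented, sqlite used as a stand-in engine so the sketch runs anywhere): the same report written as one query the database chews on versus row-by-row work in the service:

        # Toy illustration: push aggregation to the database vs. doing it in the app.
        import sqlite3

        IN_DATABASE = """
            SELECT ward, COUNT(*) AS admissions, AVG(length_of_stay) AS avg_stay
            FROM admissions
            WHERE admitted_at >= date('now', '-30 days')
            GROUP BY ward
            ORDER BY admissions DESC;
        """

        def report_in_db(conn: sqlite3.Connection):
            # The database does the heavy lifting; the service receives one small result set.
            return conn.execute(IN_DATABASE).fetchall()

        def report_in_app(conn: sqlite3.Connection):
            # Same report, but every row crosses the wire and the service aggregates by hand.
            rows = conn.execute(
                "SELECT ward, length_of_stay FROM admissions "
                "WHERE admitted_at >= date('now', '-30 days')"
            ).fetchall()
            totals: dict[str, list[float]] = {}
            for ward, stay in rows:
                totals.setdefault(ward, []).append(stay)
            return sorted(
                ((w, len(s), sum(s) / len(s)) for w, s in totals.items()),
                key=lambda t: t[1], reverse=True,
            )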
