Data Storage

858TB of Government Data May Be Lost For Good After South Korea Data Center Fire (datacenterdynamics.com) 82

South Korea's government may have permanently lost 858TB of information after a fire at a data center in Daejeon. From a report: As reported by DCD, a battery fire at the National Information Resources Service (NIRS) data center, located in the city of Daejeon, on September 26, has caused havoc for government services in Korea. Work to restore the data center is ongoing, but officials fear data stored on the government's G-Drive may be gone for good. G-Drive, which stands for Government Drive and is not a Google product, was used by government staff to keep documents and other files. Each worker was allocated 30GB of space.

According to a report from The Chosun, the drive was one of 96 systems completely destroyed in the fire, and there is no backup. "The G-Drive couldn't have a backup system due to its large capacity," an unnamed official told The Chosun. "The remaining 95 systems have backup data in online or offline forms." While some departments do not rely on G-Drive, those that do have been badly impacted in the aftermath of the fire. A source from the Ministry of Personnel Management said: "Employees stored all work materials on the G-Drive and used them as needed, but operations are now practically at a standstill."

This discussion has been archived. No new comments can be posted.

  • by oldgraybeard ( 2939809 ) on Wednesday October 08, 2025 @10:31AM (#65712036)
    Since they decided the users' files were not worth having a backup of.
  • by Mononymous ( 6156676 ) on Wednesday October 08, 2025 @10:33AM (#65712044)

    The Korean government couldn't afford a petabyte of storage to back up documents without which "operations are at a standstill".

    • by larryjoe ( 135075 ) on Wednesday October 08, 2025 @11:11AM (#65712142)

      The Korean government couldn't afford a petabyte of storage to back up documents without which "operations are at a standstill".

      Yes, this. We're not talking about a huge amount of money or physical space ... for a national government. Furthermore, why do 28 thousand government workers each need 30GB of disk space? Usually information that is either critical or even just functional is stored in a database or a repository to allow access to team members or even just to survive the end of employment for that worker. This seems like a badly designed system.

      • by Calydor ( 739835 )

        It COULD be a case where some departments (marketing, for example) needed a lot of space, and it was deemed simpler to just allocate the same space to everyone whether they'd use it or not.

        Also possible there was a habit or common workflow of copying active case files etc. to your G-Drive, which means ideally those case files still exist in at least an older version elsewhere.

      • by tlhIngan ( 30335 )

        Yes, this. We're not talking about a huge amount of money or physical space ... for a national government. Furthermore, why do 28 thousand government workers each need 30GB of disk space? Usually information that is either critical or even just functional is stored in a database or a repository to allow access to team members or even just to survive the end of employment for that worker. This seems like a badly designed system.

        30GB of "cloud storage" (which is what G-Drive is) is likely the random scratch s

        • 30 GB of scratch is a lot for normal bureaucrats. I assume here that South Korea does not have 28,000 workers doing numerical simulations and heavy-duty film editing. This was asking for trouble, since nobody would bother cleaning up that space: archiving old stuff to backed-up space, deleting junk, etc. I would think that the Korean government lost about 100 TB worth of actually useful data and the rest was obsolete junk anyway.
      • Furthermore, why do 28 thousand government workers each need 30GB of disk space?

        They need them to store the 100-page AI-generated weekly reports they need to submit to justify not being replaced by AI.

    • I think something must have been lost in translation.

      More likely that this was "scratch" space, shouldn't have been used for critical stuff, and was never intended to be backed up.

      But "the Ministry of Personnel Management" objected to the fees that they were charged to store data on on of the other 95 systems that were also destroyed in the fire and were backed up and told employees to use the G-Drive which, quite possibly, every government employee got automatically so it was "free" to the department.

    • by UnknowingFool ( 672806 ) on Wednesday October 08, 2025 @11:56AM (#65712258)
      Nothing said it was a matter of cost. All that was said was that it was "too large." The issue may have been that the size of the data was too much for the government’s current backup infrastructure to handle. They probably needed to design a new system but never got around to doing so. Technically, a petabyte of storage could be built into a single 4U chassis.
  • Banks next, space monkeys of Fight Club.

  • by ole_timer ( 4293573 ) on Wednesday October 08, 2025 @10:40AM (#65712062)
    is not a backup
  • No backup. (Score:4, Informative)

    by 0xG ( 712423 ) on Wednesday October 08, 2025 @10:40AM (#65712064)

    They could however have mirrored the data in another location.
    Talk about putting all your eggs in one basket!

    Heads will roll....

    • Does not matter which country it is. Government will always protect the incompetent and dodge any responsibility.
        • In BBC TV's 'Yes Minister', the civil servant refers to the year in which flooding destroyed a lot of files as a good year, as it allowed them to get rid of embarrassing material that reflected badly on civil servants.

    • They could however have mirrored the data in another location. Talk about putting all your eggs in one basket!

      Heads will roll....

      My favorite is the snowjob. "The G-Drive couldn't have a backup system due to its large capacity."

      Couldn't. A quick Amazon lookup shows me a 20TB IronWolf Pro is $455 CDN. 50 of those gets you 1PB, at under $25k CDN.

      Now, sure, you need some infrastructure to connect 50 drives. And you need some infrastructure and bandwidth to handle syncing between the two sites. But... that's a cloud provider's task. Bottom line is that for the price of less than a single Hyundai EV, you could own the hardware.
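
      A minimal sketch of that arithmetic in Python, taking the CDN$455-per-20TB figure above at face value (drive model and price are the commenter's, and a chassis, controllers, power, and the second site would all cost extra):

          DRIVE_TB = 20            # raw capacity per drive, as quoted above
          DRIVE_PRICE_CAD = 455    # CDN$ per drive, as quoted above
          TARGET_TB = 1000         # roughly 1 PB of raw capacity per site

          drives_per_site = -(-TARGET_TB // DRIVE_TB)          # ceiling division -> 50
          disks_one_site = drives_per_site * DRIVE_PRICE_CAD   # ~CDN$22,750
          print(f"{drives_per_site} drives per site, ~CDN${disks_one_site:,} in raw disks")
          print(f"mirrored across two sites: ~CDN${2 * disks_one_site:,} in raw disks")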

      • Agreed, the claim that it couldn't be backed up because the size was too large is totally bogus. We're talking about a single rack of equipment for a cheap option capable enough to do weekly snapshots. Less than a shipping container for a higher end system.
        • Re: No backup. (Score:4, Insightful)

          by UnknowingFool ( 672806 ) on Wednesday October 08, 2025 @12:04PM (#65712278)
          The government official never said it could not be done due to cost. Everyone here jumped to that misguided conclusion right away. He said it could not be backed up due to size. My interpretation is their current backup solution could not handle the size and they would have to design a new one. My company can easily afford to build a new petabyte server. However, installing one is not as easy as me ordering a massive amount of HDDs and doing it over a weekend. There are procedures to follow when it comes to that kind of infrastructure change. Being a government agency, there were probably additional constraints on solving that problem.
            The government official never said it could not be done due to cost. Everyone here jumped to that misguided conclusion right away. He said it could not be backed up due to size. My interpretation is their current backup solution could not handle the size and they would have to design a new one. My company can easily afford to build a new petabyte server. However, installing one is not as easy as me ordering a massive amount of HDDs and doing it over a weekend. There are procedures to follow when it comes to that kind of infrastructure change. Being a government agency, there were probably additional constraints on solving that problem.

            I agree the official didn't say it couldn't have a backup due to cost. And I demonstrated that size is not a prohibitive factor. That quantity of data can be backed up, and can be backed up easily. Could have had a backup. Not could not have had a backup.

            For the official to claim could not have when it could have is misleading. The why of it not having a backup doesn't get asked when the baseline is "couldn't have".

            Once we arrive back at could have had a backup system, it's just about the reasons/excuses.

            • For the official to claim could not have when it could have is misleading. The why of it not having a backup doesn't get asked when the baseline is "couldn't have".

              In almost every case when a problem arises, every outsider's answer to the problem is "it could have been avoided." You and others do not know exactly why the problem was not avoided but do not probe into details. The problem was not whether a petabyte server can technically be built easily these days. The problem was that the existing government infrastructure did not have a petabyte server in place. From my time working with government organizations, it takes lifetimes to change things. Someone could have realized they needed a

              • For the official to claim could not have when it could have is misleading. The why of it not having a backup doesn't get asked when the baseline is "couldn't have".

                In almost every case when a problem arises, every outsider's answer to the problem is "it could have been avoided." You and others do not know exactly why the problem was not avoided but do not probe into details.

                Thanks for replying, but in two words, "don't care". Not about your comment, but about the direction it goes. I object to misleading verbiage. What the official said was demonstrably, provably, clearly false. That is bad, it is a problem, and it shouldn't be allowed to go unchallenged.

                The why of there not being a backup is beside the point. The speed at which government moves is beside the point. That you got crap Internet is beside the point. They may all be interesting points, but they are not relevant.

                • Thanks for replying, but in two words, "don't care".

                  So you asked for an explanation and then basically don't care when one was provided. In other words, you were never really interested in the reason at all. You just wanted to complain to complain.

                  Not about your comment, but about the direction it goes. I object to misleading verbiage. What the official said was demonstrably, provably, clearly false.

                  No, it was not false. The actual verbiage: “The G-Drive couldn’t have a backup system due to its large capacity . . ." You don't work in South Korea, and you don't know of a system that the South Korean government had up and running at the time that could handle 858TB of data. No, you do not. You are still stu

          • The government official never said it could not be done due to cost. Everyone here jumped to that misguided conclusion right away. He said it could not be backed up due to size. My interpretation is their current backup solution could not handle the size and they would have to design a new one.

            Data of that volume typically don't appear overnight. And it seems unlikely that they just now realized "OMG! That data is critical!"

            My point is that the need for backups probably didn't just spring up unexpectedly or grow too quickly for them to keep up. It seems likely that they had years to initiate and implement gradually without breaking any budgets, but chose not to do so.

            • but chose not to do so.

              That depends on your definition of "chose". If you have ever worked with government organizations, it takes a lot of work to make changes. There might have been a solution that was planned but it took too long to get through all the stages of planning, approvals, budgeting, bidding, etc.

      • by necro81 ( 917438 )

        My favorite is the snowjob. "The G-Drive couldn't have a backup system due to its large capacity."

        Agreed. Jeff Geerling made a 1.2 PB NAS [youtube.com] using a rack of drives and a Raspberry Pi. (Wholly inadequate for this South Korea job, but it does show that 1-PB storage isn't that hard or expensive.)

        Also: Tony Stark was able to build this in a cave! With a box of scraps!

    • You're assuming the data is important. It obviously wasn't meant to be treated as important, but users were using it as such.

      "was used by government staff to keep documents and other files. Each worker was allocated 30GB of space"

      Similar things would probably happen if workers were fired and their G-Drive storage was purged as they exited. This is really an example of poor file system hygiene and group ownership.

    • They could however have mirrored the data in another location. Talk about putting all your eggs in one basket!

      Heads will roll....

      Heads will roll? Guess that depends on how many skeletons just burned up in that fire. Data centers are the new closet.

      858TB? Might make someone wonder how big the Epstein video surveillance archive is. Or, was.

      • by EvilSS ( 557649 )

        They could however have mirrored the data in another location. Talk about putting all your eggs in one basket!

        Heads will roll....

        Heads will roll? Guess that depends on how many skeletons just burned up in that fire. Data centers are the new closet.

        858TB? Might make someone wonder how big the Epstein video surveillance archive is. Or, was.

        4 people have already been arrested for professional negligence: https://www.datacenterdynamics... [datacenterdynamics.com]

  • by Lavandera ( 7308312 ) on Wednesday October 08, 2025 @10:47AM (#65712078)

    I was working 10+ years ago for one of the EU governments, and the rule was that there should be a backup located at least 30km away from the primary location.

    • That's it? Our corporate policy at my last job was to keep it on separate American coasts, 3000 miles apart, and separated by the Rockies and the Mississippi.

      30km would have us worrying every time there was a natural disaster. Wildfires can easily cover 30km, earthquakes travel hundreds of miles, and failures in regional electricity production can cause widespread blackouts, putting your data centers out of commission.

      • Or so it is believed, based on centuries of experience. The USA however is a nasty, dangerous place. ;)

        • Make a bet? The floods last year in Europe come to mind.
          In the first six months of 2025, 208,000 hectares of forest have already been destroyed by wildfires, and it will get worse over time.

      • That's it? Our corporate policy at my last job was to keep it on separate American coasts, 3000 miles apart, and separated by the Rockies and the Mississippi.

        Maybe Lavandera was working for Malta, where 30 km is all they can do using their two islands.

    • I was working for a cloud SaaS provider in the early 2000s. We had 2 data centers in our city. I asked, "Why don't we have one of our data centers out of town in the event of a major disaster in town?" Absolute crickets. This wasn't too long after 9/11, when discussions about the likelihood of a bio attack or a dirty bomb attack were still being had, in a city that had a lot of defense-related stuff.
  • by Anonymous Coward

    "The G-Drive couldn't have a backup system due to its large capacity," an unnamed official told The Chosun.

    I bet the backup costs sound cheap now, compared to having operations stalled for days or weeks.
    If our servers burned in a fire without a backup, the company would have to dissolve immediately. It's just unacceptable. So we have snapshots, multi-region cloud backups, and 3x rotating offline backups.

  • Lizzo's ass was unavailable for comment.

  • by linear a ( 584575 ) on Wednesday October 08, 2025 @11:02AM (#65712106)
    List price is $1013/month to back up one petabyte on Amazon S3 Glacier Deep Archive. Of course, cloud costs are always higher, but that gives you an order-of-magnitude cost estimate for a back-this-up-we-will-never-need-to-restore-it archival copy.
    • I was using Glacier to store (C) 70TB for $900+/month. Heaven forbid should you want to do a read; they really get you for that. I switched to Wasabi and cut the price in half.
      • by BeerCat ( 685972 )

        I was using Glacier to store (C) 70TB for $900+/month. Heaven forbid should you want to do a read; they really get you for that.

        I have come across a couple of different cases of outsourced IT which will "provide a backup service," only for the customer to then discover that restoring from the backup costs extra, simply because the definition of "backup service" didn't include "backup and restore on demand."

      • I was using Glacier to store (C) 70TB for $900+/month. Heaven forbid should you want to do a read; they really get you for that.

        As a cold-storage backup presumably you need to do retrievals rarely if ever (until your data center burns down). At the ~$0.07/GB I see on the AWS website for transferring data out over the internet, it would have "only" cost them ~$70k to restore. That has to be cheap compared to the cost of whatever was lost.
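
        Plugging the per-GB rates quoted in this thread into a quick Python sketch (treat both rates as rough list prices that change over time, and note that archive-tier retrieval request fees would come on top of the egress shown here):

            ARCHIVE_USD_PER_GB_MONTH = 0.00099   # deep-archive storage rate cited above
            EGRESS_USD_PER_GB = 0.07             # internet transfer-out rate cited above
            DATA_GB = 858_000                    # 858 TB, the reported G-Drive size

            monthly_storage = DATA_GB * ARCHIVE_USD_PER_GB_MONTH   # ~$850/month
            full_restore_egress = DATA_GB * EGRESS_USD_PER_GB      # ~$60,000, one-off
            print(f"archive storage: ~${monthly_storage:,.0f}/month")
            print(f"egress for a full restore: ~${full_restore_egress:,.0f}")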

  • Too big to fail? (Score:4, Insightful)

    by registrations_suck ( 1075251 ) on Wednesday October 08, 2025 @11:03AM (#65712110)

    "The G-Drive couldn't have a backup system due to its large capacity,"

    What kind of horse shit is that?

    • A quick Google search for the largest consumer hard drives suggests they'd need a bit under 30 drives to back this up. Why, building a suitable backup system might have taken a couple of nerds up to a week to complete.
  • by techvet ( 918701 ) on Wednesday October 08, 2025 @11:05AM (#65712118)
    The thing that was supposed to provide uptime (the battery) was the cause of them having massive downtime. On the positive side, there will be no complaints about how long it's taking to restore the data. Also, it's now invincible against ransomware!
  • Minister, we have good news and bad news about our database.

    The good news is we can finally get this Oracle shit out of our system!

    The bad news is we first have to destroy it all and start again.
  • by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Wednesday October 08, 2025 @11:14AM (#65712160) Homepage

    858TB in terms of 20TB drives is only 43 drives. One can put 90 drives into a single 4U server. It would weigh 200 lbs, but being a single 4U unit, it is somewhat portable and can be stored off-site.

    We are past the days of when 1PB is "too much".
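
    As a rough sanity check on those figures, here is a small Python sketch (raw-capacity math only; the 10-data-plus-2-parity group size below is an assumed example of redundancy overhead, not anything from the article):

        import math

        G_DRIVE_TB = 858   # reported size of the lost G-Drive data
        DRIVE_TB = 20      # per-drive capacity used in the comment above
        BAYS_IN_4U = 90    # drive bays in the 4U chassis mentioned above

        raw_drives = math.ceil(G_DRIVE_TB / DRIVE_TB)      # -> 43
        # assumed RAID6-style groups of 12 drives (10 data + 2 parity)
        with_parity = math.ceil(raw_drives / 10) * 12      # -> 60
        print(f"raw: {raw_drives} drives; with parity: {with_parity} drives")
        print(f"fits one {BAYS_IN_4U}-bay 4U chassis: {with_parity <= BAYS_IN_4U}")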

    • by primebase ( 9535 )

      We're replicating about 1.6PB between two sites using "file storage appliances from a Round Rock, TX based company", and the remote side is 12U for a little less than 3PB of capacity. Granted, that product is expensive; however, it's pretty bulletproof once it's rolled out, and even at our size we're considered "tiny" compared to some installations that use it. Not sure why a major government couldn't get their [ stuff ] together and use LITERALLY ANYTHING to create some kind of off-site backup for that compa

  • I bet it is all preserved in songs about their glorious leader.
  • If each of those government workers backed up their own work to a 32 GB USB drive, all would be well. I bet they could get a great bulk deal from Samsung or Hynix on such tiny obsolete models. (Yeah, I know there are probably security issues with this plan, whereas there are no security issues with the current plan: the data remains very secure.)
  • by rufey ( 683902 ) on Wednesday October 08, 2025 @12:09PM (#65712296)

    The G-Drive couldn't have a backup system due to its large capacity

    Seriously? A dataset that is close to a petabyte is too big to back up?

    I've been doing business continuity planning for a couple of decades along with many other hats. Nothing is ever too large to backup.

    However, the other side of the coin is to make sure backups finish and can be recovered. I've been in situations where I wasn't involved in the company's backups, but saw firsthand what happens when backups take so long to run that they fail, day in and day out, while everyone is assured that the backups are good. Some old data got purged, and sure enough, within a month someone needed some of that purged-but-backed-up data, only to discover that the backups had been failing due to a number of factors, including that they would never finish, and no one had bothered to look into why or to verify backup integrity.

    I keep redundant, and sometimes doubly redundant, backups of most everything. I try to have backups in different geographical locations.

  • It is not like South Korea is a major manufacturer of storage devices, right?
    Samsung could probably find 1PB of flash memory lost between their office couch cushions...
    And the justification that 1PB of data was too big to back up is completely absurd. These days that amount of data is not even that much; there are people with more than that in their home servers, let alone a datacenter. For crying out loud, there are systems with more RAM than this in a single pod.

  • by LondoMollari ( 172563 ) on Wednesday October 08, 2025 @12:19PM (#65712324) Homepage

    "The G-Drive couldn't have a backup system due to its large capacity"

    The IBM TS4500 stores up to 2.63 exabytes per library compressed, or 877.5 petabytes native. That's enough to store 3,066 copies of the G-Drive with compression, without deduplication.

  • But, at least they didn't let the bad Americans hold their data. Couldn't have THAT happen, could you? More important to assert local control than it is to not lose the data...

    Given their behavior and reaction, this doesn't seem to be state secrets or classified information. This is just basic private shit. Use a professional cloud provider. Don't roll your own.

      But, at least they didn't let the bad Americans hold their data. Couldn't have THAT happen, could you? More important to assert local control than it is to not lose the data...

      It's irrelevant anyway if you upload encrypted backups to cloud storage. That's one of the few cases where putting your data on someone else's server actually makes sense.

  • [last lines]
    James Hacker: How am I going to explain the missing documents to "The Mail"?
    Sir Humphrey Appleby: Well, this is what we normally do in circumstances like these.
    James Hacker: [reads memo] This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967...
    James Hacker: Was 1967 a particularly bad winter?
    Sir Humphrey Appleby: No, a marvellous winter. We lost no end of embarrassing files.

    https://www.imdb.com/title/tt0... [imdb.com]

  • by nospam007 ( 722110 ) * on Wednesday October 08, 2025 @02:51PM (#65712774)

    "The G-Drive couldn't have a backup system due to its large capacity,"

    **1. Tape libraries (LTO-9/10):**
    Still the king for bulk, cold, or archival storage. An LTO-9 tape holds 18 TB native (45 TB compressed). A mid-range autoloader with 60–80 slots covers your 900 TB easily, costs under €100 k, and draws almost no power. Ideal for long-term off-site backups if latency isn’t critical.

    **2. Object storage clusters:**
    Systems like MinIO, Ceph, or AWS S3 Glacier equivalents can handle petabytes with redundancy (e.g., 3× replication = 2.7 PB raw). Hardware could be 12–24 bay servers with 22 TB disks on 100 Gb Ethernet. You can build on-prem with commodity nodes or rent from cloud providers (AWS, Wasabi, Backblaze B2).

    **3. Enterprise backup appliances:**
    Dell EMC Data Domain, HPE StoreOnce, Quantum DXi, or Veeam-driven scale-out repositories use deduplication to shrink footprint; 900 TB effective might need only ~300 TB physical if data repeats heavily.

    A practical hybrid is active data mirrored to disk/object storage and cold copies vaulted to tape. The critical constraints are bandwidth (a sustained 10 Gbps link moves only about 4.5 TB per hour) and verification time; at that scale, backup *integrity* becomes the bottleneck, not raw capacity.
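
    A small Python sketch of that bandwidth constraint, assuming the link is the only bottleneck and runs at full efficiency (real backup jobs rarely do):

        def full_backup_hours(data_tb: float, link_gbps: float) -> float:
            """Hours to push data_tb terabytes over a sustained link_gbps link."""
            bits = data_tb * 1e12 * 8               # decimal TB -> bits
            return bits / (link_gbps * 1e9) / 3600

        for gbps in (1, 10, 25, 100):
            print(f"{gbps:>3} Gbps: {full_backup_hours(858, gbps):6.0f} hours")
        # 10 Gbps -> ~191 hours (about 8 days) for one full pass over 858 TB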

    • You can even Veeam directly into S3. I saw a site using some kind of tape emulator application that made an S3 instance look like a tape drive to the OS. Perfect if you just refuse to get rid of Veritas.
  • "Too big to backup" is the little dirty secret of cloud computing.
  • "The G-Drive couldn't have a backup system due to its large capacity"

    Best excuse of the year.
    That won't hold up in court, for sure.

"You know, we've won awards for this crap." -- David Letterman

Working...