Data Storage | Software | Technology

MIT's New File System Won't Lose Data During Crashes 168

jan_jes sends news that MIT researchers will soon present a file system they say is mathematically guaranteed not to lose data during a crash. While building it, they wrote and rewrote the file system over and over, finding that the majority of their development time was spent defining the system components and the relationships between them. "With all these logics and proofs, there are so many ways to write them down, and each one of them has subtle implications down the line that we didn’t really understand." The file system is slow compared to other modern examples, but the researchers say their formal verification can also work with faster designs. Associate professor Nickolai Zeldovich said, "Making sure that the file system can recover from a crash at any point is tricky because there are so many different places that you could crash. You literally have to consider every instruction or every disk operation and think, ‘Well, what if I crash now? What now? What now?’ And so empirically, people have found lots of bugs in file systems that have to do with crash recovery, and they keep finding them, even in very well tested file systems, because it’s just so hard to do.”
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • But is it useful? (Score:5, Interesting)

    by Junta ( 36770 ) on Monday August 24, 2015 @07:58AM (#50379191)

    slow compared to other modern examples, but the researchers say their formal verification can also work with faster designs

    If we can accept 'slow', it's not that difficult to build an always consistent filesystem. While they could formally verify a faster design should one exist, the open question is whether a design can be formally proven resilient to data loss while still taking the shortcuts required for acceptable performance. I suspect the answer is that some of those essential 'shortcuts' will fail that verification.

    • If we can accept 'slow', it's not that difficult to build an always consistent filesystem.

      citation required

      • by Anonymous Coward on Monday August 24, 2015 @08:25AM (#50379391)

        A sufficiently slow writing filesystem is indistinguishable from a read-only filesystem. Read-only filesystems are consistent. Therefore, a slow enough writing filesystem is consistent.

        For more serious analysis of crash-resistant write sequences, look elsewhere in this discussion.

        • what does this phrase even mean? why does speed or slowness matter? you can play the video of a disk disaster at any speed you want

          • what does this phrase even mean? why does speed or slowness matter?

            Speed means the ability to finish reading and writing all data associated with a job before the job's soft real-time deadline has expired.

            • before the job's soft real-time deadline has expired.

              what does this have to do with anything?

              • by tepples ( 727027 )

                If reading or writing files in a particular file system is slow enough that it makes applications painful to use, the file system won't pass into widespread use.

      • If we can accept 'slow', it's not that difficult to build an always consistent filesystem.

        citation required

        Sheesh, do we have to do the Googling for you [wikipedia.org]?

    • Write a block to the storage device.
      Apply all necessary flush and synchronization commands, and a few unnecessary ones.
      Power off the storage device.
      Power the device back on.
      Read back the block to ensure it was actually written.
      Repeat as necessary until block is confirmed as having been written.
      Continue with block number two...
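      A minimal C sketch of that loop, assuming an ordinary POSIX file standing in for the raw device; the power cycle in the middle is an out-of-band step that no code here can perform, so it is only marked with a comment, and the retry limit is made up:

      #include <fcntl.h>
      #include <string.h>
      #include <sys/types.h>
      #include <unistd.h>

      #define BLOCK_SZ 4096

      /* Write one block, flush, (power-cycle the device by hand), then read
       * it back and compare; repeat until the block is confirmed on media. */
      static int write_and_verify(const char *path, const void *block, off_t off)
      {
          for (int attempt = 0; attempt < 5; attempt++) {
              int fd = open(path, O_WRONLY);
              if (fd < 0)
                  return -1;
              if (pwrite(fd, block, BLOCK_SZ, off) != BLOCK_SZ || fsync(fd) != 0) {
                  close(fd);
                  continue;              /* write or flush failed: try again */
              }
              close(fd);

              /* ... power the storage device off and back on here ... */

              char readback[BLOCK_SZ];
              fd = open(path, O_RDONLY);
              if (fd < 0)
                  return -1;
              ssize_t n = pread(fd, readback, BLOCK_SZ, off);
              close(fd);
              if (n == BLOCK_SZ && memcmp(readback, block, BLOCK_SZ) == 0)
                  return 0;              /* confirmed as actually written */
          }
          return -1;                     /* still not confirmed; give up */
      }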

  • ... until they find a logical flaw in their proofs, or a bug in the mechanical verifier(s) that helped them prove the driver correct.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      Or – more likely – until they find a flaw in their assumptions, like a lower-level software stack that swears "yes, this data has been committed to storage" slightly before that's actually true.

    • by gweihir ( 88907 )

      That is pretty unlikely, but the whole thing is a worthless stunt anyways. The problem is that they have to use some hardware model, and that will have errors. Hence the assurances they claim are purely theoretical, and in practice their thing may well be less reliable than a well-tested file system with a data journal, like ext3.

  • by Anonymous Coward on Monday August 24, 2015 @08:00AM (#50379207)

    MIT's new "crash-proof file system" crashed today amid accusations of bugs in the formal proof verification software used to formally verify it.

    MIT are now working on a formal verification of the formal verification system, in order to avoid similar formal-verification-related problems in the future.

    • MIT's new "crash-proof file system" crashed today amid accusations of bugs in the formal proof verification software used to formally verify it.

      So the whole thing was a bit of a Coq-up?

  • Linux File Systems (Score:5, Interesting)

    by Anonymous Coward on Monday August 24, 2015 @08:04AM (#50379235)

    I find some of the current file systems to be adequately reliable. Even their performance is acceptable. But, the Linux systems are lacking.

    Is there a reliable Linux file system, such as EXT4, that has an easy-to-use copy-on-write (CoW) feature to allow instant recovery of any file changed at any time?

    rm ./test
    restore --last test ./

    dd if=/dev/random of=./test bs=1M count=10
    restore --datetime test ./

    Novell Netware FS did all this and more in 1995 FFS! Fifteen years later and Linux doesn't seem to be able to do this. NTFS doesn't seem to be able to do this either. Yet Novell is dead?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Well, ex-Googler Kent Overstreet recently announced a COW filesystem on lkml:

      https://lkml.org/lkml/2015/8/21/22

      Not ready for production I would say, but looks interesting.

    • by swb ( 14022 ) on Monday August 24, 2015 @08:22AM (#50379377)

      I still think Netware's filesystem permission model was better than anything out there now, at least for filesharing.

      The feature I miss the most is allowing traversal through a directory hierarchy a user has no explicit permissions for to get to a folder they do have permissions for. I find the workarounds for this in other filesystems to be extremely ugly.

      I think NDS was better in a lot of ways than AD, although it would have been nice if there had been something better than bindery mode for eliminating the need for users to know their fully qualified NDS name.

      I also kind of wish TCP/IP had used the network:mac numbering scheme that IPX used. The rest of IPX/SPX I don't need, but there'd be no talk of IPv4 address exhaustion if that scheme had been adopted, and little need for DHCP address assignment. The addressing scheme would also scale to the larger broadcast domains enabled by modern switching, avoiding the need to renumber legacy segments completely when they exhausted a /24 space and expansion via mask reduction wasn't possible because adjacent segments were numbered linearly.

      • by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Monday August 24, 2015 @08:57AM (#50379631) Homepage Journal

        The feature I miss the most is allowing traversal through a directory hierarchy a user has no explicit permissions for to get to a folder they do have permissions for. I find the workarounds for this in other filesystems to be extremely ugly.

        In POSIX, that's the execute bit on a directory, the 1 in each octal digit of its mode (4: list entries, 2: create or delete entries, 1: traverse). What do you find ugly about mode 751 (owner may create or delete, owner and group may list, everyone may traverse)?
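        A minimal sketch of that layout in C; the /tmp/shared_area path is made up for illustration:

        #include <stdio.h>
        #include <sys/stat.h>
        #include <sys/types.h>

        /* A directory the owner fully controls (7), the group can list and
         * traverse (5), and everyone else can only traverse (1) on the way
         * to subdirectories they do have rights on. */
        int main(void)
        {
            if (mkdir("/tmp/shared_area", 0751) != 0) {  /* rwxr-x--x */
                perror("mkdir");
                return 1;
            }
            /* A user outside the owning group can now cd through
             * /tmp/shared_area to reach a child directory they have
             * permissions for, but cannot list /tmp/shared_area itself. */
            return 0;
        }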

        I also kind of wish TCP/IP had used the network:mac numbering scheme that IPX used.

        It does now. An IPv6 address is divided into a 48- or 56-bit network, a 16- or 8-bit subnet, and a 64-bit machine identifier commonly derived from the MAC.
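        As a sketch of the "commonly derived from the MAC" part, here is the modified EUI-64 construction in C; illustrative only, and note that many stacks now prefer randomized interface identifiers instead:

        #include <stdint.h>
        #include <stdio.h>

        /* Build the 64-bit interface identifier from a 48-bit MAC:
         * flip the universal/local bit and insert FF:FE in the middle. */
        static void mac_to_eui64(const uint8_t mac[6], uint8_t eui[8])
        {
            eui[0] = mac[0] ^ 0x02;  /* flip the universal/local bit */
            eui[1] = mac[1];
            eui[2] = mac[2];
            eui[3] = 0xFF;           /* inserted between OUI and device part */
            eui[4] = 0xFE;
            eui[5] = mac[3];
            eui[6] = mac[4];
            eui[7] = mac[5];
        }

        int main(void)
        {
            uint8_t mac[6] = {0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E};
            uint8_t eui[8];
            mac_to_eui64(mac, eui);
            /* Prints 021a:2bff:fe3c:4d5e -- the machine-identifier half that
             * follows the network and subnet bits in the full address. */
            printf("%02x%02x:%02x%02x:%02x%02x:%02x%02x\n",
                   eui[0], eui[1], eui[2], eui[3],
                   eui[4], eui[5], eui[6], eui[7]);
            return 0;
        }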

      • by nbvb ( 32836 ) on Monday August 24, 2015 @09:47AM (#50380011) Journal

        Bah, forget NetWare; VINES and StreetTalk did everything you ask for and then some, way before NDS was even a thought.

        VINES' ACLs were beautifully granular ...

    • by Christian Smith ( 3497 ) on Monday August 24, 2015 @08:32AM (#50379453) Homepage

      I find some of the current file systems to be adequately reliable. Even their performance is acceptable. But, the Linux systems are lacking.

      Is there a reliable Linux file system, such as EXT4, that has an easy-to-use copy-on-write (CoW) feature to allow instant recovery of any file changed at any time?

      NILFS2 [kernel.org] provides continuous point-in-time snapshots, which can be selectively mounted and data recovered. Not quite as instant as your use-case examples, but it's only a few commands or wrapper scripts away.

    • by allo ( 1728082 )

      nilfs2 does.

    • by vovin ( 12759 )

      You are looking for NILFS2.
      Of course you use more disk, because nothing ever gets deleted...

    • Novell Netware FS did all this and more in 1995 FFS! Fifteen years later ...

      I think your Novell system needs a new clock battery or something...

  • by aglider ( 2435074 ) on Monday August 24, 2015 @08:06AM (#50379247) Homepage
    Not a matter of proof; it's the distance between a perfect design and a buggy implementation, IMHO.
  • I thought journaled file systems already possessed this feature.

    • I thought journaled file systems already possessed this feature.

      just like air bags and seat belts have eliminated all deaths on the road

    • You need to ensure that blocks are written to the media in the correct order. Or at least that everything before a synchronization point was completely written to the media. But even that is not always true, because devices will lie and claim to have flushed data when they have not. So you also need to ensure that your underlying block-based device is operating correctly, and that can be tricky when it's a third-party device.
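      A minimal sketch of that ordering constraint in C, using fsync() as the synchronization point; the file names are made up, error handling is abbreviated, and the caveat above applies: if the device lies about the flush, the ordering guarantee is gone.

      #include <fcntl.h>
      #include <sys/types.h>
      #include <unistd.h>

      /* Make the data durable before the record that refers to it, so a
       * crash can only lose an unreferenced block, never leave a pointer
       * to garbage. */
      int append_then_commit(const void *data, size_t len, off_t where)
      {
          int datafd = open("data.log", O_WRONLY | O_CREAT, 0644);
          int ptrfd  = open("commit.rec", O_WRONLY | O_CREAT, 0644);
          if (datafd < 0 || ptrfd < 0)
              return -1;

          /* 1. Write the data and flush it: the synchronization point. */
          if (pwrite(datafd, data, len, where) != (ssize_t)len || fsync(datafd) != 0)
              return -1;

          /* 2. Only now write (and flush) the commit record that makes the
           *    new data reachable. */
          if (pwrite(ptrfd, &where, sizeof where, 0) != (ssize_t)sizeof where ||
              fsync(ptrfd) != 0)
              return -1;

          close(datafd);
          close(ptrfd);
          return 0;
      }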

  • by ledow ( 319597 ) on Monday August 24, 2015 @08:13AM (#50379299) Homepage

    Write zero to a flag.
    Write data to temporary area.
    Calculate checksum and keep with temporary area.
    When write is complete, signal application.
    Copy data from temporary area when convenient.
    Check that the permanent copy's checksum matches the temporary one.
    Set the flag when finished.

    If you crash before you write the zero, you don't have anything to write anyway.
    If you crash mid-write, you've not signalled the application that you've done anything anyway. And you can checksum to see if you crashed JUST BEFORE the end, or half-way through.
    If you crash mid-copy, your next restart should spot the temporary area being full with a zero-flag (meaning you haven't properly written it yet). Resume from the copy stage. Checksum will double-check this for you.
    If you crash post-copy, pre-flagging, you end up doing the copy twice, big deal.
    If you crash post-flagging, your filesystem is consistent.

    I'm sure that things like error-handling are much more complex (what if you have space for the initial copy but not the full copy? What if the device goes read-only mid-way through?) but in terms of consistency is it really all that hard?

    The problem is that somewhere, somehow, applications are waiting for you to confirm the write, and you can either delay (which slows everything down), or lie (which breaks consistency). Past that, it doesn't really matter. And if you get cut-off before you can confirm the write, data will be lost EVEN ON A PERFECT FILESYSTEM. You might be filesystem-consistent, but it won't reflect everything that was written.

    Journalling doesn't need to be mathematically-proven, just logically thought through. But fast journalling filesystems are damn hard, as these guys have found out.
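    A minimal C sketch of that scheme for a single block, assuming a scratch file as the "temporary area" holding the flag, the checksum, and the staged data; the names, header layout, and checksum are all made up for illustration, and error paths skip cleanup for brevity.

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    #define TMP_PATH "block.tmp"   /* temporary area: flag + checksum + data */
    #define DST_PATH "block.dat"   /* permanent location */
    #define BLOCK_SZ 4096

    static uint32_t checksum(const uint8_t *p, size_t n)
    {
        uint32_t c = 0;                    /* trivial stand-in checksum */
        while (n--)
            c = c * 31 + *p++;
        return c;
    }

    /* Steps 1-3: flag = 0, then data and checksum go to the temporary area.
     * Once this returns, the application can be signalled (step 4). */
    static int stage_write(const uint8_t *data)
    {
        uint8_t hdr[8] = {0};              /* byte 0: "copied" flag, still 0 */
        uint32_t c = checksum(data, BLOCK_SZ);
        memcpy(hdr + 4, &c, sizeof c);

        int fd = open(TMP_PATH, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, hdr, sizeof hdr) != sizeof hdr ||
            write(fd, data, BLOCK_SZ) != BLOCK_SZ ||
            fsync(fd) != 0)
            return -1;
        return close(fd);
    }

    /* Steps 5-7: copy to the permanent location, verify the checksum, then
     * set the flag. Rerunning this after a crash is always safe. */
    static int commit(void)
    {
        uint8_t hdr[8], buf[BLOCK_SZ];
        uint32_t want;
        int tmp = open(TMP_PATH, O_RDWR);
        int dst = open(DST_PATH, O_WRONLY | O_CREAT, 0644);
        if (tmp < 0 || dst < 0)
            return -1;

        if (read(tmp, hdr, sizeof hdr) != sizeof hdr ||
            read(tmp, buf, BLOCK_SZ) != BLOCK_SZ)
            return -1;
        memcpy(&want, hdr + 4, sizeof want);
        if (checksum(buf, BLOCK_SZ) != want)
            return -1;                     /* the temp copy itself was torn */

        if (write(dst, buf, BLOCK_SZ) != BLOCK_SZ || fsync(dst) != 0)
            return -1;

        hdr[0] = 1;                        /* mark the copy as finished */
        if (pwrite(tmp, hdr, 1, 0) != 1 || fsync(tmp) != 0)
            return -1;
        close(dst);
        return close(tmp);
    }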

    • by Guspaz ( 556486 ) on Monday August 24, 2015 @09:37AM (#50379929)

      > When write is complete, signal application.

      How do you know the write was complete? Most storage hardware lies about completing the write. The ZFS folks found this out the hard way: their filesystem was supposed to survive arbitrary power failures, and on a limited set of hardware that was true. In reality, most drives/controllers say they've committed the write to disk when it's still in their cache.

      Any filesystem that claims to survive crashes needs to take into account that any write confirmation could be a lie, and that any data it has written in the past may still be in a volatile cache.

      • by fnj ( 64210 )

        You can't expect software to paper over BUSTED HARDWARE. If a disk drive flat out lies about status, expose the goddam manufacturer and sue him out of existence. If you think anyone can paper over the scenario you just outlined, then what about this - what about a disk drive that NEVER WRITES ANYTHING but lies and says everything is going hunky dory? Pretty damn sure there's nothing you can do in that scenario.

        I've heard that story about the "drives that lie about write-to-physical-media-complete" man

    • Why am I hearing Jeremy Clarkson asking "how hard can it be?" just before utterly screwing something up?

      Perhaps there is just a tad more difficulty to it than you are considering?

    • I think you're basically right. I read about an even simpler system. They wanted to prove that a robotic surgical arm, controlled by a multi-axis joystick, would always, no matter what, move only when the surgeon commanded it.

      So basically, you read the joystick position sensor for an axis. You multiply by a coefficient, usually less than 1. You send that number to the arm controller. Arm controller tries to move to that position.

      Smooth, linear, no discontinuities...you technically only need to check about

  • by bradgoodman ( 964302 ) on Monday August 24, 2015 @08:15AM (#50379321) Homepage
    No specifics. This has been done a million times with journaling filesystems (and block layers) - no idea why this is better or different. But what about disk failure? But what about data loss? But what about (undetected) data corruption (at the disk)? What about unexpected power hits that could drop a disk or tear a write? Not even going to get into snapshotting, disaster recovery, etc. There's a lot more to this than "surviving a crash".
    • But what about disk failure?

      this is like expecting your seat belt to keep you safe during the apocalypse

      • No. It's not. You think [your favorite bank] puts all their financial data on a plain ol' off-the-shelf [Insert brand here] and assumes that it'll all be good? They use multi-million-dollar systems which do mirroring, integrity checking, verification, etc. "High-end" storage and filesystem systems do things like verification and checking at multiple levels (end-to-end, drive, block, filesystem, array, etc.) so a $100 disk drive doesn't corrupt data and take down a $100 billion dollar bank. As for the apoca
    • by Bengie ( 1121981 )
      You need to read in-between the lines.

      "MIT's New File System Won't Lose Data During Crashes" can be read as "MIT's New File System Won't be at fault for lost data once committed during any interruption of writes"

      ZFS does the same thing, minus the proofs. If you do a sync write and ZFS says it completed, then that data is not going to be lost due to any fault of ZFS. But what if someone threw all of your hard drives into lava? Again, not the fault of ZFS. Same idea.

      Rule of thumb, if your FS needs FSCK,
  • I've seen RAID groups fail sort of violently (granted in some tough environments) where one disk crashed and so did the others next to it. Three out of five disks in a RAID 5 gone. Only option was backup. How would any filesystem survive that?
    • the stars will implode and the universe will come to an end, how would any filesystem survive that?

      • Exactly! Seriously, I've seen this. After almost 20 years in IT, you see edge cases, dumb things, and just plain bad luck.
    • I've seen RAID groups fail sort of violently (granted in some tough environments)

      Missile hit? :-D
      • by Guspaz ( 556486 )

        You joke, but I've seen IOPS drop in a RAID array because somebody was talking loudly next to the server. It was kind of fun to shout at the server and watch the disk activity drop. For testing purposes, of course.

      • Nothing quite that interesting. One was in a school. Their server room was near the gym. The other was in a factory. The server room was in the middle of the shop.
        • My wife once worked on a system that was in the next room from the skate sharpener. She'd never seen flames coming out of a hard disk before.

          • Wow, that would be scary.

            I remember one time about 10 years ago we got a handful of new HP servers in and were going through the burn-in process. Quite literally, apparently, as one of them had a RAID controller whose capacitors exploded quite violently, setting off fire alarms and making us run for fire extinguishers when we fired it up (pun intended...).
    • It sounds like a quip, but it's the truth: more redundancy. Nothing is going to allow a system (whether it is all handled by the filesystem or the work is divided between a RAID controller and a filesystem) to recover a chunk of data that the universe violently removes except another copy of that data or something from which it can be deterministically inferred (even if you accept the 'random number generator and a lot of patience' mode of data recovery, you still need to know when you've recovered the correc
    • by mlts ( 1038732 ) on Monday August 24, 2015 @10:35AM (#50380397)

      I personally encountered a drive array driver that caused an entire array to get overwritten by garbage. I was quite glad that I had tape backups of the computers and the shared array, so recovery was fairly easy (especially with IBM sysback).

      Filesystems are one piece of a puzzle, but an important one. If that array decided to just write some garbage in a few sectors, almost no filesystem would notice, allowing corrupted data to propagate to backups. The only two that might notice it would be a background ZFS task doing a scrub and noticing a 64-bit checksum is off, or ReFS doing something similar. Without RAID-Z2, the damage can't be repaired... but it can be found and the relevant people notified.

    • by Bengie ( 1121981 )
      Their definition of "crash" is different from yours. You assume "crash" means the universe self-destructed. They just assume the writes were interrupted, like a power failure or a kernel lockup, not your hard drives dying.
    • by Zak3056 ( 69287 )

      I've seen RAID groups fail sort of violently (granted in some tough environments) where one disk crashed and so did the others next to it. Three out of five disks in a RAID 5 gone. Only option was backup. How would any filesystem survive that?

      It is not the responsibility of the file system to maintain data integrity in the face of catastrophic failure of the underlying storage hardware.

      • Agreed. The article does talk about catastrophic hardware failures too. Just thought it seemed a bit misleading and scant on details is all.
  • by Anonymous Coward

    I think any file system could be imagined as a simple case of a database system. You "commit" a file change and you must be sure that the change is written to disk before proceeding.

    So, any database system has a well-known logical limitation called the "Two Generals' Problem":

    https://en.wikipedia.org/wiki/Two_Generals%27_Problem

    The implication of this is that in a database system you cannot guarantee a fully automated recovery; there is always a remaining possibility that some changes should be roll

    • by Guspaz ( 556486 )

      You can never be sure that a write is committed to disk, because most hardware lies about that.

      • by fnj ( 64210 )

        You can never be sure that a write is committed to disk, because most hardware lies about that.

        I suppose you can find authoritative references for that claim, complete with manufacturer names and drive model numbers?

  • It's now August, the conference where they'll be presenting their work is in October, and the article is a tad short on specifics. They've done a formal verification of a file system. If it works, that's excellent news of course, but I'd wait until we have seen the thing work, with actual code to examine, before making any comments or bets on how useful this is going to be. And this being an open-source-oriented site, we should be asking whether the code will indeed be available under any kind of usable open source license.
  • by alexhs ( 877055 ) on Monday August 24, 2015 @08:42AM (#50379527) Homepage Journal

    Beware of bugs in the above code; I have only proved it correct, not tried it. -- Donald Knuth

    If the hard drive firmware is not proven, this FS won't be any better than ZFS and others.
    Writing safe file systems is the easy part (even trivial using synchronous writes, when you consider their design is "slow").
    The impossible part is dealing with firmware that is known to lie (including pretending that a write is synchronous): how could you not lose data if the drive never wrote the data to the platters in the first place?
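    A minimal sketch of the "even trivial using synchronous writes" remark in C: O_SYNC makes each write its own synchronization point, as far as the drive is honest about it, which is exactly the caveat above. The path and function name are illustrative.

    #include <fcntl.h>
    #include <unistd.h>

    /* Each write() returns only once the data (and required metadata) is
     * reported stable -- slow, but about as safe as the drive allows. */
    int write_durably(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0644);
        if (fd < 0)
            return -1;
        ssize_t n = write(fd, buf, len);
        close(fd);
        return (n == (ssize_t)len) ? 0 : -1;
    }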

    • Not impossible. Tell it to write to disk, wait for it to say it has. Then cut power to the drive, wait 30 seconds, reestablish power, then ask for the data back. If it isn't the same, repeat until it is. It'll be slow, and likely kill your drives, but you can be reasonably sure the drive did indeed write the data.

  • The idea of evaluating each step of the program for "what happens if this fails" is a standard software engineering technique for mission-critical software. (That's not to say it's always actually done, just that it is the standard.) This method is hardly revolutionary (or even evolutionary).

    • slashdot has truly degenerated into a cesspool when you consider that this drivel is one of the more insightful posts

  • Any journalling FS should provide a consistent state; a checksumming FS like BTRFS or ZFS even provably so.
    So they won't lose data that has been written and is in a consistent state; unwritten data cannot be saved, by definition.

  • Yeah, right. (Score:5, Interesting)

    by dargaud ( 518470 ) <slashdot2@nOSpaM.gdargaud.net> on Monday August 24, 2015 @09:58AM (#50380083) Homepage
    In the words of Knuth the Great: "Beware of bugs in the above code; I have only proved it correct, not tried it."

    It reminds me of a story from the late 80s (?) at a tech conference. The makers of a real-time OS with real-time snapshots would periodically pull the plug on their systems, plug it back in, and it would resume exactly what it was doing, to the delight and amazement of all the techies in the audience. In the much larger and much more expensive booth in front of them was a richer vendor. The techies started coaxing them to do the same. After much hand-wringing, they did, and after a very long rebuild time the system came back as a mess. Conclusion: the 1st vendor went out of business; the 2nd one is still very big.

  • Hi, MIT guys, formal proofs of filesystems are useless because you cannot incorporate physical systems into formal proofs. Real filesystems exist on real hardware.

    I guarantee that your file system will fail if I start ripping cables out. A suitably strong EMP will take it out. In fact, I bet I could nuke your filesystem if I used my ham radio transceiver too close to the device. Other things that would destroy your filesystem include floods, earthquakes, and a lightning strike.

    I began writing this by statin

  • As others have pointed out, the formal verification does make the software provably reliable, but does nothing to protect against hardware issues. Just as a datapoint, the Stratus VOS operating system has been checksumming at the block driver level since the OS was written in 1980. It has detected failures on every generation of hardware it has been used with since. Some of the failures we have seen: Undetected transient media errors (the error correction/checking isn't perfect); Flaky I/O busses; bugs
