
ZFS Gets Built-In Deduplication

Posted by ScuttleMonkey
from the sounds-like-a-resource-hog-waiting-to-happen dept.
elREG writes to mention that Sun's ZFS now has built-in deduplication utilizing a master hash function to map duplicate blocks of data to a single block instead of storing multiples. "File-level deduplication has the lowest processing overhead but is the least efficient method. Block-level dedupe requires more processing power, and is said to be good for virtual machine images. Byte-range dedupe uses the most processing power and is ideal for small pieces of data that may be replicated and are not block-aligned, such as e-mail attachments. Sun reckons such deduplication is best done at the application level since an app would know about the data. ZFS provides block-level deduplication, using SHA256 hashing, and it maps naturally to ZFS's 256-bit block checksums. The deduplication is done inline, with ZFS assuming it's running with a multi-threaded operating system and on a server with lots of processing power. A multi-core server, in other words."
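The block-level scheme the summary describes can be sketched as a toy hash-keyed block store (hypothetical Python for illustration only, nothing like ZFS's actual on-disk structures):

```python
import hashlib

class DedupStore:
    """Toy block-level dedup table: blocks are keyed by their SHA-256
    digest and each distinct block is stored only once."""
    def __init__(self):
        self.blocks = {}     # digest -> block data
        self.refcount = {}   # digest -> number of references

    def write(self, block: bytes) -> bytes:
        digest = hashlib.sha256(block).digest()
        if digest not in self.blocks:
            self.blocks[digest] = block               # first copy: store it
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest                                 # caller keeps only the pointer

store = DedupStore()
p1 = store.write(b"the same 128K block")
p2 = store.write(b"the same 128K block")
print(len(store.blocks))   # 1: the duplicate write stored nothing new
```

A real implementation dedupes at the filesystem's record size and keeps the table on disk; the point here is just that writes become lookups by checksum.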
  • by Anonymous Coward on Monday November 02, 2009 @07:24PM (#29956668)

    Duplicate slashdot articles will be links back to the original one?

    • by Shikaku (1129753)

      Er, isn't block deduplication really bad from a hard-drive block-failure point of view? You'd have to compress or otherwise change the data to have a real copy now, or it'd just be marked redundant; if the block that all those redundant nodes point to goes bad, all of those files are now bad.

      • by ezzzD55J (697465) <slashdot5@scum.org> on Monday November 02, 2009 @07:46PM (#29956882) Homepage
        The single block is still stored redundantly, of course. Just not redundantly more than once.
        • If a hash were a replacement for data, that's all we'd need....goedelize the universe? Sometimes I just want to scream, or weep, or shoot everybody....or just drop to my knees and beg them to think - just a little tiny insignificant bit - think. Maybe it'll add up. Probably not, but it's the best I can do.
          • Re: (Score:3, Interesting)

            by jimicus (737525)

            If a hash were a replacement for data, that's all we'd need....goedelize the universe?

            Sometimes I just want to scream, or weep, or shoot everybody....or just drop to my knees and beg them to think - just a little tiny insignificant bit - think. Maybe it'll add up. Probably not, but it's the best I can do.

            Which is why ZFS allows you to specify using a proper file comparison rather than just a hash.

            It's unlikely you'll have a collision considering it's a 256-bit hash but, as you allude, that likelihood does go up somewhat when you're dealing with a filesystem which is designed to (and therefore presumably does) handle terabytes of information.

      • Re: (Score:2, Insightful)

        by Methlin (604355)

        Er, isn't block deduplication really bad from a hard-drive block-failure point of view? You'd have to compress or otherwise change the data to have a real copy now, or it'd just be marked redundant; if the block that all those redundant nodes point to goes bad, all of those files are now bad.

        If you were concerned about block level failure or even just drive level failure, you wouldn't be running your ZFS pool without redundancy (mirror or raidz(2)).

    • Re: (Score:3, Insightful)

      by noidentity (188756)

      Duplicate slashdot articles will be links back to the original one?

      No, see, this de-duplication is transparent at the interface level. So while dupes won't take extra disk space on Slashdot servers, we'll still see them as normal. Isn't it nice to know that this optimization will be taking place?

  • ...and would normally make me happy; except I'm a Mac user. Still good news, but could've been better for a certain sub-set of the population, darn it.

    File systems are one area where computer technology is lagging, comparatively speaking, so good to see innovation such as this.

    • by bcmm (768152) on Monday November 02, 2009 @07:35PM (#29956770)

      ...and would normally make me happy; except I'm a Mac user. Still good news, but could've been better for a certain sub-set of the population, darn it.

      Use open source, get cutting edge things.

      • by jeffb (2.718) (1189693) on Monday November 02, 2009 @08:11PM (#29957162)

        Use open source, get cutting edge things.

        The last time I tried to build an Intel box for Linux work, I lost my grip on the cheap generic case, and sustained a cut that sent me to the emergency room. One of the things I like about my Mac is the lack of cutting edges.

        • by Anonymous Coward on Monday November 02, 2009 @08:18PM (#29957248)

          Shoulda gone with a blade server, then you wouldn't have had to worry about the emergency room.

        • by MrCrassic (994046)

          This is called doin it wrong! :)

        • Re: (Score:3, Informative)

          by Tynin (634655)
          Not sure when you tried building it, but I build cheap computers for friends / family, at least 2 or 3 computers a year. Almost a decade ago... maybe really only 8 years ago, all cheapo generic cases stopped having razor sharp edges. I used to get cuts all the time, but cheap cases, at least in the realm of having sharp edges, haven't been an issue in a long time. (I purchase all my cheapo cases from newegg these days)
      • Use open source, get cutting edge things.

        Cutting edge is nice for the functionality; unfortunately it more often than not comes with unintended functionality. I like standing back a bit - not too much mind you, but enough to avoid the bleeding edge.

      • Re: (Score:3, Insightful)

        by joe_bruin (266648)

        Use open source, get cutting edge things.

        I run Linux, where's my ZFS? No, FUSE doesn't count.

    • by MBCook (132727)

      It's neat. I can see it being rather useful for our systems at work to de-duplicate our VMs (and perhaps our DB files, since we have replicated slaves). Network storage (where multiple users may have their own copies of static documents that they've never edited) could benefit, perhaps email storage as well.

      Personally though, I don't think there is too much on my hard drive that would benefit from this. I would love for OS X to get the built in checksumming that ZFS has so it can detect silent corruption t

    • by Trepidity (597) <delirium-slashdot AT hackish DOT org> on Monday November 02, 2009 @08:50PM (#29957720)

      If you're running a normal desktop or laptop, this isn't likely to be of great use in any case. There's non-negligible overhead in the deduplication process, and drive space at consumer-level sizes is dirt-cheap, so it's only really worth doing this if you have a lot of block-level duplicate data. That might be the case if e.g. you have 30 VMs on the same machine, each with a separate install of the same OS, but is unlikely to be the case on a normal Mac laptop.

  • Hash Collisions (Score:2, Interesting)

    by UltimApe (991552)

    Surely with high amounts of data (which ZFS is supposed to be able to handle), a hash collision may occur? I'm sure a block is > 256 bits. Do they just expect this never to happen?

    Although I suppose they could just be using it as a way to narrow down candidates for deduplication... doing a final bit for bit check before deciding the data is the same.

    • Re:Hash Collisions (Score:4, Informative)

      by CMonk (20789) on Monday November 02, 2009 @07:32PM (#29956748)

      That is covered very clearly in the blog article referenced from the Register article. http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup [sun.com]

    • Yeah. If you are concerned by the fact that a block might be 128 KB and the hashed value is only 256 bits, then an option like:

      zfs set dedup=verify tank

      might be helpful.

      • Re: (Score:2, Interesting)

        by dotgain (630123)
        Before the instruction you posted, I found this explanation in TFA:

        An enormous amount of the world's commerce operates on this assumption, including your daily credit card transactions. However, if this makes you uneasy, that's OK: ZFS provides a 'verify' option that performs a full comparison of every incoming block with any alleged duplicate to ensure that they really are the same, and ZFS resolves the conflict if not. To enable this variant of dedup, just specify 'verify' instead of 'on':

        I fail to see h

        • by sgbett (739519)

          Hey! If no-one will notice then it won't be a problem ;)

        • by SLi (132609)

          No. We're talking about such amounts of data being needed that there's no conceivable way, now or in the near (1000-year) future, that such a collision would be found by accident, and even after that only on some supercomputer that is larger than Earth and is powered by its own sun. It's not going to happen by accident. The probabilities are just so much against it, given any conceivable amount of data - and there are elementary limits that come from physics that cannot be surpassed. Moore's law will stop working s

    • by Shikaku (1129753)

      If blocks that are supposedly from different files have the same block data, does it really matter if it's marked redundant?

      Not only that, do you really think a SHA256 hash collision can occur? And even if it does, for the sake of CPU time, a hash table is made for a quick check rather than comparing every to-be-written block against all the already-stored data in situations such as this. If somehow they have the same hash, it SHOULD be checked to see if it is the same data byt

      • Re: (Score:3, Funny)

        by icebike (68054)

        If blocks that are supposedly from different files have the same block data, does it really matter if it's marked redundant?

        I think what the hash collision people are worrying about is when two blocks/files/byte-ranges hash to be identical but in fact differ.

        When that happens your PowerPoint presentation contains your boss's bedroom-cam shots.

    • Re: (Score:3, Informative)

      by Rising Ape (1620461)

      The probability of a hash collision for a 256 bit hash (or even a 128 bit one) is negligible.

      How negligible? Well, the probability of a collision is never more than N^2 / 2^h, where N is the number of blocks stored and h is the number of bits in the hash. So, if we have 2^64 blocks stored (a mere billion terabytes or so for 128-byte blocks), the probability of a collision is less than 2^(-128), i.e. under 10^(-38). Hardly worth worrying about.

      And that's an upper limit, not the actual value.
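The N^2 / 2^h bound quoted above is easy to check directly (a sketch using exact arithmetic via Python's fractions; the function name is just for illustration):

```python
from fractions import Fraction

def collision_bound(num_blocks: int, hash_bits: int) -> Fraction:
    """Birthday-problem upper bound N^2 / 2^h on the probability of
    any collision among num_blocks stored blocks (exact arithmetic)."""
    return Fraction(num_blocks ** 2, 2 ** hash_bits)

# 2^64 blocks with a 256-bit hash, as in the comment above:
print(float(collision_bound(2 ** 64, 256)))   # about 2.9e-39
```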

    • Re: (Score:3, Funny)

      by pclminion (145572)

      Suppose you can tolerate a chance of collision of 10^-18 per-block. Given a 256-bit hash, it would take 4.8e29 blocks to achieve this collision probability. Supposing a block size of 512 bytes, that's 223517417907714843750 terabytes.

      Now, supposing you have a 223517417907714843750 terabyte drive, and you can NOT tolerate a collision probability of 10^-18, then you can just do a bit-for-bit check of the colliding blocks before deciding if they are identical or not.

      • Re: (Score:3, Interesting)

        by pclminion (145572)
        Oops. I didn't mean 10^-18 per-block, I meant 10^-18 for the entire filesystem. (Obviously it doesn't make sense the other way)
    • Re:Hash Collisions (Score:5, Informative)

      by shutdown -p now (807394) on Monday November 02, 2009 @07:55PM (#29956960) Journal

      Before I left Acronis, I was the lead developer and designer for deduplication in Acronis Backup & Recovery 10 [acronis.com]. We also used SHA256 there, and naturally the possibility of a hash collision was investigated. After we did the math, it turned out that you're about 10^6 times more likely to lose data because of hardware failure (even considering RAID) than you are to lose it because of a hash collision.

      • I have an idea for an attack vector.

        Say File A is one block big. File A is publicly available on the server, not writable by users. Eve produces a SHA256 hash collision of file A and stores this file B in ~. Someone wants to retrieve file A but gets file B (e.g. like evilize exe [mscs.dal.ca] for MD5).
        Alternatively, if always the oldest file is kept, Eve has to know the next version of the file.

        Given big blocks, and time until cryptanalysis of SHA256 is at the state where it is with MD5, why not?

        • by hedwards (940851)
          If I'm not mistaken, that would be a waste of time. Ultimately, in most cases you're looking to get a file executed, in which case you don't need this attack; you just need some other exploit. If you do need to get that file retrieved, there are better ways of doing that as well.
        • Yes, it's a valid attack once you can generate hash collisions for SHA256 attacks, in the same way that 'sit between two parties and decrypt their communication' is a valid attack on RSA once you can factorise the product of two primes quickly. Currently, the best known attack on SHA256 is not feasible (and won't be for a very long time if computers only follow Moore's law).
        • Say File A is one block big. File A is publicly available on the server, not writable by users. Eve produces a SHA256 hash collision of file A

          The whole point of a cryptographic hash function [wikipedia.org] is that you're not supposed to be able to produce input matching a given hash value other than by brute force - that is, 2^N evaluations, where N is the digest size in bits. That's an ideal state - in practice, the number of evaluations can be reduced, and this is also the case for SHA256 [iacr.org], but for this particular scenario (finding a message corresponding to a known hash, rather than just any two messages that collide with a random hash), it is still way beyond

        • by SLi (132609)

          But then you could just use your magic SHA-256 breaking skillz to divert bank transactions and many outright vital things in commerce and communications, so it seems to me that replacing the contents of a file on some file system would be petty crime compared to that.

    • Surely with high amounts of data (that zfs is supposed to be able to handle), a hash collision may occur?

      The birthday paradox says you'd have to look at about 2^(n/2) candidates, on average, to find a collision for an n-bit hash. In this case, that means you'd have to look at about 2^128 objects before any two of them are likely to collide (finding a collision with one particular object would take about 2^256 tries).

      On my home server, the default block size is 128KB. With a terabyte drive, that gives about 8.4 million blocks.

      GmPy says the likelihood of an event with probability 1/(2^128) not happening 8.4 million times (well, 1024^4/(128*1024) times) in a row is 0.999999999999999999999999
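The same figure falls out of the standard library, no GmPy needed (a sketch; the union bound is essentially exact for probabilities this small):

```python
from fractions import Fraction

blocks = (1024 ** 4) // (128 * 1024)   # 1 TB of 128 KB blocks = 8388608
p_match = Fraction(1, 2 ** 128)        # chance one block collides with a given hash

# Union bound on "at least one collision"; essentially exact for p this small:
p_any = blocks * p_match
print(float(p_any))        # about 2.5e-32
print(float(1 - p_any))    # rounds to 1.0: no-collision is certain to float precision
```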

    • by Junta (36770)

      They have the 'verify' mode to do what you prescribe, though I'm presuming it comes with a hefty performance penalty.

      I have no idea if they do this up front, inducing latency on all write operations, or as it goes.

      What I would like to see is a strategy where it does the hash calculation, writes block to new part of disk assuming it is unique, records the block location as an unverified block in a hash table, and schedules a dedupe scan if one not already pending. Then, a very low priority io task could sca

  • by Dwedit (232252) on Monday November 02, 2009 @07:31PM (#29956740) Homepage

    Are there any other filesystems with that feature? If not, I'm very strongly considering writing my own.

    • by mrmeval (662166)

      While you're at it write one in assembler as a replacement for the Apple II and 1541 so us retrogeeks can store MORE on a floppy. ;)

      I know of all the compression schemes but this block level stuff is fascinating.

      • by Korin43 (881732)
        Wouldn't compression do this? I've never written a program involving compression, but it seems like the first thing you'd look for is two places that have the same data, and then you could just store them as references to the original data.
    • by iMaple (769378) * on Monday November 02, 2009 @07:38PM (#29956802)

      Windows Storage Server 2003 (yes, yes I know its from Microsoft) shipped with this feature (that is called Single Instance Storage)
      http://blogs.technet.com/josebda/archive/2008/01/02/the-basics-of-single-instance-storage-sis-in-wss-2003-r2-and-wudss-2003.a [technet.com]

    • Re: (Score:2, Informative)

      by hapalibashi (1104507)
      Yes, Venti. I believe it originated in Plan9 from Bell Labs.
    • Re: (Score:2, Interesting)

      by ZerdZerd (1250080)

      I hope btrfs will get it. Or else you will have to add it :)

    • Re: (Score:3, Interesting)

      by TheSpoom (715771) *

      What I'm wondering about all of this is what happens when you edit one of the files? Does it "reduplicate" them? And if so, isn't that inefficient in terms of the time needed to update a large file (in that it would need to recopy the file over to another section of the disk in order to maintain the fact that there are two now-different copies)?

      • by hedwards (940851) on Monday November 02, 2009 @08:40PM (#29957554)
        ZFS is a copy-on-write filesystem; it already creates a temporary second copy so that the file system is always consistent, if not quite up to date. I'd venture to guess that the new version of the file, not being identical to the old file, would just be treated like copying it to a new name.
      • by PRMan (959735)

        And worse...What happens when you go through a set of files A and change a single IP Address in each of them, defeating the duplication, while filesets B & C still point to the same set. Now, you have just increased your disk space usage by 200% while not increasing the "size" of the files at all.

        This will be extremely counter-intuitive when you run out of disk space by globally changing "192.168.1.1" to "192.168.1.2" in a huge set of files.

        • Par for the course.. (Score:5, Interesting)

          by Junta (36770) on Monday November 02, 2009 @09:08PM (#29957962)

          Any filesystem implementing copy-on-write, data dedupe, and/or compression already carries the risk of exhausting oversubscribed storage due to unanticipated compression ratios or data uniqueness. It's a reason why you have to be pretty explicit with NetApp filers implementing these features that you accept the risk of exhausting allocations if you actually use these features to the point of advertising more storage capacity than you actually have.

          You don't even need a fancy filesystem to expose yourself to this today:
          $ dd if=/dev/zero of=bigfile bs=1M seek=8191 count=1
          1+0 records in
          1+0 records out
          1048576 bytes (1.0 MB) copied, 0.00426769 s, 246 MB/s
          $ ls -lh bigfile
          8.0G 2009-11-02 20:06 bigfile
          $ du -sh bigfile
          1.0M bigfile

          This possibility has been around a long time and the world hasn't melted. Essentially, if someone is using these features, they should be well aware of the risks incurred.
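The dd trick above can be reproduced from Python as well (a sketch; whether the hole stays unallocated depends on the filesystem, and st_blocks is a POSIX detail):

```python
import os
import tempfile

# Oversubscription without any fancy filesystem: a sparse file whose
# apparent size far exceeds what is actually allocated on disk.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.seek(8 * 1024 ** 3 - 1)   # seek to just shy of 8 GiB
    f.write(b"\0")              # one real byte at the very end

st = os.stat(path)
print(st.st_size)               # 8589934592 apparent bytes (what ls -l reports)
print(st.st_blocks * 512)       # bytes actually allocated (what du reports)
os.remove(path)
```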

      • ZFS is copy on write, so every time you write a block it generates a new copy then decrements the reference count of the old copy. The 'reduplication' doesn't require any additional support, it will work automatically. Of course, you also want to check if the new block can be deduplicated...
  • by BitZtream (692029) on Monday November 02, 2009 @07:32PM (#29956752)

    I'm wondering how long its going to take for them to do something with ZFS that actually makes me slow down my overwhelming ZFS fanboyism.

    I just love these guys.

    My virtual machine NFS server is going to have to get this as soon as FBSD imports it, and I'll no longer have to worry about having backup software (like BackupPC, good stuff btw) that does this.

    I don't use high end SANs but it would seem to me that they are rapidly losing any particular advantage to a Solaris or FBSD file server.

    • by HockeyPuck (141947) on Monday November 02, 2009 @07:59PM (#29957016)

      The advantages of SANs are easy to realize, they need not necessarily be FibreChannel vs NAS (NFS/CIFS) as a SAN could be iSCSI, FCOE, FCIP, FICON etc..

      -Storage Consolidation compared with internal disk.
      -Fewer components in your servers that can break.
      -Server admins don't have to focus on Storage except at the VolMgr/Filesystem level
      -Higher Utilization (a WebServer might not need 500GB of internal disk).
      -Offloading storage based functions (RAID in the array vs RAID on your server's CPU, I'd rather the CPU perform application work rather than calculating parity, replacing failed disks etc). This increases when you want to replicate to a DR site.

      This is not a ZFS vs SANs argument. I think ZFS running on SAN based storage is a great idea as ZFS replaces/combines two applications that are already on the host (volmgr & filesystem).

      • Or, use ZFS to create a SAN for your other servers. Just create a ZVol, and share it out via iSCSI. On Solaris, it's as simple as setting shareiscsi for the dataset. On FreeBSD, you have to install an iSCSI target (there are a handful available in the ports tree) and configure it to share out the ZVol.

        • by afidel (530433)
          Or use a pair of them, like the Sun Unified Storage cluster using the 7310/7410. Of course Sun charges a fairly hefty fee for what you get (I got 72x450GB 15k drives in my EVA6400 for what they charge for the same storage in SATA, and mine included 5 years of support).
    • by Anonymous Coward on Monday November 02, 2009 @08:06PM (#29957072)

      How about this: you can't remove a top-level vdev without destroying your storage pool. That means that if you accidentally use the "zpool add" command instead of "zpool attach" to add a new disk to a mirror, you are in a world of hurt.

      How about this: after years of ZFS being around, you still can't add or remove disks from a RAID-Z.

      How about this: If you have a mirror between two devices of different sizes, and you remove the smaller one, you won't be able to add it back. The vdev will autoexpand to fill the larger disk, even if no data is actually written, and the disk that was just a moment ago part of the mirror is now "too small".

      How about this: the whole system was designed with the implicit assumption that your storage needs would only ever grow, with the result that in nearly all cases it's impossible to ever scale a ZFS pool down.

      • by Methlin (604355) on Monday November 02, 2009 @08:27PM (#29957380)
        Mod parent up. These are all legit deficiencies in ZFS that really need to be fixed at some point. Currently the only solutions to these is to build a new storage pool, either on the same system or different system, and export/import; big PITA and potentially expensive. Off the top of my head I can't think of anyone that lets you do #2 except enterprise storage solutions and Drobo.
        • Re: (Score:3, Interesting)

          by SLi (132609)

          Mod parent up. These are all legit deficiencies in ZFS that really need to be fixed at some point.

          Only if it's worth the cost. For a case I know about, XFS lacks filesystem shrinking too, and it has been asked for many times. It has been estimated that it would take months for a skilled XFS engineer to code. If it's so important that someone is willing to put up that money (or effort), it may happen; otherwise it will not. I'm sure the same applies to ZFS.

      • Re: (Score:3, Informative)

        by KonoWatakushi (910213)

        How alarmist and uninformed; borderline FUD. The reality is as follows...

        First, you can't remove a vdev yet, but development is in progress, and support is expected very soon now. Same with crypto.

        Second, mistakenly typing add instead of attach will result in a warning that the specified redundancy is different, and refuse to add it.

        Third, yes, you can't expand the width of a RAID-Z. You can still grow it though, by replacing it with larger drives. Once the block pointer rewrite work is merged, removal

        • by greg1104 (461138) <gsmith@gregsmith.com> on Monday November 02, 2009 @10:55PM (#29959144) Homepage

          How alarmist and uninformed; borderline FUD. The reality is as follows...

          First, you can't remove a vdev yet, but development is in progress, and support is expected very soon now.

          The bug report for this problem goes back to at least April of 2003 [opensolaris.org]. With that background, and that I've been hearing ZFS proponents suggesting this is coming "very soon now" for years without a fix, I'll believe it when I see it. Raising awareness that Sun's development priorities clearly haven't been toward any shrinking operation isn't FUD, it's the truth. Now, to be fair, that class of operations isn't very well supported on anything short of really expensive hardware either, but if you need these capabilities the weaknesses of ZFS here do reduce its ability to work for every use case.

    • What do you know - you and I actually agree on something. Yeah, FreeBSD + ZFS is a complete win for pretty much everything involving file transfer. I honestly can't think of a single thing I don't like about it. The instant FreeBSD imports this, I'm swapping in a quad-core CPU to give it as much crunching power as it wants to do its thing.

    • There are enough tales of woe in the discussion groups of ZFS file systems that have melted down on people that I would not start shorting the midrange storage companies stock just yet. I myself have an 18TB ZFS filesystem on a X4540 and it was brought to a standstill a few weeks ago by one dead SATA disk. Didn't lose any data, and it might be buggy hardware and drivers, but still, Sun support had no explanation. That should not happen!

      I'm still a ZFS fanboy though - for about $1 per GB how can you lose. Th

    • Don't mistake in-filesystem deduplication and snapshots for a backup system. It's most certainly not backup and if you treat it as such you will eventually be very sorry. A SAN with ZFS, snapshots, and deduplication features is at best an archive, which is distinct in form and purpose from a backup. Still very useful, though. Ideally you have both archive and backup systems. To get a feel for the difference, consider that an archive is for when a user says, "I overwrote a file last week sometime. Can

  • by 0100010001010011 (652467) on Monday November 02, 2009 @07:36PM (#29956784)

    ZFS, from what I can tell, kicks ass. I've played around with it in virtual machines, taking drives off line, recreating them, adding drives, etc.

    When I search NewEgg I also search OpenSolaris' compatibility list.

    The two areas where Linux is playing catch-up are filesystems (like this) and sound (OSS, Pulse, ALSA, oh my!). And before you go pointing out the btrfs project: ZFS has been in servers for years. It's tried in an enterprise environment. Your file system is still in beta with a huge "Don't use this for important stuff" warning.

  • by icebike (68054) on Monday November 02, 2009 @07:38PM (#29956796)

    Imagine the amount of stuff you could (unreliably) store on a hard disk if massive de-duplication were built into the drive electronics. It could even do this quietly in the background.

    I say unreliably, because years ago we had a Novell server that used an automated compression scheme. Eventually, the drive got full anyway, and we had to migrate to a larger disk.

    But since the copy operation de-compressed files on the fly we couldn't copy because any attempt to reference several large compressed files instantly consumed all remaining space on the drive. What ensued was a nightmare of copy and delete files beginning with the smallest, and working our way up to the largest. It took over a day of manual effort before we freed up enough space to mass-move the remaining files.

    De-duplication is pretty much the same thing, compression by recording and eliminating duplicates. But any minor automated update of some files runs the risk of changing them such that what was a duplicate, must now be stored separately.

    This could trigger a similar situation where there was suddenly not enough room to store the same amount of data that was already on the device. (For some values of "suddenly" and "already").

    For archival stuff or OS components (executables, and source code etc) which virtually never change this would be great.

    But there is a hell to pay somewhere down the road.

    • by Shikaku (1129753)

      That's actually very easy to explain, and ZFS could have a very similar situation:

      Say you have on your hard drive these two files containing the following, which in reality is 1GB worth of data for each file (the space separates the two files):

      ABCDABCD ABCDABCD

      Every letter has equal weight, so those two files are stored .5GB without compression. Let's change it a little bit:

      AeBCDABfCD ABCgDABChD

      efgh are 1 byte.

      You now have 2GB worth of space taken :) that's a gotcha if I ever saw one.

      • by Shikaku (1129753)

        Oh, I guess I should mention the blocks in my case are stupidly large, and the point is data insertion/shifting can cause sudden increases in size with block level deduplication.
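The shift effect described above can be sketched with a toy block-hash store (hypothetical 4-byte blocks for readability; real ZFS records are up to 128 KB):

```python
import hashlib

BLOCK = 4  # toy block size; far smaller than a real ZFS record

def unique_blocks(data: bytes) -> int:
    """Number of distinct blocks, i.e. what a block-level dedup store keeps."""
    chunks = (data[i:i + BLOCK] for i in range(0, len(data), BLOCK))
    return len({hashlib.sha256(c).digest() for c in chunks})

print(unique_blocks(b"ABCDABCDABCDABCD"))    # 1: every block is "ABCD"
print(unique_blocks(b"eABCDABCDABCDABCD"))   # 3: one inserted byte shifts every block
```

One inserted byte changes every block boundary after it, so nearly all the dedup savings evaporate even though almost none of the data changed.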

    • by dgatwood (11270)

      That's just classic bad design. There's no reason for the decompressed files to exist on disk at all just to decompress them. The software should have decompressed to RAM on the fly instead of storing the decompressed files as temp files on the hard drive. It's all probably because they made a poor attempt at shoehorning compression into a VFS layer that was too block-centric. Classic bad design all around.

      • by icebike (68054) on Monday November 02, 2009 @08:15PM (#29957208)

        Bad design on Novell's part, but the problem persists in the de-duplicated world, where de-duplicating to memory only is not a solution.

        Imagine a hundred very large files containing largely the same content. Now imagine CHANGING just a few characters in each file via some automated process. Now 100 files which were actually stored as ONE file balloon to 100 large files.

        On a drive that was already full, changing just a few characters (not adding any total content) could cause a disk full error.

        You really can't fake what you don't have. You either have enough disk to store all of your data or you run the risk of hindsight telling you it was a really bad design.

    • by Znork (31774)

      But there is a hell to pay somewhere down the road.

      I'd certainly expect that. I don't quite get what people are so desperate to de-duplicate anyway. A stripped VM os image is less than a gigabyte, you can fit 150 of them on a drive that costs less than $100. You'd have to have vast ranges of perfectly synchronized virtual machines before you'd have made back even the cost of the time spent listening to the sales pitch.

      I can't really see many situations where the extra complexity and cost would end up actual

      • by icebike (68054)

        >I can't really see many situations where the extra complexity and cost would end up actually saving money.

        I could see it for write-only media.
        With the proper byte-range selection, you could probably find enough duplicate blocks in just about anything to greatly expand capacity.

      • by PRMan (959735)
        It would be great for ISPs, where each of their user instances has files in common. Also, for a backup drive for user PCs, where each user has the OS and probably a lot of documents in common.
      • by drsmithy (35869)

        I'd certainly expect that. I don't quite get what people are so desperate to de-duplicate anyway. A stripped VM os image is less than a gigabyte, you can fit 150 of them on a drive that costs less than $100.

        Firstly, because dedup gives you the space savings without the hassle of "stripping" the VM image.
        Secondly, because dedup also delivers other advantages by reducing physical disk IOs, improving cache efficiency and reducing replication traffic.
        Thirdly, because enterprise storage costs a lot more tha

    • by c6gunner (950153)

      This could trigger a similar situation where there was suddenly not enough room to store the same amount of data that was already on the device. (For some values of "suddenly" and "already").

      Yes, but what's the likelihood of that occurring? We're talking about block-level deduplication here. If you have two identical files and you add a bit to the end of one, you're not creating a duplicate file - you're just adding a few blocks while still referencing the original de-dupped file. Now, if you were doing file-level deduplication it might be an issue, but this way ... I can't see it ever being a problem unless your array is already at 99.9% capacity (and that's just a bad idea in general).
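      That intuition is easy to check with the same kind of toy block hashing (an illustrative sketch, not ZFS's actual implementation): appending to one of two identical files adds only the tail block to the store.

```python
import hashlib

BLOCK = 4096

def block_hashes(data):
    """SHA-256 hash of each fixed-size block, as a block-level dedup store would key them."""
    return {hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)}

a = b"".join(bytes([i]) * BLOCK for i in range(8))  # 8 distinct blocks
b = a + b"appended tail"                            # identical file plus a short tail

# The two files share all 8 original blocks; only the tail block is new.
print(len(block_hashes(a) | block_hashes(b)))   # 9
```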

    • First, why would you want it built into a hard drive? Your deduplication ratio would then be limited to what you can store on one drive. The drive would have no way to reference blocks on other drives in the same system. Doing it in software allows you to reference (in this case) all data within the entire zpool. That could be petabytes of storage (theoretically it could be far more, but that's probably the realistic limit today due to hardware/performance constraints).

      As for your "hell to pay later" t
  • I tried this on my RAID-1 system and it got converted to RAID-0.

  • by Animats (122034) on Monday November 02, 2009 @09:44PM (#29958384) Homepage

    I'd argue that file systems should know about and support three types of files:

    • Unit files. Unit files are written once, and change only by being replaced. Most common files are unit files. Program executables, HTML files, etc. are unit files. The file system should guarantee that if you open a unit file, you will always read a consistent version; it will never change underneath a read. Unit files are replaced by opening for write, writing a new version, and closing; upon close, the new version replaces the old. In the event of a system crash during writing, the old version of the file remains. If the writing program crashes before an explicit close, the old file remains. Unit files are good candidates for unduplication via hashing. While the file is open for writing, attempts to open for reading open the old version. This should be the default mode. (This would be a big convenience; you always read a good version. Good programs try to fake this by writing a new file, then renaming it to replace the old file, but most operating systems and file systems don't support atomic multiple rename, so there's a window of vulnerability. The file system should give you that for free.)
    • Log files. Log files can only be appended to. UNIX supports this, with an open mode of O_APPEND. But it doesn't enforce it (you can still seek), and NFS doesn't implement it properly. Nor does Windows. Opens of a log file for reading should be guaranteed to read exactly out to the last write. In the event of a system crash during writing, log files may be truncated, but must be truncated at an exact write boundary; trailing off into junk is unacceptable. Unduplication via hashing probably isn't worth the trouble.
    • Managed files. Managed files are random-access files managed by a database or archive program. Random access is supported. The use of open modes O_SYNC, O_EXCL, or O_DIRECT during file creation indicates a managed file. Seeks while open for write are permitted, multiple opens access the same file, and O_SYNC and O_EXCL must work as documented. Unduplication via hashing probably isn't worth the trouble and is bad for database integrity.

    That's a useful way to look at files. Almost all files are "unit" files; they're written once and are never changed; they're only replaced. A relatively small number of programs and libraries use "managed" files, and they're mostly databases of one kind or another. Those are the programs that have to manage files very carefully, and those programs are usually written to be aware of concurrency and caching issues.

    Unix and Linux have the right modes defined. File systems just need to use them properly.
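    The write-new-then-rename dance that good programs fake today looks like this in practice (a POSIX-oriented sketch; os.replace is atomic only within a single filesystem, which is why the temp file is created next to the target):

```python
import os
import tempfile

def atomic_replace(path, data):
    """Replace `path` with `data` so readers see either the complete old
    version or the complete new one, never a partial write."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)   # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())              # make data durable before the rename
        os.replace(tmp, path)                 # atomic rename over the old file
    except BaseException:
        os.unlink(tmp)                        # clean up the temp file on failure
        raise

# e.g.: atomic_replace("settings.json", b'{"theme": "dark"}')
```

Without the fsync before the rename, a crash can leave a correctly named but empty file on some filesystems — exactly the window of vulnerability the comment mentions.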

    • by greg1104 (461138) <gsmith@gregsmith.com> on Tuesday November 03, 2009 @01:14AM (#29960116) Homepage

      The main corner case in your suggested "unit file" implementation is where someone is overwriting a file too large for the filesystem to contain two copies of it. You have to truncate when this happens to fit the new one; you can't just keep the old one around until it's replaced. This makes it impossible to meet the spec you're asking for in all cases. The best you can do is try to keep the original around until disk space runs out, and only truncate it when forced to. However, if that's how the implementation works, then applications can't just blindly rely on the filesystem to always do the right thing and "give you that for free". They've still got to create the new file and confirm it got written out before they touch the original if they want to guarantee never losing the original good copy, so that they bomb with a disk-space error rather than risk truncating the original. That's why this whole path doesn't go anywhere useful; better to work on popularizing an API for atomic rewrites or something.

      As for your "managed files" case, that won't work for all database approaches. For example, in PostgreSQL, only writes to the database write-ahead log are done with O_SYNC/O_DIRECT. The main data block updates (and writes that are creating new data blocks) are written out asynchronously, and then when internal checkpoints reach their end any unwritten blocks are forced to disk with fsync if they're still in the OS cache. You'd be hard pressed to detect which of your suggested modes was the appropriate one for just the obvious behavior there, and there's still more weird corner cases to worry about buried in there too (like what the database does with the data blocks and the WAL to repair corruption after a crash).

      Both these highlight that it's hard to make improvements here at just the filesystem level. Some of the really desirable behavior is hard to do unless applications are modified to do something different too. That hasn't really been going well for ext4 [slashdot.org] this year, and how that played out highlights how hard an issue this is to crack.
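      The PostgreSQL write path described above (synchronous log appends, lazy page writes, fsync at checkpoint) can be caricatured in a few lines of POSIX-only Python — the class, file layout, and record format here are invented for illustration and are not PostgreSQL's actual formats:

```python
import os

class ToyWAL:
    """Toy write-ahead pattern: log appends are forced to disk immediately,
    data-page writes are lazy and only made durable at a checkpoint."""

    PAGE = 4096

    def __init__(self, log_path, data_path):
        flags = os.O_WRONLY | os.O_CREAT | os.O_APPEND
        flags |= getattr(os, "O_SYNC", 0)   # O_SYNC is absent on some platforms
        self.log = os.open(log_path, flags, 0o600)
        self.data = os.open(data_path, os.O_RDWR | os.O_CREAT, 0o600)

    def write_page(self, page_no, page):
        # 1. Log the intent durably (synchronous append).
        os.write(self.log, b"page %d len %d\n" % (page_no, len(page)))
        # 2. Write the page itself; this may linger in the OS cache.
        os.pwrite(self.data, page, page_no * self.PAGE)

    def checkpoint(self):
        # Force all lazy page writes to disk at once.
        os.fsync(self.data)

    def close(self):
        self.checkpoint()
        os.close(self.log)
        os.close(self.data)
```

Crash recovery would replay the log against the data file — the part greg1104 notes is where the really weird corner cases live.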

  • BTRFS is better (Score:3, Interesting)

    by Theovon (109752) on Monday November 02, 2009 @11:18PM (#29959318)

    At first, BTRFS started out as an also-ran, trying to duplicate a bunch of ZFS features for Linux (ZFS's license being incompatible with inclusion in the Linux kernel). But then BTRFS took a number of things that were overly rigid about ZFS (shrinking volumes, block sizes, and some other stuff) and made them better, including totally unifying how data and metadata are stored. I'm sure there are a number of ways in which ZFS is still better (RAIDZ), but putting aside some of the enterprise features that most of us don't need, BTRFS is turning out to be more flexible, more expandable, more efficient, and better supported.

"But this one goes to eleven." -- Nigel Tufnel
