ZFS For Mac OS X Source Code Available 251
nezmar writes "Noel Dellofano, who is part of the ZFS development team at Apple, has a post on Mac OS Forge announcing a late Christmas gift: he is making available binaries and source code, plus instructions, of the ZFS filesystem for Mac OS X."
Notes (Score:5, Informative)
When do they say, "Just Kidding!" (Score:5, Informative)
An absolutely, positively, amazing feature set. I can't wait until it's stable enough for production use. After 7 years of staying away from Apple products, I'm going back to the Mac.
Re:Hmm (Score:5, Informative)
Re:Great new filesystems (Score:5, Informative)
I've had no problems with 5T+ datasets, and we even get about a 10-20% performance boost out of it compared to UFS.
Snapshotting and all those neat features work exactly as expected.
The only minor issue I see is that a zfs send is single-threaded, so you can't easily parallelize it over multiple processes.
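One common workaround is to split the work by dataset instead: run one single-threaded `zfs send` per child filesystem, concurrently. A minimal sketch, assuming a pool laid out as child datasets; the pool name, snapshot name, and destination directory here are all made up:

```shell
#!/bin/sh
# Sketch: parallelize backup by running one `zfs send` per child dataset
# in the background. Pool, snapshot, and destination names are illustrative.
parallel_send() {
    pool="$1"; snap="$2"; dest="$3"
    # Enumerate child datasets, skipping the pool root itself.
    for fs in $(zfs list -H -o name -r "$pool" | tail -n +2); do
        # Each send is its own process; "/" in names becomes "_" for filenames.
        zfs send "${fs}@${snap}" > "${dest}/$(echo "$fs" | tr / _).zfs" &
    done
    wait    # block until every background send has finished
}
```

Usage would be something like `parallel_send tank backup /mnt/backup`. Each send is still single-threaded internally; this only helps when the pool has several child datasets of comparable size.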
Re:The real questions are... (Score:5, Informative)
sh-3.2# zfs
Read-Only ZFS Implementation
missing command
usage: zfs command args
Re:When do they say, "Just Kidding!" (Score:4, Informative)
Best ZFS Presentation (Score:5, Informative)
Re:Linux? (Score:4, Informative)
Sun CEO Encourages Apple to Use ZFS (Score:5, Informative)
Re:Total garbage - has no error result codes! (Score:5, Informative)
ZFS is designed to perform writes asynchronously: if the write should be able to complete, ZFS returns success immediately and does the actual work later. It's a different way of thinking about a filesystem. You need to do a "zpool export" or something before you can unplug a detachable disk, to avoid the panic when you unplug it. That's not a bug. It's by design.
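To make the detach dance concrete, here is a minimal sketch (the pool name is made up, and the wrapper function is my own, not part of ZFS):

```shell
#!/bin/sh
# Sketch: cleanly detach a removable pool before unplugging the disk.
# `zpool export` flushes pending asynchronous writes and marks the pool
# exported, so pulling the cable afterwards can't panic the machine.
safe_detach() {
    pool="$1"
    if zpool export "$pool"; then
        echo "$pool exported; safe to unplug"
    else
        echo "export failed; do NOT unplug $pool" >&2
        return 1
    fi
}
```

Run `safe_detach mypool`, unplug the disk, and later `zpool import mypool` re-attaches it.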
No it isn't. You're just misunderstanding the semantics of ZFS.
No it isn't. It's just not a filesystem that's suitable for the masses. Average users cannot understand or manage an advanced storage pool system like ZFS. They're better off with filesystems that make sense to them, like HFS+, ext2 or NTFS.
Shame on all the geeks for telling everyone that ZFS will solve all their problems. ZFS is great under certain circumstances. It does what it does very well, but it isn't a filesystem for the masses.
Just plain not reporting errors is a bug. ZFS's asynchronous write semantics are intentional, if counter-intuitive, behaviour.
Re:Linux? (Score:4, Informative)
Re:The real questions are... (Score:5, Informative)
Re:Notes (Score:5, Informative)
http://forre.st/storage [forre.st]
It works with newegg.com to find the best deals on HDs
"he is making" (Score:5, Informative)
Re:The real questions are... (Score:5, Informative)
Re:Port it to Linux (Score:4, Informative)
Re:The real questions are... (Score:1, Informative)
For LVM, one has to partition the disks first. If you're giving the whole disk to ZFS, just use something like "zpool create pool raidz c5t6d0 c5t6d1 c5t6d2 c5t6d3" (that's straight from memory - don't hold me to the exact syntax...). ZFS will partition the disks for you.
One less thing to worry about with ZFS.
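The whole-pool creation step can be sketched as a tiny wrapper (the pool and Solaris-style device names follow the parent's from-memory example and are illustrative):

```shell
#!/bin/sh
# Sketch: build a raidz pool from bare, unpartitioned disks in one command.
# ZFS labels and partitions the disks itself; no prior fdisk/format step.
make_pool() {
    name="$1"; shift
    # Create the pool, then show its health so you can confirm the layout.
    zpool create "$name" raidz "$@" && zpool status "$name"
}
```

Usage: `make_pool pool c5t6d0 c5t6d1 c5t6d2 c5t6d3`.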
Actually, (Score:5, Informative)
Re:The real questions are... (Score:4, Informative)
This is also why, when you create a ZFS pool using the read/write drivers, it defaults to creating a pool with ZFS version 6 on disk, so that it's compatible with the version of ZFS shipping with Leopard. (You run "zpool upgrade" to move your pool to the most recent on-disk version if this kind of compatibility isn't an issue for you.)
BTW, Leopard also reads from BSD and Solaris-created ZFS drives just fine.
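The version check and upgrade can be sketched as follows (the pool name is made up; note the upgrade is one-way):

```shell
#!/bin/sh
# Sketch: inspect and (irreversibly) raise a pool's on-disk format version.
# Only upgrade once older readers -- e.g. Leopard's version-6 driver --
# no longer need to import the pool.
pool_version() {
    zpool get version "$1"     # show the pool's current on-disk version
}
pool_upgrade() {
    zpool upgrade "$1"         # one-way: older implementations can't import afterwards
}
```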
Re:Total garbage - has no error result codes! (Score:3, Informative)
Then, of course, checksumming everything does wonders to protect against bit rot and flaky cables.
Re:The real questions are... (Score:3, Informative)
I could say the same of NTFS. After throwing in the towel with regard to Windows as a base OS, I have years of accumulated data on NTFS volumes spread across a small pile of drives. Linux support for NTFS is still a little shaky. But with read-only access to NTFS, I can throw those old desktop or laptop drives into an enclosure, connect it, and either pull all the data over to a writable volume for ongoing work (and perhaps dispose of the old drive), or pick out individual pieces of data I want without worry of corrupting the volume.
How does this apply to ZFS? Not sure, since "piles of old data" isn't a likely scenario on ZFS. But I can imagine accessing shared NAS/SAN ZFS volumes with only one system managing dynamic allocation... or perhaps ZFS in place of ISO9660 to speed up large software installations?
NTFS-3G on Linux is stable (Score:5, Informative)
Since ZFS is new, I don't think your scenario applies, and it's not intended for DVD/CD use.
Re:Great new filesystems (Score:4, Informative)
Here's a link explaining the parent [oreilly.com] for all you c|net "reporters" and NYT technology stringers who read slashdot. You know who you are.
Re:The real questions are... (Score:3, Informative)
No, you don't: LVM Physical Volumes can be initialised directly on whole, unpartitioned disks (/dev/sda).
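For comparison, a sketch of the whole-disk LVM path (device, volume-group, and logical-volume names are made up; the wrapper function is mine):

```shell
#!/bin/sh
# Sketch: LVM straight onto whole, unpartitioned disks -- no partition
# table required. Names and sizes are illustrative.
make_lv() {
    vg="$1"; lv="$2"; size="$3"; shift 3
    pvcreate "$@" &&                     # label each bare disk as a Physical Volume
    vgcreate "$vg" "$@" &&               # gather the PVs into a Volume Group
    lvcreate -n "$lv" -L "$size" "$vg"   # carve out a Logical Volume
}
```

Usage: `make_lv data archive 100G /dev/sda /dev/sdb`, then mkfs on /dev/data/archive. Unlike ZFS, you still pick and run a filesystem on top yourself.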
Re:NTFS-3G on Linux is stable (Score:1, Informative)
Re:linux md is grow-able, as is xfs and ext3 (Score:4, Informative)
For anyone who has not seen the ZFS demonstration videos by Bill Moore: you must watch the link.
High Bandwidth versions - http://www.sun.com/software/media/real/zfs_learningcenter/high_band [sun.com]...
Low Bandwidth versions - http://www.sun.com/software/media/real/zfs_learningcenter/low_bandw [sun.com]...
Also general info here:
- http://www.sun.com/software/solaris/ds/zfs.jsp [sun.com]
- http://www.sun.com/software/solaris/zfs_learning_center.jsp [sun.com]
Comment removed (Score:3, Informative)
Re:The real questions are... (Score:2, Informative)
I have set up a few RAID systems (LVM, ZFS, and hardware-based).
Where ZFS is nice is that there is one nicely documented way to do it. Its biggest advantage (for me) is a single interface for turning encryption, compression, and filesystem snapshots on and off. For example, you can turn on compression for an entire volume, or a directory within the filesystem, without ever taking it offline, because it will compress new files only (to start), and you can then walk through the rest as CPU allows...
You do have all these options for LVM with ext3, but in true Linux tradition, there are many, many different ways to do the same thing. So if you don't want to spend hours researching the hows and whys, you easily end up creating more work for yourself. I'm sure that if you do all the research you can create a more efficient solution, but you'll also end up with many different interfaces (that has advantages too). For example, I am replacing a RHEL hardware RAID with a Solaris ZFS RAID now. To admin the RHEL solution, I use LVM to manage allocations, ext3 tools to look for filesystem corruption, the RAID manager program to add/remove/grow drives, NFS to export partitions and permissions, and various webmin plugins for backups, restores, and file versions. All of these are now within the zfs tools on Solaris.
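The single-interface point above can be sketched in one property flip (the pool/filesystem name is made up, and the helper function is mine, not part of ZFS):

```shell
#!/bin/sh
# Sketch: enable compression on a live filesystem with one property set --
# no unmount, no downtime. Only blocks written afterwards are compressed,
# matching the "new files only (to start)" behaviour described above.
enable_compression() {
    zfs set compression=on "$1" && zfs get -H -o value compression "$1"
}
```

Usage: `enable_compression tank/home` prints the confirmed property value; snapshots and other features hang off the same `zfs set`/`zfs get`/`zfs snapshot` interface.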
Re:The real questions are... (Score:3, Informative)
I've been running the 7.0 branch since about October '07, and I've been using ZFS. Granted, it's my home machine, but it's my primary workstation and I *need* this machine to bring home the bacon, working via home office and all. While I don't use it for important data, I chose the file systems intentionally in order to beat it up a lot: /usr/src, /usr/obj, and /usr/ports -- that's where all of the kernel/userland/ports compiles take place (at least by default). Tons of reads/writes/deletes, since I update my ports and source trees almost daily. Never once have I had a problem. On a whim, I ran a zpool status once and found a corruption error on /usr/obj, but I had never noticed because it just kept on chugging along. Not that /usr/obj is a terribly critical directory -- it's one I nuke on a regular basis to ensure a pristine system rebuild. Still, I ran a "zpool scrub" on it live, and it fixed the problem without ever going offline.
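That check-and-repair cycle amounts to two commands; a sketch, with a made-up pool name and a wrapper function of my own:

```shell
#!/bin/sh
# Sketch: detect and repair corruption on a live pool, as described above.
check_and_scrub() {
    pool="$1"
    zpool status -x "$pool"    # -x reports only if the pool is unhealthy
    zpool scrub "$pool"        # walk every block, verify checksums, self-heal
}
```

`zpool status` afterwards shows scrub progress; the pool stays online and mounted the whole time.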
Not only that, I also use the built-in compression (gzip-9) on /usr/src and /usr/ports, and my machine is amd64. The former really taxes the system, and the latter platform seems to lag slightly behind i386. Sometimes my machine has quick freezes when the ZFS file systems are being beaten up, but it's only a 2.0GHz box with 1GB of memory. For pre-release software, FreeBSD 7.0 really kicks some ass.
Of course, anecdotes are not synonymous with data. ZFS obviously has some issues. But then again, so did reiserfs and ext3 when they were first cast into the mainstream kernel and distros. Sure, the BSD userbase with ZFS is *much* smaller than Linux's, but I expect ZFS will shine in short order.
I wish they'd finished UFS support first. (Score:3, Informative)
HFS? I've had HFS partitions get corrupted just by letting them get too full. That's just nuts.
ZFS? Sun says ZFS doesn't need file system check and repair tools, that it can't fail. That's what DEC said about AdvFS, and then later on they came up with salvage tools to pull data out of a damaged AdvFS file system. That's what the Linux folks used to say about ReiserFS, too. Even before the Hans Reiser incident it had become clear that it wasn't true, and I've got no reason to assume that ZFS will be any better, not over the long term.
The only journalled file system I've found that has come anywhere near that goal has been Network Appliance's, and they have complete control over the hardware and software and no third-party applications and drivers running on the hardware. And, of course, few places have very many NetApps (we certainly never had more than 4 at a time) so I can't say that the apparent stability of our boxes isn't due to the fact that we simply never had many of them...
Apple refreshed UFS for Panther, bringing in SoftUpdates to give it the performance advantages of journalling, then dropped it.
Apple has created layers that run over network file systems that implement almost all of the application-visible differences between HFS and remote CIFS and NFS shares, but you can't take full advantage of these for local UFS file systems. Why not? Don't ask me, ask Apple.
I blame corporate ADHD.