ZFS For Mac OS X Source Code Available 251
nezmar writes "Noel Dellofano, who is part of the ZFS development team at Apple, has a post on Mac OS Forge announcing a late Christmas gift: binaries, source code, and instructions for the ZFS filesystem on Mac OS X."
The real questions are... (Score:5, Interesting)
Not ready for prime time... (Score:4, Interesting)
Great new filesystems (Score:5, Interesting)
Re:Hmm (Score:3, Interesting)
Not that it really matters what Sun thinks about their F/OSS filesystem that anyone can download, modify or incorporate into their OS, but they are excited about Apple's adoption of ZFS, and have been contributing resources to the 'ZFS for OS X' project. It was widely rumored that ZFS would at least be an option in the shipping version of Leopard, if not the default filesystem. Someone over at Sun was even crowing about this a few months before Leopard was released.
I'd say Sun looks favorably upon this.
Re:Not ready for prime time... (Score:5, Interesting)
I'll bet one of the reasons they're putting it out there is the hope that a few kind souls with some time on their hands will submit some patches and work out the kinks, given the amount of interest there is in getting this working on Mac OS X -- and there's a lot.
Maybe between Apple, some Sun devs on their breaks and Amit Singh they can have this all wrapped up in a few months :)
Academic question: What would have happened if MS had open sourced WinFS? Even under their PL, there would probably have been enough interest among enough dedicated nerds to... who knows.
Re:The real questions are... (Score:5, Interesting)
Re:The real questions are... (Score:3, Interesting)
I doubt that. Setting up a RAID array in Linux is about 4-5 lines in the CLI.
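To put a number on "4-5 lines," a minimal md RAID-5 setup really is just a handful of commands (the device names, array name, and mount point below are made up for illustration):

```shell
# Create a 3-disk RAID-5 array (device names are hypothetical)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext3 /dev/md0           # put a filesystem on the new array
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid     # and mount it
```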
Re:The real questions are... (Score:5, Interesting)
Re:Port it to Linux (Score:3, Interesting)
What if someone did port ZFS to Linux? (Score:3, Interesting)
Re:When do they say, "Just Kidding!" (Score:3, Interesting)
FreeBSD... maybe... I kind of like the Apple hardware, though.
linux md is grow-able, as is xfs and ext3 (Score:4, Interesting)
then you need to mkfs, and if you run out of space you're screwed because you can't easily grow.
All of Linux's md RAID modes are grow-able.
LVM2, XFS, and ext3 are all capable of not just expansion, but *online* expansion. With XFS it's one command: xfs_growfs -d. It automatically senses the new block device size and presto, you've got a larger file system.
BTDT two weeks ago when I added a drive to my RAID5 array, expanded the LVM2 physical volume, grew the logical volume, and then grew the XFS volume. (I made the choice to run LVM2 on top of the array; I could have just as easily put XFS directly on the array device itself.) The only caveat is that you won't see the extra space until the reshape is done.
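The steps above can be sketched as commands - all of them while the filesystem stays mounted. The device, volume-group, and mount-point names here are assumptions, not the poster's actual setup:

```shell
# Grow an md RAID-5, then the LVM2 and XFS layers on top of it, online
# (device, VG, and LV names are hypothetical)
mdadm --add /dev/md0 /dev/sde              # add the new drive as a spare
mdadm --grow /dev/md0 --raid-devices=4     # reshape the array onto it
pvresize /dev/md0                          # tell LVM2 the PV got bigger
lvextend -l +100%FREE /dev/vg0/data        # grow the logical volume
xfs_growfs -d /mnt/data                    # grow XFS to fill it
```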
I'm not saying it's equal to ZFS, but Linux's filesystems and volume management are a lot more capable than you're claiming, and everyone needs to calm down and realize that RAID is not ZFS, ZFS is not RAID, etc.
Re:Linux md isn't rocket science...nor is ZFS raid (Score:5, Interesting)
The data integrity advantages of ZFS over traditional RAID-4 and RAID-5 are hard to argue with: it checksums data end to end, validating the entire I/O path.
Re:What if someone did port ZFS to Linux? (Score:3, Interesting)
Long answer: The biggest issue (to my understanding) is licensing - the CDDL is incompatible with the GPL, so it will not be included in the official kernel. Google sponsored a port of ZFS to FUSE, I suppose because they just didn't want to get involved in those licensing issues. I don't see why it couldn't be released as a patchset that someone would have to patch and install manually, at the very least.
But then again, this is my view and understanding of it. Although I may be wrong, I don't really care... I just want ZFS (without moving from Linux)
Re:Linux? (Score:5, Interesting)
And I think most people will agree with you that FUSE isn't good enough. But at the moment, there are only two options: a complete reimplementation from the ground up, and FUSE. FUSE is easiest.
Re:Linux? (Score:4, Interesting)
ZFS on FreeBSD Stability (Score:1, Interesting)
Re:Total garbage - has no error result codes! (Score:1, Interesting)
Anyway, from a design point of view, ZFS should not cease to operate if a drive (removable or not) ceases to be available unexpectedly. Given that all writes are meant to be atomic, you should end up with the disk in a consistent state, but not necessarily the state you expect. If your application relies on non-atomic operations for referential integrity (e.g. an application depends on two separate writes to two separate files both completing successfully to keep the data consistent) you'll have problems.
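One common way an application can sidestep that problem is to consolidate its update into a single atomic step: write the new contents to a temporary file, then rename it over the old one, since rename() within a filesystem is atomic on POSIX systems. A minimal sketch (the file names are made up):

```shell
# Atomic file update: write new contents to a temp file, then rename()
# it over the old one - readers see either the old version or the new,
# never a half-written file. (File names are hypothetical.)
printf 'old data\n' > state.txt
printf 'new data\n' > state.txt.tmp
sync                          # push the new contents to stable storage
mv state.txt.tmp state.txt    # atomic replace on the same filesystem
cat state.txt                 # prints "new data"
```

If the drive vanishes mid-sequence, you're left with either the old file or the new one - consistent either way.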
It's actually a user interface issue - in return for all the benefits ZFS brings, the user commits to notifying it in advance of drive removals. If you don't want to do that, that is fine too - you just get a performance hit as no write caching takes place.
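On a ZFS system today, "notifying it in advance" amounts to one command (the pool name below is a hypothetical example):

```shell
# Flush pending writes and detach the pool before unplugging the drive
# ("backup" is a hypothetical pool name)
zpool export backup    # flushes caches and unmounts the pool's filesystems
# ...the drive can now be pulled safely; to bring it back later:
zpool import backup
```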
You shouldn't need to tell the OS via CLI or GUI that the drive is going away, though - a spin-down or sync button or equivalent should be provided on the drive itself.
Press button on removable device. Filesystem notified of pending removal. Filesystem flushes caches. Once safe, LED on device goes from Red to Green. Green LED - device can be removed.
Or design a connector that notifies the OS that it is being removed _and_ gives enough notice to allow cache flushing to succeed. That's a tall order given the size of write caches and the speed of USB connections: flushing 8 mebibytes in the 100 ms you might get requires 80 MiB/s, beyond even USB 2.0's theoretical 60 MB/s, let alone its real-world throughput.
Re:NTFS-3G on Linux is stable (Score:3, Interesting)
Re:The real questions are... (Score:3, Interesting)
ZFS supports adding additional RAID-Z vdevs to a storage pool - but the last time I checked you couldn't resize/reshape a single RAID-Z array.
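Adding a second RAID-Z vdev to an existing pool is a one-liner; the pool and device names below are assumptions for illustration:

```shell
# Stripe a second 3-disk RAID-Z vdev into an existing pool
# (pool and device names are hypothetical)
zpool add tank raidz da3 da4 da5
zpool status tank   # now shows two raidz1 vdevs, striped together
```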
Here is an illustrative scenario:
1. I have three 250GB drives in each of two systems - one FreeBSD (ZFS) and one Linux (mdadm+LVM+ext3). I put them in a RAID-5 on the Linux system, and a RAID-Z on the FreeBSD system. On both I have 500GB of storage with single redundancy. I can partition this space into as many filesystems as I'd like with either approach.
2. I buy three more 250GB drives. On FreeBSD I have to create a new RAID-Z, giving me 500GB of additional storage (1TB total). On Linux I can reshape the existing RAID-5, giving me 1.25TB of total storage (all 750GB of extra space is usable). You can't add new drives to a RAID-Z, but you can add them to a Linux RAID-5. You could also put the new drives in a separate RAID, add it to the LVM group, and end up with the same total space as the two-RAID-Z approach.
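The capacity arithmetic in the scenario checks out: with n drives of size s, both RAID-5 and RAID-Z give roughly (n-1)*s of usable space.

```shell
# Usable capacity in GB for the scenario above (250GB drives)
echo $(( (3 - 1) * 250 ))      # 3-drive RAID-5 or RAID-Z: 500
echo $(( (6 - 1) * 250 ))      # 6-drive reshaped RAID-5: 1250
echo $(( 2 * (3 - 1) * 250 ))  # two separate 3-drive RAID-Zs: 1000
```

Hence the 250GB gap between the reshaped RAID-5 and the two-RAID-Z layout: the second RAID-Z burns a second drive on parity.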
Don't get me wrong - I like ZFS and would consider using it. Linux should be pursuing it (obviously there are licensing issues to work out here) and not snubbing it. And I don't see any technical reason why you couldn't reshape a RAID-Z. It is a limitation though.
Re:linux md is grow-able, as is xfs and ext3 (Score:2, Interesting)