ZFS For Mac OS X Source Code Available 251

nezmar writes "Noel Dellofano, who is part of the ZFS development team at Apple, has a post on Mac OS Forge announcing a late Christmas gift: binaries, source code, and instructions for the ZFS filesystem on Mac OS X."
  • by slyn ( 1111419 ) <ozzietheowl@gmail.com> on Sunday January 13, 2008 @06:30PM (#22028630)
    How stable is it, and how soon till I can get it on my Mac by default?
  • by maubp ( 303462 ) on Sunday January 13, 2008 @06:32PM (#22028652)
    Reading their FAQ, it sounds like there are a lot of niggles to fix yet - including assumptions in other parts of Mac OS. All in all, it sounds like ZFS isn't ready for general use on the Mac just yet. Maybe Mac OS X 10.6 will ship with this by default?
  • by PhotoGuy ( 189467 ) on Sunday January 13, 2008 @08:23PM (#22029622) Homepage
    It's a shame that I'm gunshy with new (to the OS) filesystems. ZFS has so much to offer, but every time I try out a new filesystem, I end up with data loss, even ones that are supposedly new and wonderful and robust. (Even when ext3 was new but stable, I lost stuff on it.) I can't wait to hear lots of positive feedback on its stability and performance, so I can get up the nerve to try it.
  • Re:Hmm (Score:3, Interesting)

    by leamanc ( 961376 ) on Sunday January 13, 2008 @09:26PM (#22030082) Homepage Journal

    Well then I wonder what Sun thinks of this.

    Not that it really matters what Sun thinks about their F/OSS filesystem that anyone can download, modify or incorporate into their OS, but they are excited about Apple's adoption of ZFS, and have been contributing resources to the 'ZFS for OS X' project. It was widely rumored that ZFS would at least be an option in the shipping version of Leopard, if not the default filesystem. Someone over at Sun was even crowing about this a few months before Leopard was released.

    I'd say Sun looks favorably upon this.

  • by iluvcapra ( 782887 ) on Sunday January 13, 2008 @10:33PM (#22030472)

    I'll bet one of the reasons they're putting it out there is the hope that a few kind souls with some time on their hands will submit patches and work out the kinks, given the amount of interest in getting this working on Mac OS X -- and there's a lot.

    Maybe between Apple, some Sun devs on their breaks and Amit Singh they can have this all wrapped up in a few months :)

    Academic question: what would have happened if MS had open sourced WinFS? Even under their PL, there would probably have been enough interest among dedicated nerds to... who knows.

  • by QuietLagoon ( 813062 ) on Sunday January 13, 2008 @10:36PM (#22030484)
    It's already available on FreeBSD [freebsd.org] if you want to play.
  • by Eddi3 ( 1046882 ) on Sunday January 13, 2008 @11:28PM (#22030794) Homepage Journal
    "It's *very* easy to set up btw, much easier than setting up a RAID in Linux. "

    I doubt that. Setting up a RAID array in Linux is about 4-5 lines in the CLI.
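    For reference, those "4-5 lines" look roughly like this -- a minimal sketch with hypothetical device names (/dev/sdb1 and friends), not a recipe for any particular box:

      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
      mkfs.ext3 /dev/md0
      mkdir -p /mnt/array
      mount /dev/md0 /mnt/array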
  • by wodgy7 ( 850851 ) on Sunday January 13, 2008 @11:32PM (#22030824)
    It wasn't that easy to set up a RAID in Linux the last time I tried (admittedly long ago), but even so, by comparison setting up a RAID-Z in ZFS is just a single line: "zpool create mypool raidz disk4s2 disk5s2 disk6s2"
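    And once the pool exists, the follow-up is about as short -- a sketch, with example pool and dataset names:

      zpool status mypool       # check that the raidz vdev is online and healthy
      zfs create mypool/media   # carve out a filesystem; by default it mounts itself at /mypool/media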
  • Re:Port it to Linux (Score:3, Interesting)

    by Hucko ( 998827 ) on Monday January 14, 2008 @12:41AM (#22031174)
    I'm considering it, but your answer just says there are two different methodologies while claiming that one is better than the other. I am more wary of the CDDL, just as I am wary of Lucent's licence for Plan 9 (for sheer clever thinking it would be my preferred OS -- discounting that I haven't learnt how to use it effectively; I'm not as clever as the OS).
  • by halfdan the black ( 638018 ) on Monday January 14, 2008 @01:42AM (#22031520)
    Suppose I ported ZFS to Linux (not that I could, just suppose) as a native kernel module, and published the source code. If then I used ZFS on Linux, and some others also grabbed the 'Linux ZFS' code, built it and used it. What laws if any would I be breaking? Who and under what grounds could sue me / Linux ZFS users?
  • by osgeek ( 239988 ) on Monday January 14, 2008 @01:46AM (#22031544) Homepage Journal
    Every time I have to mess with Solaris, I'm annoyed at how much dorking around I have to do to get a reasonably modern environment and set of tools on it, like a fresh install of Ubuntu or Fedora Core has.

    FreeBSD... maybe... I kind of like the Apple hardware, though.
  • by SuperBanana ( 662181 ) on Monday January 14, 2008 @02:07AM (#22031698)

    "then you need to mkfs, and if you run out of space you're screwed because you can't easily grow."

    All of Linux's md raid modes are grow-able.

    LVM2, XFS, and ext3 are all capable of not just expansion, but *online* expansion. With XFS it's one command: xfs_growfs -d. It automatically senses the new block device size and presto, you've got a larger filesystem.

    BTDT two weeks ago when I added a drive to my RAID5 array, expanded the LVM2 physical volume, grew the logical volume, and then grew the XFS volume -- roughly the sequence sketched at the end of this comment. (I choose to run LVM2 on top of the array; I could just as easily have put XFS directly on the array device itself.) The only caveat is that you won't see the extra space until the reshape is done.

    I'm not saying it's equal to ZFS, but Linux's filesystems and volume management are a lot more capable than you're claiming, and everyone needs to calm down and realize that RAID is not ZFS, ZFS is not RAID, etc.
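    Roughly what that grow sequence looks like, with made-up device and volume names (yours will differ):

      mdadm --add /dev/md0 /dev/sde1           # add the new disk to the array
      mdadm --grow /dev/md0 --raid-devices=4   # reshape the RAID5 across it
      pvresize /dev/md0                        # tell LVM2 the physical volume grew
      lvextend -l +100%FREE /dev/vg0/data      # grow the logical volume into the new space
      xfs_growfs /mnt/data                     # grow XFS online, no unmount needed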

  • by wodgy7 ( 850851 ) on Monday January 14, 2008 @02:09AM (#22031710)
    You're mistaken. ZFS RAID-Z is definitely "raid" -- in fact it's RAID without the RAID-5 write hole on non-specialized (no NVRAM in the controller) hardware. Contrary to what you said, you *can* easily go from a single drive to a pair of mirrored drives (see ZFS admin guide, p. 59) or a RAID-Z (p. 60). The only real limitation is you cannot add an additional disk to an existing RAID-Z configuration, the idea right now being that you'll add another set of disks in RAID-Z as a top-level vdev. This is not optimal for a lot of scenarios but they're working on it. ZFS mirrored configurations are more flexible.

    The data integrity advantages of ZFS over traditional RAID-4 and RAID-5 are hard to argue with... it validates the entire input-output path.
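    To illustrate the single-drive-to-mirror case mentioned above (the pool and disk names are just examples):

      zpool create mypool disk3s2           # start with a plain single-disk pool
      zpool attach mypool disk3s2 disk4s2   # attach a second disk; the vdev becomes a mirror and resilvers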
  • by corychristison ( 951993 ) on Monday January 14, 2008 @02:49AM (#22031906)

    Who and under what grounds could sue me / Linux ZFS users?
    Short answer: nobody and nothing.

    Long answer: The biggest issue (to my understanding) is that it will not be included in the official kernel. Google sponsored a port of ZFS to FUSE to cover their butts, because I suppose they just didn't want to get involved in the licensing issues. I don't see why it couldn't be released as a patchset that someone would have to apply and install manually, at the very least.
    But then again, this is my view and understanding of it. Although I may be wrong, I don't really care... I just want ZFS (without moving from Linux) :-(
  • Re:Linux? (Score:5, Interesting)

    by zsau ( 266209 ) <`slashdot' `at' `thecartographers.net'> on Monday January 14, 2008 @02:55AM (#22031934) Homepage Journal
    It's not a question of whether people think the CDDL is Free or not. There are "zealots" like Stallman who think that both GPL v2 and GPL v3 are free. But he would be the first to say you can't include GPL v3 code, like a future relicensed version of the Solaris kernel, in GPL v2 code, like the Linux kernel.

    And I think most people will agree with you that FUSE isn't good enough. But at the moment, there are only two options: complete reimplementation from the ground up, or FUSE. FUSE is the easier of the two.
  • Re:Linux? (Score:4, Interesting)

    by lakeland ( 218447 ) <lakeland@acm.org> on Monday January 14, 2008 @03:51AM (#22032130) Homepage
    Actually, I see no reason why FUSE wouldn't be perfectly acceptable for ZFS -- at least until people start seriously considering ZFS for root filesystems. FUSE is fast enough, stable enough, and flexible enough. We don't have X in the kernel; I don't see that we _need_ to have the filesystem in the kernel either.
  • by Anonymous Coward on Monday January 14, 2008 @04:55AM (#22032382)
    I was thinking of playing around with ZFS in the upcoming FreeBSD 7.0 release. Can anyone tell me how it is for hosting samba shares, ftp and http, etc with a pool of about 2TB? From what I've heard, it's still shaky, but I don't think it'd be under too much stress in my situation. Any tips or insight? I'm still kinda new to FreeBSD.
  • by Anonymous Coward on Monday January 14, 2008 @06:26AM (#22032790)
    Who is this Colonel Panic? Is he in the same division as General Failure?

    Anyway, from a design point of view, ZFS should not cease to operate if a drive (removable or not) ceases to be available unexpectedly. Given that all writes are meant to be atomic, you should end up with the disk in a consistent state, but not necessarily the state you expect. If your application relies on non-atomic operations for referential integrity (e.g. an application depends on two separate writes to two separate files both completing successfully to keep the data consistent) you'll have problems.

    It's actually a user interface issue - in return for all the benefits ZFS brings, the user commits to notifying it in advance of drive removals. If you don't want to do that, that is fine too - you just get a performance hit as no write caching takes place.

    You shouldn't need to tell the OS via CLI or GUI that the drive is going away, though - a spin-down or sync button or equivalent should be provided on the drive itself.

    Press button on removable device. Filesystem notified of pending removal. Filesystem flushes caches. Once safe, LED on device goes from Red to Green. Green LED - device can be removed.

    Or design a connector that notifies the OS that it is being removed _and_ gives enough notice to allow cache flushing to succeed. That's a tall order given the size of write caches and the speed of USB connections. You might get 100 ms to write 8 Mebibytes of data, if you are lucky.

  • by Kjella ( 173770 ) on Monday January 14, 2008 @07:59AM (#22033156) Homepage
    YMMV, but in my experience it'll consistently crash when moving large files (>2 or 4GB, not sure) from WD external disks. Fortunately, since it's a user-space driver, a crash doesn't take the rest of the system down with it. It has been perfectly stable for accessing my internal disks, though.
  • by Rich0 ( 548339 ) on Monday January 14, 2008 @12:00PM (#22035258) Homepage
    Actually, RAID-Z has one big limitation - it isn't growable (while linux software RAID actually is growable). Otherwise I'm a big fan of ZFS and would love to see it reliable on linux. I think that the layer-breaks in this case give you new capabilities you couldn't get without them (I'm not just talking one-command filesystem setup - I'm talking about copy-on-write and better snapshotting and redundancy support and other things that need to cross layer boundaries unless you want to waste a lot of space).

    ZFS supports adding additional RAID-Zs to a storage pool - but the last time I checked you couldn't resize/reshape a single RAID-Z array.

    Here is an illustrative scenario:

    1. I have three 250GB drives in each of two systems - one FreeBSD (ZFS) and one linux (mdadm+LVM+ext3). I put them in a RAID-5 on the linux system, and a RAID-Z on the FreeBSD system. On both I have 500GB of storage with single redundancy. I can partition this space into as many filesystems as I'd like with either approach.

    2. I buy three more 250GB drives. On FreeBSD I have to create a new RAID-Z giving me 500GB of additional storage (1TB total). On linux I can reshape the existing RAID-5, giving me 1.25TB of total storage (all 750GB of extra space is usable). You can't add new drives to a RAID-Z, but you can add them to a linux RAID-5. You could also put them in a separate RAID and add them to the LVM group and have the same total space as RAID-Z in that way.

    Don't get me wrong - I like ZFS and would consider using it. Linux should be pursuing it (obviously there are licensing issues to work out here) and not snubbing it. And I don't see any technical reason why you couldn't reshape a RAID-Z. It is a limitation though.
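    For the curious, step 2 on the Linux side is roughly the following (hypothetical device and volume names; depending on the mdadm version a --backup-file may be needed for the reshape):

      mdadm --add /dev/md0 /dev/sdd1 /dev/sde1 /dev/sdf1   # add the three new disks as spares
      mdadm --grow /dev/md0 --raid-devices=6               # reshape the RAID-5 across all six
      pvresize /dev/md0                                    # then grow the PV, LV and ext3 on top
      lvextend -l +100%FREE /dev/vg0/data
      resize2fs /dev/vg0/data                              # ext3 can grow online on recent kernels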
  • by MauriceV ( 455290 ) on Monday January 14, 2008 @12:01PM (#22035272)
    How is ZFS not RAID? RAID-Z? RAID-Z2? Plus, LVM has another limitation: there is no easy way for one filesystem to utilize the free space of another. With ZFS, this is automatic.
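    For example, with made-up pool and filesystem names:

      zfs create mypool/www
      zfs create mypool/ftp           # both draw from the same pool-wide free space
      zfs set quota=500G mypool/www   # an optional per-filesystem cap, if you want one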

"Life begins when you can spend your spare time programming instead of watching television." -- Cal Keegan

Working...