
Google Switching To EXT4 Filesystem

An anonymous reader writes "Google is in the process of upgrading their existing EXT2 filesystem to the new and improved EXT4 filesystem. Google has benchmarked three different filesystems — XFS, EXT4 and JFS. In their benchmarking, EXT4 and XFS performed equally well. However, in view of the easier upgrade path from EXT2 to EXT4, Google has decided to go ahead with EXT4."
  • Time for a backup? (Score:5, Informative)

    by Itninja ( 937614 ) on Thursday January 14, 2010 @03:52PM (#30770502) Homepage
    I guess now is as good as any to go through my Gmail and Google Docs and make local backups. I'm sure my info is safe, but I have been through these types of 'upgrades' at work before and every once in a while....well, let's just say backups are never a bad idea.
    • by fuzzyfuzzyfungus ( 1223518 ) on Thursday January 14, 2010 @03:54PM (#30770526) Journal
      Not to worry. It's all in the cloud, right?
      • by castironpigeon ( 1056188 ) on Thursday January 14, 2010 @04:02PM (#30770658)
        Uh huh, the mushroom cloud.
        • by paradigm82 ( 959074 ) on Thursday January 14, 2010 @04:16PM (#30770932)
          It's probably nothing, probably. But I'm getting a small discrepancy in the file sizes...no, no, it's well within acceptable limits. Continue to stage 2.
        • by Anonymous Coward on Thursday January 14, 2010 @04:39PM (#30771314)

          Wait a minute. I'm a manager, and I've been reading a lot of case studies and watching a lot of webcasts about The Cloud. Based on all of this glorious marketing literature, I, as a manager, have absolutely no reason to doubt the safety of any data put in The Cloud.

          The case studies all use words like "secure", "MD5", "RSS feeds" and "encryption" to describe the security of The Cloud. I don't know about you, but that sounds damn secure to me! Some Clouds even use SSL and HTTP. That's rock solid in my book.

          And don't forget that you have to use Web Services to access The Cloud. Nothing is more secure than SOA and Web Services, with the exception of perhaps SaaS. But I think that Cloud Services 2.0 will combine the tiers into an MVC-compliant stack that uses SaaS to increase the security and partitioning of the data.

          My main concern isn't with the security of The Cloud, but rather with getting my Indian team to learn all about it so we can deploy some first-generation The Cloud applications and Web Services to provide the ultimate platform upon which we can layer our business intelligence and reporting, because there are still a few verticals that we need to leverage before we can move to The Cloud 2.0.

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Oh fuck off. It's not like Google is going to upgrade their entire multiply-redundant infrastructure all at once. And ext4 is a very conservative and stable FS. The "upgrade" process is to simply mount your old ext3 volume as ext4, and let new writes take advantage of ext4 features. If Google is actually still using ext2 rather than ext3, ext4 will be significantly *more* reliable. Not as good as XFS for preserving data integrity, but better than ext2.
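
      A minimal sketch of that mount-and-go path, assuming a hypothetical device and mount point (this is not Google's procedure, and it needs a kernel recent enough for the ext4 driver to handle journal-less ext2/ext3 volumes):
        # Hypothetical names; back up before experimenting.
        umount /srv/data
        e2fsck -f /dev/sdb1                  # make sure the old ext2/ext3 filesystem is clean
        mount -t ext4 /dev/sdb1 /srv/data    # the ext4 driver mounts the old format in place; the on-disk layout is unchanged
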
      • by Itninja ( 937614 ) on Thursday January 14, 2010 @04:28PM (#30771112) Homepage
        Jeez, calm down junior! No need to open a can of fanboi on me....
      • by gmuslera ( 3436 )
Data integrity (and replication) is managed in a layer above the filesystem, so journaling could be an unneeded performance hit. That's probably why they didn't upgrade to ext3 a long while ago.
      • Re: (Score:3, Insightful)

        by lymond01 ( 314120 )

        If Google is actually still using ext2 rather than ext3, ext4 will be significantly *more* reliable.

        It ain't the destination, it's the journey that worries me.

    • It sounds like EXT4 is fully compatible with 2 and 3, so even an EXT2 drive can be mounted as EXT4, which means the chances for failure are seriously reduced.

      But I totally hear what you're saying. Whenever you upgrade Anything, nothing is SUPPOSED to go wrong.

      However, it always does.

    • by tool462 ( 677306 ) on Thursday January 14, 2010 @04:17PM (#30770942)

      I usually let the bit-gods decide what data I have that is important enough to save. Over the years the bit-gods have taught me that:

      Music files: not important, Styx crossed the Styx to /dev/null in 2002
      Essay written for sophomore year high school English: Important, I assume to haunt me in some future political race.
      Porn collection: Like the subject matter within, it swells impressively, explodes, then enters a refractory period until it's ready to build up again.
      C++ program that graphs the Mandelbrot set: Important. I like feeling like an explorer navigating the cardioid's canyons.
      Photos of my children: Not important. If I need more baby photos, I can just have more babies.

    • Re: (Score:3, Insightful)

      by at_slashdot ( 674436 )

      "backups are never a bad idea."

      Depends. For example, you reduce the security of your data with every additional backup you keep (you could encrypt them, but that has its own problems).

  • Looks like Digitizor already melted.

  • by Anonymous Coward on Thursday January 14, 2010 @03:57PM (#30770572)

    Eats, shoots and leaves. Read it.

    • by schon ( 31600 ) on Thursday January 14, 2010 @04:03PM (#30770692)

      Maybe it was submitted by William Shatner?

  • by autocracy ( 192714 ) <slashdot2007@PAS ... m minus language> on Thursday January 14, 2010 @03:58PM (#30770592) Homepage

    I managed to ease a pageview out of it. That said, the /. summary says all they say, and you're all better served by the source they point to, which is what SHOULD have been in the article summary instead of the Digitizor site.

    See http://lists.openwall.net/linux-ext4/2010/01/04/8 [openwall.net]

  • Ted Ts'o (Score:5, Informative)

    by RPoet ( 20693 ) on Thursday January 14, 2010 @03:59PM (#30770604) Journal

    They have Ted Ts'o [h-online.com] of Linux filesystem fame working for them now.

  • Btrfs? (Score:3, Interesting)

    by Wonko the Sane ( 25252 ) * on Thursday January 14, 2010 @04:00PM (#30770616) Journal

    I guess they didn't consider btrfs ready enough for benchmarking yet.

    • Re: (Score:3, Funny)

      I wonder if oracle is really bttr about their rejection?
    • Re:Btrfs? (Score:5, Informative)

      by Paradigm_Complex ( 968558 ) on Thursday January 14, 2010 @04:11PM (#30770838)
      From kernel.org's BTRFS page [kernel.org]:

      Btrfs is under heavy development, and is not suitable for any uses other than benchmarking and review. The Btrfs disk format is not yet finalized, but it will only be changed if a critical bug is found and no workarounds are possible.

      It's ready for benchmarking, it's just not ready for widespread use yet. If Google was looking for a filesystem to make a switch to in the near future, BTRFS simply isn't an option quite yet.

      It's really easy at this point to move from EXT2 to EXT4 (I believe you can simply remount the partition as the new filesystem, maybe change a flag or two, and away you go). It's basically free performance. If Google is convinced it's stable, there isn't much reason not to do this. It could act as an interim filesystem until something significantly better - such as BTRFS - gets to the point where it's dependable. The fact BTRFS was not mentioned here doesn't mean it's completely ruled out.
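
      A rough sketch of the "change a flag or two" step, using a hypothetical device name (these are the commonly documented ext3-to-ext4 feature flags; once extents are enabled the volume can no longer be mounted as ext2/ext3, so treat this as a one-way move and back up first):
        umount /srv/data
        tune2fs -O extents,uninit_bg,dir_index /dev/sdb1   # enable ext4 features on the existing filesystem
        e2fsck -fD /dev/sdb1                               # required after changing features; -D also re-optimizes directories
        mount -t ext4 /dev/sdb1 /srv/data                  # only files written from now on use the extent format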

    • Re: (Score:2, Insightful)

      by Tubal-Cain ( 1289912 )
      The chances of them using it would be pretty much nil. They are switching from ext2, and ext4 has been "done" for over a year now. I'm sure they have a few benchmarks of btrfs, just not on as large a scale as these tests.
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Thursday January 14, 2010 @04:00PM (#30770626)
    Comment removed based on user account deletion
    • It's Not Hans (Score:5, Interesting)

      by TheNinjaroach ( 878876 ) on Thursday January 14, 2010 @04:06PM (#30770752)
      I too have abandoned using ReiserFS, but it's not about the horrible crime Hans committed. It's about the fact that I don't think the company he owned (which developed ReiserFS) has a great future, so I foresee maintenance problems with that filesystem. Sure, somebody else can continue their work, but I'm not going to hold my breath.
      • So it's indirectly about the horrible crime Hans committed, since it's because of that that his company has a poor future and won't be maintaining Reiser for very long.

      • ReiserFS is in mainline, and is maintained by the kernel developers. Reiser and Namesys all but abandoned it, which is one of many factors that kept the newer Reiser4 out of mainline, even though Reiser4 was superior to ReiserFS in many ways.

        • by Rich0 ( 548339 )

          ReiserFS is in mainline, and is maintained by the kernel developers.

          So is OS/2 HPFS. On the one hand, that shows that ReiserFS will probably be supported almost forever. On the other hand, I'm not sure I'd be rolling it out for new deployments or applications unless you're in a very tight niche.

      • Re:It's Not Hans (Score:4, Informative)

        by diegocg ( 1680514 ) on Thursday January 14, 2010 @05:15PM (#30771790)

        Reiserfs has been undermaintained for a long time, AFAIK. When Hans started working on reiser4, he completely forgot about adding needed features to v3. The reiserfs disk format may be good, but the codebase is outdated. Ext4 has an ancient disk format in many ways, but the codebase is scalable: it uses delayed allocation, the block allocator is solid, xattrs are fast, etc. Reiserfs still uses the BKL, the xattr support that SUSE added is said to be slow and not very pretty, it has had problems with error handling, etc.

      • Re: (Score:3, Interesting)

        by mqduck ( 232646 )

        Personally, I think Hans should have been allowed to continue his work on ReiserFS while incarcerated. Better to let a guilty man contribute to society than do nothing but rot in prison, no?

      • Re: (Score:3, Informative)

        by rwa2 ( 4391 ) *

        I'm still running reiser3, and probably holding out for reiser4... it's been confusing since the benchmarks for the next-gen fs's have been all over the place, but some look promising:
        http://www.debian-administration.org/articles/388#comment_127 [debian-adm...ration.org]

        I've always run software RAIDs to crank a bit more performance out of the slowest part of my system, and reiserfs3 has always worked better out of the box. I'd spent long hours tuning EXT3 stripe widths and directory indexes and stuff, and EXT3 always came out

    • Re:No ReiserFS? (Score:4, Insightful)

      by pdbaby ( 609052 ) on Thursday January 14, 2010 @04:06PM (#30770756)
      ...or maybe the fact that he's no longer involved brings up questions about its future direction. I'm sure they took a look at reiserfs previously
    • by Anonymous Coward on Thursday January 14, 2010 @04:06PM (#30770764)

      ...maybe they felt it wasn't cutting edge enough.

    • I'd imagine contacting a prison for tech support could be a bit awkward.
      (Yes, I know it's lame)

    • The association is too close in this case because a murderer's name is part of the file system name. If the product had been named something else the association wouldn't be there. Might as well stock the shelves with Bernardo Bath Oil and Dahmer Doodads. How well do you think that would go in the eyes of the corporate world? So it's not because the creator of the filesystem committed a crime, it's because the product has an unsavoury name - those are two distinct and unrelated issues.
    • by KlomDark ( 6370 )

      // Came here for the Reiser reference //// Not leaving disappointed! ////// Oops, this aint Fark...

    • Re: (Score:3, Funny)

      by gmuslera ( 3436 )
      To make the move to this new filesystem, they hired Ted Ts'o (the actual maintainer of ext4). Hans wasn't available at the moment, and it would be bad to have a famous employee that, well, did evil.
  • by Paradigm_Complex ( 968558 ) on Thursday January 14, 2010 @04:00PM (#30770634)
    The main advantage of EXT3 over EXT2 is that, with journaling, if you ever need to fsck the data, it goes a LOT quicker. It's interesting to note that Google never felt it needed that functionality.

    Additionally, I was under the impression that Google used massive numbers of commodity consumer-grade hard drives, as opposed to high-grade stuff which I presume is less likely to err. Couple this with the massive amount of data Google is working with and there have got to be a lot of filesystem errors, no?

    Can anyone else with experience with big database stuff hint as to why Google would not need to fsck their data (often enough for EXT3 to be worthwhile)? Is it cheaper just to overwrite the data from some backup elsewhere at this scale? How do they know the backup is clean without fscking that?
    • by spydum ( 828400 ) on Thursday January 14, 2010 @04:06PM (#30770768)

      Replicas stored across multiple servers -- if one is corrupted or unavailable requiring fsck, who cares? Ask the next server in line for the data.

    • First, google's servers each have their own battery [cnet.com], so it's unlikely that all the servers in a DC will go down at once. If only a few go down, their redundancy means that it's not a big deal - they can wait for the fsck. And moreover, even if an entire DC goes down (eg, due to cooling loss) they have the redundancy needed to deal with entire datacenter failures - with that kind of redundancy, fscking is only a minor inconvenience (plus with a cooling failure they might have time to sync and umount before p
    • by ls671 ( 1122017 ) *

      I always felt that fscking the data while taking data already on the disk (the journal) into account was weaker than fscking the data independently (no journal). Or at least that it would introduce more possibilities for errors (e.g. errors in the journal itself). It may very well be an unjustified impression, but at least it seems logical at first glance; a simpler filesystem means less risk of bugs, etc.

      http://slashdot.org/comments.pl?sid=1511104&cid=30770742 [slashdot.org]

      • Re: (Score:2, Informative)

        by amRadioHed ( 463061 )

        If you lost power while the journal was being written and it was incomplete, then the journal entry would just be discarded and your filesystem itself would be fine; it would just be missing the changes from the last operation before the crash.

    • Re: (Score:2, Informative)

      by crazyvas ( 853396 )
      They use fast replication techniques to restore disk servers (chunkservers in GFS terminology) when they fail.

      The failure could be because of a component failure, disk corruption, or even simply killing the process. The detection is done via checksumming (as opposed to fscking), which also takes care of detecting higher-level issues that fscking might miss.

      Yes, it is much cheaper for them to overwrite data from another replica (3 replicas for all chunkservers is the default) using their fast re-re
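
      Purely as an illustration of the checksum-then-refetch idea (this is not GFS; the paths, replica hostname, and chunk names are made up), the same pattern can be sketched with ordinary tools:
        sha1sum /data/chunks/chunk_* > /var/lib/chunks.sha1              # record checksums when chunks are written
        sha1sum -c /var/lib/chunks.sha1 | grep FAILED                    # later: list any chunks that no longer match
        rsync replica2:/data/chunks/chunk_0042 /data/chunks/chunk_0042   # refetch a bad chunk from another replica instead of repairing it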

  • by Anonymous Coward on Thursday January 14, 2010 @04:04PM (#30770714)

    From TFA:

    In their benchmarking, EXT4 and XFS performed, as impressively as each other.

    WTF kind of retarded sentence is that?! Did Rob Smith help you write that article?!

    In their benchmarking of EXT4 and XFS, EACH performed as impressively as THE OTHER.

  • by ls671 ( 1122017 ) * on Thursday January 14, 2010 @04:05PM (#30770742) Homepage

    We are still using ext2 on servers. Now I have an argument: if Google is still using ext2, maybe we aren't so foolish. We might upgrade some day, but it is not yet a priority. With a UPS and proper failover and backup procedures in place, I can't remember when a journaling filesystem would have helped us in any way. They seem great for desktops/laptops, though.

  • by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Thursday January 14, 2010 @04:14PM (#30770898) Journal

    I've used XFS on a RAID1 setup with SATA drives, and found the performance of the delete operation extremely dependent on how the partition was formatted.

    I saw times of up to 5 minutes to delete a Linux kernel source tree on a partition that was formatted XFS with the defaults. Have to use something like sunit=64, swidth=64, and even then it takes 5 seconds to rm -rf /usr/src/linux. I've heard that SAS drives wouldn't exhibit this slowness. Under Reiserfs on the same system, the delete took 1 second. Anyway, XFS is notorious for slow delete operations.
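
    For anyone who wants to reproduce that comparison, a sketch using a hypothetical md device and the numbers mentioned above (mkfs.xfs destroys whatever is on the device):
      mkfs.xfs -f -d sunit=64,swidth=64 /dev/md0   # sunit/swidth are in 512-byte sectors; compare against plain defaults
      mount /dev/md0 /mnt/scratch
      time rm -rf /mnt/scratch/linux-2.6.32        # time the delete on each variant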

    • For a lot of modern corporate data storage situations, deletion isn't really important. My company uses an in-house write-once file system (no idea what it's based on), because by and large, the cost of storing old data is negligible next to the advantages of being able to view an older version of the dataset, completely remove fragmentation from the picture, etc. I suspect deletion operations are fairly uncommon at Google; in the rare cases it is necessary it is quite possible they just copy the data they
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      mounting with nobarrier will change those 5 minutes to 5 seconds, but don't turn off your computer during the delete then.
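
      In other words, something along these lines, reusing the hypothetical mount point from the sketch above (nobarrier trades crash safety for speed, so it only makes sense behind a UPS or battery-backed controller):
        mount -o remount,nobarrier /mnt/scratch
        time rm -rf /mnt/scratch/linux-2.6.32
        mount -o remount,barrier /mnt/scratch        # turn write barriers back on afterwards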

  • GFS (Score:4, Insightful)

    by jonpublic ( 676412 ) on Thursday January 14, 2010 @04:16PM (#30770926)

    I thought Google had their own filesystem, named the Google File System.

    http://labs.google.com/papers/gfs.html [google.com]

  • Might this prompt someone at Google to make an installable EXT4 filesystem driver for Windows? Right now there is none, I think because of differing inode sizes and some extra features EXT4 demands over EXT2.
    • Re:Windows Driver (Score:5, Insightful)

      by fuzzyfuzzyfungus ( 1223518 ) on Thursday January 14, 2010 @04:38PM (#30771296) Journal
      I can't imagine why it would.

      To the best of my knowledge, Google uses pretty much no Windows servers themselves (at least not for any of their public-facing products; they almost certainly have some kicking around), and "a vast number of instances of custom in-house server applications" is among the least plausible environments for a Windows server deployment, so that is unlikely to change.

      On the desktop side, Google has a bunch of stuff that runs on Windows; but it all communicates with Google's servers over various ordinary web protocols and stores local files with the OS provided filesystem. The benefits of EXT4 on Windows would have to be pretty damn compelling for them to start requiring a kernel driver install and a spare unformatted partition.

      I suppose it is conceivable that some Google employee might decide to do it, for more or less inscrutable reasons; but it would have no connection at all to Google's broader operation or strategy.
  • Ubuntu 9.10? (Score:5, Interesting)

    by GF678 ( 1453005 ) on Thursday January 14, 2010 @04:36PM (#30771252)

    Gee, I hope they're not using Ubuntu 9.10 by any chance: http://www.ubuntu.com/getubuntu/releasenotes/910 [ubuntu.com]

    There have been some reports of data corruption with fresh (not upgraded) ext4 file systems using the Ubuntu 9.10 kernel when writing to large files (over 512MB). The issue is under investigation, and if confirmed will be resolved in a post-release update. Users who routinely manipulate large files may want to consider using ext3 file systems until this issue is resolved. (453579)

    The damn bug is STILL not fixed apparently. Some people get the corruption, and some don't. Scares me enough to not even try using ext4 just yet, and I'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.

    Then again, perhaps Google knows what they're doing.

    • by Nimey ( 114278 )

      Then again, perhaps Google knows what they're doing.

      More so than your average Slashdotter, I expect.

    • Re:Ubuntu 9.10? (Score:5, Insightful)

      by Lennie ( 16154 ) on Thursday January 14, 2010 @05:22PM (#30771878)
      They employ the main developer of ext2, ext3 and ext4.

      He probably knows a lot about it.
    • Re:Ubuntu 9.10? (Score:4, Informative)

      by tytso ( 63275 ) * on Friday January 15, 2010 @01:03AM (#30775742) Homepage

      So Canonical has never reported this bug to LKML or to the linux-ext4 list, as far as I am aware. No other distribution has complained about this >512MB bug, either. The first I heard about it was when I scanned the Slashdot comments.

      Now that I know about it, I'll try to reproduce it with an upstream kernel. I'll note that in 9.04, Ubuntu had a bug which, as far as I know, must have been caused by their screwing up some patch backports. Only Ubuntu's kernel had a bug where rm'ing a large directory hierarchy would have a tendency to cause a hang. No one was able to reproduce it on an upstream kernel,

      I will say that I don't ever push patches to Linus without running them through the XFS QA test suite (which is now generalized enough that it can be used on a number of filesystems other than just XFS). If it doesn't have a "write a 640 MB file and make sure it isn't corrupted" test, we can add one, and then all of the filesystems which use the XFSQA test suite can benefit from it.

      (I was recently proselytizing the use of the XFS QA suite to some Reiserfs and BTRFS developers. The "competition" between filesystems is really more of a fanboy/fangirl thing than something at the developer level. In fact, Chris Mason, the head btrfs developer, has helped me with some tricky ext3/ext4 bugs, and in the past couple of years I've been encouraging various companies to donate engineering time to help work on btrfs. With the exception of Hans Reiser, who has in the past accused me of trying to actively sabotage his project --- not true as far as I'm concerned --- we are all a pretty friendly bunch and work together and help each other out as we can.)
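
      A standalone sketch of that kind of large-file check (not part of the XFSQA suite; the mount point and file name are hypothetical, and dropping caches needs root):
        dd if=/dev/urandom of=/mnt/test/bigfile bs=1M count=640   # write a 640 MB file
        sha1sum /mnt/test/bigfile > /tmp/bigfile.sha1
        sync
        echo 3 > /proc/sys/vm/drop_caches                         # force the re-read to come from disk
        sha1sum -c /tmp/bigfile.sha1                              # reports OK if the data survived intact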

  • Downtime (Score:2, Interesting)

    by Joucifer ( 1718678 )
    Is this why Google was down for about 30 minutes today? Did anyone else even experience this or was it a local issue?

"I got everybody to pay up front...then I blew up their planet." "Now why didn't I think of that?" -- Post Bros. Comics

Working...