
Ma.gnolia User Data Is Gone For Good 450

Posted by kdawson
from the lost-in-the-clouds dept.
miller60 writes "The social bookmarking service Ma.gnolia reports that all its user data was irretrievably lost in the Jan. 30 database crash that knocked the service offline. Ma.gnolia founder Larry Halff recently discussed the crash and the lessons to be learned from Ma.gnolia's experience. A lesson for users: don't assume online services have lots of staff and servers, and always keep backup copies of your data. Ma.gnolia was a one-man operation running on two Mac OS X servers and four Mac minis."
  • by Anonymous Coward on Friday February 20, 2009 @12:50PM (#26930981)
    Crashing Macs? That's unpossible!
    • Re:Mac reliability (Score:4, Insightful)

      by jetsci (1470207) on Friday February 20, 2009 @12:58PM (#26931137) Homepage Journal
      So umm...I have a confession...

      I had no idea anyone actually used Macs as servers. Sure, I bet you can get Apache running or something, but I didn't realize anyone had. Therefore, this is my first bit of exposure to this idea of Macs as servers, and it's all negative!

      Woe is me.
      • Re:Mac reliability (Score:5, Informative)

        by BJZQ8 (644168) on Friday February 20, 2009 @01:04PM (#26931257) Homepage Journal
      • Re:Mac reliability (Score:5, Insightful)

        by beelsebob (529313) on Friday February 20, 2009 @01:19PM (#26931477)

        Yes, because they don't come with apache and php pre-installed, only a ticky box away from running.

        Seriously, do people still not realise that OS X is just UNIX with a pretty UI?

        • by Idiomatick (976696) on Friday February 20, 2009 @01:26PM (#26931563)
          In the same sense that horses and monkeys are JUST mammals. Doesn't mean that they share THAT much in common...
          • Re: (Score:3, Informative)

            by the_humeister (922869)

            No, but Mac OS X 10.5.x can properly be called Unix [opengroup.org], though only the Intel version, not the PPC version.

            • Re: (Score:3, Informative)

              by Anonymous Coward

              But that's more of a PR thing than anything. If I raise cows in the pasture behind my house then they aren't "USDA Certified Organic" or any other such thing, but that doesn't really change what they are - it just means they haven't been inspected and labeled by some committee.

              Same with Mac OS X being "Unix". It's more of a stamp of approval than anything.

        • Re:Mac reliability (Score:5, Informative)

          by Blakey Rat (99501) on Friday February 20, 2009 @02:05PM (#26932185)

          Yeah, but that's exactly the surprising part. Why would you pay Apple $3000 for an Xserve running Apache and MySQL, with a crappy service contract (no next-day service, no on-site service -- I've looked into it), when you could buy an equivalent Dell server for $2100, running the exact same Apache and MySQL, and get a next-day and on-site service contract?

          Anyone who buys an xserve is an idiot.

          • !equivalent (Score:4, Informative)

            by WiseWeasel (92224) on Friday February 20, 2009 @02:59PM (#26932901)
            Mac OS X Server runs a host of services, particularly for managing Mac OS X clients, that you won't find on any other OS, so there are reasons to get an Xserve in particular; web serving just isn't one of them.
          • Re: (Score:3, Insightful)

            by Anonymous Coward

            Why would you pay Apple $3000 for an Xserve running Apache and MySQL...when you could buy an equivalent Dell server for $2100, running the exact same Apache and MySQL

            You wouldn't. It's a "right tool for the job" situation and XServes aren't the right tool for running Apache and MySQL. They have the flexibility to run Apache and Mysql, which is nice if you buy them for some other purpose and then either no longer need them for that purpose or find that you have spare capacity and want to use it that way. But

          • Re:Mac reliability (Score:5, Informative)

            by ThrowAwaySociety (1351793) on Friday February 20, 2009 @05:38PM (#26935145)

            a crappy service contract (no next-day service, no on-site service-- I've looked into it)

            Not very hard, apparently.
            http://www.apple.com/server/support/ [apple.com]

            You get 24/7 telephone and email support with 30-minute response. For hardware repairs, Apple-certified technicians provide onsite response within four hours during business hours and next-day onsite response when you contact Apple after business hours.

            • Re: (Score:3, Informative)

              by dmarcov (461598) *

              I'm as much of a Mac fanboy as the next guy, but I do want to point out that the "on-site" service isn't as amazing as it sounds.

              I have a Mac Pro and recently discovered that the on-site service is provided at the discretion of the local store/repair center and not Apple. If you call with a problem and want on-site service for it, they'll give you a list of local stores that you can then call and try and convince them to come out on a Saturday (it doesn't work, btw). I imagine if you bought all your systems

        • Re: (Score:3, Interesting)

          by Alrescha (50745)

          "Seriously, do people still not realise that OS X is just UNIX with a pretty UI?"

          Actually, I prefer to think of OS X as UNIX with a good UI. Alas, I can't say the same for the OS X Server tools.

          A.
          (On topic: at my company we back up our database to three different boxes, in two different physical locations, every day. It's also replicated across the country to a secondary facility in real time. The backups are periodically written to DVD and stored in a safety deposit box. Oh yeah, all of this is encrypted.)

      • Re: (Score:3, Interesting)

        by Tibor the Hun (143056)

        It's hard to know whether you're trolling or not.
        There are OS X servers out there and they perform rather well. I know because I admin 50 of them, and have met hundreds of others who administer them in school systems across the state.

        You may also be familiar with iTunes, or Apple's movie trailer website. I'm sure a large part of those are Xserves and RAIDs.
        I'm not saying they are maintenance free, but they are out there.

        Furthermore, a few years back there was a rather large beowulf cluster of mac towers tha

      • Re:Mac reliability (Score:5, Insightful)

        by SatanicPuppy (611928) * <Satanicpuppy AT gmail DOT com> on Friday February 20, 2009 @01:42PM (#26931817) Journal

        Mac servers are pretty. They do okay, they have nice swanky data enclosures, and the form factor is roughly the same as anyone else's.

        It's just whether or not you want to use OS X. I disagree that OS X is "just unix," however. It's not even "just linux" or "just bsd". OS X has its own warts, and while it may be stable and friendly, I'd rather have a real *nix running on less pretty hardware.

        The best use I've ever had for the big Mac servers is running as a file server in a windows/mac environment. If you still have any pre-OS X machines around, that's about the only way to get them all on the same machine (If you say windows mac volume, I'm mailing a dead fish to your house).

        Otherwise, you know, you can install apache, whatever, but it's not any different from using a regular linux server in terms of increased functionality, and there are some significant OS update issues that can cause problems. Mac updates are of the all or nothing school, and they WILL break stuff, so you need to be careful.

        • Re: (Score:3, Informative)

          by drinkypoo (153816)

          The best use I've ever had for the big Mac servers is running as a file server in a windows/mac environment. If you still have any pre-OS X machines around, that's about the only way to get them all on the same machine

          Negatory - the best answer there is samba+netatalk. I did this at my house and then proceeded to do it again when I was the network admin at a spot with a mix of PCs and various-vintage Macs. Since you are generally running such a solution on a free Unix system (I did it on Linux both times) you also have access to pretty much every other network filesystem too. Ostensibly it should be easy to add Appletalk DDP support to a modern Novell system running on SuSe, and it's definitely been done on various small

        • by DoofusOfDeath (636671) on Friday February 20, 2009 @03:07PM (#26933017)

          (If you say windows mac volume, I'm mailing a dead fish to your house).

          Why, so it will attract the penguins?

      • Re: (Score:3, Informative)

        by k2r (255754)

        > I bet you can get apache running
        Every Mac comes with apache - "getting it running" means checking a single box in the system preferences dialog.
        Same goes for Samba for example.

  • by jetsci (1470207) on Friday February 20, 2009 @12:50PM (#26930987) Homepage Journal
    Facebook was recently brought down when their hamster keeled over and ceased powering their Amiga.
  • Food for Stallman (Score:3, Insightful)

    by Rinisari (521266) * on Friday February 20, 2009 @12:53PM (#26931051) Homepage Journal

    This bad news is delicious food for Stallman's argument against "cloud" services.

    • by ZeroPly (881915) on Friday February 20, 2009 @12:57PM (#26931127)

      Stallman's argument is more that cloud services are almost always non-open. He does not have a per se objection to cloud services - and if you were to reveal all your source code and protocols, I doubt it would be objectionable to him.

      Of course it's impossible to free cloud services in the sense of modification and distribution, but if the source is open you have the chance to make your own.

    • It's food for any argument against any web service that doesn't publish its reliability information or publicize what mechanisms it has in place for disasters like a corrupt database, fried motherboard, or busted hard drive.

      There's a design methodology that's used by NASA for manned missions: Any individual component should be able to fail without compromising the mission. Of course, in the last few decades we've seen 2 out of 5 Shuttles go ka-boom! so obviously this NASA gu

      • Re: (Score:2, Insightful)

        by bsDaemon (87307)

        NASA guideline isn't enough and it's *REALLY* hard to prevent failure when a perfect storm of multiple systems experience failure at the same time.

        I'm not saying that saving Apollo 13 wasn't hard, or an extremely great accomplishment, however I am going to say "slick and pretty" (the shuttle) is generally the opposite of "robust" or "fault tolerant." Slick and pretty is also usually more expensive.

        The basic, non-pimped xserve is $2999. An identically configured node from eRacks, running your choice of BSD (the default on these quad-core Xeons seems to be OpenBSD) or Linux, $1894 -- leaving you with plenty of room in the budget to build a bigger, bad

      • Re: (Score:3, Insightful)

        by NormalVisual (565491)
        obviously this NASA guideline isn't enough and it's *REALLY* hard to prevent failure when a perfect storm of multiple systems experience failure at the same time.

        Neither the Challenger nor the Columbia represented simultaneous multiple failures. They *did* represent cascade failures that should have been planned for, but weren't.
    • Just wanted to let you know that "unleash the fyoorie" makes me laugh every time. I'm not sure why.
  • Needless loss (Score:5, Insightful)

    by hattig (47930) on Friday February 20, 2009 @12:54PM (#26931073) Journal

    Argh, why not just add a backup or replication database on one of the spare Mac Minis?

    That way you would have needed a complete server farm disaster to mess things up irretrievably.

    • Re:Needless loss (Score:5, Insightful)

      by qoncept (599709) on Friday February 20, 2009 @01:01PM (#26931209) Homepage
      Or back it up, like, once a day, or week, or ever, to a flash drive or something. That's a lesson that's already been learned, and it's common sense. I'm terrible about backing up my own data (anything I've lost and recovered is usually something that just happened to be on a remote web server somewhere, coincidentally, because it was always intended to be on the web). But all of my websites, with other users' data, are backed up. It doesn't take a very complex scheme or much thought. All it takes is a cron job to dump your database, tar your web structure, and copy it to a different location.

      I definitely have my doubts that someone who could make this mistake is all that capable of "lessons learned."
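      For what it's worth, the dump-and-copy half of that cron job is only a few lines. A sketch in Python (the paths are hypothetical, and the dump file itself is assumed to have been produced already, e.g. by mysqldump earlier in the same job):

      ```python
      import gzip
      import shutil
      from datetime import date
      from pathlib import Path

      def nightly_backup(dump_file: Path, backup_dir: Path) -> Path:
          """Compress an existing database dump and copy it to a second
          location under a date-stamped name. The dump itself is assumed
          to come from a tool like mysqldump, run just before this."""
          backup_dir.mkdir(parents=True, exist_ok=True)
          dest = backup_dir / f"{dump_file.stem}-{date.today().isoformat()}.sql.gz"
          with open(dump_file, "rb") as src, gzip.open(dest, "wb") as out:
              shutil.copyfileobj(src, out)
          return dest
      ```

      Pointing `backup_dir` at a different physical disk, or a mounted remote share, covers the "different location" part.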
      • Re: (Score:3, Insightful)

        by metamatic (202216)

        Except cron+tar isn't sufficient. You need versioning. Otherwise if your database is corrupted and you don't notice immediately, your backup gets corrupted automatically.

        I back up my web sites using cron + rsync + rdiff-backup.
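          rdiff-backup stores reverse increments, so you can step back to any earlier state. The crudest form of the same idea, dated full copies with a retention window, can be sketched in a few lines (paths hypothetical):

          ```python
          import shutil
          from datetime import date
          from pathlib import Path

          def versioned_backup(dump_file: Path, backup_dir: Path, keep: int = 14) -> Path:
              """Copy today's dump under a dated name and prune the oldest
              copies, so a silently corrupted database can't overwrite every
              good backup before anyone notices."""
              backup_dir.mkdir(parents=True, exist_ok=True)
              dest = backup_dir / f"{date.today().isoformat()}-{dump_file.name}"
              shutil.copy2(dump_file, dest)
              # ISO dates sort lexicographically, so sorting gives oldest first.
              versions = sorted(backup_dir.glob(f"*-{dump_file.name}"))
              for old in versions[:-keep]:
                  old.unlink()
              return dest
          ```

          With two weeks of versions on hand, noticing the corruption a few days late is an inconvenience rather than a funeral.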

    • I would bet the admin realizes this all too well now.

      I can't believe anyone would run a commercial system without backing things up. Hell, even home users, if they have anything of value, need to do backups.

      It's just not that hard these days especially with cheap NAS boxes, low-cost hard drives, etc.
    • Why is it that I feel the stuff I have set up at home is more robust than some "professional" shops? Is the world more like The Daily WTF [thedailywtf.com] than I've been led to believe?

      I'm not saying my system is perfect, but it's redundant in at least two locations.

      Laptop<->Server<->Server HD 2<->Dreamhost.

      My MySQL databases, which just keep stuff like weather and temp (from my 1-Wire system), are dumped nightly and sent to my Gmail account. (It's also not a few-TB server...) but seriously. How hard is it to

    • Re:Needless loss (Score:5, Insightful)

      by eln (21727) on Friday February 20, 2009 @01:10PM (#26931347) Homepage

      A simple periodic dump to an external hard drive would have at least been something. I know that small-time operations shouldn't be expected to have robust backup schemes, but if your primary purpose is to store other people's data, the FIRST thing on your mind should be how to back it up. Once you lose someone's data, they'll never use anything you put out again, and they'll tell all their friends not to either.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Small time operations should be expected to back up just the same as large ones. I wrote simple routines to backup my db nightly, compress it, upload it and on the receiving end decompress it and restore it. If any step fails it emails me. I check it manually every week and save backups for 2 years (quite a bit of data but for legal reasons it's necessary).

        The whole setup took me maybe a day to get working. There is NO excuse for not having backups.

        I lost one of my primary servers on a sunday at around 5pm.

      • Re:Needless loss (Score:5, Insightful)

        by Blakey Rat (99501) on Friday February 20, 2009 @01:57PM (#26932075)

        If you read the transcript, that's what they were doing, a simple firewire DB dump.

        The problem is that they never tested the backups, and they didn't keep versioned backups. So they'd been backing-up the corrupted database for awhile before the site finally crashed for good. When it crashed, they only had the corrupted database backup. Additionally, the DB server was on RAID but of course the corrupted DB would just get saved to both HDs, so that's no good in a situation like this.

        Basically, when the site crashed, he had three copies (2 RAID, 1 backup) of the data: all corrupt. The guy wasn't totally retarded when it came to backups, just 80% retarded.
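        Both failure modes he describes, unversioned backups and untested restores, are mechanical to guard against. A minimal restore test, illustrated here with SQLite since the idea is database-agnostic:

        ```python
        import sqlite3

        def verify_sql_backup(dump_sql: str, expected_tables: set) -> bool:
            """Restore a SQL dump into a throwaway in-memory database and
            check that it contains the tables we expect. A dump that was
            silently corrupted will typically fail to execute or come up
            missing tables."""
            conn = sqlite3.connect(":memory:")
            try:
                conn.executescript(dump_sql)
                rows = conn.execute(
                    "SELECT name FROM sqlite_master WHERE type='table'"
                ).fetchall()
                return expected_tables <= {name for (name,) in rows}
            except sqlite3.Error:
                return False
            finally:
                conn.close()
        ```

        A check along these lines after every backup run is exactly the cheap insurance that was skipped here.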

    • Re:Needless loss (Score:5, Insightful)

      by Ephemeriis (315124) on Friday February 20, 2009 @01:17PM (#26931437)

      Argh, why not just add a backup or replication database on one of the spare Mac Minis?

      That way you would have needed a complete server farm disaster to mess things up irretrievably.

      Replication gives you redundancy, much like RAID does. It lets you survive a hardware failure or two. It is not a backup. If the building burns down, or a tree falls on your server room, or lightning fries everything you are still screwed.

      What they needed was a backup. A tape, or removable HDD, or a flash drive, or a CD, or something that can be taken out of the building on a regular basis. Once a day, once a week, once a month... Whatever.

      Then, no matter what happens to your live hardware, you've got a backup you can restore from. Buy some new hardware, throw your backup at it, done!

  • by Jailbrekr (73837) <jailbrekr@digitaladdiction.net> on Friday February 20, 2009 @12:56PM (#26931111) Homepage

    And how can they be slashdot worthy when they are a social networking site with ONLY a half a terabyte of data? In short, who cares?

    • Re: (Score:3, Funny)

      by KDingo (944605)

      At least if you had privacy concerns with them, you have nothing to worry about now.

  • lesson #1 (Score:5, Interesting)

    by petes_PoV (912422) on Friday February 20, 2009 @12:57PM (#26931131)
    on the 'net you can't tell the major corporation from the kid in a garage

    lesson #2, trust no-one with your data

    lesson #3 disaster recovery capability only exists after it's been tested

    lesson #4 backups are useless unless you can prove you can recover from them

    • Re: (Score:2, Funny)

      by keytohwy (975131)
      I thought lesson #1 was "don't get high on your own supply"
    • lesson #1 on the 'net you can't tell the major corporation from the kid in a garage

      lesson #2, trust no-one with your data

      1 and 2 don't really matter if you've got a backup. Who cares if it's some kid in a garage if you've got a backup? If it's more convenient to have your data on some kind of web service, go for it! But make sure you've got a backup.

      lesson #3 disaster recovery capability only exists after it's been tested

      lesson #4 backups are useless unless you can prove you can recover from them

      This is really where things fall apart over and over again. I've seen tons of clients with no backup at all... Or a backup that they've never tested and they just assume it's working correctly.

      It isn't a backup unless you can take it off-site, and it isn't a backup unless you

    • Re: (Score:3, Funny)

      by QuantumRiff (120817)

      I love it when someone has a replicated DB as a "backup". I like to say, okay... "DROP TABLE users". And then it dawns on them that the drop command would replicate.... ;)
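      It's a fair gotcha: statement-based replication replays destructive commands just as faithfully as inserts. A toy illustration, with two SQLite databases standing in for primary and replica:

      ```python
      import sqlite3

      primary = sqlite3.connect(":memory:")
      replica = sqlite3.connect(":memory:")
      statement_log = []  # stand-in for the replication stream

      def execute(sql: str) -> None:
          """Run a statement on the primary and queue it for the replica."""
          primary.execute(sql)
          statement_log.append(sql)

      execute("CREATE TABLE users (id INTEGER, name TEXT)")
      execute("INSERT INTO users VALUES (1, 'alice')")
      execute("DROP TABLE users")  # the mistake ships with everything else

      for sql in statement_log:  # the replica dutifully replays it all...
          replica.execute(sql)

      # ...so now *neither* copy has the table. A replica is redundancy,
      # not a backup.
      ```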

  • by wandazulu (265281) on Friday February 20, 2009 @12:58PM (#26931139)

    Good backup strategies are critical to any operation, regardless of platform. I've seen similar things happen with MSSQL server databases as well as Oracle running on the most powerful Sun box you can get (circa 2001).

    One database backup strategy I've seen used rather successfully is doing a straight SQL dump every night and then copying the sql file over to somewhere else; even if the database became hopelessly corrupted there's still a way to re-import everything.

    Of course, this is in *addition* to mirroring, tape backups, etc.
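    The round trip is easy to demonstrate. Here SQLite's `iterdump()` stands in for a mysqldump-style export, but the principle (plain SQL text that re-imports into an empty database) is the same:

    ```python
    import sqlite3

    # A small stand-in for the production database.
    prod = sqlite3.connect(":memory:")
    prod.execute("CREATE TABLE bookmarks (url TEXT)")
    prod.execute("INSERT INTO bookmarks VALUES ('http://slashdot.org/')")
    prod.commit()

    # A straight SQL dump: plain text you can copy anywhere.
    dump = "\n".join(prod.iterdump())

    # Even if the original database is lost, the dump re-imports cleanly.
    restored = sqlite3.connect(":memory:")
    restored.executescript(dump)
    rows = restored.execute("SELECT url FROM bookmarks").fetchall()
    # rows == [('http://slashdot.org/',)]
    ```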


    • Agreed. We have a mirror that we do weekly EXPorts from, so as not to slow the production environment. On prod we have a second safety net of RMAN, but I've never trusted it. I've taken all the bloody courses; it just seems too failure-prone. Heaven forbid you open your database with resetlogs. It mucks up the SCN, or something irrevocably small. I'm still not confident about changing Oracle versions and keeping backwards compatibility.

      In short, yeah, exp is tried and tested for recovery.
      • Opening Oracle with resetlogs resets the online redo logs and sets the log sequence number to 1; this is also called creating a new incarnation of the database.

        That prevents applying archive logs from before the reset (i.e. previous incarnations), even though they may contain more recent data than what's in the datafiles.

        I use EXP/IMP myself, but for larger databases it can be impractical. One of my systems takes around 120 hours of processing time to read in an export and write it to a blank schema ( which we tested whe


        • This I know. But, heaven forbid you open your database like that, and you are screwed. I believe this oversight has been corrected in Oracle 11g, but I'm not entirely sure, as I'm trained in 10g. We tend to keep our OLTP databases relatively clean via purge processes, and offload required data to our OLAP. As we have to maintain 7 years for litigation purposes on tape backup, having the EXP is basically yeah, the only thing I trust. It has never taken over 24hrs to perform an EXP, and this meets our dai
    • by BSAtHome (455370)

      Yes, but straight dumps take time, a lot of time. They need to be consistent to be useful, and then you have to hold table/db locks, which can interfere with operation. Even if you can dump it without locks, 500 GByte of data over a GBit link takes at least 1.2 hours. And that assumes that you can get the data that fast, let alone transport it. MySQL is slow at doing dumps on InnoDB (MyISAM can be copied rather easily).

      When databases and tables get large, things start to suck big time if you want real back
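      The arithmetic behind that transfer-time figure checks out:

      ```python
      # 500 GB over a dedicated gigabit link, ignoring protocol overhead:
      size_bits = 500e9 * 8   # 500 GB expressed in bits
      link_bps = 1e9          # 1 Gbit/s line rate
      hours = size_bits / link_bps / 3600
      # hours is about 1.11, so "at least 1.2 hours" is about right once
      # real-world overhead is added, and that's before asking whether the
      # database can even produce the data that fast.
      ```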

      • Re: (Score:3, Informative)

        by Glendale2x (210533)

        With a transactional database who cares how long it takes - the state isn't going to change. If you're backing up your 500GB MyISAM tables, well, you're asking for trouble. Since you mention MySQL, use innodb tables with the dump option "--single-transaction".
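        `--single-transaction` has mysqldump read everything inside one InnoDB snapshot, so the dump is consistent without locking out writers. SQLite's online backup API plays the analogous role, which makes the idea easy to show in runnable form (SQLite here is only a stand-in for a MySQL setup):

        ```python
        import sqlite3

        # A stand-in "production" database.
        prod = sqlite3.connect(":memory:")
        prod.execute("CREATE TABLE t (id INTEGER)")
        prod.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
        prod.commit()

        # An online, transactionally consistent copy of the whole database,
        # taken without blocking the source -- the role --single-transaction
        # plays for mysqldump on InnoDB tables.
        snapshot = sqlite3.connect(":memory:")
        prod.backup(snapshot)

        count = snapshot.execute("SELECT COUNT(*) FROM t").fetchone()[0]
        # count == 100
        ```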

  • Lesson? (Score:5, Insightful)

    by The Moof (859402) on Friday February 20, 2009 @12:59PM (#26931153)

    discussed the crash and the lessons to be learned

    Lessons such as "Regularly monitor and maintain backups like any business should?"

    • Some other web 2 site died a month or two ago.

      The story was on /. but I don't remember the service's name. Turned out the guy had a single copy on a RAID array which got wiped, game over.

      Lesson still not learned apparently.

    • "Ma.gnolia was a one-man operation running on two Mac OS X servers and four Mac minis"

      So what? Hard drives are cheap. Buy a couple and make backups.

    • Re:Lesson? (Score:5, Interesting)

      by Knowbuddy (21314) on Friday February 20, 2009 @01:46PM (#26931895) Homepage Journal

      Lessons such as "Regularly monitor and maintain backups like [any] business should?"

      I love it when people say things like this. It shows me that they've never actually had to set up an enterprise backup strategy. I'm certainly not defending the Ma.gnolia guys, but I also can't stand it when people are on a shakier soapbox than they realize.

      I'm sorry, but when you are used to the whiz-bang-pretty of Web2.0, the state of enterprise-level backups is horrifically archaic and dismal. And, btw, given the size of today's hard drives and databases, for pretty much all intents and purposes "Enterprise" == "More than one computer with more than just a few files on a drive".

      Compare and contrast: a 1 TB hard drive will run you roughly $100. Do you know how much it then costs to backup that TB?

      • LTO-4 tapes, 800GB each, $50-$150 each tape plus roughly $2500 for the drive. Figure 2 tapes/day * 10 days backups = 20 tapes * $100 = $2000 in tapes alone. Congrats, that 1 TB just cost you $4500 in enterprise backups ... not to mention the time involved each day in doing a backup. You might save yourself some time and money by doing incrementals ... but then you have to balance that risk with complexity of backups and difficulty in restores.

      • NAS is trickier. The cheap NAS solutions, sub $1000 such as Buffalo and LaCie, aren't going to get you much more than a TB or two. And at that point, are you really any better off than the RAID solution? Maybe, maybe not. As you start to scale into IBM or Dell solutions, you are almost immediately beyond a $2500 price point before you even get to hard drives. Oh, and don't forget the cost of a gigabit switch so that it doesn't take you days to do a single backup.

      • iSCSI? Seriously? Not an option for SOHO businesses.

      Then there's backup software to contend with. It's not just as simple as "go buy a copy of BackupExec" -- there's different licensing for databases, and network backups, and whatever arcane rules they want today. I'm a PC guy so I can't talk much about Enterprise-level Mac backup solutions, but I can without a doubt say that Time Machine is not one of them.

      It's even more dismal when it comes to Open Source solutions. Have you ever actually tried to set up Bacula? It may be the 600lb gorilla of OS backup solutions, but it's still a royal pain. And to the "just set up a cron job for rsync" guys, c'mon, really? Good luck with that.

      So, please, let's dispense with the thought that backups are easy. Backups really suck. Hard. That's why so many people want to think of RAID as a backup solution -- because the step from one hard drive to two or three is easy, but the next step is much farther away than you think.

  • by ACK!! (10229) on Friday February 20, 2009 @01:00PM (#26931183) Journal
    Like frickin' having a backup? Isn't that one of the first things you ever learn if your business relies on computers + userdata?
  • Macs (Score:2, Funny)

    by Anonymous Coward

    You shouldn't use shiny plastic ornaments for serious business.

  • My Mac servers run snapshots to external drives every hour. When something goes badly, it's back up in a few minutes. Not sure why that wouldn't have been done here.
    • Presumably because the database was stored in a single .sql file, mirrored by each server, Time Machine wouldn't be particularly effective, because it would copy across a (new) copy of the massive database every time.

      Time Machine is excellent for backing up lots of little files (on a home PC, say, or a web server's /httpd) but for backing up big files, it's very inefficient. Additionally, Time Machine wasn't included with OS X 10.4 (both distributions), so if it wasn't running Leopard or Snow Leopard, you'd

  • by PrimeWaveZ (513534) on Friday February 20, 2009 @01:14PM (#26931403)

    I mean, just because a few medium-profile sites running on Macs have experienced a failure causing data loss doesn't make them unique. Every OS and every type of hardware will, at some point, experience a failure. It's the PEOPLE that make the failure a problem, and it sure looks like this tard was a problem.

    Who the hell doesn't back up their data? Seriously? This is "Slashdot worthy" because some hapless Mac user lost their data. BOO FUCKING FAIL. Move on.

  • "Private relaunch?" (Score:5, Interesting)

    by mr_mischief (456295) on Friday February 20, 2009 @01:16PM (#26931433) Journal

    "Gee, Bob, we have the proof that this thing works. Why don't we sell it already?"

    "Well, Bill, nobody wants to buy it and grandfather in all the whining freeloaders and their data."

    "It's too bad we can't just drop all the data and start fresh."

    "Well, why not, Bill? All we have to do is say it's been lost and can't be recovered. We can tell the buyer what's actually happening so they don't think we're total IT rejects who couldn't figure out a data retention policy."

    "That's why I like working with you, Bob. You always have a way around the problem."

    Have fun with it. The names have been changed (one changed anyway and one added), well, because it probably has nothing to do with reality. It sure is fun to ponder, though.

  • by Chalex (71702) on Friday February 20, 2009 @01:18PM (#26931457) Homepage

    Rather than watch the video or download the 23MB MP3, you can read the full transcript here:

    http://ratafia.info/post/78915439/transcript-and-commentary-for-whither-magnolia [ratafia.info]

    I can read much faster than I can listen.

  • by Captain Spam (66120) on Friday February 20, 2009 @01:18PM (#26931465) Homepage

    All right, let me get this straight: First you people bitch and moan when Facebook says they'll save user data forever. NOW you people bitch and moan when this site loses user data forever! You're never happy, are you?!?

  • LH: "The server was RAID. Its disk was RAID, so that's one of the things we're looking at. But it was a software RAID, so if it's a filesystem problem then... that's not gonna do any good because the errors were RAIDed as well."

    Since the file system and database were corrupted, it wouldn't matter if it was hardware RAID or software RAID. That's not the problem at all; the problem is that there was no archival backup, and their only backup was a file sync... which replicated the database errors onto the backup.

    To backup a database, you dump it in a serialized form, or maintain a serialized form of the data in parallel with the database.

  • Free service (Score:3, Insightful)

    by tsstahl (812393) on Friday February 20, 2009 @01:27PM (#26931585)
    And the users got what they paid for.

    Simple as that.

    The flip side is that this guy's service will probably be the MOST reliable going forward.

    Of course he should have had reliable backups; now he is the poster child for backups. Remember, nobody pays you for backups, only for restores.
  • Users: if you're trusting your data to someone else, you need to ensure one of two things. Either you need a signed, iron-clad written contract guaranteeing service with nasty penalty clauses requiring the service to compensate you fully for all the costs of data loss (and sufficient insurance and/or confidence that the service has the wherewithal to pay those penalties and not just flee into bankruptcy, leaving you holding the bag anyway), or you need a backup of your data under your own control and in a f

  • He kept it all on one hard disk? Even I know that is wrong. I gave my spouse a PC for her birthday with a 500 GB hard disk; I mean, it's not that hard to back up 500 GB nowadays.
  • Backup Testing? (Score:2, Insightful)

    by skyriser2 (179031)

    Ouch... Isn't part of a backup strategy to sometimes attempt a recovery from a backup, on a test system?

    • Ouch... Isn't part of a backup strategy to sometimes attempt a recovery from a backup, on a test system?

      Yes. He addresses this and acknowledges that he neither deliberately failed his system nor conducted extensive tests to ensure his backup scheme was adequate.

      He acknowledges this was one of many 'lessons learned' (aka huge mistakes made).
  • What the hell is a "social bookmarking service"? Since the site is dead, going to their webpage didn't help clear that up at all. Is it seriously a social networking site where people share _bookmarks_?
  • Seriously, all hardware will eventually die, unless it joins the Q continuum or is part vampire, demon, or god... Really the summary should be: Mac user is retarded and doesn't back up. But then that would be redundant given that he's running a server on Macs...
  • After all, it's the new paradigm.
