

The Internet

Online Website Backup Options?

pdcull writes "I can't be the only person on the planet who has this problem: I have a couple of websites, with around 2 GB of space in use on my hosting provider, plus a few MySQL databases. I need to keep up-to-date backups, as my host provides only a minimal backup function. However, with a Net connection that only gets to 150 Kbps on a good day, there is no way I can guarantee a decent backup on my home PC using FTP. So my question is: does anybody provide an online service where I can feed them a URL, an FTP password, and some money, and they will post me DVDs with my websites on them? If such services do exist (the closest I found was a site that promised to send CDs and had a special deal for customers that had expired in June!), has anybody had experience with them which they could share? Any recommendations of services to use or to avoid?"
This discussion has been archived. No new comments can be posted.


  • by MosesJones ( 55544 ) on Monday August 04, 2008 @05:39AM (#24463935) Homepage

    Rather than "posting DVDs" I'd go for something like Amazon's S3 and just dump the backup to them. Here is a list of S3 Backup solutions [zawodny.com] that would do the job.

    I've personally moved away from hard media as much as possible, because the issue on restore is normally the speed of getting the data back onto the server, and it's there that online solutions really win: they have the peering arrangements to get you the bandwidth.
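    A sketch of what that dump-to-S3 might look like from cron; the bucket name, paths, and the choice of the s3cmd client are assumptions for illustration:

```shell
# Hypothetical crontab entry: tar up the site at 4am and push the archive
# to an S3 bucket with s3cmd (assumes s3cmd is installed and configured).
0 4 * * * tar czf /tmp/site-backup.tar.gz /var/www/site && s3cmd put /tmp/site-backup.tar.gz s3://my-site-backups/
```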

    • by beh ( 4759 ) * on Monday August 04, 2008 @06:45AM (#24464237)

      Similarly, I'm not using DVDs etc. for my server backup. A few years back, seeing how much my provider would charge me for a decent amount of backup space, I opted to get an additional server instead; the second server now provides
      secondary DNS and secondary MX for my regular system, but also holds all the data for a cold standby (I would still need to change addresses in DNS manually in case of a disaster, and bring up services, but pretty much all the data is in place).

      The data is synchronised between both servers several times a day - first backed up locally to a second disk on the same machine, then rsynced between the two...

      The solution was cheaper than the cost of the backup, and gives me extra flexibility in terms of what I can do. The only 'cost' is that both machines sacrificed disk space to be back-up for the other (since both machines have >400GB in disk space, giving up even half the disk space of each machine isn't a big limitation - at least, not for *my* needs. YMMV).

      • Ditto, a live clone is perhaps the easiest solution for online backup. I've been doing this for quite some time now. My servers have cyclical rsync setups, each one is backed up to the next in the chain (geographically dispersed too!). They're ready to failover if that's ever needed (cross fingers). Considering one can lease a (puny) dedicated server for $30-40, it's pretty darned easy. You could even go with a VPS if you don't care about the CPU resources.

      • Re: (Score:3, Informative)

        by merreborn ( 853723 )

        Similarly, I'm not using DVDs etc. for my server backup. A few years back, seeing how much my provider would charge me for a decent amount of backup space, I opted to get an additional server instead

        It's important, when using this method, that your second server be in a separate datacenter.

        Duplicating your data is only half of a backup plan. The other half is making sure that at least one of those duplicates is in a physically separate location.

        There are many things that can conceivably take down entire da

        • by beh ( 4759 ) *

          True - in my case, when I discussed it with the provider in question - we agreed (in the first instance) that the second server would be housed in a different server room from the first. That would still keep it within the same provider, but in separate power-loops / fire protection areas /...

          It's a first step... the second step was to go for a third machine with a different provider... ;-)

    • Re: (Score:3, Informative)

      by txoof ( 553270 )

      S3 is a pretty good option. I've been using the JungleDisk client along with rsync to manage offsite home backups. S3 is pretty cheap and the clients are fairly flexible.

      I haven't played with any remote clients, but your hosting provider can probably hook up one of the many clients mentioned in the parent. The price of S3 is hard to beat. I spend about $6 per month on ~20 gigs' worth of backups.

      • by ghoti ( 60903 ) on Monday August 04, 2008 @08:05AM (#24464651) Homepage

        JungleDisk's built-in backup can also keep older versions of files, which is great in case a file gets corrupted and you only discover that after a few days. It's dirt cheap too, $20 for a lifetime license on an unlimited number of machines.

        For this to work, you need to be able to run the jungledisk daemon though, which is not an option with some shared hosting plans. Also, to mount the S3 bucket as a disk, you obviously need root access. But if you do, JungleDisk is hard to beat IMHO.

        • Re: (Score:3, Informative)

          by alexgieg ( 948359 )

          . . . to mount the S3 bucket as a disk, you obviously need root access. But if you do, JungleDisk is hard to beat IMHO.

          Not really. If the server kernel has FUSE [wikimedia.org] enabled, and the user-space tools are installed, any user who is a member of the related group can mount a "jungledisked" S3 bucket in his userspace without the need for root access.

        • FOSS S3Backer [googlecode.com] uses FUSE to mount a bucket as a disk :)
    • You could also use S3Backer [googlecode.com] with an rsync script (or rsnapshot) on the host. That lets you mount the S3 bucket as a drive on your server through FUSE and then copy to it as if it were local. *NIX/BSD only, though.
    • Why not use Suso? (Score:4, Informative)

      by suso ( 153703 ) * on Monday August 04, 2008 @07:47AM (#24464557) Homepage Journal

      Sorry for the self-plug, but this just seems silly. Your web host should back up your website and offer you restorations. I guess this isn't a standard feature any more, but it is at Suso [suso.com]. We back up your site and databases every day, and can restore them for you for free.

      • by cdrudge ( 68377 ) on Monday August 04, 2008 @08:07AM (#24464663) Homepage

        One thing I've learned, though, is that you cannot rely on a hosting company's backup to be timely, reliable, and/or convenient. If you want to back up multiple times a day, keep multiple generations of backups, or be able to restore very quickly, the hosting company's backup can be unattractive. I'm not saying yours is that way; that's just been my experience with some of the hosting companies I've dealt with in the past.

        This also doesn't take into consideration the best-practice of having your backups off-site for disaster recovery. It doesn't help very much to have your backup server/drive/whatever 1U down in the rack when the building collapses, has a fire, floods, etc destroying everything in it.

      • Re: (Score:3, Interesting)

        by Lumpy ( 12016 )

        Because most sites have 2 failure points.

        1 - they buy the super cheap service with no backup.

        2 - the site is designed poorly with no backup capabilities.

        If your site has a DB, your site had better have an admin function to dump the DB to a tgz file you can download. Mine generates a password-protected RSS feed and an encrypted tgz file (in a password-protected area). I simply have an RSS reader/retriever configured to watch all my sites and retrieve the backups when they are generated.

        I get that DB and any user/c

      • by ncc74656 ( 45571 ) *

        Your web host should back up your website and offer you restorations.

        If you're paying for managed hosting, yes. If you're using an unmanaged VPS (like me), not so much. Some of them will image your VPS on demand and store that image for recall if your system is FUBARed. For mine, I just have a daily rsync job running on a Mac mini at home (less power consumption than most, so it runs 24/7) that pulls down my email and websites. Another job creates and rsyncs database dumpfiles. I should probabl

    • If you keep your backups for one month, S3 costs about $300 per TB. That's not a bad price for offsite backup that's easily accessible from both your main and disaster recovery servers.
      price list [amazon.com]
      • Maybe I'm just the king of cheap, but I find $300/TB rather excessive, considering I could rent out a massive file server for less. There's a certain charm to the S3 solutions since you can use a turnkey service to do the actual backup grunt-work, but the big problem with Amazon's services is they're only cost-effective for small or short-run projects.

        I just ran the numbers for one of my existing web servers, and the same service delivered via Amazon's S3 would cost me over $1500, largely because of their

        • by STFS ( 671004 )
          Well, or you could just not use Amazon S3 for your big ass project.

          I like the fact that I can use S3 and only pay for what I use, even if it means that it wouldn't be cost efficient for me to do that if my usage goes through the roof.

          Don't think of S3 as your "one and only" storage solution. Think of it as a great starter kit and possibly an excellent addition to your existing infrastructure.

          For example, I know that a rather big video sharing site uses S3. I'm not sure what the details are exactly in

  • Why FTP? Use rsync. (Score:5, Informative)

    by NerveGas ( 168686 ) on Monday August 04, 2008 @05:40AM (#24463939)

    It seems like the only problem with your home computer is FTP. Why not use rsync, which does things much more intelligently - and with checksumming, guarantees correct data?

    The first time would be slow, but after that, things would go MUCH faster. Shoot, if you set up SSH keys, you can automate the entire process.
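    A minimal sketch of that setup, assuming key-based SSH login is already configured (host and paths are placeholders):

```shell
# Hypothetical crontab entry: pull the whole site down at 3am every night.
# -a preserves permissions/times, -z compresses over the wire, and
# --delete keeps the local mirror in sync with the server.
0 3 * * * rsync -az --delete -e ssh user@example.com:/var/www/ /home/me/backups/site/
```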

    • yeah, use rsync. (Score:5, Insightful)

      by SethJohnson ( 112166 ) on Monday August 04, 2008 @05:48AM (#24463975) Homepage Journal
      I 100% agree with NerveGas on the rsync suggestion. I use it in reverse to backup my laptop to my hosting provider.

      Here's the one thing to remember in terms of rsync. It's going to be the CURRENT snapshot of your data. Not a big deal, except if you're doing development and find out a week later that changes you made to your DB have had unintended consequences. If you've rsynced, you're going to want to have made additional local backups on a regular basis so you can roll back to one of those snapshots prior to when you hosed your DB. Apologies if that was obvious, but rsync is the transfer mechanism. You'll still want to manage archives locally.

      • Re:yeah, use rsync. (Score:5, Informative)

        by Bert64 ( 520050 ) <bert AT slashdot DOT firenzee DOT com> on Monday August 04, 2008 @05:55AM (#24464013) Homepage

        Then what you need is rdiff-backup, works like rsync except it keeps older copies stored as diffs.

        As for FTP, why the hell does anyone still use ftp? It's insecure, works badly with nat (which is all too common) and really offers nothing you don't get from other protocols.
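        A sketch of how that might be scheduled, assuming rdiff-backup is installed on both ends (host, paths, and retention window are invented):

```shell
# Nightly mirror plus increments; weekly pruning of increments older
# than 8 weeks (hypothetical schedule and paths).
0 2 * * * rdiff-backup user@example.com::/var/www /home/me/backups/www
30 2 * * 0 rdiff-backup --remove-older-than 8W /home/me/backups/www
```

        Restoring a file as it was a week ago is then a single command, e.g. rdiff-backup -r 7D /home/me/backups/www/index.html restored.html.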

        • Re:yeah, use rsync. (Score:5, Informative)

          by xaxa ( 988988 ) on Monday August 04, 2008 @06:53AM (#24464273)

          Then what you need is rdiff-backup, works like rsync except it keeps older copies stored as diffs.

          Another option is to use the --link-dest option to rsync. You give rsync a list of the older backups (with --link-dest), and the new backup is made using hard links to the old files where they're identical.
          I haven't looked at rdiff-backup, it probably provides similar functionality.

          Part of my backup script (written for zsh):

          setopt nullglob
          older=( $backups/*(om) )  # previous backups, newest first (zsh glob qualifier)
          unsetopt nullglob

          rsync --verbose --archive --recursive --link-dest=${^older[1,20]} \
                                  user@server:/ $backups/$date/

          • Re: (Score:3, Informative)

            by xaxa ( 988988 )

            Also, rsync has a --bwlimit option to limit the bandwidth it uses.

          • by spinkham ( 56603 )

            rdiff-backup makes incremental diffs of individual files, which saves a lot of space for large files which have small changes like database backups, virtual machine images, and large mailspools.
            On the other hand, the rsync schemes are somewhat more straightforward to deal with if you don't have much data in such files.

            • rdiff-backup makes incremental diffs of individual files, which saves a lot of space for large files which have small changes like database backups, virtual machine images, and large mailspools.

              And rsync with the link-dest option uses hard links, which occupy exactly the space for the inode they are using and no more.

              A diff is a small file which occupies a small amount of space on your disk - though even a non-zero-sized file will occupy at least 4k, depending on your file system's block size.

              Hard links use up a single inode and no more. Why mess about with diffs when you can change into a directory and see yesterday's backup? Don't like yesterdays? Change into another and see the day before that. The depth

              • by spinkham ( 56603 )

                If you have a 40 gig VM image and you modify 1 bit, your scheme will add 40 gigs.
                rdiff-backup will add one bit plus some small overhead.
                This is a huge win for some users, not so much for others.
                For servers with large database dumps which are mostly static, it can be a large win.

              • Hard links use up a single inode and no more. Why mess about with diffs when you can change into a directory and see yesterday's backup? Don't like yesterdays? Change into another and see the day before that. The depth is limited by the number of inodes you have in your filesystem / number of inodes used in filesystem being backed up.

                Agree completely. Disk space is cheap as dirt anyway.

                I'm using rsync to mirror about 300 GB on 300+ remote partitions, with snapshots [mikerubel.org] going back up to a month, depending. I have 8 fairly low-end boxes doing about 40 partitions each. Normally all boxes finish in 90-100 minutes.

                Total cost for this project was less than 20k. Bids from commercial vendors for similar functionality were much, much higher.

          • by vanyel ( 28049 ) *

            rsnapshot takes care of managing the snapshots for you as well...

        Or... use Subversion to actually store your data. If you use the FSFS format (the filesystem version of SVN, which IMHO is better than the embedded database format because it doesn't occasionally get corrupted), all data is actually *stored* as diffs anyway.

          You can actually do an rsync of the live data, and it'll work perfectly, and never overwrite things you need.

          If you're worried about past versions, you should be using source control, so IMHO, this is a better option than an almost-source control one like rd

        • by NMerriam ( 15122 )

          Then what you need is rdiff-backup, works like rsync except it keeps older copies stored as diffs.

          When it works, that is. The problem with rdiff-backup is that ultimately it's a massive script with no error-checking, and if anything ever goes wrong (including just the network having difficulty), you have to "fix" your backup and then start over again. Of course the process of fixing usually takes two or three times as long as actually performing the backup, so you can wind up with a backup that's impos

        • by Firehed ( 942385 )

          A lot of cheap hosts don't allow for SSH/SCP connections.

          • by ncc74656 ( 45571 ) *

            A lot of cheap hosts don't allow for SSH/SCP connections.

            How do you get your stuff up there without scp? If their answer is "use FTP," they need to go out of business yesterday. (I suppose they could use a web form served up over HTTPS to manage your space, but that'd quickly get annoying for any non-trivial website.)

        Or snapshot the filesystem after each rsync. This is what I do with ZFS snapshots, and it works wonderfully well. Other than ZFS, snapshots are supported by LVM on Linux and by UFS1 and UFS2 on FreeBSD. The idea of combining rsync and ZFS snapshots was first talked about on the zfs-discuss mailing list about two years ago.
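        A sketch of that rsync-then-snapshot cycle as cron entries (dataset name and paths are invented; each snapshot then becomes browsable under the dataset's .zfs/snapshot directory):

```shell
# Hypothetical nightly pair: refresh the mirror, then freeze it in a named
# ZFS snapshot. Every dated snapshot is a point-in-time copy you can read
# back directly from .zfs/snapshot/<date>/.
0 3 * * * rsync -az --delete user@example.com:/var/www/ /backup/sites/
0 4 * * * zfs snapshot tank/backup/sites@$(date +\%F)
```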
      • Re:yeah, use rsync. (Score:5, Informative)

        by Lennie ( 16154 ) on Monday August 04, 2008 @06:56AM (#24464283)

        There are also the --backup and --backup-dir options (you'll need both). They keep a copy of the files that have been deleted or changed; if you use a script to put each run in a separate directory, you'll have a pretty good history of all the changes.

      • Re: (Score:3, Informative)

        by Culture20 ( 968837 )
        Using hard links, you can make multiple trees using only the storage space of the changed files. Here's one example: http://www.mikerubel.org/computers/rsync_snapshots/ [mikerubel.org]
    • Re: (Score:2, Informative)

      by Andrew Ford ( 664799 )
      Or simply use rsnapshot. Whatever backup solution you use, though, make sure to create dumps of your databases: backing up the database files while they are in use will give you backup files you cannot restore from. If you back up your database dumps, you can exclude the database files themselves from the backup.
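      For MySQL, that dump step can be a single cron entry run shortly before the file-level backup; the database name, paths, and schedule here are placeholders:

```shell
# Hypothetical crontab entry: write a compressed, restorable SQL dump nightly.
# --single-transaction gives a consistent dump of InnoDB tables without
# locking them; credentials are assumed to come from ~/.my.cnf.
30 1 * * * mysqldump --single-transaction mydb | gzip > /home/me/dumps/mydb-$(date +\%F).sql.gz
```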
    • Yeah, I say rsync and MySQL replication would work nicely. Of course you have to look into them and decide if they meet your needs, but I think you'll find it's probably good enough.
    • Re: (Score:2, Informative)

      Yes, rsync is a lot better; I always use it. I personally use rsync with the -z option to compress and decompress the files on the fly, which improves speed a lot, most of the files being text files.
    • by houghi ( 78078 ) on Monday August 04, 2008 @06:23AM (#24464141)

      Many hosting providers do not offer this option, or even SFTP. :-/

      So you are stuck with FTP, or you need to change hosting provider, which is also not always an option.

        So you are stuck with FTP

        wget --mirror?

        LFTP's scripting allows mirroring the backup and fetching only the files that have changed. With some server-side scripting to dump database diffs, it wouldn't be hard to build an FTP backup solution that only downloads changed files.
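        A sketch of such an lftp script (host, login, and paths are invented); mirror --only-newer fetches only files whose size or timestamp has changed:

```shell
# Hypothetical script file, run as: lftp -f mirror-site.lftp
open -u myuser,mypassword ftp.example.com
mirror --only-newer --verbose /public_html /home/me/backups/site
bye
```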

    • by v1 ( 525388 ) on Monday August 04, 2008 @06:58AM (#24464293) Homepage Journal

      I use rsync on a few dozen systems here, some of which are over 1TB in size. Rsync works very well for me. Keep in mind that if you are rsyncing an open file such as a database, the rsync'd copy may be in an inconsistent state if changes are not fully committed as rsync passes through the file. There are a few options here for your database. First one that comes to mind is to close or commit and suspend/lock it, make a copy of it, and then unsuspend it. Then just let it back up the whole thing, and if you need to restore, overwrite the DB with the copy that was made after restoring. The time the DB is offline for the local copy will be much less than the time it takes rsync to pass through the DB, and will always leave you with a coherent DB backup.

      If your connection is slow and you are backing up large files (both of which sound true for you?), be sure to use the --partial option so interrupted transfers can resume.

      One of my connections is particularly slow and unreliable (it's a user desktop over a slow connection). For that one I have made special arrangements to cron the job once an hour instead of once a day. It attempts the backup, which is often interrupted by the user sleeping or shutting down the machine, and keeps trying every hour the machine is on until a backup completes successfully. Then it resets the 24-hour counter and won't attempt again for another day. That way I get backups as close to every 24 hours as possible, without running more often than needed.
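      The retry logic described above can be sketched with a timestamp file; the script below is a runnable skeleton where run_backup stands in for the real transfer:

```shell
# Cron runs this script every hour; a timestamp file records the last success.
stamp_file="${TMPDIR:-/tmp}/last-backup-ok.$$"
run_backup() { true; }   # stand-in for the real rsync invocation

# If the last successful backup is less than 24 hours old, do nothing.
if [ -f "$stamp_file" ] && [ $(( $(date +%s) - $(cat "$stamp_file") )) -lt 86400 ]; then
    exit 0
fi

# Otherwise try again; the timestamp is written only on success, so an
# interrupted run will be retried on the next hourly invocation.
if run_backup; then
    date +%s > "$stamp_file"
fi
```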

      Another poster mentioned incrementals, which is not something I need here. In addition to using a version of rsync that does incrementals, you could also use something off-the-shelf like Retrospect that does incrementals but wouldn't normally work for your server, and instead of running it over the internet, run it on the local backup you are rsyncing to. If you need to go back in time a bit, you still can, without having to figure out a way to jimmy rsync through your network limits.

    • by raehl ( 609729 ) <raehl311&yahoo,com> on Monday August 04, 2008 @07:24AM (#24464399) Homepage

      ... his slow internet connection, and wants to pay something to not have to move files over his slow internet connection.

      How about:

      - Pay for a hosting provider that DOES provide real backup solutions....
      - Pay for a real broadband connection so you CAN download your site....

      As with most things that are 'important'...

      Right, Fast or Cheap - pick two.

  • Presumably, much of that 2 gig of data is static, so perhaps you could look into minimisation of exactly *what* you need to back up? It might be within the realm of your net access.

  • bqinternet (Score:2, Informative)

    by Anonymous Coward

    We use http://www.bqinternet.com/
    cheap, good, easy.

  • Gmail backup (Score:4, Informative)

    by tangent3 ( 449222 ) on Monday August 04, 2008 @05:57AM (#24464033)

    You may have to use extra tools to break your archive into separate chunks fitting Gmail's maximum attachment size, but I've used Gmail to back up a relatively small (~20 MB) website. The trick is to make one complete backup, then make incremental backups using rdiff-backup. I have this done daily with a cron job, sending the bzip2'ed diff to a Gmail account. Every month, it makes a complete backup again.

    And a separate Gmail account for the backup of the MySQL database.

    This may be harder to do with a 2 GB website, I guess, since Gmail provides at the moment about 6 GB of space, which will probably last you about two months. Of course you could use multiple Gmail accounts or automated deletion of older archives...

    But seriously, 2 GB isn't too hard to do from your own PC if you only handle diffs. The first download would take a while, but incremental backups shouldn't take too long unless your site changes drastically all the time.
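    The chunking can be done with plain split; the sketch below uses dummy data, and the 20 MB chunk size is an arbitrary stand-in for the attachment limit:

```shell
# Cut an archive into mailable pieces; reassembly is a plain cat of the
# pieces in order. The "archive" here is just zero-filled test data.
workdir=$(mktemp -d)
cd "$workdir"
dd if=/dev/zero of=site.tar.gz bs=1M count=50 2>/dev/null
split -b 20M site.tar.gz site.tar.gz.
ls site.tar.gz.*                      # site.tar.gz.aa site.tar.gz.ab site.tar.gz.ac
cat site.tar.gz.* > rejoined.tar.gz   # identical to the original archive
```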

    • Re:Gmail backup (Score:5, Insightful)

      by Anonymous Coward on Monday August 04, 2008 @06:53AM (#24464271)

      This strikes me as a really dumb thing to do: a) using it for data storage rather than primarily email storage and b) signing up for multiple accounts are both violations of the Gmail TOS, so you are just asking for your backups to not be available when you most need them.

      • Re: (Score:2, Interesting)

        While I certainly don't claim using Gmail for backup is a smart thing to do, can you point out where in the ToS this is stated, as I looked through it and see no mention of either restriction?

        • Create multiple user accounts in connection with any violation of the Agreement or create user accounts by automated means or under false or fraudulent pretenses

          So it's only against the rules to create more than one account if you're breaking the rules in some other way - it's not specifically prohibited on its own. The terms don't mention using it for anything other than email, though...

          See the program policies [google.com] and terms of use [google.com].

  • Wow (Score:5, Insightful)

    by crf00 ( 1048098 ) on Monday August 04, 2008 @05:59AM (#24464041) Homepage
    Wow! So you are asking somebody to download your website's home folder and database, look at the passwords and private information of your members, and deliver you a DVD that is ready to be restored, rootkit included?
    • Re:Wow (Score:5, Funny)

      by teknikl ( 539522 ) on Monday August 04, 2008 @06:03AM (#24464071)
      Yeah, I had noticed the complete lack of paranoia in the original post as well.
      • Re: (Score:2, Interesting)

        by pdcull ( 469825 )
        That's why I'd want somebody reliable. My hosting provider could steal my info too if they really wanted to, although I certainly trust them not to. Oh, I'm paranoid alright... it's just that, living in a Rio de Janeiro slum as I do, my paranoia is more about things like flying lead objects...
        It's not difficult to set up a user with read-only access to MySQL, and read-only access to your entire web content. Since Apache basically just points to a directory (or directories, depending on how you set it up), to restore a backup you just copy the static content to /var/www/html/ (or wherever it's stored) and load up the MySQL data. That's it -- no possibility for a rootkit, and who cares if they have your password -- they only have read-only access. I would guess if there was anything particularl
    • by ivoras ( 455934 )
      It doesn't have to be that way. Here's some free advertising for a fellow FreeBSD developer: TarSnap [daemonology.net] offers high-grade encryption over the wire and on the storage, incremental backups, and it also uses Amazon S3.
  • by cperciva ( 102828 ) on Monday August 04, 2008 @06:03AM (#24464069) Homepage

    After looking at the available options, I decided that there was nothing which met my criteria for convenience, efficiency, and security. So I decided to create my own.

    I'm looking for beta testers: http://www.daemonology.net/blog/2008-05-06-tarsnap-beta-testing.html [daemonology.net]

  • by DrogMan ( 708650 ) on Monday August 04, 2008 @06:11AM (#24464093) Homepage
    rsync to get the data, cp -al to keep snapshots. I've been using this for years to manage TB of data over relatively low-speed links. You'll take a hit the first time, so kick it off at night and kill it in the morning; the next night, just execute the same command and it'll eventually catch up. Then cp -al it, and lather, rinse, repeat. This page: http://www.mikerubel.org/computers/rsync_snapshots/ [mikerubel.org] has been around for years. Use it!
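    The rsync-plus-cp -al cycle looks roughly like this; the runnable sketch below demonstrates just the snapshot step on a throwaway directory (paths are illustrative):

```shell
# After rsync refreshes "current", take a hard-linked copy of it. Unchanged
# files share one copy on disk; the next rsync replaces changed files with
# fresh copies (it writes to a temp file and renames), leaving the
# snapshot's version intact.
backups=$(mktemp -d)
mkdir -p "$backups/current"
echo "hello" > "$backups/current/page.html"

snap="$backups/snapshot-$(date +%F)"
cp -al "$backups/current" "$snap"
# page.html and its snapshot copy are now the same inode.
```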
    • Re: (Score:2, Interesting)

      by xehonk ( 930376 )
      And if you don't feel like writing scripts yourself, you can use rsnapshot, which will do all of the work for you.
    • rsync is definitely your friend. Check out the man pages and look up some examples on the net. (The command line options I use are rsync -avurtpogL --progress --delete, but YMMV.)

    • by bot24 ( 771104 )
      Or rdiff-backup which stores the latest version and diffs instead of using cp -al.
    • I hope you have a better plan if you ever need to do a full restore in anger. It's all right spending days backing up in small chunks, but if your data ever goes south, it's going to take at least as long to restore it all again. In the mean time, your web site/application/business is flat on its back.

  • by Lord_Sintra ( 923866 ) on Monday August 04, 2008 @06:27AM (#24464153)
    Send me your FTP details and some cash and I'll...backup...your data.
  • Unbelievable backup software: BackupPC. It uses rsync and will solve all your troubles; it's a truly amazing backup/restore solution. Check it out... all the best! Riaan
  • by jonnyj ( 1011131 ) on Monday August 04, 2008 @06:29AM (#24464159)
    ...because if you are, and you're planning to send personal data (there can't be many 2 GB websites that contain no personal data at all) on DVD through the mail, you might want to look at recent pronouncements from the Information Commissioner. A large fine could be waiting for you if you go down that route.
  • Sitecopy (Score:3, Informative)

    by houghi ( 78078 ) on Monday August 04, 2008 @06:31AM (#24464165)

    I would separate the content. First there is the MySQL part. Export it on a daily basis (or more often). You can export it as a whole or only those parts that you desire. Make a PHP page for each thing you want to download and protect it however you like.
    Then point lynx at it to download the file.

    The content is another matter. To update my sites I use sitecopy [manyfish.co.uk]. What I do is make the site locally and, when I am ready, I run sitecopy and it will upload the site.
    As I do incremental backups locally, I do have the previous version there.
    If this is not an option, it should not be too hard to use sitecopy to, uh, back up the site.

    Put all this in a script and crontab should do the rest.

  • I use a product called SquirrelSave:

    http://www.squirrelsave.com/ [squirrelsave.com]

    which uses a combination of rsync and SSH to push data to the backup servers. The client is currently Windows-only, but with promises of Linux and OS X versions coming soon.

    It generally works quite well - WinSCP is included to pull data back off the servers.

    • Unfortunately I just read the post - properly. SquirrelSave doesn't (yet) support server OSes, according to the web site. Sorry about that.

  • by jimicus ( 737525 ) on Monday August 04, 2008 @06:44AM (#24464233)

    One thing a lot of people forget when they propose backup systems is not just how quickly you can take the backup, but how quickly you can get it back when you need it.

    A sync to your own PC with rsync will, once the first one's done, be very fast and efficient. If you're paranoid and want to ensure you can restore to a point in time in the last month, once the rsync is complete you can then copy the snapshot that's on your PC elsewhere.

    But you said yourself that your internet link isn't particularly fast. If you can't live with your site being unavailable for some time, what are you going to do if/when the time comes that you have to restore the data?

  • I'm in agreement that an rsync based offsite backup solution is always a great idea. rdiff-backup [nongnu.org] or duplicity [nongnu.org] is the way to go.

    That being said, proper backups are a must that any web host should provide. I used to use DreamHost and they did incrementals and gave you easy access to them. Some time ago I outgrew shared hosting and went to Slicehost, which offers absolutely awesome service; although backups cost extra, they do full nightly snapshots, and it's all easy to manage (restore, take your own sna

  • If you use WordPress for your site, you can use blogreplica.com [blogreplica.com], an online blog backup service which was created with this specific goal in mind. blogreplica.com [blogreplica.com] connects to your blog using XML-RPC and retrieves all the content to its servers, where you have full access to it at any time. Maybe this works for you.
  • by Anonymous Coward on Monday August 04, 2008 @07:14AM (#24464347)

    NSA: We backup your data so you won't have to!

    How it works:
    First, edit each page on your website and add the following meta tags: how-to, build, allah, infidels, bomb (or just any of the last three, if you're in a hurry).

    On the plus side, you don't need to give them your money, nor your password.

    On the minus side ... there is no minus side (I mean, who needs to travel anyway?)

    Posting anonymously as I have moderated in this thread (that, and they already know where I live).

  • Free service with great support for MacOS, Linux and Windows. Features 2GB free disk space, WebDAV, SFTP, HTTPS and more. Also has a nice easy-to-use picture viewer for sharing photos. https://mydisk.se/web/main.php?show=home&language=en [mydisk.se]
  • Consider Manent (http://trac.manent-backup.com , freshmeat entry: http://freshmeat.net/projects/manent [freshmeat.net]). It can currently back up a local directory to a remote repo, so you can easily set it up to run on your server and back up to your home machine; in the future it will be able to back up an FTP directory.
    It is extremely efficient in backing up a local repository. A 2GB working set should be a non-issue for it. I'm doing hourly backups of my 40-G home dir.
    Disclaimer: I am the author :)
  • Shared hosting (Score:5, Interesting)

    by DNS-and-BIND ( 461968 ) on Monday August 04, 2008 @07:35AM (#24464471) Homepage
    OK, I keep hearing "use rsync" or other software. What about those of us who use shared web hosting, and don't get a unix shell, but only a control panel? Or who have a shell, but uncaring or incompetent admins who won't or can't install rsync? I know the standard slashdot response is "get a new host that does" but there are dozens of legitimate reasons that someone could be saddled with this kind of web host.
    • Re: (Score:2, Insightful)

      by lukas84 ( 912874 )

      there are dozens of legitimate reasons that someone could be saddled with this kind of web host.

      No, sorry. Not a single one.

      • How about "my company assigned me this mess". How about "I have to use this crappy provider for geographic reasons (inside the Great Firewall of China)". I got 6 replies so far, and not one helpful one, just the typical Slashdot crap of "durrr, wave your magic wand and make everything all right" instead of "well, that's a tough one, let's figure out how we might be able to do this and hack out a solution." Nope, it's either throw money at the problem or nothing.
    • Re: (Score:2, Insightful)

      by pimpimpim ( 811140 )
      Either find a competent provider that already has the tools to do backups preinstalled, or catch up on your (or your technician's) system administration skills. If you have a serious business at your website, you should know what you are doing. The same goes for carpentry or for someone who owns a car shop. You just don't get your money for nothing, you know.
    • If you're storing the website *only* on a hosting provider that won't give you a shell, and don't have a complete copy of the entire site in your hands at all times, you've got a much bigger problem.

      That is a very good sign that you're at a fly-by-night hosting company that's going to lose all your data. If you're worried about backup, you should pony up and get a decent hosting provider.

      But that is probably something worth addressing anyway. Fortunately, there are many things similar to rsync that will work.

    • Re:Shared hosting (Score:4, Interesting)

      by Lumpy ( 12016 ) on Monday August 04, 2008 @08:20AM (#24464797) Homepage

      Write PHP or ASP code to generate your backups as a tar or zip and get the files that way.

      When you pay for the economy hosting, you gotta write your own solutions.
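      A rough sketch of that server-side half, assuming the host at least lets you run a script or a cron job (all paths here are invented for the example):

```shell
# Bundle the docroot into a dated tarball you can then fetch over HTTP or FTP.
rm -rf /tmp/demo_docroot
mkdir -p /tmp/demo_docroot
echo "<html>hi</html>" > /tmp/demo_docroot/index.html
tar czf "/tmp/site-$(date +%F).tar.gz" -C /tmp demo_docroot
```

      Drop the tarball somewhere web-accessible (ideally password-protected) and your slow home link only has to pull one compressed file.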

    • Re:Shared hosting (Score:4, Informative)

      by sootman ( 158191 ) on Monday August 04, 2008 @02:04PM (#24470039) Homepage Journal

      > OK, I keep hearing "use rsync" or other software. What
      > about those of us who use shared web hosting, and
      > don't get a unix shell, but only a control panel?

      As long as you've got scriptability on the client, you should be able to cobble something together. Like, in OS X, you can mount an FTP volume in the Finder (Go -> Connect to Server -> ftp://name:password@ftp.example.com) and then just

      rsync -avz /Volumes/ftp.example.com/public_html/* ~/example_com_backup/

      (Interestingly, it shows up as user@ftp.example.com in the Finder but the user name isn't shown in /Volumes/.)

      AFAIK, pretty much any modern OS (even Windows since 98, AFAIK) can mount FTP servers as volumes. OS X mounts them as R/O, which I always thought was lame, but that's another rant.

      > Or who have a shell, but uncaring or incompetent
      > admins who won't or can't install rsync?

      If you've got shell (ssh) access, you can use rsync. (Not over telnet, natch. If that's all you've got, look at the workaround above.) Rsync works over ssh with nothing more than

      rsync -avz user@example.com:~/www/* ~/example_com_backup/

      Use SSH keys to make life perfect.
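      Key setup is a one-off; a sketch, with an invented key path (run the commented lines by hand, once):

```shell
# Generate a passphrase-less key for unattended backups.
rm -f /tmp/backup_key /tmp/backup_key.pub
ssh-keygen -q -t ed25519 -N "" -f /tmp/backup_key
# ssh-copy-id -i /tmp/backup_key.pub user@example.com
# ...then: rsync -avz -e "ssh -i /tmp/backup_key" user@example.com:~/www/ ~/example_com_backup/
```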

      Or, google for 'site mirroring tool'. Many have an option to only download newly-changed files.

      To get your databases, make a page like

      <?php
      // Connect and run your query first (credentials and table are illustrative).
      mysql_connect('localhost', 'dbuser', 'password');
      mysql_select_db('mydb');
      $result = mysql_query('SELECT * FROM mytable');

      print "<table border='1' cellpadding='5' cellspacing='0'>\n";
      for ($row = 0; $row < mysql_num_rows($result); $row++) {
          print "<tr>\n";
          for ($col = 0; $col < mysql_num_fields($result); $col++) {
              print "<td>";
              print mysql_result($result, $row, $col);
              print "</td>\n";
          }
          print "</tr>\n";
      }
      print "</table>\n";
      ?>

      and download that every so often.
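      If the host happens to allow cron, you can skip the dump page entirely; a crontab sketch (credentials and paths invented):

```shell
# m h dom mon dow  command -- nightly gzipped dump kept under the home dir
30 2 * * * mysqldump -u dbuser -p'secret' mydb | gzip > $HOME/backups/mydb.sql.gz
```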

      For the original poster, who was complaining about downloading many gigs over a slow link, just rsync over and over until it's done--if it drops a connection, the next attempt will start at the last good file.

      And if you've got a control panel, look for a button labeled 'backup'! My host uses CPanel and there's a magic button.

      Final option: how did the data get onto the www server in the first place? Isn't there already a "backup" on your local machine in the form of the original copies of all the files you've uploaded? If you haven't been backing up in the first place, well, yeah, making up for that might be a little painful. (Note: if your site hosts lots of user-uploaded content, ignore any perceived snarkiness. :-) )

      • Wow! Thanks for an actual helpful response. I'm going to implement some of this. Yeah, I do get some user-uploaded content (drupal site) and everything worthwhile is in the database dump anyhow.
        • by sootman ( 158191 )

          Thanks. Glad I could help. That's why we all come to slashdot--for the occasional spike in the S:N ratio. :-) Reply here if you've got any other questions. I've got many years of experience stringing together surprisingly useful things from sub-optimal components.

      • OS X mounts them as R/O, which I always thought was lame, but that's another rant.

        Nobody should ever be using ftp anyway. You have the ssh solution, which is great. But there are folks who insist on using the ftp thing. I've got to put in a plug for MacFusion [macfusionapp.org], which tosses a GUI at FUSE and sshfs. It makes OS X use ssh-mounted volumes transparently. For those with more hosted disk space than $DEITY (I'm looking at you, Dreamhost), this is a great way to do offline file storage for people who are more of the "drag-and-drop" persuasion.

  • by pdcull ( 469825 ) on Monday August 04, 2008 @09:13AM (#24465367) Homepage
    Hi everyone, I didn't mention in my question that where I'm living (a Rio de Janeiro slum) there aren't that many options for internet access. Also, as all my sites are very much not-for-profit, I'm limited as to how much I can spend on my hosting plan. I've been using Lunarpages for a number of years now, and generally find them very good, although if I stupidly overwrite a file, or want to go back to a previous version of something, I'm out of luck. As I am a lazy (read: time-challenged) individual, I tend to use PmWiki and maintain my sites online, hence my need for regular, physical backups. Anyway, thanks everyone for your help. I still can't help thinking that somebody must be able to make a pile of cash offering such a service to non-techie site owners...
    • Hi there,

      for NGOs, cash will always be a problem.

      Tip: the rsync option some mentioned is really easy to keep going once you set it up. The first time you sync to your home PC, you have to download the whole thing, so expect that to take a while. The next time, only the changed bits are downloaded, so it will really happen rather quickly.
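      Once a manual run works, the home-PC side is a single cron line; a sketch with an invented host and paths:

```shell
# Nightly pull at 03:00; --partial lets an interrupted run resume next night.
0 3 * * * rsync -az --partial user@example.com:~/www/ /home/me/site_backup/ >> /home/me/backup.log 2>&1
```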

      How to implement it, non-techie style: get a linux geek from the local university labs to help you out; it shouldn't be hard to find someone knowledgeable who you trust.

  • From the BSD camp (or at least one of the developers), there's TarSnap [daemonology.net], which offers very high encryption and confidentiality, and also incremental backups (via snapshotting).
  • Here in the UK we use an online backup company called Perfect Backup [perfectbackup.co.uk]. You just install a bit of software on each machine and it backs up according to your own schedule. The best thing is, it does a binary diff of each file and only sends the changed parts of the file, so conserving bandwidth. It's pretty configurable. The pricing seems pretty good too compared to some other providers. It's more expensive than people like Carbonite, but then this is a *business* grade product with support for things like Exchange.

  • How about getting a hosting provider that does backup? I've been using pair.com for my sites for several years. They have daily or twice-daily recent snapshots on the same server (that you can access yourself if you need to), then on-site backup, then off-site backup. As far as I know they don't ship those to customers, but this doesn't look to me like a very big risk. I haven't had to use any of the backups, and I think they haven't had any (big) loss of data since they went online more than ten years ago.
  • I looked into this recently. There are a lot of commercial offerings. However, the only thing I found that was 1) FLOSS, 2) had S3 support out of the box, 3) had a storage format that was documented and simple enough to restore without the software or even the docs, and 4) didn't use "mystery crypto" was some software called "brackup" from Brad Fitzpatrick of LiveJournal and memcached fame ("brackup" = "brad's backup.")

    Brackup is written in perl using good OO practices and is very hackable. The file format is

  • I can normally resist this, but this is too much:

    "the closest I found was a site that... had a special deal for customers that had expired in June!"

    What about customers that died in May? Are they screwed again?

    Do they expect much repeat business from the recently departed?

    Is this a way to get around the memory erasure of the River Lethe (in Hades)? A way around the memory erasure during Buddhist reincarnation? If so, how would we know they successfully restored their memories? (I guess we would know the

  • There are many sites that give you tons of storage for backing up files, with various ways of storing the data, many of which are free. Google and http://www.adrive.com/ [adrive.com] are two that come to mind. No need to deal with your slow connection.

"In matrimony, to hesitate is sometimes to be saved." -- Butler