Online Website Backup Options?
pdcull writes "I can't be the only person on the planet who has this problem: I have a couple of websites, with around 2 GB of space in use on my hosting provider, plus a few MySQL databases. I need to keep up-to-date backups, as my host provides only a minimal backup function. However, with a Net connection that only gets to 150 Kbps on a good day, there is no way I can guarantee a decent backup on my home PC using FTP. So my question is: does anybody provide an online service where I can feed them a URL, an FTP password, and some money, and they will post me DVDs with my websites on them? If such services do exist (the closest I found was a site that promised to send CDs and had a special deal for customers that had expired in June!), has anybody had experience with them which they could share? Any recommendations of services to use or to avoid?"
Why not use an online solution? (Score:5, Informative)
Rather than "posting DVDs" I'd go for something like Amazon's S3 and just dump the backup to them. Here is a list of S3 Backup solutions [zawodny.com] that would do the job.
I've personally moved away from hard media as much as possible. The issue on restore is normally the speed of getting data back onto the server, and it's there that online solutions really win: they have the peering arrangements to get you the bandwidth.
Re:Why not use an online solution? (Score:5, Interesting)
Similarly, I'm not using DVDs etc. for my server backup. A few years back, seeing how much my provider would charge me for a decent amount of backup space, I opted to get an additional server instead. The second server now provides secondary DNS and secondary MX for my regular system, but also holds all the data for a cold standby (I would still need to change addresses in DNS manually in case of a disaster, and bring up services, but pretty much all the data is in place).
The data is synchronised between both servers several times a day - first backed up locally to a second disk on the same machine, then rsynced between the two...
The solution was cheaper than the provider's backup space, and gives me extra flexibility in terms of what I can do. The only 'cost' is that both machines sacrificed disk space to act as backup for the other (since both machines have >400GB of disk space, giving up even half the disk space of each machine isn't a big limitation - at least, not for *my* needs. YMMV).
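For the curious, the moving parts of a setup like that are small. A minimal sketch, assuming bash, rsync over ssh with keys already exchanged, and made-up paths:

#!/bin/bash
# stage locally first: copy the live data to the second disk on the same box
rsync -a --delete /var/www/ /backup-disk/www/
mysqldump --all-databases | gzip > /backup-disk/all-databases.sql.gz
# then push the staged copy to the standby server
rsync -a --delete -e ssh /backup-disk/ standby.example.com:/backup/primary/

Run it from cron a few times a day and the standby stays close to current.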
Re: (Score:2)
Ditto, a live clone is perhaps the easiest solution for online backup. I've been doing this for quite some time now. My servers have a cyclical rsync setup: each one is backed up to the next in the chain (geographically dispersed, too!). They're ready to fail over if that's ever needed (cross fingers). Considering one can lease a (puny) dedicated server for $30-40, it's pretty darned easy. You could even go with a VPS if you don't care about the CPU resources.
Re: (Score:3, Informative)
It's important, when using this method, that your second server be in a separate datacenter.
Duplicating your data is only half of a backup plan. The other half is making sure that at least one of those duplicates is in a physically separate location.
There are many things that can conceivably take down entire datacenters.
Re: (Score:2)
True - in my case, when I discussed it with the provider in question - we agreed (in the first instance) that the second server would be housed in a different server room from the first. That would still keep it within the same provider, but in separate power-loops / fire protection areas /...
It's a first step... ...the second step was to go for a third machine with a different provider... ;-)
Re:Why not use an online solution? (Score:5, Insightful)
Sure, it will - but you'll have that problem with a provider-based backup as well. If your data gets corrupted without you noticing, your backup will 'save' corrupt data...
What you can do to at least partially protect yourself is make sure the rsync users are jailed and can only rsync to the target directory, without access to anything else.
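One way to do that jailing is an SSH forced command with the rrsync script that ships in rsync's support directory (the exact path is distro-dependent, so treat it as an assumption). In the backup user's ~/.ssh/authorized_keys on the receiving machine:

command="/usr/share/rsync/scripts/rrsync /backups/serverA",no-pty,no-port-forwarding,no-agent-forwarding ssh-rsa AAAA... backup-key

With that in place, the key can only run rsync, confined to /backups/serverA, no matter what the client asks for.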
Re: (Score:3, Informative)
S3 is a pretty good option. I've been using the JungleDisk client along with rsync to manage offsite home backups. S3 is pretty cheap and the clients are fairly flexible.
I haven't played with any remote clients, but your hosting provider can probably run one of the many clients mentioned in the parent. The price of S3 is hard to beat. I spend about $6 per month on ~20 gigs worth of backups.
Re:Why not use an online solution? (Score:4, Informative)
JungleDisk's built-in backup can also keep older versions of files, which is great in case a file gets corrupted and you only discover that after a few days. It's dirt cheap too, $20 for a lifetime license on an unlimited number of machines.
For this to work, you need to be able to run the jungledisk daemon though, which is not an option with some shared hosting plans. Also, to mount the S3 bucket as a disk, you obviously need root access. But if you do, JungleDisk is hard to beat IMHO.
Re: (Score:3, Informative)
. . . to mount the S3 bucket as a disk, you obviously need root access. But if you do, JungleDisk is hard to beat IMHO.
Not really. If the server kernel has FUSE [wikimedia.org] enabled, and the user-space tools are installed, any user who is a member of the relevant group can mount a "jungledisked" S3 bucket in his own userspace, without needing root access.
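For example, with the s3fs FUSE client (the bucket name, mount point, and group name here are illustrative):

# as an unprivileged user who's in the fuse group:
echo 'ACCESSKEY:SECRETKEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs mybucket ~/s3 -o passwd_file=$HOME/.passwd-s3fs
# unmount when done, still without root:
fusermount -u ~/s3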
Why not use Suso? (Score:4, Informative)
Sorry for the self plug, but this just seems silly. Your web host should be backing up your website and offering you restores. I guess this isn't a standard feature any more. But it is at Suso [suso.com]. We back up your site and databases every day, and can restore them for you for free.
Re:Why not use Suso? (Score:5, Insightful)
One thing that I've learned, though, is that you cannot rely on a hosting company's backup to be timely, reliable, and/or convenient. If you want to back up multiple times during the day, keep multiple generations of backups, and be able to restore very quickly if need be, the hosting company's backup can look unattractive. I'm not saying yours is that way, just that some of the hosting companies I've dealt with in the past were.
This also doesn't take into consideration the best practice of having your backups off-site for disaster recovery. It doesn't help very much to have your backup server/drive/whatever 1U down in the rack when the building collapses, has a fire, floods, etc., destroying everything in it.
Re: (Score:3, Interesting)
Because most sites have 2 failure points.
1 - they buy the super cheap service with no backup.
2 - the site is designed poorly with no backup capabilities.
If your site has a DB, your site had better have an admin function to dump the DB to a tgz file you can download. Mine generates a password-protected RSS feed and an encrypted tgz file (in a password-protected area). I simply have an RSS reader/retriever configured to watch all my sites and retrieve the backups when they are generated.
I get that DB and any user/c
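A minimal sketch of the server-side half of a scheme like that - this is not the poster's code, just an illustration assuming mysqldump and gpg are available and the paths are made up:

#!/bin/bash
# nightly cron job: dump, compress, encrypt, drop into the protected area
STAMP=$(date +%Y%m%d)
mysqldump --single-transaction mydb | gzip > /tmp/db-$STAMP.sql.gz
gpg --batch --symmetric --passphrase-file /etc/backup.pass \
    -o /var/www/protected/db-$STAMP.sql.gz.gpg /tmp/db-$STAMP.sql.gz
rm /tmp/db-$STAMP.sql.gz
# regenerating the RSS feed that announces the new file is left to the site code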
Re: (Score:2)
That RSS approach is, quite frankly, brilliant. Care to share the source?
Re: (Score:2)
If you're paying for managed hosting, yes. If you're using an unmanaged VPS (like me), not so much. Some of them will image your VPS on demand and store that image for recall if your system is FUBARed. For mine, I just have a daily rsync job running on a Mac mini at home (less power consumption than most, so it runs 24/7) that pulls down my email and websites. Another job creates and rsyncs database dumpfiles. I should probably...
S3 costs about $300 per TB (Score:2)
price list [amazon.com]
Re: (Score:2)
Maybe I'm just the king of cheap, but I find $300/TB rather excessive, considering I could rent a massive file server for less. There's a certain charm to the S3 solutions, since you can use a turnkey service to do the actual backup grunt-work, but the big problem with Amazon's services is they're only cost-effective for small or short-run projects.
I just ran the numbers for one of my existing web servers, and the same service delivered via Amazon's S3 would cost me over $1500, largely because of their
Re: (Score:2)
I like the fact that I can use S3 and only pay for what I use, even if it means that it wouldn't be cost efficient for me to do that if my usage goes through the roof.
Don't think of S3 as your "one and only" storage solution. Think of it as a great starter kit and possibly an excellent addition to your existing infrastructure.
For example, I know that a rather big video sharing site uses S3. I'm not sure what the details are exactly in
Re: (Score:2)
And your comment ignores the fact that every single one of these suggestions bypasses his slow home connection by backing up across the web to a different online site. His bandwidth problem is to his home, not from his web site.
These are all good suggestions to solve his primary problem, which is how to back up his site somewhere else. Maybe his DVD question is how he thinks of doing it, but it isn't necessarily the only or even the best way to do it. It is entirely possible that a bit of thinking outside the box...
Why FTP? Use rsync. (Score:5, Informative)
It seems like the only problem with your home computer is FTP. Why not use rsync, which does things much more intelligently - and with checksumming, guarantees correct data?
The first time would be slow, but after that, things would go MUCH faster. Shoot, if you set up SSH keys, you can automate the entire process.
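Concretely, something like this (hostnames and paths are placeholders):

ssh-keygen -t rsa -f ~/.ssh/backup_key -N ''    # passphrase-less key for cron use
ssh-copy-id -i ~/.ssh/backup_key.pub user@myhost.example.com
# then in crontab, a nightly pull:
0 3 * * * rsync -az -e "ssh -i $HOME/.ssh/backup_key" user@myhost.example.com:public_html/ ~/site-backup/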
yeah, use rsync. (Score:5, Insightful)
Here's the one thing to remember in terms of rsync. It's going to be the CURRENT snapshot of your data. Not a big deal, except if you're doing development and find out a week later that changes you made to your DB have had unintended consequences. If you've rsynced, you're going to want to have made additional local backups on a regular basis so you can roll back to one of those snapshots prior to when you hosed your DB. Apologies if that was obvious, but rsync is the transfer mechanism. You'll still want to manage archives locally.
Seth
Re:yeah, use rsync. (Score:5, Informative)
Then what you need is rdiff-backup; it works like rsync except it keeps older copies stored as diffs.
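Basic usage looks like this (example names; user@host::path is rdiff-backup's remote notation):

rdiff-backup user@server::/var/www /backups/www              # mirror plus increments
rdiff-backup --list-increments /backups/www                  # see what's been kept
rdiff-backup -r 3D /backups/www/index.html /tmp/index.html   # the file as of 3 days ago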
As for FTP, why the hell does anyone still use FTP? It's insecure, works badly with NAT (which is all too common), and really offers nothing you don't get from other protocols.
Re:yeah, use rsync. (Score:5, Informative)
Then what you need is rdiff-backup; it works like rsync except it keeps older copies stored as diffs.
Another option is to use the --link-dest option to rsync. You give rsync a list of the older backups (with --link-dest), and the new backup is made using hard links to the old files where they're identical.
I haven't looked at rdiff-backup, it probably provides similar functionality.
Part of my backups script (written for zsh):
setopt nullglob
older=($backups/*(/om))    # existing backup dirs; (/om) = directories only, newest first
unsetopt nullglob
# ${^older[1,20]} distributes the --link-dest= prefix over the 20 newest backups
rsync --verbose -8 --archive --recursive --link-dest=${^older[1,20]} \
user@server:/ $backups/$date/
Re: (Score:3, Informative)
Also, rsync has a --bwlimit option to limit the bandwidth it uses.
Re: (Score:2)
rdiff-backup makes incremental diffs of individual files, which saves a lot of space for large files which have small changes like database backups, virtual machine images, and large mailspools.
On the other hand, the rsync schemes are somewhat more straightforward to deal with if you don't have many such files.
Re: (Score:2)
rdiff-backup makes incremental diffs of individual files, which saves a lot of space for large files which have small changes like database backups, virtual machine images, and large mailspools.
And rsync with the link-dest option uses hard links, which occupy exactly the space for the inode they are using and no more.
A diff is a small file which occupies a small amount of space on your disk - a non-zero-sized file will occupy at least 4k, depending on your filesystem's block size.
Hard links use up a single inode and no more. Why mess about with diffs when you can change into a directory and see yesterday's backup? Don't like yesterday's? Change into another and see the day before that. The depth is limited by the number of inodes you have in your filesystem / number of inodes used in the filesystem being backed up.
Re: (Score:2)
If you have a 40 gig VM image, and you modify 1 bit, your scheme will add 40 gigs.
rdiff-backup will add one bit + some small overhead.
This is huge win for some users, not so much for others.
For servers with large database dumps which are mostly static, it can be a large win.
Re: (Score:2)
Hard links use up a single inode and no more. Why mess about with diffs when you can change into a directory and see yesterday's backup? Don't like yesterdays? Change into another and see the day before that. The depth is limited by the number of inodes you have in your filesystem / number of inodes used in filesystem being backed up.
Agree completely. Disk space is cheap as dirt anyway.
I'm using rsync to mirror about 300 GB on 300+ remote partitions, with snapshots [mikerubel.org] going back up to a month, depending. I have 8 fairly low-end boxes doing about 40 partitions each. Normally all boxes finish in 90-100 minutes.
Total cost for this project was less than 20k. Bids from commercial vendors for similar functionality were much, much higher.
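For reference, the rotation at the heart of those snapshot schemes fits in a few lines of shell - a sketch, with placeholder paths and a 7-deep daily cycle:

# drop the oldest snapshot, shift the rest back one slot
rm -rf /snap/daily.6
for i in 5 4 3 2 1 0; do
    [ -d /snap/daily.$i ] && mv /snap/daily.$i /snap/daily.$((i+1))
done
# hard-link copy of the newest snapshot, then sync the live data over it
if [ -d /snap/daily.1 ]; then cp -al /snap/daily.1 /snap/daily.0; else mkdir -p /snap/daily.0; fi
rsync -a --delete user@server:/var/www/ /snap/daily.0/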
Re: (Score:2)
rsnapshot takes care of managing the snapshots for you as well...
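rsnapshot is driven by a single config file; a trimmed example (fields must be separated by tabs, per its man page):

# /etc/rsnapshot.conf
snapshot_root   /backups/snapshots/
retain          daily   7              # older versions call 'retain' 'interval'
retain          weekly  4
backup          user@server:/var/www/  server/

Cron then runs 'rsnapshot daily' (and 'rsnapshot weekly', less often).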
Re: (Score:2)
Or...use subversion to actually store your data. If you use FSFS format (the filesystem version of SVN, which IMHO is better than the embedded database format because it doesn't occasionally get corrupted), all data is actually *stored* as diffs anyway.
You can actually do an rsync of the live data, and it'll work perfectly, and never overwrite things you need.
If you're worried about past versions, you should be using source control, so IMHO, this is a better option than an almost-source-control one like rdiff-backup.
Re: (Score:2)
When it works, that is. The problem with rdiff-backup is that ultimately it's a massive script with no error-checking, and if anything ever goes wrong (including just the network having difficulty), you have to "fix" your backup, and then start over again. Of course the process of fixing usually takes two or three times as long as actually performing the backup, so you can wind up with a backup that's impossible to ever complete.
Re: (Score:2)
A lot of cheap hosts don't allow for SSH/SCP connections.
Re: (Score:2)
How do you get your stuff up there without scp? If their answer is "use FTP," they need to go out of business yesterday. (I suppose they could use a web form served up over HTTPS to manage your space, but that'd quickly get annoying for any non-trivial website.)
Re:yeah, use rsync. (Score:5, Informative)
There are also the --backup and --backup-dir options (you'll need both). They keep a copy of the files that have been deleted or changed; if you use a script to keep them in separate directories, you'll have a pretty good history of all the changes.
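For instance (invented paths):

# changed or deleted files are tucked into a dated directory instead of vanishing
rsync -a --delete \
    --backup --backup-dir=/backups/changes-$(date +%F) \
    user@server:public_html/ /backups/current/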
Comment removed (Score:4, Insightful)
Re: (Score:2)
So that means you are stuck with FTP
wget --mirror?
LFTP's scripting allows mirroring the site and fetching only files that have changed. With some server-side scripting to dump database diffs, it wouldn't be hard to make an FTP backup solution that only downloads changed files.
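A sketch with lftp (credentials and paths are placeholders):

lftp -u myuser,mypass ftp.example.com -e 'mirror --only-newer --verbose /public_html /home/me/site-backup; quit'

Each run fetches only files newer than the local copies.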
Re:Why FTP? Use rsync. (Score:4, Informative)
I use rsync on a few dozen systems here, some of which are over 1TB in size. Rsync works very well for me. Keep in mind that if you are rsyncing an open file such as a database, the rsync'd copy may be in an inconsistent state if changes are not fully committed as rsync passes through the file. There are a few options here for your database. The first that comes to mind is to close it (or commit and suspend/lock it), make a copy of it, and then unsuspend it. Then just let rsync back up the whole thing, and if you need to restore, overwrite the DB with that copy after restoring. The time the DB is offline for the local copy will be much less than the time it takes rsync to pass through the DB, and this always leaves you with a coherent DB backup.
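A sketch of that quiesce-copy-resume dance for MySQL (assumes an init script and enough spare local disk; adapt the commands to your database and platform):

# take the DB down just long enough to snapshot the files locally
/etc/init.d/mysql stop
cp -a /var/lib/mysql /var/lib/mysql.snapshot
/etc/init.d/mysql start
# rsync then backs up the consistent .snapshot copy, not the live files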
If your connection is slow, and if you are backing up large files (both of which sound true for you?), be sure to use the --partial option.
One of my connections is particularly slow and unreliable (it's a user desktop over a slow connection). For that one I have made special arrangements to cron once an hour instead of once a day. It attempts the backup, which is often interrupted by the user sleeping/shutting down the machine. So it keeps trying every hour the machine is on, until a backup completes successfully. Then it resets the 24-hour counter and won't attempt again for another day. That way I am getting backups as close to every 24 hours as possible, without backing up more often than that.
Another poster mentioned incrementals, which is not something I need here. Besides using a version of rsync that does incrementals, you could also use something off-the-shelf/common like Retrospect that does incrementals but wouldn't normally work for your server, and instead of running it over the internet, run it against the local backup you are rsyncing to. If you need to go back in time a bit, you still can, but without having to figure out a way to jimmy rsync through your network limits.
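The try-every-hour-until-one-succeeds logic above needs little more than a stamp file - a sketch, with invented paths:

#!/bin/bash
# run hourly from cron; skip if the last success is less than 24h old
STAMP=/var/backups/.last-success
if [ -f "$STAMP" ] && [ $(( $(date +%s) - $(stat -c %Y "$STAMP") )) -lt 86400 ]; then
    exit 0
fi
rsync -a --partial user@desktop.example.com:Documents/ /var/backups/desktop/ \
    && touch "$STAMP"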
Actually, his only problem is.... (Score:5, Insightful)
... his slow internet connection, and he wants to pay someone so he doesn't have to move files over his slow internet connection.
How about:
- Pay for a hosting provider that DOES provide real backup solutions....
- Pay for a real broadband connection so you CAN download your site....
As with most things that are 'important'...
Right, Fast or Cheap - pick two.
Re: (Score:3, Informative)
Nevertheless, as others have mentioned, if your data is fairly static, then the initial backup might be painful, but then backing up only changes shouldn't be too difficult.
I've never really understood some of the problems that come along, mainly because I'm not a website developer (only as a personal thing). If you develop your site locally and then upload it, all your pages and code and images should already be on your own computer.
If you get a lot of dynamic content (people uploading media or writing t
Sure you need to back up the full 2 gig? (Score:2, Interesting)
Presumably, much of that 2 gig of data is static, so perhaps you could look into minimising exactly *what* you need to back up? It might then be within the reach of your net access.
bqinternet (Score:2, Informative)
We use http://www.bqinternet.com/
cheap, good, easy.
Gmail backup (Score:4, Informative)
You may have to use extra tools to break your archive into separate chunks that fit Gmail's maximum attachment size, but I've used Gmail to back up a relatively small (~20 MB) website. The trick is to make one complete backup, then make incremental backups using rdiff-backup. I have this done daily with a cron job, sending the bzip2'ed diff to a Gmail account. Every month, it makes a complete backup again.
And a separate Gmail account for the backup of the MySQL database.
This may be harder to do with a 2 GB website, I guess, since Gmail provides about 6 GB of space at the moment, which will probably last you about two months. Of course you could use multiple Gmail accounts or automated deletion of older archives...
But seriously, 2 GB isn't too hard to do from your own PC if you only handle diffs. The first-time download would take a while, but incremental backups shouldn't take too long unless your site changes drastically all the time.
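The chunk-and-mail step can be as simple as this sketch (mutt, the 15 MB cap, and the address are assumptions):

# compress the day's diff, split it under the attachment limit, mail each piece
split -b 15m site-diff.tar.bz2 site-diff.part.
for part in site-diff.part.*; do
    echo | mutt -s "backup $(date +%F): $part" -a "$part" -- mybackup@example.com
done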
Re:Gmail backup (Score:5, Insightful)
This strikes me as a really dumb thing to do; since a) using it for data storage rather than primarily email storage and b) signing up for multiple accounts are both violations of the Gmail TOS, you are just asking for your backups to not be available when you most need them.
Re: (Score:2, Interesting)
While I certainly don't claim using Gmail for backup is a smart thing to do, can you point out where in the ToS this is stated, as I looked through it and see no mention of either restriction?
Re: (Score:2)
Create multiple user accounts in connection with any violation of the Agreement or create user accounts by automated means or under false or fraudulent pretenses
So it's illegal to create more than one account if you're breaking the rules in some other way - not specifically illegal on its own. The terms don't mention using it for anything other than email, though...
See the program policies [google.com] and terms of use [google.com].
I had the same problem... (Score:5, Interesting)
After looking at the available options, I decided that there was nothing which met my criteria for convenience, efficiency, and security. So I decided to create my own.
I'm looking for beta testers: http://www.daemonology.net/blog/2008-05-06-tarsnap-beta-testing.html [daemonology.net]
rsync - it's in the tag (Score:5, Informative)
Re: (Score:2)
rsync is definitely your friend. Check out the man pages and look up some examples on the net. (The command line options I use are rsync -avurtpogL --progress --delete, but YMMV.)
Why does everyone forget about restore time? (Score:2)
I hope you have a better plan if you ever need to do a full restore in anger. It's all right spending days backing up in small chunks, but if your data ever goes south, it's going to take at least as long to restore it all again. In the meantime, your web site/application/business is flat on its back.
Give them here (Score:5, Funny)
Backuppc.sourceforge.net (Score:2, Informative)
I sure hope you're not UK based... (Score:5, Informative)
SquirrelSave (Score:1)
I use a product called SquirrelSave:
http://www.squirrelsave.com/ [squirrelsave.com]
which uses a combination of rsync and SSH to push data to the backup servers. The client is Windows-only at the moment, with promises of Linux and OS X versions coming soon.
It generally works quite well - WinSCP is included to pull data back off the servers.
Re: (Score:1)
Unfortunately I just read the post - properly, this time. SquirrelSave doesn't (yet) support server OSes, according to the web site. Sorry about that.
How quickly do you need it back? (Score:5, Insightful)
One thing a lot of people forget when they propose backup systems is not just how quickly can you take the backup, but how quickly do you need it back?
A sync to your own PC with rsync will, once the first one's done, be very fast and efficient. If you're paranoid and want to ensure you can restore to a point in time in the last month, once the rsync is complete you can then copy the snapshot that's on your PC elsewhere.
But you said yourself that your internet link isn't particularly fast. If you can't live with your site being unavailable for some time, what are you going to do if/when the time comes that you have to restore the data?
Switch Web Hosts -- Proper Backups are a MUST (Score:2)
I'm in agreement that an rsync-based offsite backup solution is always a great idea. rdiff-backup [nongnu.org] or duplicity [nongnu.org] is the way to go.
That being said, proper backups are a must that any web host should provide. I used to use Dreamhost and they did incrementals and gave you easy access to them. Some time ago I outgrew shared hosting and went to Slicehost, which offers absolutely awesome service; although backups cost extra, they do full nightly snapshots, and it's all easy to manage (restore, take your own snapshots, and so on).
If you use wordpress, I have a solution for you (Score:1)
why not try ... (Score:5, Funny)
NSA.gov?
NSA: We back up your data so you won't have to!
How it works:
First, edit each page on your website and add the following meta tags: how-to, build, allah, infidels, bomb (or just any of the last three, if you're in a hurry).
On the plus side, you don't need to give them your money, nor your password.
On the minus side ... there is no minus side (I mean, who needs to travel anyway?)
Posting anonymously as I have moderated in this thread (that, and they already know where I live).
Re: (Score:2)
Retrieving the backups can be problematic.
MyDisk (Score:1)
Try Manent (Score:2)
It is extremely efficient at backing up a local repository. A 2GB working set should be a non-issue for it. I'm doing hourly backups of my 40 GB home dir.
Disclaimer: I am the author
Shared hosting (Score:5, Interesting)
Re: (Score:2, Insightful)
there are dozens of legitimate reasons that someone could be saddled with this kind of web host.
No, sorry. Not a single one.
Re: (Score:3, Insightful)
I love it when I ask a question, and the question gets totally ignored and people insist on the exact thing that I specifically excluded as an answer.
I agree that there are countless legitimate reasons why you would be "saddled" with a Control Panel based webhost. There are also countless legitimate reasons not to continue using that host; having backup requirements that the webhost doesn't support is one of them. Maybe not so explicitly stated by the flowering examples of conventional wisdom above, but if you
Re: (Score:2)
I think you might be missing their point.
You're asking something like "how do I get my 1972 Pinto to go 120 mph?" Admittedly you could put in a new engine, new brakes, new transmission, etc. But you'd end up spending more time and money than if you just bought a car that goes 120 mph and has fewer problems. That's what people are trying to tell you -- it's really not worth the effort. It's fine if you don't want to take their advice, and just want to make your Pinto go 120. But don't complain when they don't.
Re: (Score:2)
If you're storing the website *only* on a hosting provider that won't give you a shell, and don't have a complete copy of the entire site in your hands at all times, you've got a much bigger problem.
That is a very good sign that you're at a fly-by-night hosting company that's going to lose all your data. If you're worried about backup, you should pony up and get a decent hosting provider.
But that is probably something worth addressing anyway. Fortunately, there are many things similar to rsync that will work without shell access.
Re:Shared hosting (Score:4, Interesting)
Write PHP or ASP code to generate your backups as a tar or zip and get the files that way.
When you pay for the economy hosting, you gotta write your own solutions.
Re:Shared hosting (Score:4, Informative)
> OK, I keep hearing "use rsync" or other software. What
> about those of us who use shared web hosting, and
> don't get a unix shell, but only a control panel?
As long as you've got scriptability on the client, you should be able to cobble something together. Like, in OS X, you can mount an FTP volume in the Finder (Go -> Connect to Server -> ftp://name:password@ftp.example.com) and then just copy from it like any other volume.
(Interestingly, it shows up as user@ftp.example.com in the Finder but the user name isn't shown in /Volumes/.)
Pretty much any modern OS (even Windows since 98, AFAIK) can mount FTP servers as volumes. OS X mounts them as R/O, which I always thought was lame, but that's another rant.
> Or who have a shell, but uncaring or incompetent
> admins who won't or can't install rsync?
If you've got shell (ssh) access, you can use rsync. (Not over telnet, natch. If that's all you've got, look at the workaround above.) Rsync works over ssh with nothing more than rsync installed at both ends and the -e ssh option.
Use SSH keys to make life perfect.
Or, google for 'site mirroring tool'. Many have an option to only download newly-changed files.
To get your databases, make a page that dumps them into a downloadable archive, and download that every so often.
For the original poster, who was complaining about downloading many gigs over a slow link, just rsync over and over until it's done - if it drops a connection, the next attempt will start at the last good file.
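In shell terms (a sketch):

# keep retrying until a run completes; --partial keeps interrupted files around
until rsync -a --partial user@server:public_html/ ~/site-backup/; do
    sleep 300   # wait five minutes between attempts
done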
And if you've got a control panel, look for a button labeled 'backup'! My host uses CPanel and there's a magic button.
Final option: how did the data get onto the www server in the first place? Isn't there already a "backup" on your local machine in the form of the original copies of all the files you've uploaded? If you haven't been backing up in the first place, well, yeah, making up for that might be a little painful. (Note: if your site hosts lots of user-uploaded content, ignore any perceived snarkiness. :-) )
Re: (Score:2)
Thanks. Glad I could help. That's why we all come to slashdot--for the occasional spike in the S:N ratio. :-) Reply here if you've got any other questions. I've got many years of experience stringing together surprisingly useful things from sub-optimal components.
Re: (Score:2)
Nobody should ever be using ftp anyway. You have the ssh solution, which is great. But there are folks who insist on using the ftp thing. I've got to plug MacFusion [macfusionapp.org], which tosses a GUI at FUSE and sshfs. It makes OS X use ssh-mounted volumes transparently. For those with more hosted disk space than $DEITY (I'm looking at you, Dreamhost), this is a great way to do offline file storage for people who are more of the "drag and drop" persuasion.
Thanks for your comments... (Score:5, Informative)
Re: (Score:2)
Hi there,
for NGOs, cash will always be a problem.
Tip: the rsync option some mentioned is really easy to keep going once you set it up. The first time you sync to your home PC, you have to download the whole thing, so expect that to take a while. The next time, only the changed bits are downloaded, so it will really happen rather quickly.
How to implement it, non-techie style: get a Linux geek from the local university labs to help you out; it shouldn't be hard to find someone knowledgeable who you trust.
TarSnap (Score:2)
Online Backup in the UK (Score:2)
Here in the UK we use an online backup company called Perfect Backup [perfectbackup.co.uk]. You just install a bit of software on each machine and it backs up according to your own schedule. The best thing is, it does a binary diff of each file and only sends the changed parts, thus conserving bandwidth. It's pretty configurable. The pricing seems pretty good too, compared to some other providers. It's more expensive than the likes of Carbonite, but then this is a *business* grade product with support for things like Exchange.
Backup at hosting provider? (Score:2)
brackup (Score:2)
I looked into this recently. There are a lot of commercial offerings. However, the only thing I found that was 1) FLOSS, 2) had S3 support out of the box, 3) had a storage format that was documented and simple enough to restore without the software or even the docs, and 4) didn't use "mystery crypto" was some software called "brackup", from Brad Fitzpatrick of LiveJournal and memcached fame ("brackup" = "brad's backup").
Brackup is written in Perl using good OO practices and is very hackable. The file format is simple and documented.
Dead meat special?! (Score:2)
I can normally resist this, but this is too much:
"the closest I found was a site that... had a special deal for customers that had expired in June!"
What about customers that died in May? Are they screwed again?
Do they expect much repeat business from the recently departed?
Is this a way to get around the memory erasure of the River Lethe (in Hades)? A way around the memory erasure during Buddhist reincarnation? If so, how would we know they successfully restored their memories? (I guess we would know the
Use web based online backup providers. (Score:2)
There are many sites that give you tons of storage for backing up files, with various ways of storing the data, many of which are free. Google and http://www.adrive.com/ [adrive.com] are two that come to mind. No need to deal with your slow connection.