Remus Project Brings Transparent High Availability To Xen
An anonymous reader writes "The Remus project has just been incorporated into the Xen hypervisor. Developed at the University of British Columbia, Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host. Remus requires no modifications to the OS or applications within the protected VM: on failure, Remus activates the replica on the second host, and the VM simply picks up where the original system died. Open TCP connections remain intact, and applications continue to run unaware of the failure. It's pretty fun to yank the plug out on your web server and see everything continue to tick along. This sort of HA has traditionally required either really expensive hardware, or very complex and invasive modifications to applications and OSes."
Already done by VMware (Score:5, Interesting)
Re: (Score:3, Insightful)
Re: (Score:1, Insightful)
IIRC, to get this kind of functionality from ESX or vSphere you have to pay license fees running into the thousands of dollars for each VM host, as well as a separate license fee for their centralized Virtual Center management system. I'm glad to see that this is finally making it into the Xen mainline.
Re: (Score:1, Insightful)
Re: (Score:1)
I think you're forgetting academic institutions, startups, research groups and all the other organizations that would MUCH rather spend their money on other things than VMware when a free alternative is available... Or any place that just wants to keep a pure open source environment.
For that matter, why would anyone NOT want HA if they can get it easily and cheaply?
Just because VMware has it does not in any way reduce the significance of Remus making it easily available in Xen.
Re: (Score:2)
To anyone who actually needs this kind of uninterrupted HA the cost of a VMware license is an insignificant irrelevance.
But now, we who don't actually need completely uninterrupted HA can have it anyway, and as a bonus it will probably be easier to set up and maintain than a semi-custom "only one minute of downtime" HA solution. This is a good thing indeed.
Re:Who doesn't? (Score:2)
Remember when virtualization was only something for companies with highly specialized needs? And RAID? And cooled CPUs? And hard drives? and computers?
When a solution like this comes along, it generally starts out being used only by a few people (nerds and people who REALLY need it)
Then it filters down into the rest of the market as a nice solution to a common problem.
Then it becomes something which nobody can imagine living without.
Then it becomes unthinkable to design a system which doesn't have this ability.
Re: (Score:2)
Re: (Score:2)
Eventually we had to... retire the servers and the SAN it was connected to - problems never recurred. Great support, VMware.
Re: (Score:3, Insightful)
They bought a particular version of vmware, and paid vmware to support the setup they had bought and paid for...
VMware's method of providing support was to tell them to buy new, expensive products... They failed to provide adequate support for the version they were actually being paid to support...
If their product fails, then an upgrade to a working version should be free at the very least.
Re: (Score:2)
That's at the point where in-house support works? (Score:2)
Re:That's at the point where in-house support work (Score:2)
Nope (Score:4, Insightful)
The Remus team presented their software well before VMware came out with their product.
What's different now is that the Remus patches have finally been incorporated into the Xen source tree.
If VMware has any patents, they'll have to clear the hurdle of predating the Remus work, which was originally published quite a while ago.
Besides, Remus can be used in more ways than what VMware offers, since you have the source code.
Re: (Score:2)
What's different now is that the Remus patches have finally been incorporated into the Xen source tree.
Hear, hear! I spent my summer research internship this year incorporating Remus patches into the Xen source tree for use on a departmental project. It was two months of bloody hacking to make the patched source, the build system, and the use environment cooperate well enough to actually get a Remus system running and backing up its VMs over the network. We never got it perfect.
Re: (Score:2)
The Remus paper references VMware's high availability. (The paper was also published in 2008, about 1.5 years ago, though I don't know when VMware's feature first started to be used; possibly before then.)
However, incremental checkpointing predates both. See (pulling from my BibTeX for a paper I helped write):
author = "J. S. Plank and J. Xu and R. H. B. Netzer",
title = "{Compressed Differences: An Algorithm for Fast
Incremental C
Re:Already done by VMware (Score:4, Interesting)
Re: (Score:2)
We use our product with Marathon's everRun FT. We're just starting to do load testing using Xen with their 2G product. It looks nice, but the second layer of management gets to be a pain.
Re: (Score:2)
And it didn't require any "really expensive hardware, or very complex and invasive modifications" to do it. I'm not saying it's going to run on some old beat-up Pentium Pro from 10 years ago, but the hardware I see it run on every day isn't out of line for a modern data center.
And it requires ZERO changes to the OS.
(At risk here of sounding like a VMware fanboy, but come on... at least they can present facts when tooting their horn.)
Re: (Score:2)
Re: (Score:2)
This sort of stuff is far older; it goes back to mainframe days and supercomputing.
Furthermore, the idea of running two machines in lockstep and failing over shouldn't be patentable at all. Specific, particularly clever implementations of it might be, but those shouldn't preclude others from creating their own implementations of the same functionality.
Re: (Score:2)
Granted real hardware, as opposed to software, but perhaps?
Re: (Score:2)
And, IIRC, NonStop SQL wasn't one of those applications - that amused me.
Re: (Score:2)
It's pretty fun (Score:2)
"It's pretty fun to yank the plug out on your web server and see everything continue to tick along."
Or an ordinary, everyday, run-of-the-mill 'off the shelf' plain-Jane beige UPS. Or a ghetto one [dansdata.com], if you'd like.
Still, it's pretty cool; I'm just wondering how much overhead there is in setting up this system.
Re: (Score:2, Insightful)
Re:It's pretty fun (Score:5, Informative)
Uuum... session management? Transaction management? The server dying in the process of something that costs money?
Even if it's something as simple as losing the contents of your shopping cart just before you wanted to buy, and then becoming angry at the stupid ass retarded admins and developers of that site.
Or losing the server connection in your flash game, right before saving the highscore of the year.
Webservers are far less stateless than you might think. Nowadays they're practically app servers. (Disclosure: I've been doing web applications since 2000, so I know a bit about the subject.)
When 5 minutes of downtime means over a hundred complaints in your inbox and tens of thousands of dropped connections, which your boss does not find funny at all, you don't make that mistake again.
Re:It's pretty fun (Score:4, Insightful)
Webservers are far less stateless than you might think. Nowadays they're practically app servers. (Disclosure: I've been doing web applications since 2000, so I know a bit about the subject.)
Webservers have no business being the sole repository for these things - the whole point of separating out web from app is that web boxes are easily replaceable with no state.
Session mgmt: store the session in a distributed way at least after each request. Transactions: they fail if you die halfway through. Shopping cart: this doesn't live on a web server.
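A minimal sketch of what "store the session in a distributed way after each request" can look like, assuming a Redis-style store reachable from every web box; the redis-py client, host name, and key names here are purely illustrative, not anything the poster prescribes:

import json
import redis  # assumes the redis-py client and a reachable Redis instance

store = redis.Redis(host="session-store.internal", port=6379)

def load_session(session_id):
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}

def save_session(session_id, data, ttl=1800):
    # Persist after every request, so any web server (or a failover copy)
    # can serve the next request without losing the shopping cart.
    store.setex(f"session:{session_id}", ttl, json.dumps(data))

With that in place, the web tier itself holds no state worth protecting.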
If you require all that state, how do you ever do load balancing? Add a web server and it's another SPOF.
When 5 minutes of downtime means over a hundred complaints in your inbox and tens of thousands of dropped connections, which your boss does not find funny at all, you don't make that mistake again.
That's right, you move the state off the webserver so nobody ever sees the downtime and tell your boss that you promised 99.9 and damnit, you're delivering it!
Re: (Score:2)
"Session mgmt: store the session in a distributed way at least after each request."
Bingo. With your solution, a submitted page request will fail. In fact, every page request and connection being handled by that server when it fails will fail.
With the article's solution, things automagically switch over and everyone gets the data they requested. Users notice nothing.
"... so nobody ever sees the downtime..."
Except all of the users who clicked register or buy and got nothing at all.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Web servers are stateless and sit in front of app servers, which are stateful but have their sessions propagated to at least one other instance. When a web server dies, no one cares; if an app server dies, you just need some logic that allows the box which gets the next request in the session to either (a) redirect the request to the app server that was the backup for that session, or (b) pull the session into its own cache from the backup.
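A rough sketch of option (b) above; is_alive and fetch_session are hypothetical helpers standing in for whatever health checks and session transport the app tier actually uses:

local_cache = {}

def route_session(session_id, primary, backup, is_alive, fetch_session):
    # Serve from the local cache if this node already adopted the session.
    if session_id in local_cache:
        return local_cache[session_id]
    # Otherwise copy it from the primary, or from its backup if the primary died.
    source = primary if is_alive(primary) else backup
    session = fetch_session(source, session_id)
    local_cache[session_id] = session
    return session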
Re: (Score:2)
I don't know about you, but my web apps don't let the web server handle session and transaction management. That's what I have a database server for; it's capable of dealing with those issues in a known way that I can recover from to some extent. My important web apps use clusters of databases that take care of each other. There's a reason Oracle costs a fortune and MySQL is free. I can't stand working with Oracle, but there's a reason it exists. Of course you don't have to use Oracle, that's just one
Re: (Score:2)
"I can turn off one of my web servers or database servers, literally killing tens of thousands of connections, and the worst case is a half a second of delay or so while the cluster removes it from the loop. The most the user sees is some web pages don't load some content."
So if that server is running a shopping cart, then "thousands" of users might just have had their credit card submissions fail. They don't get confirmations and they don't know if the order went through or not. And I'd almost guarantee th
Re: (Score:2)
It depends. You can engineer a system to be very stateful, and you have to route the same client to the same webserver in order to maintain functionality. Or, you can build a totally stateless webserver, with all data stored on db servers and/or memcache installs. It's not hard to do this with many different web frameworks. So I disagree with you on the facts: many, many webservers these days are totally stateless. Perhaps you program in .NET? I have no idea how they do things.
Re:It's pretty fun (Score:4, Insightful)
In many cases, the webserver IS the app server.
This sort of feature could be very useful for those smaller shops and cheap shops who haven't yet created a dedicated Web tier, or for all those internal webservers which host the Wiki, etc.
Webservers also help with capacity. Run 4, and if 1 drops off it's not a big problem. But what if half the webservers drop off because the circuit which powers that side of the cage went down? And the 'redundant' power supplies on your machines weren't really 'redundant' (thanks, Dell)?
Re: (Score:2)
Re: (Score:2)
If they are smaller/cheaper shops, they probably aren't playing around with heavy virtualization to begin with.
My point is, this is a great virtualization feature which is very accessible and affordable for smaller shops. It may not be as nice as some of the solutions offered by VMware, Citrix, etc. but it's not as expensive either.
Get a better UPS setup.
Even your 'better UPS setup' will fail, sometimes. I'm specifically thinking of several power outages at major datacenters in Northern California, which we
Re: (Score:2)
Re: (Score:2)
No, it doesn't.
This sort of solution protects from a limited subset of faults.
It protects 100% from any fault that causes instant death.
It does not protect from any fault that causes data corruption, where the system continues to run.
Undetected bit-errors cause the states across the machines to differ.
If these bit errors are replicated, you've got a machine in a copied, but corrupt state - the original and the copy may crash at exactly the same point.
If they aren't, then you may get 'lucky', and have it fai
Re: (Score:2)
Re: (Score:2)
Server class hardware should never have hardware faults either.
Yes, server class hardware is usually more robust than consumer grade in terms of some bit errors.
However, in the last minutes or seconds before a crash due to hardware failure, something is obviously going way out of spec.
If this is detectable, it's a no-brainer: you simply fail over when thresholds are breached, before the crash occurs (and you can afford to be a _lot_ more critical if you've got spare hardware).
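A toy illustration of failing over on breached thresholds rather than waiting for the crash; the sensor names, limits, and the read_sensors / trigger_failover callables are all hypothetical:

def watchdog_tick(read_sensors, trigger_failover,
                  max_temp_c=85.0, max_ecc_errors_per_min=5):
    readings = read_sensors()
    # Hand off while the machine is still coherent; with spare hardware you
    # can afford tight limits and the occasional false positive.
    if (readings["cpu_temp_c"] > max_temp_c or
            readings["ecc_errors_per_min"] > max_ecc_errors_per_min):
        trigger_failover(reason=readings)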
But a fair proportion of cra
Re: (Score:2)
Or an ordinary, everyday, run-of-the-mill 'off the shelf' plain-Jane beige UPS. Or a ghetto one, if you'd like.
Sure, but power failure isn't the only thing that can stop your server from running -- it's just the easiest one to reproduce without permanently damaging anything. If you'd like a better example, yank the CPU out of your web server's motherboard instead. Your UPS won't save you then! :^)
Himalaya (Score:3, Interesting)
How does this compare to a "big iron" solution like Tandem/Himalaya/NonStop/whatever-it's-called-nowadays?
Re: (Score:2)
Re: (Score:3, Informative)
I was just thinking that...
Tandems may still have other advantages, though; back in the day, we built a database on Himalayas/NSK because, availability aside, it outperformed Sybase, Oracle, and other solutions. (They implemented SQL down at the drive controller level; it was ridiculously efficient.) No idea if that's still the case.
But Tandem required you to build their availability hooks into your app; it wasn't transparent. OTOH, Stratus's approach is; a Stratus server is like having RAID-1 for every component.
Re:Himalaya (Score:5, Interesting)
By the time you get all the components that provide the processing and I/O throughput of those high-end boxes, the x86/64 commodity hardware cost advantage has evaporated.
Re: (Score:2)
Re: (Score:1, Informative)
The IO bottleneck in this case is the interconnect between the two machines, not disk, so the SAN isn't relevant. VMware FT needs at least a dedicated GbE NIC for replay/lockstep traffic (I think the recommendation is 10Gb), and it is still limited to a single vCPU in the VM.
Re: (Score:2)
Exchange is not a "high IO application". A high IO application is something like all the ATM transactions for Chase bank in North America. If you can have 20 servers on a single physical host you're doing it wrong: your apps aren't heavy by a long shot.
Re:Himalaya (Score:4, Insightful)
Were you replying to my comment? Because it doesn't sound like you read my comment. I specifically said there are cut-off points where virtual infrastructure doesn't make sense.
Also, the fact that you think the IO of a SAN is any different from that of an HP NonStop setup is where things get really comical, because you're talking about InfiniBand, which is used in x86 hardware as well. As I said, the threshold is moving into higher and higher workloads.
I'm also not sure where you get your information about Exchange not being IO intensive. Exchange setups easily handle billions of transactions just like the big RDBMS out there. That's why when you evaluate virtual platforms they always ask you about your Exchange environment as well as your database environment. They are both considered to be high IO applications as all they do practically is read and write from disk.
I find the whole concept of your argument funny, considering the NonStop setups were early attempts at abstracting away from the hardware to handle failure and spread the load. In essence, it was the start of virtual infrastructure. There's a reason NonStop isn't a major part of HP's business anymore: people are achieving what they need with commodity hardware. Sorry, but you do indeed save a lot of money that way too. Enterprise crap used to cost boatloads; now it is accessible to much smaller players with smaller workloads but the same demands for uptime.
Re: (Score:2)
Re: (Score:2)
By the time you get all the components that provide the processing and I/O throughput of those high-end boxes, the x86/64 commodity hardware cost advantage has evaporated
I think the potential savings comes not so much from the hardware as from not having to redesign/rewrite your low-availability (tm) software from scratch in order to make it highly available. Instead you just slap your existing software into the new Remus VM environment, connect the backup machine, and call it done.
(Whether or not that m
Re: (Score:3, Interesting)
Re: (Score:2)
You forgot to account for the time it took you to re-write the app. Porting 700kLOC in an obscure language doesn't sound like one guy did it in a week.
Without the data, I'll still assume it's cheaper; it would take a couple of man-years to make up the difference. But it's not a 98% cost savings.
Re: (Score:2)
Re: (Score:2)
One word: scaling
Re: (Score:2)
> Unless you move to infiniband you're not going to touch something like a Stratus
I don't know who makes the InfiniBand, but the Stratus is only a V6 at best. It's not *that* fast.
Re: (Score:3, Informative)
Actually, after reading the paper, this is no threat to Stratus or other players in the space like Marathon or VMware's FT. The performance impact is pretty significant - by their own benchmarks there was a 50% perf hit in a kernel compile test, and 75% in a web server benchmark.
This is an interesting approach, and it seems to handle multiple vCPUs in the VM, which I haven't seen done by software approaches like Marathon and VMware FT, but I think it will mainly be used in applications that would have never
Re: (Score:1)
How does this compare to a "big iron" solution like Tandem/Himalaya/NonStop/whatever-it's-called-nowadays?
Precisely.
It's actually pretty cool from a computing history aspect. Once upon a time, the mainframes were the bad-assed machines: hot-swapping power supplies and core modules, several nines of uptime. Now we're doing it in software.
I see it as a mirror to what's happening with data storage and the whole "cloud computing" thing. Going back and forth between big hosted machines with dumb clients and smaller, smarter machines. It's like we flip back and forth every few years when it comes to computer ideology
Re: (Score:2)
I'm not comparing this to mainframes in general, only to the "redundant" types.
This isn't going to compare to a general mainframe simply because it doesn't have the massive resources (cpu's, disk space, memory, bandwidth, etc).
A lot of those Tandems aren't used like a typical mainframe, though. Sure, they may offer more resources than this Remus project solution, but many Tandem applications don't need those resources; they only need the redundancy and as-near-to-100%-as-possible-at-any-expense uptime.
An
Intact? (Score:5, Informative)
Re: (Score:1, Informative)
Infact, you're right!
Re: (Score:2)
Your complaint shows a lack of tact ;)
Re: (Score:3, Funny)
That was before someone gave Romulus a shovel!
state transfer (Score:4, Insightful)
... Of course, this ignores the fact that if it's a software glitch, it'll happily replicate the bug into the copy. Also, there are certain hardware bugs that will also replicate: Mountain dew spilled on top of the unit, for example. There's this huge push for virtualization, but it only solves a few classes of failure conditions. No amount of virtualization will save you if the server room starts on fire and the primary system and backup are colocated. Keep this in mind when talking about "High Availability" systems.
On a different note, nothing that's claimed to be transparent in IT ever is. Whenever I hear that word, I usually cancel my afternoon appointments... Nothing is ever transparent in this industry. Only managers use that word. The rest of us use the term "hopefully".
Re:state transfer (Score:4, Funny)
Mountain dew spilled on top of the unit, for example.
FTFS:
Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host.
Wow! This software is *incredible* if mountain dew spilled on top of one machine is instantly replicated on the other machine! I'm gonna go read the source immediately, this has huge ramifications! In particular, if an officemate gets coffee and I also want coffee, only one of us needs to actually purchase a cup!
Re: (Score:3, Funny)
Wow! This software is *incredible* if mountain dew spilled on top of one machine is instantly replicated on the other machine! I'm gonna go read the source immediately, this has huge ramifications! In particular, if an officemate gets coffee and I also want coffee, only one of us needs to actually purchase a cup!
I told them quantum computing was a bad idea, but nobody listened...
I told them quantum computing was a bad idea, but nobody listened...
I told them...
Re:state transfer (Score:4, Interesting)
If your primary and secondary systems are physically located next to each other, then they aren't in the category of highly available. Furthermore, with storage replication and regular snapshotting you can have your virtual infrastructure at your DR site on the cheap, while gaining enterprise availability and, most importantly, business continuity.
I'll agree with being skeptical about transparency, although how many people already have this? I went with XenServer and Citrix Essentials for it; I already have this failover and I can tell you that it works. I physically pulled a blade out of the chassis and, sure enough, by the time I got back to my desk the servers were functioning, having dropped a whole packet. Further tweaking of the underlying network infrastructure resulted in keeping even that packet, with just a momentary rise in latency.
Enterprise availability is fast coming to the little guys.
Re: (Score:3, Informative)
Re: (Score:2)
How much bandwidth is needed for the connection on a per-machine basis? Asked another way - if I had 10 machines that I wanted to use this approach on, how fast of a connection would I need? At what levels of latency do problems start?
Re:state transfer (Score:5, Informative)
Re: (Score:2)
Cool. Thanks for the info.
Re: (Score:2)
Plenty of room for a Riverbed or Cisco WAAS in between to accelerate transfers as well. Sounds like you and I want to use the tech in similar ways.
For me, I don't mess with BGP yet, I can accomplish what I need through virtual links with OSPF. Won't be as smooth as my per site fail-over since I have two locations on site. It's a temporary setup so I have three locations, a primary at our event, a secondary at our event, and a third back at HQ with a fourth on its way for DR purposes. Sucks moving your netw
Re: (Score:3, Interesting)
"If your primary and secondary systems are physically located next to each other then they aren't in the category of highly available."
High availability covers more than just distributed data centers. Load balancing, failover, clustering, mirroring, redundant switches, routers, and other hardware: all are zero-point-of-failure, high-availability solutions.
Re: (Score:2)
You're confusing high availability with disaster recovery. Don't worry, my managers can't get it right either.
How does it deal with replication latency? (Score:3, Interesting)
I'm pretty sure that if I just yank the cable, not everything will be replicated. :-)
Re:How does it deal with replication latency? (Score:5, Informative)
Re: (Score:2)
How does Remus handle things if it mispredicts the packets?
Supposing that it sends packet X, crashes, and then when it's restored from checkpoint it decides to send packet Y instead?
Schroedinger
Re:How does it deal with replication latency? (Score:5, Informative)
Re: (Score:3, Interesting)
No it won't.
VMware claims the same crap, and it's simply not true.
You have a 50ms window between checkpoints that can be lost, in your example. The only way to ensure nothing is lost is to ensure that every change, every instruction, every microcode op executed in the CPU on machine A is duplicated on B before A continues to the next one. You simply can't do that without specialized hardware, since you don't even have access to the microcode as it's executed on standard hardware.
50ms on my hardware/software can mean th
Re:How does it deal with replication latency? (Score:5, Insightful)
Re: (Score:2)
Do large memory operations cause the network buffer to stall until the memory changes are synchronized?
Re: (Score:2)
This isn't true. A fully recoverable abstraction can be maintained without digging into the architecture. You just need a point periodically where you flush everything and define a consistent checkpoint.
Personally, I prefer doing this in the database, or operating system, or application, but suggesting that you can't do this underneath is simply wrong. It just comes down to performance.
Re:How does it deal with replication latency? (Score:5, Insightful)
If your application cannot tolerate a 50 msec pause in outbound traffic (which is what Remus seems to introduce, similar to VMWare switchovers) then you have no business running it over a network, much less over the Internet as a whole. Similar pauses are introduced in core switching and core routers on a fairly frequent basis, and are entirely unavoidable.
There are certainly classes of applications sensitive to that kind of issue: various real-time and motor-control/sensor systems require consistently low latency. But for public-facing, high-availability services, it seems useful, and much lighter to implement than VMware's expensive solutions.
Re: (Score:2)
Re: (Score:2)
It's not 'one 50ms pause' that's the problem, it's 'one 50ms pause for every communication with external hosts of any sort'. (See the rough numbers after the example below.)
Open a database connection, for instance:
VM sends start request, wait for checkpoint (50ms)
DB responds to packet with ACK
VM sends response ACK
VM sends DB handshake start, wait for checkpoint (50ms)
Server responds with server info
VM sends DB protocol version requested, wait for checkpoint (50ms)
Server responds.
VM sends transaction start request, wait for checkpoint (50ms)
Server resp
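Back-of-the-envelope numbers for the pattern above, assuming every outbound message waits for the next checkpoint; the 50 ms figure is the interval discussed in this thread, not a measured value:

checkpoint_ms = 50     # assumed Remus checkpoint interval
round_trips = 4        # TCP setup + handshake + version + transaction start, as above
lan_rtt_ms = 1         # roughly what the same exchange costs without output buffering

added_ms = round_trips * checkpoint_ms
print(f"~{added_ms} ms of buffering delay on top of ~{round_trips * lan_rtt_ms} ms of network time")

Chatty protocols pay that tax once per round trip, which is the poster's point; a single request/response pays it only once.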
Re: (Score:2)
From a Remus whitepaper:
http://www.usenix.org/events/nsdi/tech/full_papers/cully/cully_html/index.html [usenix.org]
Re: (Score:2)
Hi bcully
Mind if I ask you something?
Currently I am running a Xen setup where we replicate the storage between two machines using DRBD.
Live migration is supported in this scenario, and failover is said to be as well, though I haven't gotten around to checking that out yet.
1. Are there any advantages to using Remus over such a setup (other than being much easier to set up :p)?
2. Would it be possible to use proven solutions like DRBD with Remus, or does this simply miss the point?
I'll be sure to check it out when it
Re: (Score:2)
Thanks for the feedback!
I am really looking forward to it.
You know it's actually quite easy to have multiple block devices using drbd and make them available to your VM? You can specify as many drbd devices as you like in your config. I am currently using one for root and one for swap.
disk = [
'drbd:drbd-server1-root,xvda1,w',
'drbd:drbd-server1-swap,xvda2,w',
]
I am no expert at this, but couldn't you use a third DRBD block device for storing the journal that keeps track of all the changes, and use that
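For what it's worth, a third device would just be one more entry in the disk list above; the journal volume name here is made up and untested with Remus:

disk = [
    'drbd:drbd-server1-root,xvda1,w',
    'drbd:drbd-server1-swap,xvda2,w',
    'drbd:drbd-server1-journal,xvda3,w',   # hypothetical journal/log volume
]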
Wrong place to put a failsafe? (Score:4, Insightful)
Surely there is a strong possibility of a failure mode where both VMs run at once: the original image thinking it has lost touch with a dead backup, and the backup thinking the master is dead, and so starting to execute independently? If they're connected to the same storage / network segment, it could cause data loss, bring down the network service and so on. I've not investigated these types of lockstep VMs, but it seems you have to make some pretty strong assumptions about failure modes, and those assumptions always break eventually on commodity hardware (I've seen bad backplanes, network chips, CPU caches, RAM of course, switches...). How can you possibly handle these cases to avoid having to mop up after your VM is accidentally cloned?
Re:Wrong place to put a failsafe? (Score:4, Informative)
Re:Wrong place to put a failsafe? (Score:5, Interesting)
This is something that the much simpler Linux-HA environment deals with by using something they call STONITH, which basically means to Shoot The Other Node In The Head. STONITH peripherals are devices that can completely shut down a server physically, e.g. a power strip that can be controlled via a serial port. If you wind up with a partitioned cluster, which they more colorfully call a 'split brain' condition, where each node thinks the other one is dead, each of them uses the STONITH device to make sure, if it is able. One of them will activate the STONITH device before the other, and the one which wins keeps on running, while the one that loses really kicks the bucket if it isn't fully dead. I imagine that Remus must have similar mechanisms to guard against split brain conditions as well. I've had several Linux-HA clusters go split brain on me, and I tell you it's never pretty. The best case is that they only both try to grab the same IP address and get an IP address conflict, in the worst case, they both try to mount and write to the same fiberchannel disk at the same time and bollix the file system. If a Remus-based cluster split brains, I can imagine that you'll get mayhem just as awful unless you have a STONITH-like system to prevent it from happening.
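For flavor, a toy STONITH-style fence along the lines described above, assuming a serial-controlled power strip; the pyserial usage is real, but the "OFF n" command string and outlet numbering are invented for this sketch (real clusters use the fencing agents that ship with Linux-HA/Pacemaker):

import serial  # pyserial

def stonith(peer_outlet, port="/dev/ttyS0"):
    # Cut power to the peer before taking over its resources, so a
    # split-brained peer cannot keep writing to shared storage.
    with serial.Serial(port, 9600, timeout=2) as strip:
        strip.write(f"OFF {peer_outlet}\r\n".encode())

def on_peer_heartbeat_timeout():
    stonith(peer_outlet=2)   # whoever fences first is the survivor
    # ...now it is safe to claim the shared IP and mount the shared disk.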
Re: (Score:2)
Sounds like a godawful mess; glad I've never had to deal with a split brain. We manage mostly Solaris clusters, and they're pretty good about panicking a node when there's a chance the cluster risks becoming inconsistent (loss of quorum). If you're already syncing disks, like in this case, it shouldn't be too difficult to set up a quorum device or HACMP-like disk heartbeats. Doesn't Linux-HA support this type of setup?
Re: (Score:2)
I ran some cluster software (Veritas) on Solaris and later Linux. The Solaris version was great. If a node lost sync, it panicked, rebooted, and attempted to rejoin. If it couldn't join the quorum, it didn't do anything. The Linux version had frequent single-node splits. If a node lost sync, it would dump a kernel stack trace to the serial console (taking several minutes), and then pick up where it left off.
Technically, the Solaris cluster needed the same STONITH system that the Linux cluster needed. P
I don't know how Dr. Breen is doing it. . . (Score:2)
Re: (Score:2)
I'd think that'd be the easy part, much easier than having shared storage. The synchronization to make sure writes against shared storage happened exactly once would be much harder.
Answer (Score:5, Informative)
I've worked with Remus, so I can answer your question.
It's not "constantly going" into live migration. The backup image is constantly kept in a "paused" state. It doesn't come out of the paused state until communication with the original is broken.
Until the backup goes live, the shadow pages for memory are updated, via checkpoints. The checkpointing interval is somewhat variable, but it's actually hardcoded into the Xen software (at present - this will change), regardless of what the user level utility tells you.
As it is, the subsecond checkpointing doesn't work too well. But intervals of about 1-2 seconds work great. Getting subsecond checkpointing can be done (I've done it), but you need extra code beyond what Remus currently provides.
Similar comments are applicable to the storage updating. This works absolutely superbly if you're using something like DRBD for the storage replication.
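To make the cycle concrete, here is a pseudocode-style sketch of what the parent describes; the hypervisor-control helpers (the 'hv' methods) are placeholders for this illustration, not actual Xen or Remus APIs:

import time

def primary_checkpoint_loop(vm, backup, hv, interval=1.0):
    # Runs on the primary; the backup image stays paused and only activates
    # when checkpoints stop arriving (i.e. communication is broken).
    while True:
        time.sleep(interval)
        hv.pause(vm)                        # briefly stop the guest
        dirty = hv.copy_dirty_pages(vm)     # memory changed since the last checkpoint
        hv.resume(vm)                       # guest keeps running speculatively
        hv.send_checkpoint(backup, dirty)   # disk replication handled separately (e.g. DRBD)
        hv.wait_for_ack(backup)
        hv.release_buffered_output(vm)      # outbound packets are held until the state
                                            # they depend on is safe on the backup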
Remus is pretty cool technology, and it serves as a very solid foundation for taking things to the next level.
The folks at UBC have done a superb job here, and should be well congratulated.
Re: (Score:2)
From http://www.usenix.org/events/nsdi/tech/full_papers/cully/cully_html/index.html [usenix.org]