Software Technology

Remus Project Brings Transparent High Availability To Xen

An anonymous reader writes "The Remus project has just been incorporated into the Xen hypervisor. Developed at the University of British Columbia, Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host. Remus requires no modifications to the OS or applications within the protected VM: on failure, Remus activates the replica on the second host, and the VM simply picks up where the original system died. Open TCP connections remain intact, and applications continue to run unaware of the failure. It's pretty fun to yank the plug out on your web server and see everything continue to tick along. This sort of HA has traditionally required either really expensive hardware, or very complex and invasive modifications to applications and OSes."
  • by Lurching ( 1242238 ) on Wednesday November 11, 2009 @06:50PM (#30066950)
    They may have a patent too!!
    • Re: (Score:3, Insightful)

      I'll bet a paycheck that prior art in various incarnations would handily dispatch any such patent. As for it already being done by VMware, a lot of organizations prefer a purely open source solution, and Xen works extremely well for many companies.
    • Re: (Score:1, Insightful)

      Yeah, and at a great price point. *rolleyes*

      IIRC, to get this kind of functionality from ESX or vSphere you have to pay license fees running into the thousands of dollars for each VM host, as well as a separate license fee for their centralized Virtual Center management system. I'm glad to see that this is finally making it into the Xen mainline.
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        To anyone who actually needs this kind of uninterrupted HA the cost of a VMware license is an insignificant irrelevance. Of course, it's nice that people can play around with HA at home now for free.
        • by Anonymous Coward

          I think you're forgetting academic institutions, startups, research groups, and all the other organizations that would MUCH rather spend their money on other things than VMware when a free alternative is available.... Or any place that just wants to keep a pure open source environment.

          For that matter, why would anyone NOT want HA if they can get it easily and cheaply?

          Just because VMware has it does not in any way reduce the significance of Remus making it easily available in Xen.

        • To anyone who actually needs this kind of uninterrupted HA the cost of a VMware license is an insignificant irrelevance.

          But now, we who don't actually need completely uninterrupted HA can have it anyway, and as a bonus it will probably be easier to set up and maintain than a semi-custom "only one minute of downtime" HA solution. This is a good thing indeed.

        • Remember when virtualization was only something for companies with highly specialized needs? And RAID? And cooled CPUs? And hard drives? and computers?

          When a solution like this comes along, it generally starts out being used only by a few people (nerds and people who REALLY need it)
          Then it filters down into the rest of the market as a nice solution to a common problem.
          Then it becomes something which nobody can imagine living without.
          Then it becomes unthinkable to design a system which doesn't have this abil

    • Nope (Score:4, Insightful)

      by Anonymous Coward on Wednesday November 11, 2009 @07:35PM (#30067410)

      The Remus team presented their software well before VMware came out with their product.

      What's different now is that the Remus patches have finally been incorporated into the Xen source tree.

      If VMware has any patents, they'll have to clear the hurdle of predating the Remus work, which was originally published quite a while ago.

      Besides, Remus can be used in more ways than what VMware offers, since you have the source code.

      • What's different now is that the Remus patches have finally been incorporated into the Xen source tree.

        Hear, hear! I spent my summer research internship this year incorporating Remus patches into the Xen source tree for use on a departmental project. It was two months of bloody hacking to make the patched source, the build system, and the use environment cooperate well enough to actually get a Remus system running and backing up its VMs over the network. We never got it perfect.

      • by spotter ( 5662 )

        The Remus paper references VMware's high availability. (It was also published in 2008, about 1.5 years ago, though I don't know when it first started to be used; possibly before then.)

        However, incremental checkpointing precedes both. See (pulling from my BibTeX for a paper I helped write):

        author = "J. S. Plank and J. Xu and R. H. B. Netzer",
        title = "{Compressed Differences: An Algorithm for Fast
        Incremental C

    • by TheRaven64 ( 641858 ) on Wednesday November 11, 2009 @08:00PM (#30067622) Journal
      I know that a company called Marathon Technologies owns a few patents in this area. A few of their developers were at the XenSummit in 2007 where the project was originally presented.
      • We use our product with Marathon's everRun FT. Just starting to do load testing using Xen with their 2G product. It looks nice, but the second layer of management gets to be a pain.

    • by nurb432 ( 527695 )

      And it didn't require any "really expensive hardware, or very complex and invasive modifications" to do it. Not saying it's going to run on some old beat-up Pentium Pro from 10 years ago, but the hardware I see it run on every day isn't out of line for a modern data center.

      And it requires ZERO changes to the OS.

      (At risk here of sounding like a VMware fanboy, but come on... at least they can present facts when tooting their horn.)

    • by smash ( 1351 )
      Beaten to it. ESX 4.0 has VMware FT, and "lockstep" is patented, I believe...
      • by jipn4 ( 1367823 )

        This sort of stuff is far older; it goes back to mainframe days and supercomputing.

        Furthermore, the idea of running two machines in lockstep and failing over shouldn't be patentable at all. Specific, particularly clever implementations of it might be, but those shouldn't preclude others from creating other implementations of the same functionality.

      • Prior art from HP, which used to do this in Pentium-based NetServers?

        Granted, that was real hardware as opposed to software, but perhaps?
    • I'd be surprised if the whole field isn't absolutely blanketed with patents by IBM. Mainframes have had this since the '80s or '90s, I think.
  • It's pretty fun to yank the plug out on your web server and see everything continue to tick along.

    Or an ordinary, everyday, run-of-the-mill, 'off the shelf', plain-Jane beige UPS. Or a Ghetto one [dansdata.com], if you'd like.

    Still, it's pretty cool; I'm just wondering how much overhead there is in setting up this system.

    • Re: (Score:2, Insightful)

      if it's a webserver, what's the big deal? Run 4 and if 1 drops off, stop sending it requests. For an app server, I can see the advantages.
      • Re:It's pretty fun (Score:5, Informative)

        by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Wednesday November 11, 2009 @07:28PM (#30067328)

        Uuum... session management? Transaction management? The server dying in the process of something that costs money?
        Even if it's something as simple as losing the contents of your shopping cart just before you wanted to buy, and then becoming angry at the stupid ass retarded admins and developers of that site.
        Or losing the server connection in your flash game, right before saving the highscore of the year.

        Webservers are far less stateless than you might think. Nowadays they practically are app servers. (Disclosure: I've been doing web applications since 2000, so I know a bit about the subject.)

        When 5 minutes of downtime means over a hundred complaints in your inbox and tens of thousands of dropped connections, which your boss does not find funny at all, you don't make that mistake again.

        • Re:It's pretty fun (Score:4, Insightful)

          by Fulcrum of Evil ( 560260 ) on Wednesday November 11, 2009 @07:50PM (#30067536)

          Webservers are far less stateless than you might think. Nowadays they practically are app servers. (Disclosure: I've been doing web applications since 2000, so I know a bit about the subject.)

          Webservers have no business being the sole repository for these things - the whole point of separating out web from app is that web boxes are easily replaceable with no state.

          Session mgmt: store the session in a distributed way, at least after each request. Transactions: they fail if you die halfway through. Shopping cart: this doesn't live on a web server. (A sketch of the distributed-session idea follows at the end of this comment.)

          If you require all that state, how do you ever do load balancing? Add a web server and it's another SPOF.

          When 5 minutes of downtime means over a hundred complaints in your inbox and tens of thousands of dropped connections, which your boss does not find funny at all, you don't make that mistake again.

          That's right, you move the state off the webserver so nobody ever sees the downtime and tell your boss that you promised 99.9 and damnit, you're delivering it!
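          A minimal sketch of that "store the session in a distributed way" idea, assuming a memcached instance on 127.0.0.1:11211 and the python-memcached client; the key scheme and field names here are made up for illustration:

          # Keep session state in a shared memcached pool instead of on the web
          # server, so any box in the pool (or a failover replica) can serve the
          # next request. Host/port and key layout are invented for this sketch.
          import json
          import memcache

          mc = memcache.Client(["127.0.0.1:11211"])

          def save_session(session_id, data, ttl=1800):
              # Write the whole session after each request; it expires after ttl seconds.
              mc.set("session:" + session_id, json.dumps(data), time=ttl)

          def load_session(session_id):
              raw = mc.get("session:" + session_id)
              return json.loads(raw) if raw else {}

          # Any web server can now pick up where another left off.
          save_session("abc123", {"cart": ["sku-42"], "user": "alice"})
          print(load_session("abc123"))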

          • by shmlco ( 594907 )

            "Session mgmt: store the session in a distributed way at least after each request."

            Bingo. With your solution, a submitted page request will fail. In fact, every page request and connection being handled by that server when it fails will fail.

            With the article's solution, things automagically switch over and everyone gets the data they requested. Users notice nothing.

            "... so nobody ever sees the downtime..."

            Except all of the users who clicked register or buy and got nothing at all.

            • it's a choice between reliability and complexity, and complexity has its own reliability problems. Ideally, the HA solution is best, but it relies on a lot more than the simple solution. The users that get an error can try again and it will work. I did say that it's mostly useful for the app server layer, right?
              • by jon3k ( 691256 )
                Yeah and one webserver would be even simpler and less reliable. And no webserver would be even simpler. Good argument.
                • actually, yes. One webserver means no loadbalancing hardware to fail. LB is a mature tech and means that you can treat your N web servers as independent and also scale boxes out individually instead of in pairs.
        • by radish ( 98371 )

          Web servers are stateless and sit in front of app servers, which are stateful but which have their sessions propagated to at least one other instance. When a web server dies no one cares; if an app server dies you just need some logic that allows the box which gets the next request in the session to either (a) redirect the request to the app server which was the backup for that session or (b) pull the session into its own cache from the backup.
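          A hedged sketch of option (b), assuming a simple table that records which app server holds the backup copy of each session; every name here (fetch_from, backup_owner, the take-over logic) is hypothetical:

          # Failover option (b): if the session isn't cached locally, pull the
          # replica from the backup app server that holds it. All helpers are
          # hypothetical stand-ins for whatever transport the cluster uses.
          local_cache = {}    # this app server's in-memory session cache
          backup_owner = {}   # session_id -> address of the backup app server

          def fetch_from(address, session_id):
              # Placeholder for an RPC/HTTP call to the backup's session store.
              raise NotImplementedError("transport-specific")

          def get_session(session_id):
              session = local_cache.get(session_id)
              if session is None:
                  # The primary for this session died; adopt the replica locally.
                  session = fetch_from(backup_owner[session_id], session_id)
                  local_cache[session_id] = session
              return session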

        • I don't know about you, but my web apps don't let the web server handle session and transaction management. That's what I have a database server for; it's capable of dealing with those issues in a known way that I can recover from to some extent. My important web apps use clusters of databases that take care of each other. There's a reason Oracle costs a fortune and MySQL is free. I can't stand working with Oracle, but there's a reason it exists. Of course you don't have to use Oracle, that's just one

          • by shmlco ( 594907 )

            "I can turn off one of my web servers or database servers, literally killing tens of thousands of connections, and the worst case is a half a second of delay or so while the cluster removes it from the loop. The most the user sees is some web pages don't load some content."

            So if that server is running a shopping cart, then "thousands" of users might just have had their credit card submissions fail. They don't get confirmations and they don't know if the order went through or not. And I'd almost guarantee th

        • It depends. You can engineer a system to be very stateful, and you have to route the same client to the same webserver in order to maintain functionality. Or, you can build a totally stateless webserver, with all data stored on db servers and/or memcache installs. It's not hard to do this with many different web frameworks. So I disagree with you on the facts: many, many webservers these days are totally stateless. Perhaps you program in .NET? I have no idea how they do things.

      • Re:It's pretty fun (Score:4, Insightful)

        by stefanlasiewski ( 63134 ) <slashdotNO@SPAMstefanco.com> on Wednesday November 11, 2009 @08:11PM (#30067730) Homepage Journal

        In many cases, the webserver IS the app server.

        This sort of feature could be very useful for those smaller shops and cheap shops who haven't yet created a dedicated Web tier, or for all those internal webservers which host the Wiki, etc.

        Webservers also help with capacity. Run 4 and if 1 drops off, not a big problem. But what if half the webservers drop off because the circuit which powers that side of the cage went down? And the 'redundant' power supplies on your machines weren't really 'redundant' (Thanks Dell)?

    • by smash ( 1351 )
      A UPS does not protect against CPU/motherboard/RAM hardware failure. This sort of HA does.
      • No, it doesn't.
        This sort of solution protects from a limited subset of faults.

        It protects 100% from any fault that causes instant death.

        It does not protect from any fault that causes data corruption, where the system continues to run.

        Undetected bit-errors cause the states across the machines to differ.

        If these bit errors are replicated, you've got a machine in a copied, but corrupt state - the original and the copy may crash at exactly the same point.

        If they aren't, then you may get 'lucky', and have it fai

        • by afidel ( 530433 )
          Silent bit errors on current server-class hardware should be vanishingly rare; the buses and memory are protected by ECC.
          • Server class hardware should never have hardware faults either.

            Yes, server class hardware is usually more robust than consumer grade in terms of some bit errors.

            However, in the last minutes or seconds before a crash due to hardware failure, something is obviously going way out of spec.

            If this is detectable - it's a no-brainer - you simply failover if thresholds are breached, but before the crash occurs. (and you can afford to be a _lot_ more critical if you've got spare hardware)

            But a fair proportion of cra

    • by Jeremi ( 14640 )

      Or an ordinary, everyday, run-of-the-mill, 'off the shelf', plain-Jane beige UPS. Or a Ghetto one, if you'd like.

      Sure, but power failure isn't the only thing that can stop your server from running -- it's just the easiest one to reproduce without permanently damaging anything. If you'd like a better example, yank the CPU out of your web server's motherboard instead. Your UPS won't save you then! :^)

  • Himalaya (Score:3, Interesting)

    by mwvdlee ( 775178 ) on Wednesday November 11, 2009 @06:57PM (#30067042) Homepage

    How does this compare to a "big iron" solution like Tandem/Himalaya/NonStop/whatever-it's-called-nowadays?

    • It doesn't. HP Non-Stop is a beast.
    • Re: (Score:3, Informative)

      by Jay L ( 74152 ) *

      I was just thinking that...

      Tandems may still have other advantages, though; back in the day, we built a database on Himalayas/NSK because, availability aside, it outperformed Sybase, Oracle, and other solutions. (They implemented SQL down at the drive controller level; it was ridiculously efficient.) No idea if that's still the case.

      But Tandem required you to build their availability hooks into your app; it wasn't transparent. OTOH, Stratus's approach is; a Stratus server is like having RAID-1 for every com

      • Re:Himalaya (Score:5, Interesting)

        by teknopurge ( 199509 ) on Wednesday November 11, 2009 @07:22PM (#30067282) Homepage
        VM replication like this still has an IO bottleneck. This isn't magic: unless you move to InfiniBand you're not going to touch something like a Stratus or NonStop machine. By the time you add in the cost of the high-perf interconnects, you're on par with the real-time boxes. There's all this convergence going on, with people redesigning the mainframe, but ass-backward, with client/server gear. It makes little sense to me other than as a gimmick.

        By the time you get all the components that provide the processing and I/O throughput of those high-end boxes, the x86/64 commodity hardware cost advantage has evaporated.
        • Huh? We have a SAN, son. You need more throughput? Add another 4 or 8 Gb trunk and bam, you've added significant bandwidth. With individual blades having dual 8 Gb HBAs, you have quite a bit of IO available to you, assuming proper PCI-E. There is an upper limit beyond which you shouldn't be virtualizing infrastructure, but that limit is moving ever higher. I don't know about you, but I have a NetApp-based storage array with redundant switching gear that is more than capable of keeping up with the IO of having 20 servers
          • Re: (Score:1, Informative)

            by Anonymous Coward

            The IO bottleneck in this case is the interconnect between the two machines, not the disk, so the SAN isn't relevant. VMware FT needs at least a dedicated GbE NIC for replay/lockstep traffic (I think the recommendation is 10Gb), and it is still limited to a single vCPU in the VM.

          • The fact that you're comparing NonStop/Stratus to the IO of a SAN is comical. There's a reason you don't virtualize large RDBMSes in production environments: they fall over.

            Exchange is not a "high IO application". A high IO application is something like all the ATM transactions for Chase bank in North America. If you can have 20 servers on a single physical host, you're doing it wrong: your apps aren't heavy by a long shot.
            • Re:Himalaya (Score:4, Insightful)

              by Vancorps ( 746090 ) on Thursday November 12, 2009 @12:32AM (#30069348)

              Were you replying to my comment? Because it doesn't sound like you read my comment. I specifically said there are cut-off points where virtual infrastructure doesn't make sense.

              Also, the fact that you think the IO of a SAN is any different from that of an HP NonStop setup is where things get really comical, because you're talking about InfiniBand, which is used in x86 hardware as well. As I said, the threshold is moving into higher and higher workloads.

              I'm also not sure where you get your information about Exchange not being IO intensive. Exchange setups easily handle billions of transactions, just like the big RDBMSes out there. That's why, when you evaluate virtual platforms, they always ask you about your Exchange environment as well as your database environment. They are both considered high IO applications, as practically all they do is read from and write to disk.

              I find the whole concept of your argument funny, considering the NonStop setups were early attempts at abstraction from the hardware to handle failure and spread the load. In essence it was the start of virtual infrastructure. There is a reason NonStop isn't a primary part of HP's business anymore: people are achieving what they need with commodity hardware. Sorry, but you do indeed save a lot of money that way too. Enterprise crap used to cost boatloads; now it is accessible to much smaller players with smaller workloads but the same demands for uptime.

              • by afidel ( 530433 )
                Exchange hasn't been high I/O since 2007, and when 2010 launches it gets even better. A big enough environment might still see some decent IOPS, but in all likelihood nothing like the same organization's DB environment.
        • by Jeremi ( 14640 )

          By the time you get all the components that provide the processing and I/O throughput of those high-end boxes, the x86/64 commodity hardware cost advantage has evaporated

          I think the potential savings come not so much from the hardware as from not having to redesign/rewrite your low-availability (tm) software from scratch in order to make it highly available. Instead you just slap your existing software into the new Remus VM environment, connect the backup machine, and call it done.

          (Whether or not that m

        • Re: (Score:3, Interesting)

          We had a 700-kline app written in some Tandem-specific application language. The smallest server we could get from HP was 400 K$. We rewrote the app in Python to use pairs of servers replicating via DRBD over Ethernet, with a load balancer in front. DRBD is slow, but with the new app I could just add pairs of nodes. We already had such a configuration for another application, and we combined the two, so the hardware cost was just adding two nodes in this cluster, at about 4 K$ per server node. 400 K$ -
          • by lewiscr ( 3314 )

            You forgot to account for the time it took you to rewrite the app. Porting 700 kLOC in an obscure language doesn't sound like something one guy did in a week.

            Without the data, I'll still assume it's cheaper. It would take a couple of man-years to make up the difference. But it's not a 98% cost savings.

            • Funny you should mention the app. We replaced 750 klines of application code in TAL with about 20,000 lines of Python, roughly a 97% reduction in code size. Yes, it took a couple of years; this is mission-critical stuff. We tackled one functionality facet at a time.
        • by jon3k ( 691256 )
          "By the time you get all the components that provide the processing and I/O throughput of those high-end boxes, the x86/64 commodity hardware cost advantage has evaporated."

          One word: scaling
        • by bl8n8r ( 649187 )

          > Unless you move to infiniband you're not going to touch something like a Stratus

          I don't know who makes the Infiniband, but the Stratus is only a V6 at best. It's not *that* fast.

      • Re: (Score:3, Informative)

        by Cheaty ( 873688 )

        Actually, after reading the paper, this is no threat to Stratus or other players in the space like Marathon or VMware's FT. The performance impact is pretty significant - by their own benchmarks there was a 50% perf hit in a kernel compile test, and 75% in a web server benchmark.

        This is an interesting approach, and it seems to handle multiple vCPUs in the VM, which I haven't seen done by software approaches like Marathon and VMware FT, but I think it will mainly be used in applications that would have never

    • by Anonymous Coward

      How does this compare to a "big iron" solution like Tandem/Himalaya/NonStop/whatever-it's-called-nowadays?

      Precisely.

      It's actually pretty cool from a computing history perspective. Once upon a time, the mainframes were the bad-assed machines. Hot-swapping power supplies and core modules. Several nines of uptime. Now we're doing it in software.

      I see it as a mirror to what's happening with data storage and the whole "cloud computing" thing. Going back and forth between big hosted machines with dumb clients and smaller, smarter machines. It's like we flip back and forth every few years when it comes to computer ideolog

      • by mwvdlee ( 775178 )

        I'm not comparing this to mainframes in general, only to the "redundant" types.

        This isn't going to compare to a general mainframe simply because it doesn't have the massive resources (CPUs, disk space, memory, bandwidth, etc.).

        A lot of those Tandems aren't used like a typical mainframe, though. Sure, they may offer more resources than this Remus project solution, but many Tandem applications don't need those resources; they only need the redundancy and as-near-to-100%-as-possible-at-any-expense uptime.

        An

  • Intact? (Score:5, Informative)

    by Glock27 ( 446276 ) on Wednesday November 11, 2009 @07:00PM (#30067078)
    Intact is one word, O ye editors...
  • state transfer (Score:4, Insightful)

    by girlintraining ( 1395911 ) on Wednesday November 11, 2009 @07:03PM (#30067110)

    ... Of course, this ignores the fact that if it's a software glitch, it'll happily replicate the bug into the copy. Also, there are certain hardware bugs that will also replicate: Mountain dew spilled on top of the unit, for example. There's this huge push for virtualization, but it only solves a few classes of failure conditions. No amount of virtualization will save you if the server room starts on fire and the primary system and backup are colocated. Keep this in mind when talking about "High Availability" systems.

    On a different note, nothing that's claimed to be transparent in IT ever is. Whenever I hear that word, I usually cancel my afternoon appointments... Nothing is ever transparent in this industry. Only managers use that word. The rest of us use the term "hopefully".

    • by Garridan ( 597129 ) on Wednesday November 11, 2009 @07:36PM (#30067424)

      Mountain dew spilled on top of the unit, for example.

      FTFS:

      Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host.

      Wow! This software is *incredible* if mountain dew spilled on top of one machine is instantly replicated on the other machine! I'm gonna go read the source immediately, this has huge ramifications! In particular, if an officemate gets coffee and I also want coffee, only one of us needs to actually purchase a cup!

      • Re: (Score:3, Funny)

        Wow! This software is *incredible* if mountain dew spilled on top of one machine is instantly replicated on the other machine! I'm gonna go read the source immediately, this has huge ramifications! In particular, if an officemate gets coffee and I also want coffee, only one of us needs to actually purchase a cup!

        I told them quantum computing was a bad idea, but nobody listened...

        I told them quantum computing was a bad idea, but nobody listened...

        I told them...

    • Re:state transfer (Score:4, Interesting)

      by Vancorps ( 746090 ) on Wednesday November 11, 2009 @07:38PM (#30067442)

      If your primary and secondary systems are physically located next to each other, then they aren't in the category of highly available. Furthermore, with storage replication and regular snapshotting you can have your virtual infrastructure at your DR site on the cheap, while gaining enterprise availability and, most importantly, business continuity.

      I'll agree with being skeptical about transparency, although how many people already have this? I went with XenServer and Citrix Essentials for it; I already have this failover and I can tell you that it works. I physically pulled a blade out of the chassis and, sure enough, by the time I got back to my desk the servers were functioning, having dropped a whole packet. Further tweaking of the underlying network infrastructure resulted in keeping the packet, with just a momentary rise in latency.

      Enterprise availability is fast coming to the little guys.

      • Re: (Score:3, Informative)

        by bcully ( 1676724 )
        FWIW, we have an ongoing project to extend this to disaster recovery. We're running the primary at UBC and a backup a few hundred KM away, and the additional latency is not terribly noticeable. Failover requires a few BGP tricks, which makes it a bit less transparent, but still probably practical for something like a hosting provider or smallish company.
        • How much bandwidth is needed for the connection on a per-machine basis? Asked another way - if I had 10 machines that I wanted to use this approach on, how fast of a connection would I need? At what levels of latency do problems start?

          • Re:state transfer (Score:5, Informative)

            by bcully ( 1676724 ) on Wednesday November 11, 2009 @08:23PM (#30067844)
            It depends pretty heavily on your workload. Basically, the amount of bandwidth you need is proportional to the number of different memory addresses your application wrote to since the last checkpoint. Reads are free -- only changed memory needs to be copied. Also, if you keep writing to the same address over and over, you only have to send the last write before a checkpoint, so you can actually write to memory at a rate which is much higher than the amount of bandwidth required. We have some nice graphs in the paper, but for example, IIRC, a kernel compilation checkpointed every 100ms burned somewhere between 50 and 100 megabits. By the way, there's plenty of room to shrink this through compression and other fairly straightforward techniques, which we're prototyping.
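            A rough worked example of that "proportional to dirtied pages" point; the page count and epoch length below are illustrative assumptions, not measurements from the paper:

            # Back-of-the-envelope replication bandwidth: only pages dirtied since
            # the last checkpoint are sent, and repeated writes to one page still
            # cost just one page. The dirty-page count here is an assumption.
            PAGE_BYTES = 4096            # x86 page size
            EPOCH_S = 0.1                # checkpoint every 100 ms
            dirty_pages_per_epoch = 200  # assumed distinct pages written per epoch

            bits_per_second = dirty_pages_per_epoch * PAGE_BYTES * 8 / EPOCH_S
            print("~%.0f Mbit/s" % (bits_per_second / 1e6))
            # ~66 Mbit/s -- the same ballpark as the 50-100 megabit figure quoted
            # above for a kernel compile checkpointed every 100 ms.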
            • Cool. Thanks for the info.

            • Plenty of room for a Riverbed or Cisco WAAS in between to accelerate transfers as well. Sounds like you and I want to use the tech in similar ways.

              For me, I don't mess with BGP yet; I can accomplish what I need through virtual links with OSPF. It won't be as smooth as my per-site failover, since I have two locations on site. It's a temporary setup, so I have three locations: a primary at our event, a secondary at our event, and a third back at HQ, with a fourth on its way for DR purposes. Sucks moving your netw

      • Re: (Score:3, Interesting)

        by shmlco ( 594907 )

        "If your primary and secondary systems are physically located next to each other then they aren't in the category of highly available."

        High availability covers more than just distributed data centers. Load balancing, failover, clustering, mirroring, redundant switches, routers, and other hardware: all are no-single-point-of-failure, high-availability solutions.

      • You're confusing high availability with disaster recovery. Don't worry, my managers can't get it right either.

  • by melted ( 227442 ) on Wednesday November 11, 2009 @07:33PM (#30067398) Homepage

    I'm pretty sure that if I just yank the cable, not everything will be replicated. :-)

    • by bcully ( 1676724 ) on Wednesday November 11, 2009 @07:41PM (#30067480)
      Hello slashdot, I'm the guy that wrote Remus. It's my first time being slashdotted, and it's pretty exciting! To answer your question, Remus buffers outbound network packets until the backup has been synchronized up to the point in time where those packets were generated. So if you checkpoint every 50ms, you'll see an average additional latency of 25ms on the line, but the backup _will_ always be up to date from the point of view of the outside world.
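      A conceptual sketch of that checkpoint-and-buffer cycle, written as Python pseudocode; the object names and methods are invented, and this glosses over how Xen actually tracks dirty pages and buffers packets:

      # Remus-style epoch loop: pause briefly, capture changed state, resume
      # speculatively, and only release buffered network output once the backup
      # has acknowledged the checkpoint that output was generated under.
      def remus_epoch(vm, backup, net_buffer):
          vm.pause()
          dirty = vm.copy_dirty_pages()     # memory changed since the last epoch
          epoch = net_buffer.mark()         # output buffered so far was generated
                                            # against pre-checkpoint state
          vm.resume()                       # keep running while we replicate

          backup.send(dirty)
          backup.wait_for_ack()             # backup can now reproduce that state
          net_buffer.release_up_to(epoch)   # safe to let that output reach the world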
      • How does remus handle things if it mispredicts the packets?

        Supposing that it sends packet X, crashes, and then when it's restored from checkpoint it decides to send packet Y instead?

        Schroedinger

      • Re: (Score:3, Interesting)

        by BitZtream ( 692029 )

        No, it won't.

        VMware claims the same crap and it's simply not true.

        You have a 50ms window between checkpoints that can be lost, in your example. The only way to ensure nothing is lost is to ensure that every change, every instruction, every microcode op executed in the CPU on machine A is duplicated on B before A continues to the next one. You simply can't do that without specialized hardware, since you don't even have access to the microcode as it's executed on standard hardware.

        50ms on my hardware/software can mean th

        • by bcully ( 1676724 ) on Wednesday November 11, 2009 @09:19PM (#30068236)
          I think you're missing the point of output buffering. Remus _does_ introduce network delay, and some applications will certainly be sensitive to it. But it never loses transactions that have been seen outside the machine. Keeping an exact copy of the machine _without_ having to synchronize on every single instruction is exactly the point of Remus.
        • This isn't true. A fully recoverable abstraction can be maintained without digging into the architecture. You just need a point, periodically, where you flush everything and define a consistent checkpoint.

          Personally I prefer doing this in the database, or operating system, or application, but suggesting that you can't do this underneath is simply wrong. It just comes down to performance.

        • by Antique Geekmeister ( 740220 ) on Wednesday November 11, 2009 @11:08PM (#30068950)

          If your application cannot tolerate a 50 msec pause in outbound traffic (which is what Remus seems to introduce, similar to VMWare switchovers) then you have no business running it over a network, much less over the Internet as a whole. Similar pauses are introduced in core switching and core routers on a fairly frequent basis, and are entirely unavoidable.

          There are certainly classes of application sensitive to that kind of issue: various "real-time-programming" and motor control sensor systems require consistently low latency. But for public-facing, high-availability services, it seems useful, and much lighter to implement than VMware's expensive solutions.

          • Indeed. With the right (or more accurately, wrong) file system, IO scheduler, RAID layout, and workload, you can push your disk latency to well over 50 ms before it has a chance to get to the wire's buffer. The objective is to avoid hours of latency, not milliseconds. TCP/IP will take care of the road bumps if you make sure that the road doesn't stop at the edge of a cliff.
          • It's not 'one 50ms pause' that's the problem; it's 'one 50ms pause for every sort of communication with external hosts of any sort'.

            Open a database connection, for instance:
            VM sends start request, wait for checkpoint (50ms)
            DB responds to packet with ACK
            VM sends response ACK
            VM sends DB handshake start, wait for checkpoint (50ms)
            Server responds with server info
            VM sends DB protocol version requested, wait for checkpoint (50ms)
            Server responds.
            VM sends transaction start request, wait for checkpoint (50ms)
            Server resp

            • From a Remus whitepaper:

              http://www.usenix.org/events/nsdi/tech/full_papers/cully/cully_html/index.html [usenix.org]

              We then evaluate the overhead of the system on application performance across very different workloads. We find that a general-purpose task such as kernel compilation incurs approximately a 50% performance penalty when checkpointed 20 times per second, while network-dependent workloads as represented by SPECweb perform at somewhat more than one quarter native speed. The additional overhead in this case is

      • by msh104 ( 620136 )

        Hi bcully,

        Mind if I ask you something?

        Currently I am running a Xen setup where we replicate the storage between two machines using DRBD.
        Live migration is supported in this scenario, and failover is said to be as well, though I haven't gotten around to checking that out yet.

        1. Are there any advantages to using Remus over such a setup? (Other than being much easier to set up :p)
        2. Would it be possible to use proven solutions like DRBD with Remus, or does this simply miss the point?

        I'll be sure to check it out when it

  • by mattbee ( 17533 ) <matthew@bytemark.co.uk> on Wednesday November 11, 2009 @09:18PM (#30068232) Homepage

    Surely there is a strong possibility of a failure where both VMs run at once - the original image thinking it has lost touch with a dead backup, and the backup thinking the master is dead, and so starting to execute independently? If they're connected to the same storage / network segment, it could cause data loss, bring down the network service, and so on. I've not investigated these types of lockstep VMs, but it seems you have to make some pretty strong assumptions about failure modes, which eventually always break on commodity hardware (I've seen bad backplanes, network chips, CPU caches, RAM of course, switches...). How can you possibly handle these cases to avoid having to mop up after your VM is accidentally cloned?

    • by bcully ( 1676724 ) on Wednesday November 11, 2009 @09:26PM (#30068270)
      Split brain is a possibility, if the link between the primary and backup dies. Remus replicates the disks rather than requiring shared storage, which provides some protection over the data. But there are already a number of protocols for managing which replica is active (e.g., "shoot-the-other-node-in-the-head") -- we're worried about maintaining the replica, but happy to use something like linux-HA to control the actual failover.
    • by dido ( 9125 ) <dido&imperium,ph> on Wednesday November 11, 2009 @09:51PM (#30068462)

      This is something that the much simpler Linux-HA environment deals with by using something they call STONITH, which basically means Shoot The Other Node In The Head. STONITH peripherals are devices that can completely shut down a server physically, e.g. a power strip that can be controlled via a serial port. If you wind up with a partitioned cluster, which they more colorfully call a 'split brain' condition, where each node thinks the other one is dead, each of them uses the STONITH device to make sure, if it is able. One of them will activate the STONITH device before the other, and the one which wins keeps on running, while the one that loses really kicks the bucket if it isn't fully dead. I imagine that Remus must have similar mechanisms to guard against split-brain conditions as well. I've had several Linux-HA clusters go split brain on me, and I tell you it's never pretty. The best case is that they both try to grab the same IP address and get an IP address conflict; in the worst case, they both try to mount and write to the same Fibre Channel disk at the same time and bollix the file system. If a Remus-based cluster split brains, I can imagine that you'll get mayhem just as awful unless you have a STONITH-like system to prevent it from happening.
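      For flavor, a toy sketch of that STONITH decision, with an entirely hypothetical power-strip API standing in for a real serial- or SNMP-controlled fencing device:

      # Toy STONITH logic: if the peer stops responding for long enough, cut its
      # power before taking over its resources, so a split brain can't leave two
      # nodes writing to the same disk. PowerStrip and the helpers are hypothetical.
      import time

      class PowerStrip:
          def cut_power(self, outlet):
              raise NotImplementedError("send the command to the real device here")

      def take_over_resources():
          pass  # hypothetical: claim the service IP, mount shared disks, start apps

      def monitor(peer_alive, strip, outlet, timeout_s=10):
          last_seen = time.time()
          while True:
              if peer_alive():
                  last_seen = time.time()
              elif time.time() - last_seen > timeout_s:
                  strip.cut_power(outlet)   # shoot the other node in the head
                  take_over_resources()
                  return
              time.sleep(1)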

      • Sounds like a godawful mess; glad I've never had to deal with a split brain. We manage mostly Solaris clusters and they're pretty good about panicking a node when there's a chance the cluster risks becoming inconsistent (loss of quorum). If you're already syncing disks like in this case, it shouldn't be too difficult to set up a quorum device or HACMP-like disk heartbeats. Doesn't Linux-HA support this type of setup?

        • by lewiscr ( 3314 )

          I ran some cluster software (Veritas) on Solaris and later Linux. The Solaris version was great. If a node lost sync, it panicked, rebooted, and attempted to rejoin. If it couldn't join the quorum, it didn't do anything. The Linux version had frequent single-node splits. If a node lost sync, it would dump a kernel stack trace to the serial console (taking several minutes), and then pick up where it left off.

          Technically, the Solaris cluster needed the same STONITH system that the Linux cluster needed. P

  • But taking transparent high availability to Xen [wikipedia.org] can't bode well for Gordon or the Vortigaunts...
