First Look At VMware's vSphere "Cloud OS"

snydeq writes "InfoWorld's Paul Venezia takes VMware's purported 'cloud OS,' vSphere 4, for a test drive. The bottom line: 'VMware vSphere 4.0 touches on almost every aspect of managing a virtual infrastructure, from ESX host provisioning to virtual network management to backup and recovery of virtual machines. Time will tell whether these features are as solid as they need to be in this release, but their presence is a substantial step forward for virtual environments.' Among the features Venezia finds particularly worthwhile is vSphere's Fault Tolerance: 'In a nutshell, this allows you to run the same VM in tandem across two hardware nodes, but with only one instance actually visible to the network. You can think of it as OS-agnostic clustering. Should a hardware failure take out the primary instance, the secondary instance will assume normal operations instantly, without requiring a VMotion.'"
  • Instantly? (Score:4, Insightful)

    by whereizben ( 702407 ) on Friday May 22, 2009 @04:19PM (#28059403) Journal
    With no delay at all? Somehow I don't believe it - there is always delay, but I wonder if it is "significant" enough to be noticed by an end-user.
    • by Inakizombie ( 1081219 ) on Friday May 22, 2009 @04:23PM (#28059437)
      Sure it's instant! There's just an item in the hardware requirements that states "Quantum processing required."
      • Re: (Score:3, Funny)

        by BSAtHome ( 455370 )

        Quantum processing... hmm, that means it will only happen when you look. That is definitely not a good idea. It should just work, and I should not be required to walk down to the cellar to find the damn hardware box I am using. Next I will be required to locate the processor before Heisenberg is satisfied.

    • by Amouth ( 879122 )

      Yes, it is instant. I'm not sure exactly how they are doing it at the moment, but basically the boxes work in tandem, syncing on a per-cycle basis... meaning not a single CPU cycle is lost.

      • by ergo98 ( 9391 )

        Yes, it is instant. I'm not sure exactly how they are doing it at the moment, but basically the boxes work in tandem, syncing on a per-cycle basis... meaning not a single CPU cycle is lost.

        There probably are scenarios where there is a delay while it tries to figure out whether the other participant is indeed down. However, the OP's question -- instant -- was best answered by the quantum processing response, because "instant" is in the mind of the assessor, and one woman's instant is another man's forever.


        • Re: (Score:1, Funny)

          and one woman's instant is another man's forever...

          I see that you too have been forced to wait for "an instant" while your girlfriend/wife does some "quick" shopping.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Basically the two nodes will be in constant communication with each other. All data sent to the primary node is also sent to the secondary node, and the primary and secondary have a constant link between each other. Both nodes will perform the same computations on the data, but only the primary will reply to the user.

        If the secondary node notices that the primary is not responding it will immediately send what the primary was supposed to have been sending back to the user.
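
        A minimal shell sketch of the poll-and-takeover pattern this describes (an illustration only, not VMware's actual FT mechanism; the addresses, interface name, and timings are all placeholders):

            #!/bin/sh
            # Hypothetical secondary-node logic: watch the primary, and if it
            # stops answering, claim the shared service IP and announce the
            # move with a gratuitous ARP so switches relearn the MAC location.
            PRIMARY=192.0.2.10      # placeholder: primary node's address
            SERVICE_IP=192.0.2.100  # placeholder: shared service address
            IFACE=eth0              # placeholder: interface carrying the service

            while ping -c 1 -W 1 "$PRIMARY" >/dev/null 2>&1; do
                sleep 1             # primary still alive; keep watching
            done

            # Primary went silent: bring the service address up locally...
            ip addr add "$SERVICE_IP/24" dev "$IFACE"
            # ...and send a gratuitous ARP so the network learns the new home.
            arping -U -c 3 -I "$IFACE" "$SERVICE_IP"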

      • So in a nutshell, it's like RAID 1 (mirroring) for your box? Interesting. Does anyone know: if an operation crashes the system, will it BSOD both systems? Or is it only physical/network faults that are tolerated?
        • Yep, if an operation takes out the system then the mirrored system will also die. They are in total lockstep, so there is no protection of the application. It gives you near-instant protection should the hardware or the hypervisor fail, but not if your app or OS fails.

    • Re:Instantly? (Score:5, Informative)

      by lightyear4 ( 852813 ) on Friday May 22, 2009 @05:12PM (#28059959)
      Instantly? Of course not. But the time required is equivalent to vmotion/live migration in bog-standard virtualization. How long? "That depends." To throw numbers at you, 30-100ms -- variance largely dependent upon how quickly your network infrastructure can react to MACs changing locations, whether in-flight TCP streams are broken as a result, etc. To help switches cope, people usually send a gratuitous ARP to jumpstart the process.
    • by RulerOf ( 975607 ) on Friday May 22, 2009 @05:13PM (#28059971)
      One of the statistics measured by VirtualCenter is exactly the lag you're asking about.

      The first hit on google images [ntpro.nl] should give you a good idea.

      In practice, I don't know... I imagine that the secondary instance will still receive network traffic bound for the cluster, so it'd probably be perceived as a hiccup when the primary one goes down, which is good enough for most services.
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      It actually is instant. I cannot elaborate as to how this works since I work at VMW, but there are videos of demos online, and I've seen it work. It's incredible. (These opinions are mine and not VMware's; I do not represent their opinions, etc.)
      • by JSG ( 82708 )

        Instant eh? Define instant in this context.

        To be honest I have found vMotion to be pretty much "instant". Is this thing *more* instant and if so is it more instant enough to justify its existence over vMotion?

        I present my opinions as a non-AC; if you are afraid of speaking freely, then you work for the wrong firm.

    • Re: (Score:3, Informative)

      by Mista2 ( 1093071 )

      It keeps a running copy on the failover host, reading from the same storage as the active host. It's as if the server were about to complete a VMotion, just without having done the final step. Outage time is a small hiccough, less than a second. Current running sessions just carry on; if it's uploading a file to someone, it just carries on. The outage is well within the tolerance of typical TCP sessions.

    • Re: (Score:3, Informative)

      by Thumper_SVX ( 239525 )

      It's close enough. I played with this feature at VMworld last year, and when running SQL transactions along with a ping, we dropped one packet but the SQL transactions didn't miss a beat.

      It's impressive enough... the two systems work in lockstep, such that even memory is duplicated between them. It's an extension of the existing VMotion function in VMware today. However, bear in mind it has some limitations: only one CPU is possible at the moment, and you still have the overhead of running the VM in lockstep on a second host.

  • Xen did it first (Score:3, Informative)

    by lightyear4 ( 852813 ) on Friday May 22, 2009 @04:40PM (#28059641)
    Check out the Kemari and Remus projects, which do precisely the same thing in Xen environments. In essence, it's a continual live migration (VMware people, think continual VMotion) that resumes virtual machine execution on the backup node if the origin node dies. Very cool tech. The demonstration involved pulling the plug on one of the nodes. For more information just search; there are code, papers, and presentation slides galore.
    • Re: (Score:1, Informative)

      by Anonymous Coward

      VMware FT is not based on continuous memory snapshotting; it uses deterministic record/replay, recording on the primary and replaying on the secondary simultaneously. You can find an overview of this technology at http://www.vmware.com/products/fault-tolerance [vmware.com]

      Also VMware demonstrated a working prototype as early as in 2007
      http://www.vmworld.com/community/conferences/2007/agenda/ [vmworld.com]

      With respect to Xen, doing a proof of concept is one thing; implementing and supporting it at production quality with sufficient performance is another.

      • Xen live migration does not involve "continuous memory snapshotting" either -- the referenced Kemari utilizes a combination of I/O triggers and observation of shadow page tables (or nested page tables, ideally, if the hardware supports them: AMD's RVI and Intel's EPT). Kemari's equivalent of a lockstep VM gets only hot updates of dirtied pages, not a full memory snapshot. The alternative would of course be a rather inefficient design.
      • Re: (Score:2, Informative)

        by qnetter ( 312322 )

        Marathon has had it working on Xen for quite a while.

    • Re: (Score:3, Informative)

      by ACMENEWSLLC ( 940904 )

      We have both vMotion and Xen.

      vMotion is very noticeable. Some things fail when it happens; ZENworks 6.5 is an example.

      With Xen, we set up a VNC mirror, i.e. the guest was VNC-viewing itself. We were moving a window around and then we moved the guest from Xen server 1 to 2 (we have iSCSI, BTW). There was a noticeable effect that lasted less than a second, but then we were on Xen #2.

      It's nice to see VMware getting this feature right with vSphere.

      • Sounds like a delay on the switch. Add a gratuitous ARP using arping in whatever vif-* script you're employing for virtual machine network interfaces, and that problem will disappear.
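
        For instance, a hypothetical tail end of a Xen vif-* script (the $ip and $dev variables are assumptions about what your particular vif script already has in scope):

            # Announce the guest's address from its new host so upstream
            # switches update their forwarding tables immediately.
            if [ -n "$ip" ] && [ -n "$dev" ]; then
                arping -U -c 2 -I "$dev" "$ip"   # gratuitous ARP
            fi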
  • I had an idea at some point for a distributed app, similar to SETI@home, that people would run on their computers. These computers would form a cloud which would support creating VMs that could run arbitrary code. If one node is currently running your code and the computer it's on goes down, your code would continue to run on another one. If everyone ran it, it would be a huge pool of computational power. Then you could run crazy things on it. Then, profit! Anyway, is this a step in that direction?
    • And then, if we could get these onto people's computers without letting them know it. Maybe with an e-mail or a web page...
      I bet we could come up with a network of these robotic slave CPUs...

      {insert sky-net reference here}

    • This is not a terribly new idea -- it's been around ever since Sun coined the phrase "The network is the computer."

      The biggest problem with it is, of course, that I don't trust you, and you don't trust me. Why should I trust your computer to run my VM?

      It only works for things like SETI because the data is not private, and the results can be verified, both by having multiple nodes run the same workload (and comparing the results), and by re-running the results yourself if you see something that looks like a hit.

  • by moosesocks ( 264553 ) on Friday May 22, 2009 @05:09PM (#28059939) Homepage

    How many hardware failures are actually characterized by a complete 100% loss of communication (as you'd get by pulling the plug)?

    Don't CPU and Memory failures tend to make the computer somewhat unstable before completely bringing it down? How would vSphere handle (or even notice) that?

    Even hard disk failures can take a small amount of time before the OS notices anything is awry (although you're an idiot if you care enough about redundancy to worry about this sort of thing, but don't have your disks in a RAID array)

    • Don't CPU and Memory failures tend to make the computer somewhat unstable before completely bringing it down?

      Yes, and this is one of the key dividing lines between true HA mainframes, and every software implementation of HA services. The latter are what 99% of people seeking HA want (for cost reasons), but the former has been great business for IBM and Sun.

    • Re: (Score:3, Interesting)

      by Mista2 ( 1093071 )

      Several brand-new servers had VI3 installed two weeks ago and were left running to burn in. The first production guests were moved onto them on Friday; Saturday saw the CPU voltage regulator in one go pop. Dead server. It would have been nice to just have the Exchange server keep on rocking until Monday, when we could replace the hardware, but no, I've now spent my Saturday morning going into work and fixing it.
      However, thanks to VMware, the High Availability service did restart the guests automatically, but I did have to repair a...

      • Re: (Score:3, Interesting)

        by Thumper_SVX ( 239525 )

        This is one reason I run Exchange 2007 with a clustered PHYSICAL mailbox server, and all the CAS and HT roles I run on virtual machines. I don't run database type apps on VMware for exactly these reasons... I am a big VMware supporter, but I also specify for our big apps that we use big SQL and Exchange clusters for HA... not VMware. Yes, it's a bit more expensive that way, but our Exchange cluster now hasn't been "down" in over a year, despite the fact that each node gets patched once a month and rebooted.

  • by RobiOne ( 226066 ) on Friday May 22, 2009 @05:16PM (#28060003) Homepage Journal

    Like everyone else pointed out, it's a VM in lockstep with a 'shadow' VM. This is not just 'continuous VMotion'.

    If something happens to the VM, the shadow VM goes live instantly (you don't notice a thing if you're doing something on the VM).

    Right after that, the system starts bringing up another shadow VM on another host to regain full FT protection.

    This can be network intensive, depending on the VM load, and currently only works with 1 vCPU per VM. Think 1-2 FT VMs per ESX host + shadow VMs.

    You'll need recent CPUs that support FT, and a VMware HA / DRS cluster set up.

    So if you've got it, use it wisely. It's very cool.

    • by jo42 ( 227475 )

      So, if the software running in the primary VM has a problem causing it to go down pretty hard (think BSOD or near-BSOD class), and the lockstep mechanism is keeping things synchronized really well to the shadow VM(s), how many microseconds after the shadow VM comes up does it go the way of the primary VM, as in totally tits up?

      Or is the definition of FT (fault tolerance) "when some marketing droid pulls out a network cable or shuts down a server during a demonstration while trying to sell tens of thousands of dollars...

      • by RobiOne ( 226066 )

        Please familiarize yourself with the difference between hardware fault tolerance and software fault tolerance.

        • by jo42 ( 227475 )

          So, if the hardware is failing, corrupting memory or data on external storage, and the lockstep mechanism is keeping things synchronized really well to the shadow VM(s), how many microseconds after the shadow VM comes up does it go the way of the primary VM, as in totally tits up?

      • by TheLink ( 130905 )
        If the primary VM BSODs due to a software problem, the odds are the shadow VM would too.
  • The article mentions an inability (for the "pre-release" version) to PXE boot. If he's talking about booting for installation, then he's 100% wrong. The ESX beta/RC (build 140815) will, indeed, boot and install over a network. It's different from 3.5, so you'll have to adjust your command line and/or kickstart: they use "weasel" instead of "anaconda", which runs inside the service console. Short answer: "method=" becomes "url=", with a properly formatted URL, e.g. url=nfs://server/path/to/esx/.
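
    For illustration, a hypothetical pxelinux.cfg entry for a network ESX 4.0 install (kernel, initrd, and NFS paths are placeholders; only the url= option comes from the comment above):

        LABEL esx4-install
          KERNEL esx4/vmlinuz
          APPEND initrd=esx4/initrd.img url=nfs://server/path/to/esx/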

    • by RulerOf ( 975607 )

      The ESX beta/RC (build 140815) will, indeed, boot and install over a network

      Has anyone done a PXE boot of the ESX OS itself yet, though?

      AFAIK, the only "diskless" ESX deployments rely on flash storage.

      • Yes. It's quite trivial, actually, and I seem to recall there's a VMware whitepaper on it.

        OK, it's ESXi, not ESX... but the difference is small enough to make no odds. Oh, and on the flip side of that, I do find it easier to have ESXi on an internal flash in case my PXE server is down. I would host it virtually and on HA if it weren't for the fact that I have that whole "chicken and egg" problem :D

        • by RulerOf ( 975607 )
          Chicken and egg indeed! On that note, perhaps you would put the PXE service on your SAN, no?

          It's of sincere interest to me because we're turning some whitebox servers whose RAID controllers aren't on the ESX HCL into hosts. I read that VMware is positioning ESXi as their premier hypervisor, to replace ESX, so this kind of setup would be interesting to explore, though I imagine that a flash-based local datastore would be more... robust.
          • You know, that's an excellent idea if your SAN is capable... not all are. Our current production SAN is an HP EVA 4200... if we had a 4400 we could do it, but with the older 4200 it doesn't even have a direct network connection of its own; instead it has a dedicated storage server for managing the SAN (actually a DL380 running Windows Storage Server).

            The ESXi HCL is a lot tighter than ESX's, but I've had few problems so long as I stick with the "common" solutions. I buy almost exclusively HP servers for virtualization...

      • by Cramer ( 69040 )

        ESX isn't designed to be run "diskless". It has to have somewhere to put its VMFS -- which in 4.0 also contains swap and a few other things.

        (That doesn't mean one cannot bend it into a shape that will run diskless.)

        • by RulerOf ( 975607 )

          It has to have somewhere to put its VMFS

          Well, not in the sense of having no datastore; I simply mean without rotating hard disks present in the server. Flash storage accomplishes that, but you could use gPXE [etherboot.org] to connect it to an iSCSI target and remotely access its VMFS datastore from there... if it were possible to do so with ESX, of course. You might not want to swap to it, and it'd really demand another NIC and so on, but that's what I really meant.
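
          A hypothetical gPXE script along those lines (the target address and IQN are made up, and whether ESX itself would boot this way is exactly the open question):

              #!gpxe
              dhcp net0
              # sanboot attaches the iSCSI LUN and boots from it; the URI is
              # iscsi:<target-ip>:<protocol>:<port>:<lun>:<target-iqn>
              sanboot iscsi:192.0.2.50::::iqn.2009-05.com.example:esx-boot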

  • So what they're saying is that they don't believe in operating systems, but acknowledge that they might exist?
  • I'm on the FT team at VMware and just wanted to provide some additional information on FT requirements. You can also find out more about FT at: http://www.vmware.com/products/fault-tolerance/ [vmware.com]

    VMware collaborated with AMD and Intel to provide an efficient VMware Fault Tolerance (FT) capability on modern x86 processors. The collaboration required changes in both the performance counter architecture and the virtualization hardware assists from the processor vendors. These changes could only be included in recent processors...
