Sun Microsystems Software

Sun Bare Metal Hypervisors Now GPLv3 154

ruphus13 writes with some more news for people foretelling the death of VMware. Sun has open sourced xVM Server, its bare-metal hypervisor virtualization solution. What was once the cash cow for VMware is now coming under increased threat, and Sun is once again turning to the open source community as a weapon. "Sun xVM Server is an outgrowth of the Xen project — which raises the question of why a company would go with Sun's version rather than the Xen one. Apart from its support for SPARC and Solaris (as well as other chips and operating systems), Sun is also building a services and sales organization around a commercial version of xVM server... If you want to kick the tires or cut your costs, you can hop over to xVMServer.org, download the source (GPL 3) and join the community. But Sun is betting that, as deployments move from an initial testing phase to active usage, large organizations will be willing to pay for guaranteed support (starting at $500 per year per physical server)."
  • not a milk cow (Score:5, Informative)

    by michalk0 ( 1362753 ) on Friday September 12, 2008 @10:47AM (#24978349)
    VMware does not make its money on the bare-metal hypervisor. It makes a fortune, and is actually doing pretty well, on enterprise products like VMware Infrastructure or its virtual desktop environment.
    Actually, their bare-metal hypervisor, ESXi, comes for free as well (although not GPLed, but we're not talking about ideology here, are we?)
  • Re:cheap (Score:5, Informative)

    by PunkOfLinux ( 870955 ) <mewshi@mewshi.com> on Friday September 12, 2008 @10:52AM (#24978461) Homepage

    Truth be told, the new xVM from Sun (they bought VirtualBox) is pretty good. Certainly better than VB used to be, since it'll now actually boot Windows XP and such. If their bare-metal stuff is as good, I may just jump ship here.

  • Quality (Score:1, Informative)

    by gentimjs ( 930934 ) on Friday September 12, 2008 @10:57AM (#24978551) Journal
    I tried this out a month ago on OpenSolaris, Linux, and Solaris 10 (all on x86). I've historically been a big supporter of Sun, but .. well .. it just didn't work with Solaris as the guest OS. The guest would start up, launch X, and freeze. Default options for host, guest, and xVM... Not a good start.
  • Re:ZFS (Score:5, Informative)

    by BrainInAJar ( 584756 ) on Friday September 12, 2008 @11:02AM (#24978645)
    Oh, and one more thing. Read that wiki article you posted. The CDDL isn't the problem, it's that Linux's license doesn't permit linking. Not the other way around.

    So, why not quit complaining about the permissive license ZFS is under, and start complaining about the restrictive license Linux is under? ( your post should read "Please put Linux under a new license" )
  • Re:cheap (Score:3, Informative)

    by rufus t firefly ( 35399 ) on Friday September 12, 2008 @11:13AM (#24978797) Homepage

    I don't think it will be as successful as they hoped. Sun is far too late to the x86 virtualization game. LDOMs, Containers, and Xen are great technologies, but they just haven't been nearly as flexible as VMware's offering. Management of the environments (LDOM/Containers/Xen guests) has been very kludgy. This is where VMware has really gained dominance, and I suspect it will retain it. They are years ahead in virtualization management.

    Not to nitpick too much, but there are some apples-to-oranges comparisons here. Xen is a paravirtualization technology, whereas VMware is straight-up full virtualization. Paravirtualization is usually more efficient with like operating systems, so it plays to a different segment.

    It's like saying VMware is better than QEMU, even though QEMU lets me emulate ARM and SH4 architecture machines and VMware doesn't. Different tools for different jobs.

  • Re:cheap (Score:5, Informative)

    by twiddlingbits ( 707452 ) on Friday September 12, 2008 @11:15AM (#24978835)
    That's not the full cost. I just looked at this the other day for my company. There are a lot of other costs involved if you want support and a 100% Sun solution guaranteed to work. I've also seen no benchmarks versus VMware.
    Pricing Information
    Sun offers standalone subscriptions for Sun xVM Server software and Sun xVM Ops Center, as well as additional options that offer the combined benefits of the two products, allowing customers to virtualize and manage at Internet scale. Commercial subscriptions are priced annually in four-socket increments and provide premium 24x7 support, access to the latest, up-to-the-minute patches and updates, as well as installation and training. Available pricing options include:
    * Sun xVM Server software: Priced at $500/year per physical server.
    * Sun xVM Infrastructure Enterprise Subscription: Priced at $2,000 per physical server per year, the enterprise subscription is designed to simplify the management of large-scale virtualized environments and includes advanced features, such as management of live migration and of multiple network storage libraries.
    * Sun xVM Infrastructure Datacenter Subscription: Priced at $3,000 per server per year, this option includes all the features in the Sun xVM Infrastructure Enterprise Subscription in addition to physical server monitoring, management, and advanced software lifecycle management capabilities.
    * Sun xVM Ops Center: Available from $100 per managed server up to $350 a year, depending on customer-selected features, along with a required $10,000 Satellite Server annual subscription for Sun xVM Ops Center.

    There are also some significant technical restrictions, if you dig deep enough to find them:
    Disk on which xVM Server is installed
    * SATA or SAS (serial SCSI)
    * Fibre Channel to a JBOD
    * IDE disks are not supported
    Attached storage
    * NFS over TCP/IP/Ethernet remote storage
    * CIFS remote storage
    Networking
    * Ethernet-based NICs supporting the Solaris GLDv3 driver specification
    * Only MTUs of 1500 bytes are supported
    * For Windows guests, customers wanting full Microsoft support should run xVM Server on Windows Server 2008 logo-certified hardware.
  • Re:not a milk cow (Score:2, Informative)

    by Ralish ( 775196 ) <sdl@@@nexiom...net> on Friday September 12, 2008 @11:37AM (#24979175) Homepage

    Actually, it makes a huge amount of money on its bare-metal hypervisor. I haven't exactly analyzed their profits based on individual products, but I'd be willing to bet that their bare-metal hypervisor and associated technologies are where the big money is made for them. For companies like VMware, it's the enterprise market where they traditionally reap the big profits, and VMware has been a major presence, if not THE presence until recently, in the enterprise virtualisation market.

    Also, I think you don't quite understand what VMware Infrastructure is. VMware Infrastructure IS their bare-metal hypervisor, with various associated technologies included depending on which particular package you choose, all related to the hypervisor feature set; e.g. vSMP, DRS, VMotion, etc.

    Finally, ESXi is just one flavour of their bare-metal hypervisor, the newest. It's a stripped-down version of ESX, their traditional bare-metal hypervisor. ESXi is almost entirely remotely managed, and yes, it is free. ESX, on the other hand, is definitely not free, in any way, shape, or form. It differs in that it is not designed for embedded hardware, but instead includes not just the hypervisor but a full-fledged local management console through a Linux system based on Red Hat Linux (I forget the specific version).

    Each has pros and cons, but keep in mind that ESX has been in existence for many years and ESXi is a newcomer, so if you want to compare adoption, ESX will dwarf ESXi. I can't see existing companies that use ESX moving to ESXi, and even if they do, the free version of ESXi doesn't include other features such as VMotion, which must be separately bought and enabled through license keys in ESXi.

  • Re:cheap (Score:5, Informative)

    by WilsonSD ( 159419 ) on Friday September 12, 2008 @11:50AM (#24979385) Homepage

    We've already shipped over 6 million copies of our desktop hypervisor (xVM VirtualBox), which is available under GPL v2 from virtualBox.org. You should go check it out.

    We're putting a lot of resources into virtualization and we're going to surprise people.

    -Steve Wilson

    VP, xVM
    Sun Microsystems
    http://blogs.sun.com/stevewilson

  • by WilsonSD ( 159419 ) on Friday September 12, 2008 @11:56AM (#24979467) Homepage

    xVM Ops Center supports SPARC and xVM Systems. The current version of xVM Server is focused on x86/x64 platforms, but you can use xVM Ops Center to manage Solaris virtualization technologies like Solaris Containers.

    http://wikis.sun.com/display/xvmOC1dot1/Managing+Solaris+Containers+With+Sun+xVM+Ops+Center

    -Steve Wilson
    VP, xVM
    Sun Microsystems
    http://blogs.sun.com/stevewilson

  • by Quikah ( 14419 ) on Friday September 12, 2008 @01:45PM (#24981511)
    Nice FUD.

    No, this is not what happened at all. Simon Crosby (biggest blowhard ever) shot his mouth off proclaiming that VMware are a bunch of idiots, but that he couldn't show it because of the EULA. Well, unbeknownst to all his readers, Xen had submitted their paper to VMware for approval, which VMware did approve, and Xen published it. It showed that Xen was competitive in most of the benchmarks, but fell short in a number of them and beat ESX in only one: SPECjbb on Linux.

    Good luck finding anything from this whole exchange; Citrix purged their blogs of the entire ordeal. Here [xensource.com] is the paper WITH the data, no redactions. I am not seeing this "everywhere else Xen killed"; could you point it out to me?

    As a side note VMware is very liberal with their benchmark policy. As long as you actually benchmark in a sane manner they will let you publish no matter the result.
  • by shutdown -p now ( 807394 ) on Friday September 12, 2008 @02:39PM (#24982399) Journal
    It's "astroturfing" when you try to create an impression of a grassroots support campaign. A post signed by a high-ranking company official with no attempt to hide the fact that he's representing a company is as far from that as it gets. And kudos to Sun for taking /. seriously.
  • by wandazulu ( 265281 ) on Friday September 12, 2008 @02:39PM (#24982403)

    I've played with Xen, we use zones in Solaris, and I've used Microsoft's Virtual Server offering, but only VMware lets me do the one thing that no one else does: put up a machine *fast*. I mean, from nothing to a fully working Linux/Windows/whatever machine, whether it's a clone of an existing guest or a brand new one.

    I have a lot of projects that are ephemeral; we need a box to test something on, and boom, we have a virtual machine that runs pretty darn fast, and when the testing is done, we shut it down. No muss, no fuss. No other product on the market is as good at bringing up a machine and throwing additional "hardware" at it when necessary.

    The other thing where VMware beats everyone else is snapshots; I can create branches of branches of snapshots when my testing goes in all kinds of directions, and I can always roll back to any of them. I described it to a coworker as having the entire machine on top of a Subversion repository.
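    The branching-snapshot model described above is essentially a tree of saved states, where rolling back means re-pointing at an ancestor. A minimal sketch of the idea in Python (hypothetical names, not VMware's actual API):

    ```python
    # Sketch of branching snapshots as a tree: each snapshot records its
    # parent, and sibling branches share ancestors, like branches of a
    # version-control history.

    class Snapshot:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent  # None for the base image

        def lineage(self):
            """Chain of snapshot names from the base image to this one."""
            node, chain = self, []
            while node is not None:
                chain.append(node.name)
                node = node.parent
            return list(reversed(chain))

    base = Snapshot("clean-install")
    patched = Snapshot("patched", parent=base)
    exp_a = Snapshot("experiment-a", parent=patched)
    exp_b = Snapshot("experiment-b", parent=patched)  # sibling branch

    # Rolling back to "patched" abandons neither branch; both lineages
    # still resolve through the shared ancestor.
    assert exp_a.lineage() == ["clean-install", "patched", "experiment-a"]
    assert exp_b.lineage() == ["clean-install", "patched", "experiment-b"]
    ```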

  • by swb ( 14022 ) on Friday September 12, 2008 @04:55PM (#24984225)

    Right, so every extra frame you send you tack on an extra 14 or 18 bytes.

    A gig of data transmitted with 9000-byte jumbo frames is only about 120,000 frames. It's about 716,000 frames with 1500-byte frames. Even with low-end overhead of 14 bytes, that's about 8 MB of extra data transmitted, and that's on a single gig of data.

    And even with offload engines, there's still other legacy BS at the hardware level that isn't completely eliminated and has to get done more often because of the extra frames being transmitted. And then there's the added latency with the extra frames as well, since it takes longer to send the greater volume of data required.

    It may not matter for most small-transaction size clients, but for a lot of operations that move a large amount of data it really begins to matter.

    Moving more data with lower overhead is ALWAYS better, and not being able to do this when you might otherwise be able to is ALWAYS a liability, even if it doesn't seem like it at the moment (*cough* 640K *cough*).
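    The frame-count arithmetic above checks out; here is a quick sketch of it, assuming a gig means 2^30 bytes and taking 14 bytes (a bare Ethernet header) as the low-end per-frame overhead:

    ```python
    import math

    GIG = 1 << 30   # one "gig" of payload, taken here as 2^30 bytes
    OVERHEAD = 14   # low-end per-frame overhead: a bare Ethernet header

    def frames(payload_bytes, mtu):
        """Number of frames needed to carry the payload at a given MTU."""
        return math.ceil(payload_bytes / mtu)

    jumbo = frames(GIG, 9000)     # ~120,000 frames with jumbo frames
    standard = frames(GIG, 1500)  # ~716,000 frames at the standard MTU
    extra_bytes = (standard - jumbo) * OVERHEAD  # ~8.35 MB of extra headers

    print(jumbo, standard, extra_bytes)
    ```

    That is roughly 596,000 extra frames, or a bit over 8 MB of pure header overhead per gig moved, matching the parent's figures.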
