Operating Systems Software Virtualization

Inside VMware's 'Virtual Datacenter OS' 121

snydeq writes "Neil McAllister cuts through VMware's marketing hype to examine the potential impact of VMware's newly pronounced 'virtual datacenter OS' — which the company has touted as the death knell for the traditional OS. Literally an operating system for the virtual datacenter, VDC OS is an umbrella concept to build services and APIs that make it easier to provision and allocate resources for apps in an abstract way. Under the system, McAllister writes, apps are reduced to 'application workloads' tailored through vApp, a tool that will allow developers to 'encapsulate the entire app infrastructure in a single bundle — servers and all.' The concept could help solve the current bugbear of programming, parallel processing, McAllister concludes, assuming VMware succeeds."
  • by $RANDOMLUSER ( 804576 ) on Saturday September 20, 2008 @10:26AM (#25084731)

    According to VMware execs, VDC OS will not be a product as such. Instead, it is an umbrella concept covering a range of capabilities that VMware will build into the next generation of its Virtual Infrastructure products.

    So it's not just vaporware, it's an "umbrella concept" that will be built into future products.

  • by timeOday ( 582209 ) on Saturday September 20, 2008 @10:26AM (#25084733)
    The whole point of time-sharing operating systems in the first place was to allow many competing applications to get along yet protect them from each other. We have layer upon layer of redundancy built in; a Java VM running on an x86 VM running on a CPU operating in protected mode. Then somebody comes along and says, "hey I have a breakthrough idea, let's just use ONE of those layers!"

    The real nut of my question is, what would we need to add to more conventional OSes (Linux) to get the job done? For my money, the biggest problem is package interdependencies. IMHO much VM usage is actually just to address that issue. We need package management that isolates applications from each other, giving the appearance of a custom chroot environment for each, while silently sharing resources (such as .so's) that just happen to be the same in multiple applications.
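    For what it's worth, here's a toy sketch of the kind of sharing I mean (purely hypothetical -- not any existing package manager; the /var/lib/app-store path and helper names are made up): each app gets its own root tree, but files with identical content are hard-linked out of a content-addressed store, so common .so's exist on disk only once.

    ```python
    # Hypothetical sketch: each app gets its own root tree, but any file whose
    # content is byte-identical to one already stored is hard-linked, so common
    # .so files occupy disk space only once. Not a real package manager.
    import hashlib
    import os
    import shutil

    STORE = "/var/lib/app-store"   # content-addressed store (assumed path)

    def _digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def install_file(src: str, app_root: str, rel_path: str) -> None:
        """Place src at app_root/rel_path, sharing storage with identical files."""
        os.makedirs(STORE, exist_ok=True)
        stored = os.path.join(STORE, _digest(src))
        if not os.path.exists(stored):
            shutil.copy2(src, stored)          # first copy of this content
        dest = os.path.join(app_root, rel_path)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        if not os.path.exists(dest):
            os.link(stored, dest)              # hard link: no extra blocks used

    # Two "apps" end up sharing one on-disk copy of the same library:
    # install_file("/usr/lib/libssl.so", "/srv/app1", "usr/lib/libssl.so")
    # install_file("/usr/lib/libssl.so", "/srv/app2", "usr/lib/libssl.so")
    ```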

    • Re: (Score:2, Interesting)

      by giuntag ( 833437 )
      in short, are you advocating usage of virtuozzo?
      • Re: (Score:2, Insightful)

        by timeOday ( 582209 )

        in short, are you advocating usage of virtuozzo?

        Thanks, it sounds very interesting. Do the virtuozzo containers all share OS files (libraries) to the extent possible? One of my main problems with VMWare is that a VM itself takes so much disk space that it takes a long time to work with (copy, archive etc) and I can't fit many on my laptop. Somewhat paradoxically, it must be possible to snapshot an application with its entire environment so you have a known working version.

        • Re: (Score:3, Interesting)

          by Ralish ( 775196 )

          One of my main problems with VMWare is that a VM itself takes so much disk space that it takes a long time to work with (copy, archive etc) and I can't fit many on my laptop. Somewhat paradoxically, it must be possible to snapshot an application with its entire environment so you have a known working version.

          If I'm understanding you correctly, the solution you are after is already offered by VMware:
          http://vmware.com/products/thinapp/ [vmware.com]

          Make sure to check the features tab for a more summarized and technical overview of what exactly ThinApp does and is capable of. Unfortunately, ThinApp is currently Windows only; I have no idea if they are intending to support Unix OS's in the future.

          Is this the sort of functionality you are thinking about? Apologies if I've misinterpreted your comment.

          • That's definitely not what he's after. He's trying to reuse operating system dynamic libraries between host and guest OS instances.

            ThinApp allows running applications without installing them. And these apps have side-by-side installations of important OS dynamic libraries (so as to avoid DLL hell), which means more copies of DLLs, not less.

            The question is whether virtuozzo allows sharing dynamic libraries between the host and guest OS's. [And, I'd be interested in the answer too].
            • That's definitely not what he's after. He's trying to reuse operating system dynamic libraries between host and guest OS instances.

              Correct. I think this is where the market struggle between Citrix' XENapps (which I think they're positioning as a replacement for Metaframe?) and Microsoft's application center server is going to weigh in -- both of them are about sharing apps from a server by providing an abstracted presentation layer to a thin (or thinnish) client while optimising the network traffic between the intermediate presentation server and the client. This needs encapsulated, or better, versioned DLLs. It has to do it that way t

              • apologies for "alot" -- missed the space bar.
              • This needs encapsulated, or better, versioned DLLs.

                In the MS world, the SXS (side-by-side), MSM (merge modules) and App Manifest combo is supposed to take care of this situation for native code (and for managed, it's never really been a problem with SXS and GAC supplying local and global respectively). [As always with MS, nomenclature confuses things more than need be, and it's definitely a lot better than it used to be, but still no magic bullet].

    • by Colin Smith ( 2679 ) on Saturday September 20, 2008 @11:05AM (#25084967)

      Except from IBM of course.

      vmware is simply the logical extension of what the OS should be doing anyway.

      or put another way.

      Those who don't buy IBM kit are condemned to reimplement (badly, and for the rest of their lives) what IBM have been doing for decades.

      • Re: (Score:1, Troll)

        by the_B0fh ( 208483 )

        See, all these baby trolls who think they know it all. Before you mod parent as troll, go read about LPARs.

        Why can't mod points be given only to intelligent modders?

        • by chez69 ( 135760 )

          It's sort of funny to see the 'Mainframes are teh expensive' argument. I miss the old days of Slashdot when more than 20% of folks knew there was more to computers than 'hello world' and JavaScript.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        IBM: the only company who can pack mainframe complexity into 2U.

        No, thanks. Give me Sun over IBM ANY day of the week. Every time I have to deal with IBM/AIX, I wind up with a headache.

        • FYI: You can run SLES on an IBM mainframe instead of AIX.
          • Re: (Score:1, Insightful)

            by Anonymous Coward

            You can't run AIX on a mainframe -- that'd be z/OS.

            But yes, you can run SLES on a mainframe, thus neatly avoiding all that "too many applications to choose from" fuss. You can't take an existing x86 app and run it, you need a special mainframe version. If you've got source, you can recompile, but if you want a supported app from someone else, in most cases you're going to be SOL.

            It's a steal, too, at only $12,000 per "mainframe engine" (core). (That may sound like a lot, but it is apparently a ton cheape

      • Re: (Score:3, Insightful)

        by sphealey ( 2855 )

        > Those who don't buy IBM kit are condemned to reimplement
        > (badly, and for the rest of their lives) what IBM have been
        > doing for decades.

        First, the troll rating is utterly unjustified. Mod parent up.

        IBM is not without its own faults. Perhaps less so now than in the 1970s and '80s when the push for PCs took root, but it has its own weaknesses. Even taking that into account, it _is_ ridiculous to see the Wintel world groping toward the kind of high availability and virtualization that IBM, DEC, CD

        • Of course, the advantage to the x86 way of doing things is that you can buy off-the-shelf hardware (or re-purpose old hardware) for the purposes of doing this kind of high availability. For all intents and purposes the result is a very stable and available system that costs less than what IBM are selling as a packaged solution.

          A little disclaimer; I'm an old UNIX geek, certified in Solaris and AIX, but since I live in a Windows world at work these days I find myself as the SME for VMware at my company (just

      • by lukas84 ( 912874 )

        I'm not sure if I entirely agree with you.

        I was stuck with a side job administering IBM POWER running IBM i (formerly known as AS/400), and their virtualization capabilities aren't that great.

        It took them until V6R1, released at the beginning of 2008, to allow sharing disk arms with multiple LPARs, something that every x86 virtualization solution could do from the beginning.

        It took them until the POWER 6 Hardware generation to allow NIC sharing using HEAs. POWER 6 Hardware started being offered back

        • > At around 5k-10k per core, it is extremely expensive.
          Until you actually take into account all the other benefits that come with that system....
          • by lukas84 ( 912874 )

            Most of the applications are extremely dated, still using 5250 as user interfaces, many of them not up-to-date on database technology (Unjournaled, without commitment control, without constraints).

            A heavy set of developers refuse to stray from platform-specific languages that were created in the '80s and have been only marginally modernized (RPG). Most of those developers also prefer unjournaled, unnormalized databases made in the '80s.

            Running modern software on the i is a complete pain in the ass, a

            • You can run all kinds of modern software on a mainframe. You get anything Linux can run, but outside of Linux you can use Java, WebSphere, DB2, IMS, messaging (MQ on a mainframe is amazing).

              Yeah, you can use RPG, COBOL, PL/1 and that other crap, but why would you? =-)

        • by chez69 ( 135760 )

          iSeries aren't as good for a reason. If you can buy a mini instead of a mainframe then you won't buy the mainframe. VM on a real mainframe has been doing what these folks have come up with forever. With Sysplex you get multiple machines, kick-ass uptime, and multiple OSes like VM, z/OS, Linux -- shit, you can run your OS from the '70s on it if you want.

          yeah, it costs a craptastic amount of money, but if you have the cash and need the uptime....

      • Linux can already do it.
        Multiple Linux VMs run fine under z/OS, which of course is IBM kit. z/OS can already share multiple resources across different physical data centers.

        Oh, and a word to the wise for Linux system admins running under VM: remove any stupid unneeded startup scripts. It doesn't make me a happy bastard operator from hell when I vary on the Linux partition and I have 60 servers thinking bootup is a good time to reindex their man pages.

    • by tji ( 74570 ) on Saturday September 20, 2008 @11:29AM (#25085115)

      They are not replacing Linux. You still run what you want on Linux, but do you run everything on ONE Linux box? If yes, you're not a good candidate for a Datacenter OS. If you run many servers, then there is almost definitely room for efficiency in that structure.

      Rather than dedicating the full bare hardware to your app, you deploy it as a VM in your Virtual Datacenter (mini cloud). The DCOS takes care of managing the resources, things like:

      - Moving your server VM from compute node to compute node to automatically balance load and optimize performance.
      - Moving VMs to work around failures, allow hardware upgrades, etc. without downtime.
      - Expanding capacity by dropping another compute node into the cloud (the big difference between the old mainframe world and the new DCOS: this scales easily with cheap, powerful nodes).
      - Moving the machine images around your storage infrastructure, to allow for management, maintenance, upgrade, expansion, etc.
      - Providing recovery and even fault tolerance of hardware. Servers can automatically move and re-start on hardware failure; or they can even run in lockstep to maintain full operation through a node failure.

      This is VMware's big lead (and a big need to leverage, as the revenue from the hypervisor layer dries up). They provide the management layer that enables all the above, and they keep improving it. From a central GUI, I can manage all my VMs and manage the compute resources as a cluster.
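      To make the load-balancing bullet above concrete, here's a minimal sketch (hypothetical Host/VM objects and a naive greedy loop -- not VMware's actual API or DRS algorithm) of the kind of rebalancing decision such a scheduler makes:

      ```python
      # Toy sketch of DRS-style rebalancing: if one host is much busier than
      # another, migrate the VM that best narrows the gap. Hypothetical model,
      # not VMware's implementation.
      from dataclasses import dataclass, field

      @dataclass
      class VM:
          name: str
          load: float                          # normalized CPU demand

      @dataclass
      class Host:
          name: str
          vms: list = field(default_factory=list)

          @property
          def load(self) -> float:
              return sum(vm.load for vm in self.vms)

      def rebalance(hosts: list, threshold: float = 0.2) -> list:
          """Return (vm, src, dst) migrations that narrow the cluster load gap."""
          moves = []
          while len(hosts) >= 2:
              hosts.sort(key=lambda h: h.load)
              coldest, hottest = hosts[0], hosts[-1]
              gap = hottest.load - coldest.load
              if gap <= threshold or not hottest.vms:
                  break
              vm = min(hottest.vms, key=lambda v: abs(v.load - gap / 2))
              if abs((hottest.load - vm.load) - (coldest.load + vm.load)) >= gap:
                  break                        # the best candidate move wouldn't help
              hottest.vms.remove(vm)
              coldest.vms.append(vm)
              moves.append((vm.name, hottest.name, coldest.name))
          return moves

      # a = Host("esx-a", [VM("db", 0.7), VM("web", 0.4)])
      # b = Host("esx-b", [VM("mail", 0.2)])
      # rebalance([a, b])  ->  [('web', 'esx-a', 'esx-b')]
      ```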

      • Re: (Score:3, Informative)

        by ArsonSmith ( 13997 )

        Not exactly. VDC-OS does actually replace Windows and/or Linux. Think of it as a Linux kernel where, instead of a SysV init startup, your app starts up. You don't maintain users and directories or storage, or even log into a shell. The OS is reduced to just enough to run 1 application and only 1 application.

        This OS/App bundle is created with a basic config file and is then started just like you'd start a virtual machine on an ESX server or server cluster. ESX can then handle the migration and resources of all th

        • by tji ( 74570 )

          I'm not sure if you're talking about the hypervisor OS, or what... Yes, ESXi is a very thin OS, but the servers / applications run in a VM, which needs a standard OS. This is VMware's "Virtual Appliance" concept. The OS should be a really minimal, stripped-down build, usually Linux-based, but it is a real OS.

          The VDC-OS is just the underlying ESXi thin hypervisor, with VirtualCenter managing the resources. This is what VMware has been doing for quite a while now, the new name is partly some new feat

          • ESX and ESXi are the underlying bare metal OSs. VDC-OS is a stripped down VM. Think of the virtual appliance concept reduced even further.

            • VDC-OS is to Linux/Windows what ESXi is to ESX. Where ESX takes a couple hundred megs of memory, you have to manage users, and there's a full Linux OS under the VMware application, ESXi is 16 megs, has one user account used to join VirtualCenter, and the only console interface is where you set the password and IP address.

              • by tji ( 74570 )

                Nothing I have seen from VMware (including walking through their booth at VMworld last week and getting a demo of it) was anything like what you are describing.

                What they announced and were showing at VMworld was the next generation of their VMware Infrastructure. It added new features, like VM Fault Tolerance and distributed vSwitch, but did not change what runs in a VM.

                ESX / ESXi is the compute node, vCenter (formerly VirtualCenter) is the management layer.

      • by Viol8 ( 599362 )

        This is a sticking plaster for the lousy PC architecture, which today is being forced into places it was never designed for. Read up on what IBM and Tandem were doing back in the '70s and '80s with hardware that was designed for this.

        This isn't hot new tech, it's putting lipstick on a turd so companies can save a few pennies.

        • I have, and it's the difference between a baby crawling and the Starship Enterprise's warp drive. Sure, they are both forms of transportation, but one does a lot more. And while PC hardware may not have been designed for virtualization, it has been redesigned for it.

    • (Open) Solaris (Score:3, Informative)

      by d3xt3r ( 527989 )
      Solaris 10 and OpenSolaris have the concept of zones and containers. The computer runs a single Solaris instance but can run isolated process trees in zones, which share common libraries but can have their dependencies updated independently. The containers concept (in conjunction with zones) allows a fair share scheduler to guarantee a service level for each allocated zone (CPU/memory sharing, etc). IMHO, much better than Virtuozzo, VMware and Xen.
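      As a toy illustration of the fair-share idea (not the actual Solaris FSS code; the zone names and share counts are made up), each zone that currently wants CPU gets it in proportion to its assigned shares, while the shares of idle zones simply aren't counted:

      ```python
      # Toy fair-share calculation: CPU is split among the zones that are
      # currently demanding it, weighted by their assigned shares.
      def fair_share(shares: dict, demanding: set, cpu: float = 1.0) -> dict:
          """shares: zone -> share count; demanding: zones that want CPU now."""
          if not demanding:
              return {}
          total = sum(shares[z] for z in demanding)
          return {z: cpu * shares[z] / total for z in demanding}

      # fair_share({"web": 60, "db": 30, "dev": 10}, {"web", "db"})
      # -> {'web': 0.666..., 'db': 0.333...}   (idle "dev" zone's share is unused)
      ```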
      • by anilg ( 961244 )

        I wrote an article for OSNews that talks about the devzones we're using with the Nexenta project. Basically, we have a machine set up and give out normal user accounts. The user can log in and run 'devzone_create', and a virtual instance of the OS is created (takes a couple of seconds); the user can enter it as root and do anything (even rm -rf). Once he's done using the zone, he can run devzone_free and the virtual OS vanishes.

        We have around 8 instances running right now, used by various developers. The article is

    • Have you considered doing minimal Linux installs inside your VMWare? This way you can store more VMs on one machine.
      You can also take the BSD approach and share your /usr from the host O/S to the guest O/S through NFS or the like, also saving disk space.
      Xen on Linux already gets rid of a few layers by implementing paravirtualization.

      If you combine all 3 measures, you can host several high-performance VMs on a relatively small machine.

      Personally, I would simply buy a bigger disk and more RAM, because that $

    • by darkuncle ( 4925 )

      take a look at ESX using NetApp for backend storage - deduplication at a block level can achieve what you are proposing, and then some (only store one copy of whatever-it-is, e.g. explorer.exe or /usr/bin/vi; every VM that would otherwise have an identical copy of $foo has instead pointers to the one set of blocks on-disk that contains $foo. The more VMs you run, the better your space savings.)
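      Conceptually, block-level dedup looks something like this toy sketch (not NetApp's implementation): identical fixed-size blocks are stored once, and every file just holds references to them.

      ```python
      # Toy block-level deduplication: identical 4 KB blocks are stored once and
      # referenced by hash, so N VM images containing the same guest OS files
      # share the underlying blocks. Illustration only, not a real filer.
      import hashlib

      BLOCK_SIZE = 4096

      class DedupStore:
          def __init__(self):
              self.blocks = {}   # hash -> block bytes (stored once)
              self.files = {}    # filename -> ordered list of block hashes

          def write(self, name: str, data: bytes) -> None:
              refs = []
              for i in range(0, len(data), BLOCK_SIZE):
                  block = data[i:i + BLOCK_SIZE]
                  key = hashlib.sha256(block).hexdigest()
                  self.blocks.setdefault(key, block)   # duplicates cost nothing
                  refs.append(key)
              self.files[name] = refs

          def read(self, name: str) -> bytes:
              return b"".join(self.blocks[k] for k in self.files[name])

      # store = DedupStore()
      # store.write("vm1.vmdk", guest_os_bytes)
      # store.write("vm2.vmdk", guest_os_bytes)   # nearly all blocks already stored
      ```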

  • Summary (Score:3, Insightful)

    by wombatmobile ( 623057 ) on Saturday September 20, 2008 @10:27AM (#25084739)

    FTFA: "In short, if done properly, a meta-operating system based on networked virtual machines could streamline software development, make IT more flexible, and save customers money."

    It is hard to argue with a truism. But what does "done properly" entail?

  • by funwithBSD ( 245349 ) on Saturday September 20, 2008 @10:31AM (#25084757)

    Getting traditional "silo" orientated programmers to use distributed computing is hard now!

    This server is for chocolate, this one for peanut butter... don't let them touch!

    Even with GRID-enabled software like Informatica, it's hard to get them to understand. Don't worry where it runs, don't try to segregate workloads... the software is smarter than you!

    Let it do its damn job.

    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Saturday September 20, 2008 @11:55AM (#25085315)

      Getting traditional "silo" orientated programmers to use distributed computing is hard now!

      And (for many of them) it's never going to get any easier.

      It is too easy for them to just think of "one program, one OS, one machine".

      Their app takes all the resources it sees from the OS it sees on the machine it sees.

      So VMWare "solves" this by making it easy (for a price) for each app to believe that it has it's own machine. So the programmers can keep working they've always worked.

    • Wish I had mod points; this post is very relevant to the resistance to VMs where I am now.

  • There is hardware available with Xen that does just that. Of course, it is Linux in there, but each major app has its own set-up. That way, you have a DB, a webserver, a development env, etc.
  • by Anonymous Coward on Saturday September 20, 2008 @10:56AM (#25084921)

    I have used VDC OS. Ultimately it is just a convergence of the existing technologies Vmware has already been developing, upgraded to a new level. I can say, it is very, very nice and clean.

    What it gives a data center manager is abstraction and ease of use. The physical way, everything is deployed one-off into a datacenter: if you need a new application, it involves buying new servers, racks, power and whatnot. If you need to move those servers to another center, or deal with business continuance and disaster recovery, it is a new discrete project.

    With VDC, no more. You build all of that into the datacenter "OS", and when a new application comes along they are put into the VDC OS and they inherit everything, not just HA but BC, DR and all of the ease of use. If they don't want BC or DR, they don't pay into that bucket.

    Need to move a Datacenter? Use the DR solutions in VDC OS, and you can do it in the middle of the day without your users noticing more than a slight 5-minute bump (or so--largely to let the network routes update).

    VMware is so far beyond everybody else in the virtualization industry, it is almost comical to hear other people shout the battle cry of 'Xen' or 'Hyper-V'. Those are nice toys, but the surrounding tools are clunky and almost non-functional, leaving just the hypervisor. What VMware is trying to say with "VDC OS" is that the game already left the hypervisor; that is why everybody is all but giving the hypervisor away for free now.

    I may sound like a fanboy, but after having worked in the datacenter for 15+ years I can say this technology really works, and it's about time. We can now move the datacenter from the hobbyist market it has been in up to now, into the dialtone it should be.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      I have to disagree with the AC that "vmware is the only solution". Six months ago we evaluated both VMware (which we had been using in dev and test for years) and the Citrix Xen product, and decided to go for Xen for our production systems based upon the performance we saw (yes yes, YMMV), cost, and the open nature of the API. The problem was finding a strong partner/integrator to help us swing our server estate from physical to virtual in the time allotted.

      So far the systems have been solid, and required only a couple

      • by kscguru ( 551278 ) on Saturday September 20, 2008 @12:38PM (#25085603)

        6 months ago we evaluated both vmware (which we had been using in dev and test for years) and the Citrix Xen product and decided to go for Xen for our production systems based upon performance we saw (yes yes YMMV) cost, and the open nature of the API. The problem was finding a strong partner/integrator to help us swing our server estate from physical to virtual in the time allotted.

        Then you missed the GP's point. If XenSource (Citrix XenSource is to VMware VI as Xen is to ESX) satisfies your needs, then you aren't doing anything for which you need a datacenter OS. (And if you evaluated anything more expensive than the cheapest VMware offering, you botched your product search too.)

        For server consolidation and bare-bones start/stop management, there is not much difference between VMware, Xen, and Hyper-V. They all have roughly the same performance; ESX degrades least when overloaded and there's a small premium for an ESX cluster because of it. Go to the next tier where you need automated load-balancing, automated availability solutions, and automated backup, and VMware is the only game in town. (Short of IBM mainframes.)

        Server consolidation != datacenter OS, despite the "me too!" claims of MSFT and Citrix. MSFT's roadmap puts them in the same ballpark in 2-3 years, Citrix 3 years back on the VMware roadmap, and VMware is there right now.

        • Re: (Score:1, Interesting)

          by Anonymous Coward

          Yeah, the price on the hypervisor layer has fallen so much, we're now pricing out options to customers where they don't have to pay for software. Small business looking to do a hardware refresh? Virtualize all your servers on this one system booting from a USB key, and mount this iSCSI array we set up for you over here.

          All the software is free (as in beer) and some is even Free (as in speech) as well, and this is all low end offerings. Everyone else is behind the game once you move past that arena. I'm not

        • by Meorah ( 308102 )

          "Go to the next tier where you need automated load-balancing, automated availability solutions, and automated backup, and VMware is the only game in town. (Short of IBM mainframes.)"

          XenServer 5 Platinum does everything you listed other than automated backup, but between automated metadata snapshot backups of VMs and automated central SAN backup solutions, VMware's ability to do automated backup isn't a value proposition for anybody other than the largest enterprises.

    • Re: (Score:2, Troll)

      I see VDC OS as a possibly bigger headache for those of us in security. Where I work, we already have issues with the ESX systems. VMWare's virtual switches are more akin to virtual hubs. Efficiently segregating the individual servers from each other within the same virtual network is difficult if not impossible.

      Some solutions may be coming up for this. We've talked to Checkpoint and Reflex about their technologies to address these issues. Even so, I can't help but think that virtualization providers a

      • Re: (Score:2, Informative)

        by mattmarlowe ( 694498 )

        You apparently missed the announcement from Cisco that they've released their own virtual switch with enterprise features to replace the limited capabilities in VMware's. And yes, VMware will fully support it and it will be plug-and-play compatible. Furthermore, on a cluster of ESX hosts, you can have multiple Cisco supervisor appliances running for HA/management, while a Cisco switch configuration/etc is shared across all nodes and ports being logically linked to each vm, regardless of where vm is

        • I did miss that. Thanks for bringing it up. This kind of thing does go a long way towards addressing the risks that we've identified. I wonder how much it will cost, though. So far, pricing doesn't seem to have been announced.

          The supervisor appliances may also be a serious cost issue, as we're not a Cisco shop. In our entire datacenter, we have maybe six or seven Cisco devices. (I work for a California county government, so even a small purchase of a few thousand dollars is a significant cost issue th

    • by cain ( 14472 ) on Saturday September 20, 2008 @12:33PM (#25085565) Journal

      With VDC, no more. You build all of that into the datacenter "OS", and when a new application comes along they are put into the VDC OS and they inherit everything, not just HA but BC, DR and all of the ease of use. If they don't want BC or DR, they don't pay into that bucket

      Yeah, but what about TR, WD, RF, and GH? Not to mention NR, SS, and BD? How could they leave those out - I mean WTF?

      • Re: (Score:2, Informative)

        by Ralish ( 775196 )

        To clear up the acronym soup a little:

        HA = High Availability
        Technology that aims to ensure (high) availability of virtual machines across a virtualised cluster through intelligent monitoring of VM's and cluster resources.
        http://www.vmware.com/products/vi/vc/ha.html [vmware.com]

        DR = Distributed Resource Scheduler (I assume that's what parent meant)
        Provides much more advanced and fine-grained control of the available resources in a virtualised cluster.
        http://www.vmware.com/products/vi/vc/drs.html [vmware.com]

        BC = Consolidated Backup (

  • Remember. If this works, NO ONE will prefer to keep their own data, their own apps and programs... everyone will ENJOY going back to the days of just using a dumb terminal. Speaking as probably one of the youngest people to have had to use a green-lined plaintext terminal from a remote location back when we were moving and my dad had to keep the home computer up... I think I'll stick with having my services local. Nothing worse than not being able to play nethack because the internet wire is down...
  • by infomodity ( 1368149 ) on Saturday September 20, 2008 @11:10AM (#25084997)
    We have IEEE and RFC for standardization of ethernet/switching and routing respectively. What standards exist for virtual environments? As commercial security vendors move into this space, we're headed back into a cycle of supporting multiple architectures. "Security Vendor X" must now understand how VMWare, Hyper-V, Xen, and other VM environments perform their networking. Virtualization of the entire OSI model renders the physical and data link layers obsolete. Why emulate them at that point? Not to say ethernet will disappear, but I can see a point where operating systems evolve branches that run in pure play virtual environments. Those offshoots will shed unnecessary things like MAC addresses as the VM vendors begin defining the new network standards themselves.
    • Technically, the bottom 3 rungs of the OSI ladder remain intact, because Virtual Machines use discrete MAC addresses and all machines are joined with VLANs. The firewall (usually at the border of the VLAN) will not notice if the VM moves from one host to another, because the MAC address stays the same, only the switch in between the VM hosts might notice the MAC has moved.

    • What would be the benefit of introducing a new model of networking?

      Here are some potential questions:

      1. How will virtual machines on different physical hosts communicate?
      2. How will virtual machines on a single physical host communicate using the new protocol?
      3. What can a new networking protocol do better than existing ones when it's only used on one host?
      4. How likely is it that anyone else is going to adopt VMware's (or Xen's, or whomever's) new protocol?
      5. How is it going to be adopted faster than IPv6 and DNSsec?
      6. What
  • Hmm OpenMosix (Score:3, Interesting)

    by Culture20 ( 968837 ) on Saturday September 20, 2008 @11:17AM (#25085021)
    The OpenMosix project closed earlier this year, and suddenly VMware has a way to run one "OS" over multiple computers. Hmmm...
    • Pity that they're not, isn't it?

      They're running multiple OS's... one per computer. They ACT as one because of the management infrastructure that's in place. Kind of like a mainframe.

      Don't get me wrong, they're doing some VERY interesting stuff with that, but an OpenMOSIX rip it ain't.

  • by Lictor ( 535015 ) on Saturday September 20, 2008 @11:24AM (#25085083)

    VM? LPAR? Parallel Sysplex? Haven't IBM mainframes been doing this since the '70s (okay, Parallel Sysplex has only been since the '90s)?

    No doubt a "cloud" of UNIX boxes is harder to marshall than a couple of zSeries though.

    • by image77 ( 304432 ) on Saturday September 20, 2008 @12:09PM (#25085405)

      Maybe, but IBM mainframes don't use cheap off the shelf components that you can pick up at the local Fry's. You can build a small VMware cluster with HA, DRS, etc for a few thousand bucks. How much is an IBM mainframe these days?

      Once you have that VMware cluster you can run your choice of 70+ operating systems and millions of apps on it. Can you run Exchange on a mainframe? Siebel? Your existing billing and accounting app?

      • Re: (Score:2, Insightful)

        by BASICman ( 799037 )

        Once you have that VMware cluster you can run your choice of 70+ operating systems and millions of apps on it. Can you run Exchange on a mainframe? Siebel? Your existing billing and accounting app?

        Well, you can run whatever runs on Linux on top of a mainframe. And if you're a Fortune 500 corporation, chances are your existing billing and accounting applications are *already* running on a mainframe. That is, after all, what the old girl is built for.

        • by image77 ( 304432 ) on Saturday September 20, 2008 @01:14PM (#25085911)

          Well, you can run whatever runs on Linux on top of a mainframe.

          Only if you recompile those apps to run on the special versions of Linux that run on mainframes. Let's see: I can recompile my app to run on some weird offshoot of Linux on expensive, proprietary hardware, or I can take it and "P2V" it onto VMware running whichever flavor of mainstream Linux I prefer? Oh, and I can P2V my Windows apps onto that same VMware cluster? And all that for a fraction of the price? Sold.

          Just to be clear I'm not saying that the mainframe has no place in the modern datacenter, I'm just saying that VMware is a better fit in many situations. (And it's certainly an order of magnitude cheaper.)

          And if you're a Fortune 500 corporation, chances are your existing billing and accounting applications are *already* running on a mainframe. That is, after all, what the old girl is built for.

          Not sure where the F500 argument came from, but since 486 out of those 500 already use VMware I think they're already sold. (All 100 of the F100, BTW.) http://www.vmware.com/customers/ [vmware.com]

          In any case, my original point remains. Mainframes are expensive and proprietary whereas VMware is cheap and offers the flexibility to run whatever app on whatever OS you choose. This new VDC-OS stuff just builds on an already good thing. We'll be happy to renew our ELA when it comes up next year.

        • I just migrated a few very important clusters from HP/9000 to Intel Linux, because the HP hardware was seriously out of date and the Intel platform (DL380) provided nearly the same fault-tolerance and seriously more horsepower for 1/10th of the price.
          (Keep in mind, this was done because the App itself tended to provide 99% uptime, so moving from 99.99% hardware to 99.9% hardware goes unnoticed)

      • by RedK ( 112790 )
        If you run a Datacenter off of Hardware you bought at Fry's, I don't want to be near it when it blows up. x86 hardware isn't all cheap, especially if you're thinking of a solid storage solution. Think stuff like HP XP arrays. Disks are the most fragile things, we swap at least a few per week where I work, there's no way we're running 1 SATA drive off the local controller for anything.
        • Re: (Score:2, Insightful)

          by image77 ( 304432 )

          You're missing the point. No matter how you slice it, the x86 stuff (even the high-end x86 stuff) is WAY cheaper than an IBM mainframe, and if I need some memory or a CPU or something I can find it practically anywhere. That was my only point, and IMHO it's one that really can't be argued.

          As for the point that I think you were trying to make - of course architecting for redundancy is important. VMware makes that easy too. Even if one of the cheap nodes in my VMware cluster unexpectedly melts down the VMs wil

          • by image77 ( 304432 )

            Actually I will say one more thing.

            To VMware: (If you're reading this.) Not everyone lives on the West Coast, and I for one have no desire to go back to San Fran for next year's VMworld. Last year San Fran SUCKED. The venue was too small, the food horrible, the party lame. The sessions (the ones that I could get into) were good, but the lines to get in were really frustrating. Vegas this year was 1,000,000 percent better. (Well, the Vegas party was also pretty lame but everything else was great.)

            Even though V

        • If you run a Datacenter off of Hardware you bought at Fry's, I don't want to be near it when it blows up. x86 hardware isn't all cheap, especially if you're thinking of a solid storage solution. Think stuff like HP XP arrays. Disks are the most fragile things, we swap at least a few per week where I work, there's no way we're running 1 SATA drive off the local controller for anything.

          This is a fair comment, but in fairness to the GP, I think he was just reaching to make a point.

          The disparity between the cost of an IBM solution as opposed to a VMware solution for similar needs is quite large.

          Even you bring up the disks. Yes, they're fragile... but nothing a decent array can't fix. We've fired up virtual machines on stand-alone hardware with a nice array controller and some SAS drives... and we've set them up on big, meaty SANs. The cost/performance and reliability ratio for even a decent

    • But people in IT rarely read up on their own history, so they think everything they haven't seen before is cutting-edge tech.

    • Re: (Score:3, Interesting)

      by Comatose51 ( 687974 )
      VMware isn't claiming these ideas are new. IBM and computer science departments around the world have been talking about these ideas for many years. The difference is that VMware has an implementation that works on x86 hardware and can bring the benefits of these ideas to a large market. In some sense we've come full circle, as we moved from mainframes and room-sized computers to PCs and commodity hardware, and now back to computers in a datacenter (a very big room). However, you can't just dismiss the
  • I fail to see how this "solves the parallel programming problem". If you have a monster server, latency is low and bandwidth is high for processes running on it and communicating with one another, whether they are running in a VM or not. If you have the same server running *nix with all the programs running, the performance can't really be worse than if you use it to host this OS. It would just be harder to maintain. The only useful feature I saw from the article is that it seems to be able to checkpoint guest O

  • As a dinosaur who started cranking code more than 40 years ago, I've been out of touch with things like virtualization for some time. The last word on virtualization in the mainframe world in the '70s and '80s was IBM's VM series of virtual machine operating environments: CP/67, VM/370, VM/SP, etc., coupled with CMS, the Conversational Monitor System OS for each virtual machine. These were spectacularly useful across a wide spectrum of user profiles. In concept, how do current virtualization strategies dif
    • Indeed, on an IBM mainframe you could run any number of VMs of various flavors, and they were all under the control of CMS, and life was very, very good indeed, but those days are ....

      Ohh yeah, they are still here! VMware is just re-inventing a very well designed wheel that has been rolling for the last few decades, so what is the point?

      Is it just a reincarnation of the Not Invented Here syndrome, yet again?

    • Mainframes in the late '80s suddenly became big, nasty, old-fashioned systems and desperately untrendy. The PC and Unix boxes suddenly became the system du jour, and all the supposed hot new talent went in that direction. Unfortunately, not being very good at reading history, they had zero clue as to what mainframes actually got up to, and so it's taken them this long to effectively re-invent the wheel. So endeth this tale.

  • VMWare is neat and has its uses. As a developer, I've found it quite useful for OS development and testing. In the data centre it too can have its uses, but it also has its limitations. That's one of the reasons why our IT department is exploring the Trigence solution--application virtualization. It gives them better performance, easier migration of apps to newer OS versions and lower costs (hardware and fewer OS images to maintain).

  • This isn't new; Amazon's been doing it for a long time.
  • by joib ( 70841 ) on Saturday September 20, 2008 @01:05PM (#25085823)

    And after a few years when Microsoft follows VMWare, we'll have Microsoft DataCenter OS, abbreviated MS-DOS.

  • by Natales ( 182136 ) on Saturday September 20, 2008 @01:53PM (#25086213)

    Disclaimer: I work for VMware, and I just came back from VMworld in Vegas (exhausted BTW).

    In all my 5 years in the company, I must say that this is the most comprehensive re-thinking of the long-term strategy for virtualization I've seen to date. It brings a new sense of direction that matches where the markets are going.

    I agree with most of the comments in this thread regarding the benefits of the VDC-OS, but this is just one part of the picture. IMHO, the biggest change is the "Federation with the Cloud" strategy, where a company may choose to use, move or spawn new or existing workloads directly into a service provider on-demand, maintaining the SLAs (from security to capacity), and then bring them back to the internal cloud if needed.

    I mean, go and talk to a CFO or a COO, and they'll [most of the time] politely complain about IT being expensive, and not fast enough to react to the changes the company needs. Shared services are still seen as optional and many business units still prefer to implement their own thing. With this model, IT becomes a true utility, with a pay-as-you-go menu that implements a coherent chargeback model that will bring a smile to the guys in dark suits.

    Even if VMware doesn't succeed in these efforts, the genie is out of the bottle and somebody else will make it happen.

    Really interesting times to be in IT.

    • Someone gives you a binary executable file to run. It's a two-thread program compiled and linked for, say, an IBM 360 or maybe a MIPS R6000; it doesn't matter. You say "fine", and submit it to your virtual cloudWare, and one thread executes in Greenland, and the other in Malaysia.

      When you're ready for that, it's the real deal.

    • by Jay L ( 74152 ) *

      Here's the thing I don't get:

      For decades, we've seen the promise of "location agnostic" resources. RPC, CORBA, middleware, etc., etc. were all supposed to provide you with a unified way to Do Things, whether you were Doing Things on the same machine or in a different data center.

      No, none of them were as seamless as VDC. But they didn't fail because they were clunky; they failed because they were too slow. For every "abstract out the data repository" groundswell, we've countered it with "stored procedures r


    • I mean, go and talk to a CFO or a COO, and they'll [most of the time] politely complain about IT being expensive... IT becomes a true utility, with a pay-as-you-go menu that implements a coherent chargeback model that will bring a smile to the guys in dark suits.

      Really interesting times to be in IT.


      Or not.

      Everything you've just written indicates that the chores which are currently being performed by 10 IT dudes might, in the near- to mid-term future, be accomplished by a single IT dude [who himself m
  • Sounds like something Trigence [trigence.com] already does. No need for OS-level virtualization in which you need to allocate tonnes of memory for an entire OS. Just allocate what the app needs. It encapsulates servers/services and the entire filesystem supporting them, on both Windows and Linux. Their online demo is really, really well done. We looked at this product not too long ago because we were sick and tired of having our machines thrash under so many VMs that need X amount of resources (memory, disk space) just s
    • Great... maybe. I just took a look at their website and found a lot of shit written by sales and marketing; I just don't have the patience to try to understand what they are babbling about.

      And, of course the obligatory photos of models pretending to be employees, happy customers, or drunken vagrants; who the fuck knows.

      And why do the marketeers that they hire to advise them on their "online presence" insist on that shit?

      Does anyone here get a boner when they see those fucking pictures of happy co

  • I always wondered how you pronounced 'virtual datacenter OS.'

    Now, I wonder if they'll ever announce this as a product.

  • I know Cisco has been trying to flog their VFrame (http://www.cisco.com/en/US/products/ps8463/index.html) technology, which sounds very similar to this. Funny thing is, VFrame supports VMware itself, so I'm not sure how that relationship is going to continue.
