Virtualization Is Not All Roses

An anonymous reader writes "Vendors and magazines are all over virtualization like a rash, as if it were the Saviour for IT-kind. Not always, writes analyst Andi Mann in Computerworld." I've found that when it works, it's really cool, but it does add a layer of complexity that wasn't there before. Then again, having a disk image be a 'machine' is amazingly useful sometimes.
  • Yawn (Score:5, Insightful)

    by dreamchaser ( 49529 ) on Friday March 09, 2007 @12:45PM (#18291078) Homepage Journal
    This is the exact same pattern that almost every computing technology follows. First the lemmings all rush to sound smart by touting its benefits. Soon it is the be-all and end-all in "everyone's" mind. Then the honeymoon fades, people realise it's just a useful tool, and they toss it into the chest with all the other useful tools, to be used where it makes sense.
    • Re:Yawn (Score:5, Informative)

      by WinterSolstice ( 223271 ) on Friday March 09, 2007 @12:48PM (#18291134)
      Yes - we have quite a bit that we just put in here at my shop.

      Virtualization good: Webservers, middle tier stuff, etc.
      Virtualization bad: DBs, memory intensive, CPU intensive.

      Biggest issue? "Surprise" systems. You might see a system with a "reasonable" load average, then find out once it's on a VM that it was a horrible candidate because it has huge memory, disk, CPU, or network spikes. VMware especially seems to hate disk spikes.

      What we learned is that it's not the average so much as the high-water marks that really matter. A system that's quiet 99.99% of the time but spikes to 100% for 60 seconds here and there can be nasty.
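
      A rough way to catch those spikes before you commit (a sketch in Python using the third-party psutil package; the one-hour window and the 90% threshold are arbitrary, pick your own):

          import psutil  # third-party sampling wrapper around /proc

          # Sample overall CPU once a second for an hour. Run it across a
          # full business day (and the nightly batch window) to catch the
          # spikes that a 1/5/15-minute load average smooths away.
          samples = [psutil.cpu_percent(interval=1) for _ in range(3600)]

          avg, peak = sum(samples) / len(samples), max(samples)
          print("average: %.1f%%  high-water mark: %.1f%%" % (avg, peak))
          if peak > 90:
              print("spiky workload - a poor consolidation candidate")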
      • Re: (Score:2, Funny)

        by Anonymous Coward
        re your sig:

        An operating system should be like a light switch... simple, effective, easy to use, and designed for everyone.

        Did you know that in the US, light switches are traditionally installed with "up" being "on", while in England they are traditionally installed with "down" being "on"?

        Perhaps instead operating systems should be like nipples, everyone is born knowing how to use them, and they don't operate differently in different countries ;)
      • Re:Yawn (Score:5, Insightful)

        by vanyel ( 28049 ) * on Friday March 09, 2007 @01:05PM (#18291396) Journal
        Virtualization good: Webservers, middle tier stuff, etc.
        Virtualization bad: DBs, memory intensive, CPU intensive.


        We're starting to do the same. The article basically says "managing them is more complex, and you can overload the host". Well, duh! They're no harder to manage (or not much) than the same number of physical machines, and it's a lot easier (and cheaper!) to create new ones. And you don't virtualize a machine that's already using 50% of a real system. Or even 25%. Most of ours sit at 1%, though. Modern processors are way overkill for most of what they're being used for.
        • Re:Yawn (Score:5, Informative)

          by WinterSolstice ( 223271 ) on Friday March 09, 2007 @01:15PM (#18291568)
          "Modern processors are way overkill for most things they're being used for."

          Right - except like I said - watch those spikes. We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual. Great idea, right?

          Except that once a day it runs a report that seems fairly harmless but causes the filesystem to go read-only, due to a VMware bug. The report lasts only about two minutes, but it hammers the disk in apparently just the right way.

          It's the spikes you have to be careful of. Just look for your high-water marks. If the box spikes to 90% or 100% (even though the load average doesn't reflect it), it will have some issues.
          • Re:Yawn (Score:5, Informative)

            by cbreaker ( 561297 ) on Friday March 09, 2007 @01:29PM (#18291800) Journal
            Your bug comment is kinda moot - it's not a normal problem with virtualization.

            We have over 120 VMs running on seven hosts with VI3. Most of them, as you can imagine, are not high-workload (although we do have four Terminal Servers handling about 300 terminals total), but sometimes they are, and we've really not had any issues.

            It depends on what you're doing, really. Saying you WILL have problems in any situation isn't really valid.
            • Re: (Score:3, Informative)

              That's really awesome, and obviously your systems are a great use for VMs :D

              Our web stuff virtualized *beautifully*. We had few to no issues, but we ran into major problems when mgmt wanted to virtualize several of the other systems.

              And since when is a warning about an unfixed bug moot? It's an *unfixed* bug in ESX Server Update 3. When it's patched in the standard distribution, then it will be moot.

              VMs are still quite a new science (as opposed to LPARs) so there are lots of bugs still out there.
              • Re: (Score:3, Insightful)

                by cbreaker ( 561297 )
                It's moot because it's a specific issue with a specific hypervisor, not a problem inherent with virtualization itself.

                There are some things I won't virtualize right now: two of our very busy database servers and the two main file servers. Part of it is for the performance of those systems, but part of it is also because I don't want those systems to chew up a high percentage of the overall virtual cluster.

                I don't think x86 VM's are a new science anymore. They're just somewhat new to the Enterpris
          • by vanyel ( 28049 ) *
            Was this on the guest fs or the host fs? One of the things making us move slowly is that we've seen this a couple of times on the host fs, and we're not sure why. Once it happened while pre-allocating a virtual disk for a new system; we've been thinking we were triggering an obscure Linux fs bug...
              Our case was guest FS on ESX Server Update 3 (though we were able to mostly fix it by moving the SAN drivers to Update 2).
          • Re:Yawn (Score:5, Informative)

            by giminy ( 94188 ) on Friday March 09, 2007 @02:16PM (#18292472) Homepage Journal
            We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual.

             Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1, 5, and 15 minutes. Programs doing exclusively I/O sit on the sleep queue while the kernel does the I/O work, giving you a load average of near zero even though your machine is busy scrambling for files on disk or waiting for network data. Likewise, a program that consists entirely of NOOPs will give you a load average of one (+1 for each additional instance) even if its nice value is all the way up and it is quite interruptible/is really putting zero strain on your system.

            Before deciding that a machine is virtualizable, don't just look at load average. Run a real monitoring utility and look at iowait times, etc.
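
             For example, something like this (a Linux-only sketch reading the raw /proc counters; not a substitute for a real monitoring utility):

                 import time

                 def cpu_times():
                     # /proc/stat line 1: cpu user nice system idle iowait irq softirq ...
                     with open("/proc/stat") as f:
                         return [int(n) for n in f.readline().split()[1:]]

                 before = cpu_times()
                 time.sleep(5)
                 delta = [b - a for a, b in zip(before, cpu_times())]
                 iowait_pct = 100.0 * delta[4] / sum(delta)

                 with open("/proc/loadavg") as f:
                     load1 = f.read().split()[0]

                 # Near-zero load average plus high iowait = a box that looks
                 # idle but is hammering its disks; a bad VM candidate.
                 print("load(1m): %s  iowait: %.1f%%" % (load1, iowait_pct))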

            Reid
            • Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1,5,15 minutes.

              This may be wrong, but I've always looked at load as the number of processes waiting for resources (usually disk, CPU, or network).

              I've seen boxes with issues that had a number of processes stuck in the nonkillable D (disk) wait state; they had no real impact on the system besides artificially running the load up.
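
              (On Linux you can watch exactly this by scanning /proc - and note that Linux, unlike the textbook definition, does count those D-state tasks in its load average, which is why they run it up. A quick sketch:)

                  import os

                  # Tally process states ("R" runnable, "S" sleeping,
                  # "D" uninterruptible disk wait, ...) from /proc/<pid>/stat.
                  states = {}
                  for pid in filter(str.isdigit, os.listdir("/proc")):
                      try:
                          with open("/proc/%s/stat" % pid) as f:
                              # state is the first field after the ")" that
                              # closes the (possibly space-containing) comm
                              s = f.read().rsplit(")", 1)[1].split()[0]
                      except (IOError, OSError):
                          continue  # process exited while we were scanning
                      states[s] = states.get(s, 0) + 1
                  print(states)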

              I've also see
              • Re: (Score:3, Insightful)

                by giminy ( 94188 )
                This may be wrong, but I've always looked at load as the number of processes waiting for resources (usually disk, CPU, or network).

                 Yep, it's wrong. Load average is defined as the number of processes that are runnable. Processes waiting for resources (at least, resources requested via system calls, like trying to talk to the disk or network) are put on the WAIT queue and are not runnable. Thus, they do not contribute to load. Processes that have all the data they need and just need a CPU slice contribute to l
          • Re: (Score:2, Informative)

            by T-Ranger ( 10520 )
            Well, disks may not be a great example. VMware is, of course, a product of EMC, which makes (drumroll) high-end SAN hardware and software management tools. While I'm not quite saying that there is a clear conflict of interest here, the EMC big picture is clear: "now that you have saved a metric shitload of cash on server hardware, spend some of it on a shiny new SAN system". The nicer way of putting that is that both EMC SANs and VMware do the same thing: consolidation of hardware onto better hardware, abstraction
      • Re: (Score:2, Interesting)

        by dthable ( 163749 )
        I could also see their use when upgrading or patching machines. Just take a copy of the virtual image and try to execute the upgrade (after testing, of course). If it all goes to hell, just flip the switch back. Then you can take hours trying to figure out what went wrong instead of being under the gun.
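
        Something like the following is the whole flip-the-switch workflow (a sketch only: it assumes VMware's vmrun utility and a hypothetical .vmx path, and snapshot subcommands vary by product and version, so check yours):

            import subprocess

            VMX = "/vms/appserver/appserver.vmx"  # hypothetical path

            def vmrun(*args):
                # vmrun ships with several VMware products; not every
                # edition supports the snapshot subcommands.
                subprocess.check_call(("vmrun",) + args)

            vmrun("snapshot", VMX, "pre-upgrade")   # safety net first
            # ... apply the upgrade inside the guest and test it ...
            # If it all goes to hell, flip the switch back:
            vmrun("stop", VMX)
            vmrun("revertToSnapshot", VMX, "pre-upgrade")
            vmrun("start", VMX)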
        • Very good point - and one I personally enjoy. Especially good when building a "Reference" system before imaging it out to other servers. Being able to clone 30 web boxes in minutes off a virtual is SO nice :D
      • Mind if I quote you to our server support units?

        We're about to migrate our 500+ server farm (webservers, Exchange and databases) to VMs and I can't seem to get them to understand that not everything can work within a VM.

        • Hehehe - please do :D

          "This dude on this web forum said DBs suck on VMs"

          Let me know how that works for you
        • by afidel ( 530433 )
          Exchange, database, and busy AD controllers (all forms of database) are the worst candidates for current VM solutions due to the heavy I/O penalty. Besides, most of those systems are busy a good percentage of the time and so are already poor candidates for VMs.
      • by bberens ( 965711 )
        I've found it to be an amazing tool for development and testing. We use free VMWare at work for this sort of thing all the time. It's really a dream and has saved us a ton of cash on hardware.
      • Re: (Score:2, Interesting)

        Virtualization good: Webservers, middle tier stuff, etc.

        Virtualization *insanely* good: development !

        It simply changed my programming life entirely. How else could I keep machines with every flavor and version of Linux that I work on, bootable in seconds? How could I have a (virtual) LAN with a dozen machines communicating with each other when developing a failover/balanced service? How could I multiply the number of machines with a cut'n'paste operation? How do I roll back a damaging crash or a faulty ope
      Well, VMware has issues with I/O latency. One has to watch for that, not try to virtualize everything. But you say "Virtualization bad" for "CPU intensive," and I cannot agree with that. SPECint2006 and SPECfp2006, as well as the rate versions, are within 5% of bare metal on ESX. I've run the tests myself. Old-school "CPU intensive" applications are a non-conversation in virtualization today.

        It's the network I/O and network latency that will kill you if you don't know what you're doing. VMware has known issues in th
    • No one who understands the technology believes that virtualization can perform all the miracles that the marketing people claim it can.

      Unfortunately, management usually falls for the marketing materials while ignoring the technologists' cautions.

      Remember, if they've never tried it before, you can promise them anything to make the first sale. Once you've sold it, it becomes a tech support issue.
      • Re: (Score:3, Insightful)

        by LinuxDon ( 925232 )
        While I completely agree with you on this in many other areas, I don't in this case.
        The reason is that virtualization -simplifies- tech support in every way (except for real-time applications).

        Load problems, especially in a virtualized environment, are extremely easy to manage technically.
        You can just add servers and move the virtual machine to the new hardware while it's running.

        It's the management who will be having a budget problem when this happens, while tech support is not having a technical problem.
        • by suggsjc ( 726146 )

          It's the management who will be having a budget problem when this happens, while tech support is not having a technical problem.
          Yeah, and we know who'll win those head-to-head battles.
    • Re:Yawn (Score:4, Informative)

      by Anonymous Coward on Friday March 09, 2007 @12:58PM (#18291296)
      First time I've ever posted anon...

      A vendor just convinced management to move all of our webhosting over to a Xen virtualized environment (we're a development firm that hosts our clients) a few weeks before I hired in. No one here understands how it's configured or how it works, and this is the first implementation this vendor has performed, but management believes they walk on water. No other tech shop in the area has even the slightest bit of expertise with it. So guess what? Come hell or high water, we can't afford to drop these guys, no matter how badly they might screw up.

      Whoever claims that open source is the panacea for vendor lock-in is smoking crack. Open source gives companies enough "free" rope to hang themselves with if it isn't implemented smartly. Virtualization is no different.
    • This is the exact same pattern that almost every computing technology follows.

      For the most part, I agree. The main difference, as I see it, is that hardware-assisted virtualization hit at the same time as several other trends, and it has been applied in ways that are shaking up some long-standing problems and roadblocks. When virtualization was being touted as the next great thing, people were thinking of it for flexible servers, and Sun and Amazon and other players have brought that to market; it is nice and convenient and cheap, but not the solution to all our problems. W

    • It is called the hype cycle, as popularized by Gartner. See: http://en.wikipedia.org/wiki/Hype_cycle [wikipedia.org]
  • Is this for real? (Score:5, Insightful)

    by Marton ( 24416 ) on Friday March 09, 2007 @12:45PM (#18291082)
    One of the most uninformative articles ever to hit Slashdot.

    "Oh, so now more apps will be competing for that single HW NIC?" Wow. Computerworld, insightful as ever.
  • Waste of time... (Score:3, Insightful)

    by evilviper ( 135110 ) on Friday March 09, 2007 @12:46PM (#18291100) Journal
    I want those 2 minutes of my life back...
  • This just in... (Score:2, Insightful)

    by Anonymous Coward
    Really Cool Thing can have drawbacks. Popular computer technology shown not to be silver bullet. Film at 11.
  • by Anonymous Coward on Friday March 09, 2007 @12:49PM (#18291144)
    I've found that VMware is incredibly useful for testing network-booting (PXE) systems. I rolled my own custom Damn Small Linux for PXE booting on our thin-client workstations, and VMware was great for testing purposes. Everybody loves DSL too: they can listen to streaming audio and MP3s while they work, since I included mplayer and Flash in Firefox, and we use NX and FreeNX to connect to our terminal server.
  • Virtualization (Score:5, Interesting)

    by DesertBlade ( 741219 ) on Friday March 09, 2007 @12:51PM (#18291170)
    Good story, but I disagree in some areas.

    Bandwidth concerns: you can install more than one NIC in the server and dedicate one to each virtual machine.

    Downtime: if you need to do maintenance on the host, that may be a slight issue, but I hardly ever have to do anything to the host. Also, if the host is dying, you can shut down the virtual machine and copy it to another server (or move the drive) and bring it up fairly quickly. You also get clustering capability with virtualization.
    • by EvanED ( 569694 )
      Also, if the host is dying, you can shut down the virtual machine and copy it to another server (or move the drive) and bring it up fairly quickly. You also get clustering capability with virtualization.

      Enterprise VM solutions allow you to migrate with essentially no (< 1 sec) downtime.
    • Re: (Score:3, Insightful)

      by drinkypoo ( 153816 )

      Bandwidth concerns. You can have more than one NIC installed on the server and have it dedicated to each virtual machine.

      or, of course, you can use a faster network connection to the host, simplifying cabling. it might not be cost-effective to even go to GigE for many people at this point with one system per wire. For a lot of things it's hard to max that out, obviously fileserving and the like is not such an application, but those of you who have been there know what I mean. But if you're looking at mult

    • Re: (Score:3, Insightful)

      by jallen02 ( 124384 )
      I would say that every single one of the points in the article is being addressed in the enterprise VM arena. In the end, due to the raw extra control you get over virtual machines, it very much is the future. There is very little memory overhead. Once virtual infrastructure becomes fully developed and the scene plays out completely, I think it will actually make the things in the article easier, not harder. You have to pace yourself in how and where you use virtualization in your organization, but the bene
      • by julesh ( 229690 )
        Say you have 4 VMs of Windows 2003 running. All of the pages of memory that are the same (say, core kernel pages and the like) get mapped to the same physical page. The guest operating systems never know. You can effectively scoop up a lot of extra memory if you have a lot of systems running the same software. All of those common libraries and Windows/Linux processes are only paid for once in memory.

        Does this actually work? I know it's theoretically possible, but I understand that in practice it's quite ha
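
         The mechanism itself is easy to model (a toy illustration only: hash fixed-size pages and keep one copy per unique hash; real hypervisors do this lazily with copy-on-write, and ESX's documented version is called "transparent page sharing"):

             import hashlib

             PAGE = 4096  # x86 page size in bytes

             def shared_pages(guests):
                 # guests: list of bytes objects standing in for each
                 # guest's RAM. Returns (total pages, pages after dedup).
                 seen = set()
                 total = 0
                 for mem in guests:
                     for off in range(0, len(mem), PAGE):
                         seen.add(hashlib.sha1(mem[off:off + PAGE]).digest())
                         total += 1
                 return total, len(seen)

             # Four toy guests sharing a 1 MB "kernel" (256 distinct pages)
             # but each carrying 64 pages of its own filler:
             kernel = b"".join(p.to_bytes(2, "big") * (PAGE // 2)
                               for p in range(256))
             guests = [kernel + bytes([i]) * (64 * PAGE) for i in range(4)]
             print("pages: %d -> %d after sharing" % shared_pages(guests))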
    • Sometimes it's hard to wrap the mind around new concepts. It's hard to break out of the mindset that a server consists of hardware running an operating system upon which some software services are operating. If that entire server concept -- hardware, OS, software -- is bundled up into one software image that can be running on any piece of hardware on the network, then we have to re-imagine what "downtime" means, or what our hardware requirements are going to be. The ability to zip entire "servers" around
  • by pyite69 ( 463042 ) on Friday March 09, 2007 @12:52PM (#18291188)
    It is great for replacing things like DNS servers that are mostly CPU-bound. However, don't try running two busy database machines on the same disk - you can't divide disk up nearly as well as you can CPU or bandwidth.

    Also, make sure to try OpenVZ before you try Xen. If you are virtualizing all Linux machines, then VZ is IMO a better choice.
    • by julesh ( 229690 )
      Also, make sure to try OpenVZ before you try Xen. If you are virtualizing all Linux machines, then VZ is IMO a better choice.

      I've not used OpenVZ (or Virtuozzo), but I've spent a while using an alternative system based on the same principles (OpenVSD), and I have to say the approach is not without its disadvantages, particularly software that is incompatible because there is no real root account available. It also doesn't isolate your virtual servers' memory requirements from each othe
  • by philo_enyce ( 792695 ) on Friday March 09, 2007 @12:52PM (#18291194)
    to sum up tfa: poor planning and execution are the cause of problems.

    how about an article that makes some recommendations on how to mitigate the problems they identify with virtualization, or points out some non-obvious issues?

    philo

    • so say we all
    • Re: (Score:3, Funny)

      by ndansmith ( 582590 )

      how about an article that makes some recommendations on how to mitigate the problems they identify with virtualization, or point out some non obvious issues?
      Have it on my desk Monday morning.

    • by julesh ( 229690 )
      to sum up tfa: poor planning and execution are the cause of problems.

      You missed one: proprietary software licenses cause legal difficulties sometimes, too.
  • by QuantumRiff ( 120817 ) on Friday March 09, 2007 @12:52PM (#18291196)
    If your servers become toast, for whatever reason, you can take a simple workstation, put a ton of RAM in it, and load up your virtual systems. Of course they will be slower, but they will still be running. We don't need to carry expensive four-hour service contracts, just next-business-day contracts, saving a ton of money. The nice thing for me with virtual servers is that they're device-agnostic, so if I have to recover, worst case, I have only one server's NIC drivers, RAID settings/drivers, etc. to worry about. After that, it's just loading up the virtual server files.
  • excess power (Score:4, Insightful)

    by fermion ( 181285 ) on Friday March 09, 2007 @12:53PM (#18291220) Homepage Journal
    I see virtualization as a means to use the excess cycles in modern microprocessors. Like overaggressive GUIs and DRM, it creates a need for ever more expensive and complex processors. I am continuously amazed that while I can run most everything I have on a sub-GHz machine, everyone is clamoring about the need for 3 and 4 GHz machines. And though my main machine runs at over a GHz, it still falters at decoding DRM-compressed video, even though a DVD plays fine on my 500 MHz machine.

    But it still is useful. Like terminals hooked up to big mainframes, it may make sense to run multiple virtual machines off a single server, or even have the same OS run for the same user in different spaces on a single machine. We have been heading to this point for a while, and now that we have the power, it makes little sense not to use it.

    The next thing I am waiting for is very cheap machines, say $150, with no moving parts, just a network connection, that will link to a remote server.

    • Re: (Score:3, Informative)

      by Gr33nNight ( 679837 )
      Here you go: http://www.wyse.com/products/winterm/ [wyse.com] We're buying those next year instead of desktops.
    • by julesh ( 229690 )
      a DVD plays fine on my 500 MHZ machine.

      Only just. My PII-400 cannot play DVDs properly; the audio quickly drifts out of sync with the video (which I'm informed is a symptom of a too-slow processor, even though it sounds like a bug in the playback software).
  • by Anonymous Coward on Friday March 09, 2007 @12:56PM (#18291262)
    The only place it has not been appropriate is anywhere requiring high amounts of disk I/O. It has been a godsend everywhere else: all of our web servers, application servers, support servers, management servers, blah blah blah. It's all virtual now. Approximately 175 servers are virtual; the rest are huge SQL Server/Oracle systems.

    License controls are fine; all the major players support flexible VM licensing. The only people who bark about change control are those who simply don't understand virtual infrastructure, and a good sit-down solved that issue. "Compliance" has not been an issue for us at all. As far as politics are concerned: if they can't keep up with the future, they should get out of IT.

    FYI: We run VMware ESX on HP hardware (DL585 servers) connected to an EMC Clariion SAN.
  • by caseih ( 160668 ) on Friday March 09, 2007 @12:57PM (#18291270)
    There's nothing wrong with the technology as such. All of the problems mentioned in the article are not inherent to virtualization, nor are they flaws in the technology. Virtualization just requires some basic planning. What is the average disk utilization (disk bandwidth) of a server you want to virtualize? What about CPU? How about network bandwidth? You need to know this before you start throwing stuff into a VM. VMWare and Xen both allow you to take advantage of multiple hardware NICs in the host, multiple processing units, and also multiple physical disks and buses. Of course running multiple VMs on one host will have to share bandwidth and server throughput. The article is stating the obvious but making it sound like virtualization has an inherent fatal flaw and thus will fall out of favor, which makes the article rather lame.
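
    That baseline is cheap to collect - a sketch using the third-party psutil package (a one-minute window here, but you'd want readings across at least a full day, spikes included):

        import psutil  # third-party

        d0, n0 = psutil.disk_io_counters(), psutil.net_io_counters()
        cpu = psutil.cpu_percent(interval=60)  # blocks for the window
        d1, n1 = psutil.disk_io_counters(), psutil.net_io_counters()

        disk_mbs = (d1.read_bytes - d0.read_bytes +
                    d1.write_bytes - d0.write_bytes) / 60 / 1e6
        net_mbs = (n1.bytes_sent - n0.bytes_sent +
                   n1.bytes_recv - n0.bytes_recv) / 60 / 1e6

        # The three numbers you need before throwing a server into a VM:
        # CPU utilization, disk bandwidth, network bandwidth.
        print("cpu %.1f%%  disk %.2f MB/s  net %.2f MB/s"
              % (cpu, disk_mbs, net_mbs))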
  • Home Use (Score:2, Insightful)

    by 7bit ( 1031746 )
    I find Virtualization to be great for home use.

    It's safer to browse the web through a VM that is set to not allow access to your main HDs or partitions. Great for any internet activity, really, like P2P or running your own server; if it gets hacked, they still can't affect the rest of your system or data outside the VM's domain. It's also much safer to try out new and untested software from within a VM, in case of virus or spyware infection, or just registry corruption or what have you. It can also be usef
  • From the article:

    Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running.

    You don't just move twenty critical servers onto one slightly bigger machine. You need to follow the same redundancy rules you would follow with multiple physical servers.

    Unless you are running a test bed or dealing with less critical servers, where you can use old equipment, you get a pair (at least) of nice, beefy enterprise servers with redundant everything and split the VMs among them. And with a nice SAN between them, you can move the VMs between the servers when needed.

    Even better

  • God damn, that was so not worth the RTFA. I have Adblock Plus running and there were still more crap panes than individual characters in the article proper. I'll think twice before venturing to craputerworld next time. "From the no-shit-Sherlock dept." would be more appropriate. That article, besides being a waste of time, was so junior-admin.

    Most admins have already figured out that: 1) you don't put all your "eggs" into one virtual "basket", 2) you spread the virts across multiple NICs and keep the global (or mas
  • Hype Common Sense (Score:3, Interesting)

    by micromuncher ( 171881 ) on Friday March 09, 2007 @01:14PM (#18291556) Homepage
    The article mentions a point of common sense that I fought tooth 'n nail about and lost in the Big Company I'm at now.

    For a year I fought against virtualizing our sandbox servers because of resource contention issues: one machine pretending to be many, with one NIC and one router. We had a web app that pounded a database... pre-virtualization it was zippy; post-virtualization it was unusable. I explained that even though you can tune virtualized servers, it happens after the fact, and it becomes a big active-management problem to make sure your IT department doesn't load up so many virtual servers that it affects everyone virtualized. They argued, well, you don't have a lot of use (a few users, and not a lot of resource utilization).

    My boss eventually gave in. The client went from zippy workability in an app under development to a slow piece of crap because of resource contention, and it's hard to explain that an IT change forced under the hood was the reason for SLOW. And in UAT, SLOW = BUSTED.

    That was a huge nail in the coffin for the project. When users can't use the app on demand, for whatever reason, they don't want to hear jack about tuning or saving rack space.

    So all you IT managers and people thinking you'll get big bonuses by virtualizing everything... consider this... ONE MACHINE, ONE NETWORK CARD, pretending to be many...

    • sounds like sour grapes and a piss-poor implementation to me. why didn't you just install more NICs if that was the problem, or more RAM or more CPUs if that was?
    • For a year I fought against virtualizing our sandbox servers because of resource contention issues.

      Sandbox, test, development: those are the environments that just SCREAM for virtualization. Obviously, your organization needs a lesson in virtual architecture. Sounds like you purchased your services from Andi Mann. Trust me, based on what I read in the article, the guy has no idea what he is doing.

  • by HockeyPuck ( 141947 ) on Friday March 09, 2007 @01:16PM (#18291580)
    Why is it that all of a sudden, whenever someone says "virtualization," they imply it must be the VMware/Xen/Windows/x86 platform?

    It's not like these issues haven't existed on other platforms: mainframes, minis (AS/400), Unix (AIX/Solaris/HP-UX); heck, we've had it on non-computer platforms (VLANs, anyone...).

    And yes, using partitions/LPARs on those platforms required *GASP* planning, but in the age of "click once to install the DB and build the website", aka instant gratification, we refuse to do any actual work prior to installing, downloading, deploying...

    How about a few articles comparing AIX/HP-UX/Solaris partitions to x86 solutions...
    • Sorry, but we aren't buying an AIX mainframe. Just not happening. We lack the budget, the trained people, etc. Never mind the fact that the cost of one mainframe exceeds the cost of all our real, physical servers.

      x86 virtualization is of interest to many people because they are already running lots of x86 boxes, and it offers the ability to simplify that and save money. For example, one small area where we use VMs is scanner servers. We have these copier/scanner jobs with crappy software; each one needs its
  • by wiredog ( 43288 ) on Friday March 09, 2007 @01:21PM (#18291658) Journal
    Is some of it crocuses? Or at least daffodils?

    Please tell me it's not daisies.

  • by LodCrappo ( 705968 ) on Friday March 09, 2007 @01:28PM (#18291790)
    Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running. "The entire environment becomes as critical as the most critical application running on it," Mann explains. "It is also more difficult to schedule downtime for maintenance, because you need to find a window that's acceptable for all workloads, so uptime requirements become much higher."

    No, no, no. First of all, in a real enterprise-type solution (something this author seems unfamiliar with), the entire environment is redundant. "The" server? You don't run anything on "the" server; you run it on a server, and you just move the virtual machine(s) to another server as needed when there is a problem or maintenance is due. It is actually very easy to deal with hardware failures: you don't ever have to schedule downtime, you just move the VMs, fix the broken node, and move on. For software maintenance you just snapshot the image, do your updates, and if they don't work out, you're back online in no time.

    In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.

    Uh... well, maybe you would just install more NICs? It seems the "expert" quoted in this article has played around with some workstation-level product and has no idea how enterprise-level solutions actually work.

    The only valid point I find in this whole article is the mention of additional training and support costs. These can be significant, but the flexibility and reliability of a virtualized environment are very often well worth the cost.

  • As long as we're on the subject, does anyone have any opinions about whether VMware or Windows Virtual Server is better and why? We're actually in the process of spec-ing out our first virtual server, as we speak, and we're having an argument over which one to use. Are there any other virtualization technologies we should be considering?
    • Are there any other virtualization technologies we should be considering?

      Yes. AIX.

      • by Dadoo ( 899435 )
        Yes. AIX.

        Sorry, I can't agree with you, there. We have a couple of AIX servers here (a pSeries 550 and a 6F1) and I can tell you, unless IBM gets their act together before we need to replace them, they will not be replaced with IBM servers. My experience with IBM is that, if you're not willing to spend $500,000 or more on a machine, they don't want to be bothered.
    • by jregel ( 39009 )
      I've used both VMware Server (the free one) and Windows Virtual Server, and they both do the same sort of virtualisation - on top of a host OS. In the case of VMware Server, I've installed it on top of a stripped-down Linux install and it's working pretty well. Obviously Windows Virtual Server requires a Windows OS underneath it, which has a bigger overhead.

      My personal preference is to use VMware Server as the product works incredibly well. That's not to say that Virtual Server doesn't work well, but it just fee
  • I've found that when they work, it's really cool, but they do add a layer of complexity that wasn't there before. Then again, having screws hold things together instead of nails is amazingly useful sometimes.

  • A nice buffer zone! (Score:2, Informative)

    by Gazzonyx ( 982402 )
    I've found that virtualization is a nice buffer zone against management decisions! Case in point: yesterday my boss (he has a comp sci degree - from 20 years ago...), who's just getting somewhat used to the Linux server I set up, decided that we should 'put /var in the /data directory tree'; I had folded once before, when he wanted to put /home in /data for backup reasons, and made it a symlink from /.

    Now when he gets these ideas, before just going and doing it on the production server, I can say "How about
