Virtualization Is Not All Roses 214

An anonymous reader writes "Vendors and magazines are all over virtualization like a rash, as if it were the saviour of IT-kind. Not always, writes analyst Andi Mann in Computerworld." I've found that when it works, it's really cool, but it adds a layer of complexity that wasn't there before. Then again, having a disk image be a 'machine' is amazingly useful sometimes.
This discussion has been archived. No new comments can be posted.
  • Re:Yawn (Score:5, Informative)

    by WinterSolstice ( 223271 ) on Friday March 09, 2007 @01:48PM (#18291134)
    Yes - we have quite a bit that we just put in here at my shop.

    Virtualization good: Webservers, middle tier stuff, etc.
    Virtualization bad: DBs, memory intensive, CPU intensive.

    Biggest issue? "Surprise" systems. You might see a system and notice a "reasonable" load average, then find out once it's on a VM that it was a really horrible candidate because it has huge memory, disk, CPU, or network spikes. VMware especially seems to hate disk spikes.

    What we learned is that it's not the average so much as the high-water marks that really matter. A system that's quiet 99.99% of the time but spikes to 100% for 60 seconds here and there can be nasty.
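    The high-water-mark point above can be sketched in a few lines. This is a minimal illustration, not VMware guidance; the function name and the threshold values are assumptions chosen for the example.

```python
# Hypothetical helper: decide whether a server is a reasonable virtualization
# candidate from sampled CPU utilization (percent, one sample per minute).
# Both thresholds are illustrative assumptions.
def vm_candidate(samples, avg_limit=20.0, peak_limit=80.0):
    avg = sum(samples) / len(samples)   # what most dashboards show
    peak = max(samples)                 # the high-water mark that actually bites
    return avg <= avg_limit and peak <= peak_limit

# A box that idles all day but spikes to 100% once fails the peak check,
# even though its average looks completely harmless.
quiet_but_spiky = [1.0] * 1439 + [100.0]
steady = [15.0] * 1440
```

    The point of checking `max()` alongside the mean is exactly the parent's: a 60-second daily spike disappears into a day-long average.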
  • by pyite69 ( 463042 ) on Friday March 09, 2007 @01:52PM (#18291188)
    It is great for replacing things like DNS servers that are mostly CPU. However, don't try running two busy database machines on the same disk - you can't divide it up nearly as well as CPU or bandwidth use.

    Also, make sure to try OpenVZ before you try Xen. If you are virtualizing all Linux machines, then VZ is IMO a better choice.
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday March 09, 2007 @01:53PM (#18291206)
    No one who understands the technology believes that virtualization can perform all the miracles that the marketing people claim it can.

    Unfortunately, management usually falls for the marketing materials while ignoring the technologists' cautions.

    Remember, if they've never tried it before, you can promise them anything to make the first sale. Once you've sold it, it becomes a tech support issue.
  • Re:excess power (Score:3, Informative)

    by Gr33nNight ( 679837 ) on Friday March 09, 2007 @01:57PM (#18291280)
    Here you go: http://www.wyse.com/products/winterm/ [wyse.com] We're buying those next year instead of desktops.
  • Re:Yawn (Score:4, Informative)

    by Anonymous Coward on Friday March 09, 2007 @01:58PM (#18291296)
    First time I've ever posted anon...

    A vendor just convinced management to move all of our webhosting stuff over to a Xen virtualized environment (we're a development firm that hosts our clients) a few weeks before I hired in. No one here understands how it's configured or how it works, and this is the first implementation this vendor has performed, but management believes they walk on water. No other tech shops in the area have even the slightest bit of expertise with it. So guess what? Come hell or high water, we can't afford to drop these guys no matter how badly they might screw up.

    Whoever claims that open source is the panacea for vendor lock-in is smoking crack. Open source gives companies enough "free" rope to hang themselves with if it isn't implemented smartly. Virtualization is no different.
  • by Semireg ( 712708 ) on Friday March 09, 2007 @01:59PM (#18291308)
    I'm certified for both VMware ESX 2.5 and VMware VI3. VMware's best practice is to never use a single path, whether for NICs or FC HBAs (storage). VMware also has Virtual Switches, which not only let you team NICs for load balancing and failover, but also support port groups (VLANs). You can then view pretty throughput graphs for either physical NICs or virtual adapters. It's crazy amazing(TM).

    As for "putting many workloads on a box and uptime," this writer should really take a look at VMware VI3 and Vmotion. Not only can you migrate a running VM without downtime, you can "enter maintenance mode" on a physical host, and using DRS (distributed resource scheduler) it will automatically migrate the VMs to hosts and achieve a load balance between CPU/Memory. It's crazy amazing(TM).

    Lastly, just to toot a bit of the virtualization horn... VMware's HA will automatically restart your VMs on other physical hosts in your HA cluster. It's not unusual for a Win2k3 VM to boot in under 20 seconds (VMware's BIOS posts in about .5 seconds compared to an IBM xSeries 3850 which takes 6 minutes). Oh, and there is the whole snapshotting feature, memory and disk, which allows for point in time recovery on any host. Yea... downsides indeed.

    Virtualization is Sysadmin Utopia. -- cvl, a Virtualization Consultant
  • by db32 ( 862117 ) on Friday March 09, 2007 @02:00PM (#18291336) Journal
    From what I have seen and experienced, the VM video card is the issue. The virtual machine uses virtual hardware drivers, so the actual hardware is largely irrelevant as long as the host OS can handle it. In a desperate attempt to get FFXI installed on my Linux machine I resorted to VMware, only to find out that VMware does not support any kind of 3D acceleration (again, virtual hardware vs. real hardware).
  • by Anonymous Coward on Friday March 09, 2007 @02:09PM (#18291464)

    Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running. "The entire environment becomes as critical as the most critical application running on it," Mann explains. "It is also more difficult to schedule downtime for maintenance, because you need to find a window that's acceptable for all workloads, so uptime requirements become much higher."


    Absolute rubbish. If you don't know how to buy and install redundant hardware and implement a virtualization platform that allows hot-migration, then you should learn. If you don't want to, then you need to go back to help desk duty.

    Bandwidth problems are also a challenge, Mann says, and are caused by co-locating multiple workloads onto a single system with one network path. In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.


    Ohhh nooo! Sharing a single router! Sharing a single gigabit NIC!

    First, regarding the NICs. When we first started working with VMware ESX, we bought four gigabit NICs thinking we'd need that much bandwidth. Guess what? We don't. We're not even close, even with iSCSI operations. Any basic tech article about getting into VMs will explain why two gigabit NICs are probably enough. Your server will be saturated long before your NICs are. And that's not even taking 10-gigabit NICs into account.

    As far as routers are concerned... My God man, what kind of dime store router are you running that this sort of thing becomes a concern?

    This article is clearly written by rank amateurs and should be completely dismissed.
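    The "two gigabit NICs are probably enough" claim above is easy to sanity-check with back-of-envelope arithmetic. The numbers below are illustrative assumptions, not measurements from any real deployment.

```python
# Rough headroom check for a NIC team shared by many VMs.
# vm_mbps: average per-VM throughput in Mbit/s (assumed figures).
def nic_headroom(vm_mbps, nic_count=2, nic_mbps=1000):
    demand = sum(vm_mbps)           # combined average VM traffic
    capacity = nic_count * nic_mbps # teamed NIC capacity
    return capacity - demand        # Mbit/s left over

# Twenty VMs averaging 30 Mbit/s each still leave 1400 Mbit/s spare
# on a two-NIC gigabit team.
```

    Of course the high-water-mark caveat from earlier in the thread applies here too: it's the combined peaks, not the combined averages, that flood a link.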
  • by countSudoku() ( 1047544 ) on Friday March 09, 2007 @02:13PM (#18291528) Homepage
    God damn, that was so not worth the RTFA. I have adblock+ running and there were still more crap panes than individual characters in the article proper. I'll think twice before venturing to craputerworld next time. From the "no shit, Sherlock" dept. would be more appropriate. That article, besides being a waste of time, was so junior admin.

    Most admins have already figured out the following: 1) don't put all your "eggs" into one virtual "basket"; 2) spread the virts across multiple NICs and keep the global (or master) server's NIC separate; 3) use VIPs and clusters to load balance across similar virtual instances on separate physical h/w to keep unexpected downtime in check; 4) don't load up too many dissimilar virts on a single physical server; 5) learn the new environment in dev/qa and do your homework on the new commands and resource/user-capping features; and 6) read more /. and less Computerworld. WTF, bring something new to the table. That was just weak.
  • Re:Yawn (Score:5, Informative)

    by WinterSolstice ( 223271 ) on Friday March 09, 2007 @02:15PM (#18291568)
    "Modern processors are way overkill for most things they're being used for."

    Right - except like I said - watch those spikes. We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual. Great idea, right?

    Except for the fact that once a day it runs a report that seems fairly harmless but causes the filesystem to go read-only due to a VMware bug. The report lasts only about two minutes, but it hammers the disk in apparently just the right way.

    It's the spikes you have to be careful of. Just look for your high-water-marks. If the box spikes to 90% or 100% (though the load average doesn't reflect it) it will have some issues.
  • by Professor_UNIX ( 867045 ) on Friday March 09, 2007 @02:24PM (#18291728)

    Not only can you migrate a running VM without downtime
    I'm pretty hard to please sometimes, but Vmotion is probably the single coolest feature of VMware ESX. The first time I sat there on a running VM while it was being migrated to another ESX server and didn't notice a single second of downtime while browsing the web (I had RDP'd to the box) I was in love. I was also pinging the machine from another window and it didn't drop a single packet. I really hope they eventually allow this feature to sneak into the free VMware Server and let you use it on NAS data stores for small businesses or home environments, but I doubt it.
  • Re:Yawn (Score:5, Informative)

    by cbreaker ( 561297 ) on Friday March 09, 2007 @02:29PM (#18291800) Journal
    Your bug comment is kinda moot - it's not a normal problem with virtualization.

    We have over 120 VMs running on seven hosts with VI3. Most of them, as you can imagine, are not high-workload (although we do have four Terminal Servers handling about 300 terminals total), but sometimes they are, and we've really not had any issues.

    It depends on what you're doing, really. Saying you WILL have problems in any situation isn't really valid.
  • Re:Yawn (Score:4, Informative)

    by WinterSolstice ( 223271 ) on Friday March 09, 2007 @02:56PM (#18292182)
    That's obviously just an example - uptime doesn't provide high-water marks, etc.

    Ahh, slashdot. People just *love* to split hairs :D

    Ok, last time I'm saying this:
    BE CAREFUL. Not every system is an ideal candidate for virtualization, and even the ones that seem perfect at first glance can fail. Don't rely on only "overview" metrics. Do thorough inspection, and make sure you load test.

    VMs rule, but there are gotchas and bugs that can be showstoppers. Just because someone else has 300 servers running via virtualization doesn't mean you can :D
  • Re:Yawn (Score:3, Informative)

    by WinterSolstice ( 223271 ) on Friday March 09, 2007 @03:02PM (#18292278)
    That's really awesome, and obviously your systems are a great use for VMs :D

    Our web stuff virtualized *beautifully*. We had few to no issues, but we ran into major problems when mgmt wanted to virtualize several of the other systems.

    And since when is a warning about an unfixed bug moot? It's an *unfixed* bug in ESX Server Update 3. When it's patched in the standard distribution, then it will be moot.

    VMs are still quite a new science (as opposed to LPARs) so there are lots of bugs still out there.
  • Re:Yawn (Score:5, Informative)

    by giminy ( 94188 ) on Friday March 09, 2007 @03:16PM (#18292472) Homepage Journal
    We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual.

    Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1, 5, and 15 minutes. Programs doing nothing but I/O sit on the sleep queue while the kernel does the I/O work, giving you a load average of near zero even though your machine is busy scrambling for files on disk or waiting for network data. Likewise, a program that consists entirely of NOOPs will give you a load average of one (+1 for each additional instance) even if its nice value is all the way up and it is quite interruptible/really putting zero strain on your system.

    Before deciding that a machine is virtualizable, don't just look at load average. Run a real monitoring utility and look at iowait times, etc.

    Reid
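    The "look at iowait, not just load average" advice above can be made concrete. This is a minimal sketch of the same accounting that tools like iostat and top do from the `cpu` line of Linux's /proc/stat; the sample line and its numbers are made up for illustration.

```python
# Compute the iowait share of total CPU time from a /proc/stat "cpu" line.
# Field order on Linux: user, nice, system, idle, iowait, irq, softirq.
# A large iowait share means the box is disk- or network-bound even when
# its load average looks near zero.
def iowait_fraction(proc_stat_cpu_line):
    fields = [int(x) for x in proc_stat_cpu_line.split()[1:8]]
    user, nice, system, idle, iowait, irq, softirq = fields
    return iowait / sum(fields)

# Fabricated sample: 2400 of 10000 jiffies spent waiting on I/O.
sample = "cpu  1000 0 500 6000 2400 50 50"
```

    In practice you would diff two readings taken some seconds apart rather than use the since-boot totals, but the ratio is the signal either way.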
  • Re:Yawn (Score:2, Informative)

    by T-Ranger ( 10520 ) <jeffw@NoSPAm.chebucto.ns.ca> on Friday March 09, 2007 @03:19PM (#18292508) Homepage
    Well, disks may not be a great example. VMware is, of course, a product of EMC, which makes (drumroll) high-end SAN hardware and software management tools. While I'm not quite saying there is a clear conflict of interest here, the EMC big picture is clear: "now that you have saved a metric shitload of cash on server hardware, spend some of it on a shiny new SAN system". The nicer way to put it is that EMC SANs and VMware do the same thing: consolidation of hardware onto better hardware, abstraction of the services provided, finer-grained allocation of services, shared overhead - and management.

    If spikes on one VM are killing the whole physical host, then you are surely doing something wrong. Perhaps you do need that SAN with very fast disk access. Perhaps you need to schedule migration of VMs from one physical host to another when your report server pegs the hardware. Or, if it's an unscheduled spike, you need rules that trigger migration when one VM is degrading service to others.
  • by Anonymous Coward on Friday March 09, 2007 @03:55PM (#18293010)
    Greetings,

          Nope. The problem is the virtualization itself. Other than with KVM and Xen, you can't dedicate the hardware, which means the virtualization layer is unable to give direct access to the video hardware; instead, the VMs are given an emulated video device, which bjorks your video performance to the point that even playing the original Doom can become downright painful.
  • by 99BottlesOfBeerInMyF ( 813746 ) on Friday March 09, 2007 @04:09PM (#18293210)

    Nope. The problem is the virtualization itself. Other than KVM and Xen, you can't dedicate the hardware, which means that the virtualization is unable give direct access to the video hardware

    Actually, VMware supposedly has direct video card access working in one of their Workstation betas, and Parallels has announced that they will be including that feature in their next public beta as well. I don't expect video card acceleration to be a major stumbling block by the end of 2007.

  • A nice buffer zone! (Score:2, Informative)

    by Gazzonyx ( 982402 ) <scott,lovenberg&gmail,com> on Friday March 09, 2007 @04:19PM (#18293356)
    I've found that virtualization is a nice buffer zone from management decisions! Case in point: yesterday my boss (he's got a degree in comp. sci. - from 20 years ago...), who's just getting somewhat used to the Linux server that I set up, decided that we should 'put /var in the /data directory tree'. I had folded once before when he wanted to put /home in /data, for backup reasons, and made it a symlink from /.

    Now when he gets these ideas, before just going and doing it on the production server, I can say "How about I make a VM and we'll see how that goes over", thinking under my breath the words of Keith Moon: "That'll go over like a lead zeppelin". It gives me a technology to leverage where I can show that an idea is a Bad Idea, without having to trash the production server to prove my point.

    I've even set up a virtual network (1 samba PDC and 3 windows machines), to simulate our network on a small scale to set up proof of concepts. If they don't believe that something will work, I can show them without having their blessing to mess with our network. If it doesn't work, I roll back to my snapshots, and I have a virgin virtual network again.

    Does anyone do this? Has it worked out where you can do a proof of concept that otherwise, without virtualization, you would be confined to whiteboard concepts that no one would listen to?

  • Re:Yawn (Score:2, Informative)

    by Sandbags ( 964742 ) on Friday March 09, 2007 @06:11PM (#18294750) Journal
    Here's what I've found virtualization has led to: manufacturers going cheap on on-board devices. For example, a lot of low-end servers that leverage Intel's VT are starting to ship with what I call "Win-NICs", i.e., devices that offload their work to the CPU instead of a dedicated silicon chip. What modems did years ago, NICs are doing now. Soon USB and other controllers will start requiring software application-layer drivers where they used to operate entirely in hardware/firmware. This has one benefit only: cross-platform compatibility. However, it sacrifices device and overall system performance, hardware diagnostics, and nice features like PXE boot.

    People are jumping on this technology like fleas to a dog. Why don't we simply standardize the device layers and have everyone comply? Sure, there's room for custom high-performance devices, but the basic chips that every PC needs should be the same on everyone's boards; then we wouldn't need VT...

    While I'm on the rant... People, PLEASE: DO NOT put all your systems on one virtual host server!!!! I've seen a dozen companies consolidate whole farms onto one or two hosts in VMware. Before, if you had one server down, you had one server down. Now you lose half the network! If you want to VM your machines, you need to be using clustered services and redundant NAS systems. I've even seen four different moron customers spec that out, but even then put all the cluster nodes ON THE SAME BOX!?!? C'mon, this isn't rocket science. Virtualizing is not about saving money or limiting the number of physical machines you need; it's about system portability across platforms and quick recovery of nodes and point systems through imaging. If you can't afford the licensing to cluster your systems, you can't afford VT.
  • by Anonymous Coward on Friday March 09, 2007 @08:44PM (#18296154)
    Interesting comments. A lot of very good opinions, and I am always happy to see these issues discussed. And I am happy to learn from all of you too.

    However, despite a lot of very valid points and great knowledge of the issues, many of you are missing the point of this particular article.

    The interview I gave to Computerworld was not "Please give a balanced appraisal of virtualization," or even "What can't virtualization do?" It was "What can go wrong that IT people and their managers need to think about or prepare for?"

    These are problems that can and do happen in real world virtualized environments. I have surveyed and interviewed hundreds of real enterprises to come up with empirical data that proves these are *potential* problems. You all point out lots of very valid solutions, many of which I would recommend (given the chance), but I do not agree that the issues I raised are not *potential* problems.

    That they have solutions does not mean that the problems no longer exist. A huge number of enterprises are deploying virtualization without the skills and knowledge that you and other well-informed, well-trained, and experienced IT people have. In fact, in published studies I have found that over 50% of IT organizations that have already implemented virtualization in production say they don't have the appropriate skills to manage it. So they don't know about the solutions; many of them don't even know about the problems.

    Yes, for you and me they are pretty obvious. However, just because we are smart IT people who know what we are talking about, we cannot assume all other people (and their managers) know what they are talking about too. I am sure most of you know this from experience - is everyone in your IT department (managers included) as smart and well-informed as you? Unlikely. And if they are not made aware of the potential problems, through education like Computerworld is providing in this article, they will not even look for solutions. And your job gets even harder.

    So I agree with almost all of these posts to the extent that there are indeed many very good solutions to these problems. Maybe Computerworld needs a longer article to address them too. But I disagree with anyone who says that because solutions are available, there aren't any problems to be solved in the first place.

    Anyway, I have many forums to express my opinions, so I don't want to clutter your /. space with more of my comments. I just wanted to address the perceptions raised in this thread. But my email address is below - feel free to email me if you want to continue this discussion with me directly. You can also check out EMA's website to find out more about me (including my bio - which several of you seem to think somehow affects the facts), and look at some of EMA's research into virtualization (some of which is free, btw).

    Thanks!

    Andi Mann
    Senior Analyst
    Enterprise Management Associates
    amann@enterprisemanagement.com
    http://www.enterprisemanagement.com/ [enterprisemanagement.com]
