Virtualization Is Not All Roses
An anonymous reader writes "Vendors and magazines are all over virtualization like a rash, like it is the Saviour for IT-kind. Not always, writes analyst Andi Mann in Computerworld." I've found that when it works, it's really cool, but it does add a layer of complexity that wasn't there before. Then again, having a disk image be a 'machine' is amazingly useful sometimes.
Yawn (Score:5, Insightful)
Re:Yawn (Score:5, Informative)
Virtualization good: Webservers, middle tier stuff, etc.
Virtualization bad: DBs, memory intensive, CPU intensive.
Biggest issue? "Surprise" systems. You might see a system and notice a "reasonable" load average, then find out once it's on a VM that it was a really horrible candidate because it has huge memory, disk, CPU, or network spikes. VMWare especially seems to hate disk spikes.
What we learned is it's not the average as much as the high-water-marks that really matter. A system that's quiet 99.99% of the time but spikes to 100% for 60 seconds here and there can be nasty.
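The average-versus-high-water-mark point is easy to demonstrate with a toy calculation (the numbers and the "2-minute daily report" scenario below are made up for illustration, not taken from any real monitoring data):

```python
# Sketch: why daily averages hide dangerous spikes.

def summarize(samples):
    """Return (average, high_water_mark) for a list of utilization %."""
    return sum(samples) / len(samples), max(samples)

# 1440 one-minute samples: idle all day except one 2-minute report run.
day = [1.0] * 1438 + [98.0, 97.0]

avg, peak = summarize(day)
print(f"average {avg:.1f}%  peak {peak:.1f}%")

# The average looks like an ideal VM candidate; the peak says otherwise.
assert avg < 2.0 and peak > 95.0
```

The average comes out around 1%, exactly the kind of number that makes a box look like a perfect consolidation candidate, while the peak is the number that actually breaks things on a shared host.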
Re: (Score:2, Funny)
An operating system should be like a light switch... simple, effective, easy to use, and designed for everyone.
Did you know that in the US, light switches are traditionally installed with "up" being "on", while in England they are traditionally installed with "down" being "on"?
Perhaps instead operating systems should be like nipples, everyone is born knowing how to use them, and they don't operate differently in different countries
Re:Yawn (Score:5, Funny)
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
Andi Mann, senior analyst with Enterprise Management Associates, an IT consultancy based in Boulder, Colo., says that virtualization's problems can include cost accounting (measurement, allocation, license compliance); human issues (politics, skills, training); vendor support (lack of license flexibility); management complexity; security (new threats and penetrations, lack of controls); and image and license proliferation.
How many times is the word "License" in here?
Re: (Score:3, Funny)
Re:Yawn (Score:5, Insightful)
Virtualization bad: DBs, memory intensive, CPU intensive.
We're starting to do the same. It looks like the article basically says "managing them is more complex, and you can overload the host". Well, duh! They're no harder to manage (or not much harder) than that many physical machines, but it does make it a lot easier (cheaper!) to create new ones. And you don't virtualize a machine that's already using 50% of a real system. Or even 25%. Most of ours sit at 1%, though. Modern processors are way overkill for most things they're being used for.
Re:Yawn (Score:5, Informative)
Right - except like I said - watch those spikes. We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual. Great idea, right?
Except for the fact that once a day it runs a report that seems fairly harmless but caused the filesystem to go Read Only due to a VMWare bug. The report lasts only about 2 minutes, but it hammers the disk in apparently just the right way.
It's the spikes you have to be careful of. Just look for your high-water-marks. If the box spikes to 90% or 100% (though the load average doesn't reflect it) it will have some issues.
Re:Yawn (Score:5, Informative)
We have over 120 VM's running on seven hosts with VI3. Most of them, as you can imagine, are not high work-load (although we do have four Terminal Servers handling about 300 terminals total) but sometimes they are, and we've really not had any issues.
It depends on what you're doing, really. Saying you WILL have problems in any situation isn't really valid.
Re: (Score:3, Informative)
Our web stuff virtualized *beautifully*. We had few to no issues, but we ran into major problems when mgmt wanted to virtualize several of the other systems.
And since when is a warning about an unfixed bug moot? It's an *unfixed* bug in ESX Server Update 3. When it's patched in the standard distribution, then it will be moot.
VMs are still quite a new science (as opposed to LPARs) so there are lots of bugs still out there.
Re: (Score:3, Insightful)
There are some things I won't virtualize right now, and those are two of our very busy database servers and the two main file servers. Part of it is for the performance of those systems, but part of it is also because I don't want those systems to chew up a high percentage of the overall virtual cluster.
I don't think x86 VMs are a new science anymore. They're just somewhat new to the Enterprise.
Re: (Score:2)
Re: (Score:2)
Re:Yawn (Score:5, Informative)
Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1, 5, and 15 minutes. Programs doing exclusively I/O will be on the sleep queue while the kernel does the I/O work, giving you a load average of near zero even though your machine is busy scrambling for files on disk or waiting for network data. Likewise, a program that consists entirely of NOOPs will give you a load average of one (+1 for each additional instance) even if its nice value is all the way up and it is quite interruptible/is really putting zero strain on your system.
Before deciding that a machine is virtualizable, don't just look at load average. Run a real monitoring utility and look at iowait times, etc.
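For instance, on Linux the cumulative CPU counters on the first line of /proc/stat include an iowait field that load average ignores entirely. A rough sketch of pulling out the iowait share (the field order follows the proc(5) man page; the sample line itself is made up):

```python
# Rough sketch: estimate the iowait share of CPU time from a
# /proc/stat "cpu" line. Field order after the label is:
# user nice system idle iowait irq softirq steal guest guest_nice

def iowait_fraction(stat_line):
    fields = [int(x) for x in stat_line.split()[1:]]
    return fields[4] / sum(fields)  # iowait jiffies / total jiffies

# Made-up sample: a box that looks idle by load average but spends
# over a fifth of its time waiting on I/O.
sample = "cpu 10000 50 3000 80000 25000 0 100 0 0 0"
print(f"iowait share: {iowait_fraction(sample):.1%}")  # prints 21.2%
```

In practice you'd diff two readings taken some interval apart rather than use the counters since boot, but even this crude view catches the I/O-bound boxes that a 0.01 load average hides.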
Reid
Re: (Score:2)
This may be wrong, but I've always looked at load as the number of processes waiting for resources (usually disk, CPU, or network).
I've seen boxes with issues that have had a number of processes stuck in the nonkillable D (disk) wait state that were just stuck, but they had no real impact on the system besides artificially running the load up.
I've also see
Re: (Score:3, Insightful)
Yep, it's wrong. Load average is defined as the number of processes that are runnable. Processes waiting for resources (at least, resources requested via system calls, like trying to talk to disk, network, etc.) are put on the WAIT queue and are not runnable. Thus, they do not contribute to load. Processes that have all the data they need and just need a CPU slice contribute to load.
Re: (Score:2, Informative)
Re:Yawn (Score:4, Informative)
Ahh, slashdot. People just *love* to split hairs
Ok, last time I'm saying this:
BE CAREFUL. Not every system is an ideal candidate for virtualization, and even the ones that seem perfect at first glance can fail. Don't rely on only "overview" metrics. Do thorough inspection, and make sure you load test.
VMs rule, but there are gotchas and bugs that can be showstoppers. Just because someone else has 300 servers running via virtualization doesn't mean you can
Re: (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2)
We're about to migrate our 500+ server farm (webservers, Exchange and databases) to VMs and I can't seem to get them to understand that not everything can work within a VM.
Re: (Score:2)
"This dude on this web forum said DBs suck on VMs"
Let me know how that works for you
Re:Yawn (Score:4, Funny)
Re: (Score:2)
Re:Yawn (Score:4, Interesting)
Re:Yawn (Score:4, Insightful)
Indeed, it has become a bit of an unqualified, blanket meme: "Don't put database servers on virtual machines!" we hear. I heard it just yesterday from an outsourced hardware rep, for crying out loud (they were trying to show that they "get" virtualization).
Ultimately, however, it's one of those easy bits of "wisdom" that people parrot because it's cheap advice, and it buys some easy credibility.
Unqualified, however, the statement is complete and utter nonsense. It is absolutely meaningless (just because something can superficially get called a "database" says absolutely nothing about what usage it sees, its disk access patterns, CPU and network needs, what it is bound by, etc).
An accurate rule would be "a machine that saturates one of the resources of a given piece of hardware is not a good candidate to be virtualized on that same piece of hardware" (e.g. your aforementioned database server). That really isn't rocket science, and I think it's obvious to everyone. It also doesn't rely upon some meaningless simplification of application roles.
Note that all of the above is speaking more towards the industry generalization, and not towards you. Indeed, you clarified it more specifically later on.
Re: (Score:2)
Re: (Score:2, Interesting)
Virtualization *insanely* good: development!
It simply changed my programming life entirely. How else could I keep machines with any flavor and version of the Linux boxes I'm working on, bootable in seconds? How could I have a (virtual) LAN with a dozen machines communicating with each other when developing a failover/balanced service? How could I multiply the number of machines with a cut'n'paste operation? How do I roll back a damaging crash or a faulty operation?
Re: (Score:2)
It's the network IO and network latency that will kill you if you don't know what you're doing. VMWare has known issues in th
It's Marketing vs Technologists. (Score:3, Informative)
Unfortunately, management usually falls for the marketing materials while ignoring the technologists' cautions.
Remember, if they've never tried it before, you can promise them anything to make the first sale. Once you've sold it, it becomes a tech support issue.
Re: (Score:3, Insightful)
The reason for this is that virtualization -simplifies- tech support in every way (except for real-time applications).
Load problems, especially in a virtualized environment, are extremely easy to manage technically.
You can just add additional servers and move the virtual machine to the new machine while it's running.
It's the management who will be having a budget problem when this happens, while tech support is not having a t
Re: (Score:2)
Re:Yawn (Score:4, Informative)
A vendor just convinced management to move all of our webhosting stuff over to a Xen virtualized environment (we're a development firm that hosts our clients) a few weeks before I hired in. No one here understands how it's configured or how it works, and this is the first implementation that this vendor has performed, but management believes that they walk on water. No other tech shops in the area have even the slightest bit of expertise with it. So guess what now? Come hell or high water, we can't afford to drop these guys no matter how badly they might screw up.
Whoever claims that open source is the panacea for vendor lock-in is smoking crack. Open source gives companies enough "free" rope to hang themselves with if it isn't implemented smartly. Virtualization is no different.
Re: (Score:2)
This is the exact same pattern that almost every computing technology follows.
For the most part, I agree. The main difference, as I see it, is that hardware-assisted virtualization hit at the same time as several other trends, and it has been applied in ways that are upsetting some long-standing problems and roadblocks. When virtualization was being touted as the next great thing, people were thinking of it for use with flexible servers, and Sun and Amazon and other players have brought that to market; it is nice and convenient and cheap, but not the solution to all our problems.
Hype Cycle (Score:2)
Is this for real? (Score:5, Insightful)
"Oh, so now more apps will be competing for that single HW NIC?" Wow. Computerworld, insightful as ever.
Waste of time... (Score:3, Insightful)
Re: (Score:2)
This just in... (Score:2, Insightful)
Testing PXE terminals (Score:3, Interesting)
Virtualization (Score:5, Interesting)
Bandwidth concerns: You can have more than one NIC installed on the server and dedicate one to each virtual machine.
Downtime: If you need to do maintenance on the host, that may be a slight issue, but I hardly ever have to do anything to the host. Also, if the host is dying, you can shut down the virtual machine and copy it to another server (or move the drive) and bring it up fairly quickly. You also have clustering capability with virtualization.
Re: (Score:2)
Enterprise VM solutions allow you to migrate with essentially no (<1 sec) downtime.
Re: (Score:3, Insightful)
or, of course, you can use a faster network connection to the host, simplifying cabling. it might not be cost-effective to even go to GigE for many people at this point with one system per wire. For a lot of things it's hard to max that out, obviously fileserving and the like is not such an application, but those of you who have been there know what I mean. But if you're looking at mult
Re: (Score:3, Insightful)
Re: (Score:2)
Does this actually work? I know it's theoretically possible, but I understand that in practice it's quite hard
Re: (Score:2)
Because you may not be able to run your apps in a single instance. They may be operated by different individuals who for security reasons are not allowed to access each others' data. They may all need to bind to the same TCP port, and aren't multihomed server aware.
Because separating them into 4 OS instances gives you be
Re: (Score:2)
Disk contention is the big shortcoming (Score:4, Informative)
Also, make sure to try OpenVZ before you try Xen. If you are virtualizing all Linux machines, then VZ is IMO a better choice.
Re: (Score:2)
I've not used OpenVZ (or Virtuozzo), but I've spent a while using an alternative system based on the same principles (OpenVSD) and I have to say that the approach is not without its disadvantages, particularly in terms of software that is incompatible due to there not being a real root account available. It also doesn't isolate your virtual servers' memory requirements from each other
why are we reading this garbage? (Score:5, Insightful)
How about an article that makes some recommendations on how to mitigate the problems they identify with virtualization, or points out some non-obvious issues?
philo
Re: (Score:2)
Re: (Score:3, Funny)
Re: (Score:2)
You missed one: proprietary software licenses cause legal difficulties sometimes, too.
it is all roses for Disaster Recovery (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
excess power (Score:4, Insightful)
But it still is useful. Like terminals hooked up to big mainframes, it may make sense to run multiple virtual machines off a single server, or even have the same OS run for the same user in different spaces on a single machine. We have been heading to this point for a while, and now that we have the power, it makes little sense not to use it.
The next thing I am waiting for are very cheap machines, say $150, with no moving parts, only network drivers, that will link to a remote server.
Re: (Score:3, Informative)
Re: (Score:2)
Only just. My PII-400 cannot play DVDs properly; the audio quickly drifts out of sync with the video (which I'm informed is a symptom of a too-slow processor, even though it sounds like a bug in the playback software).
We're about 95% virtualized and never going back! (Score:3, Interesting)
License controls are fine. All the major players support flexible VM licensing. The only people that bark about change control are those who simply don't understand virtual infrastructure, and a good sit-down solved that issue. "Compliance" has not been an issue for us at all. As far as politics are concerned -- if they can't keep up with the future, then they should get out of IT.
FYI: We run VMware ESX on HP hardware (DL585 servers) connected to an EMC Clariion SAN.
Like all technologies, you need a good plan (Score:3, Interesting)
Home Use (Score:2, Insightful)
It's safer to browse the web through a VM that is set to not allow access to your main HDs or partitions. Great for any internet activity really, like P2P or running your own server; if it gets hacked, they still can't affect the rest of your system or data outside the VM's domain. It's also much safer to try out new and untested software from within a VM, in case of virus or spyware infection, or just registry corruption or what have you. It can also be useful
Same old "doing it half-assed" (Score:2, Interesting)
Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running.
You don't just move twenty critical servers to one slightly bigger machine. You need to follow the same redundancy rules you should follow with the multiple physical servers.
Unless you are running a test bed or dealing with less critical servers, where you can use old equipment, you get a pair (at least) of nice, beefy enterprise servers with redundant everything and split the VMs among them. And with a nice SAN between them, you can move the VMs between the servers when needed.
Even better
Worst. Article. Ever. (Score:2, Informative)
Most admins have already figured out that: 1) don't put all your "eggs" into one virtual "basket", 2) spread the virts across multiple NICs and keep the global (or mas
Hype Common Sense (Score:3, Interesting)
For a year I fought against virtualizing our sandbox servers because of resource contention issues. One machine pretending to be many, with one NIC and one router. We had a web app that pounded a database... pre-virtualization it was zippy. Post-virtualization it was unusable. I explained that even though you can tune virtualized servers, it happens after the fact, and it becomes a big active-management problem to make sure your IT department doesn't load up tons of virtual servers to the point it affects everyone virtualized. They argued, well, you don't have a lot of use (a few users, and not a lot of resource utilization).
My boss eventually gave in. The client went from zippy workability in an app under development to a slow piece of crap because of resource contention, and it's hard to explain that an IT change forced under the hood was the reason for SLOW; and in UAT, SLOW = BUSTED.
That was a huge nail in the coffin for the project. When the user can't use the app on demand, for whatever reason, they don't want to hear jack about tuning or saving rack space.
So all you IT managers and people thinking you'll get big bonuses by virtualizing everything... consider this... ONE MACHINE, ONE NETWORK CARD, pretending to be many...
Re: (Score:2)
Re: (Score:2)
Sandbox, Test, Development. Those are the environments that just scream FOR virtualization. Obviously, your organization needs a lesson in virtual architecture. Sounds like you purchased your services from Andi Mann. Trust me, based on what I read in the article, the guy has no idea what he is doing.
Virtualization != x86 (Score:5, Insightful)
It's not like these issues haven't existed on other platforms. Mainframes, minis (AS/400), Unix (AIX/Solaris/HP-UX) -- heck, we've had it on non-computer platforms (VLANs, anyone...).
And yes using partitions/LPARs on those platforms required *GASP* planning, but in the age of "click once to install DB and build website" aka "Instant gratification" we refuse to do any actual work prior to installing, downloading, deploying...
How about a few articles comparing AIX/HPUX/Solaris partitions to x86 solutions...
Because most people don't care? (Score:2)
x86 virtualization is of interest to many people since they are running lots of x86 boxes already and it offers the ability to simplify that and save money. For example one small area that we use VMs for are scanner servers. We have these copier/scanner jobs with crappy software, each one needs its
Virtualization Is Not All Roses? (Score:3, Funny)
Please tell me it's not daisies.
Author is completely uninformed (Score:5, Insightful)
No, no, no. First of all, in a real enterprise type solution (something this author seems unfamiliar with) the entire environment is redundant. "the" server? You don't run anything on "the" server, you run it on a server and you just move the virtual machine(s) to another server as needed when there is a problem or maintenance is needed. It is actually very easy to deal with hardware failures.. you don't ever have to schedule downtime, you just move the VMs, fix the broken node, and move on. For software maintenance you just snapshot the image, do your updates, and if they don't work out, you're back online in no time.
In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.
Uh... well, maybe you would just install more NICs? It seems the "expert" quoted in this article has played around with some workstation-level product and has no idea how enterprise-level solutions actually work.
The only valid point I find in this whole article is the mention of additional training and support costs. These can be significant, but the flexibility and reliability of the virtualized environment is very often well worth the cost.
VMware or Windows Virtual Server? (Score:2)
Re: (Score:2)
Yes. AIX.
Re: (Score:2)
Sorry, I can't agree with you, there. We have a couple of AIX servers here (a pSeries 550 and a 6F1) and I can tell you, unless IBM gets their act together before we need to replace them, they will not be replaced with IBM servers. My experience with IBM is that, if you're not willing to spend $500,000 or more on a machine, they don't want to be bothered.
Re: (Score:2)
My personal preference is to use VMware Server as the product works incredibly well. That's not to say that Virtual Server doesn't work well, but it just fee
Screwdrivers (Score:2)
I've found that when they work, it's really cool, but they do add a layer of complexity that wasn't there before. Then again, having screws hold items together instead of nails is amazingly useful sometimes.
A nice buffer zone! (Score:2, Informative)
Now when he gets these ideas, before just going and doing it on the production server, I can say "How about
Re:He must be talking about freeware (Score:5, Informative)
As for "putting many workloads on a box and uptime," this writer should really take a look at VMware VI3 and Vmotion. Not only can you migrate a running VM without downtime, you can "enter maintenance mode" on a physical host, and using DRS (distributed resource scheduler) it will automatically migrate the VMs to hosts and achieve a load balance between CPU/Memory. It's crazy amazing(TM).
Lastly, just to toot a bit of the virtualization horn... VMware's HA will automatically restart your VMs on other physical hosts in your HA cluster. It's not unusual for a Win2k3 VM to boot in under 20 seconds (VMware's BIOS posts in about
Virtualization is Sysadmin Utopia. -- cvl, a Virtualization Consultant
Re: (Score:3, Informative)
I'm pretty hard to please sometimes, but Vmotion is probably the single coolest feature of VMware ESX. The first time I sat there on a running VM while it was being migrated to another ESX server and didn't notice a single second of downtime while browsing the web (I had RDP'd to the box) I was in love. I was also pinging the machine from another window and it didn't drop a single packet. I really hope they eventually allow this feature to sneak int
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
You don't mention it, but consolidated backup just rocks. I have some external Linux based NAS machines that use rsync to keep local copi
Re: (Score:2)
I have someone trying to sell me VMware right now, and he implies one of your physical machines can crash and VMware can restart the image on another machine with no downtime, at all. However, I can't believe it can do that without at least dropping existing TCP connections. (I mean, the memory on the two machines would have to be mirrored up to the latest memory access, right?) Can VMware actually do that, and if so,
Re: (Score:2)
The way VMotion works is that it:
1) Locks all running memory for the guest (location A); updates/changes go to a completely different place (location B)
2) Copies that locked chunk A of memory to the other system
3) Locks location B and creates another location for updates/changes
4) Repeats steps 1-3 until the entire move is complete
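That iterative pre-copy loop can be sketched in a few lines. This is a simulation of the general idea, not VMware's actual implementation: page contents and the dirty-page tracking are faked with plain data structures, where a real hypervisor would use page-protection hardware.

```python
# Sketch of iterative pre-copy live migration. `dirtied_per_round`
# simulates which pages the guest writes to while each copy round runs.

def live_migrate(pages, dirtied_per_round, max_rounds=10):
    """Copy `pages` to the target host; re-copy pages dirtied during
    each round until the dirty set converges (or we give up and do a
    final stop-and-copy)."""
    target = {}
    dirty = set(pages)          # round 1: everything is "dirty"
    rounds = 0
    while dirty and rounds < max_rounds:
        for p in dirty:
            target[p] = pages[p]        # copy this round's dirty pages
        # pages the guest wrote while we were copying
        dirty = set(dirtied_per_round.pop(0)) if dirtied_per_round else set()
        rounds += 1
    for p in dirty:             # brief final pause: copy the remainder
        target[p] = pages[p]
    return target, rounds       # then switch execution to the target

pages = {i: f"data{i}" for i in range(8)}
target, rounds = live_migrate(pages, dirtied_per_round=[[1, 2], [2]])
assert target == pages
print(f"converged after {rounds} rounds")   # converged after 3 rounds
```

The key property is that the guest keeps running during every round except the last short one, which is why connections survive and observed downtime is sub-second.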
Re: (Score:2)
The purpose of HA is to avoid having to reimage new hardware in response to a hardware failure. It's good, yes. But pretty coarsely grained.
VMotion can move your VM over to another blade without losing connections (actual 'downtime' is planned downtime).
Hope this helps.
C//
He must. ESX set up properly avoids most pitfalls (Score:5, Insightful)
Teaming, hot-migrations, resource management, and lots of other great tools make modern x86 virtualization really enterprise caliber.
I think that the people that see it as a toy are people that have never used virtualization in the context of a large environment, being used properly with proper hardware. You can virtualize almost any server if you plan properly for it.
In the end, by going virtual you end up removing so much complexity from your systems that you'll wonder how you did it before. No longer does each server have its own drivers, quirks, OpenManage/hardware monitor, etc. You can create a new VM from a template in 5 minutes, ready to go. You can clone a server in minutes. You can snapshot the disks (and RAM, in ESX3), and you can migrate them to new hardware without bringing them down. You can create scheduled copies of production servers for your test environment. So much simpler than all-hardware.
I'll admit that you shouldn't use virtual servers for everything (yet) but you will eventually be able to run everything virtual, so it's best to get used to it now.
Re:Question: Do cards have to support it? (Score:4, Informative)
Re: (Score:2)
Also, I suggest trying VirtualBox; it runs really smoothly... fast, too (XP Home install in 5 minutes), and it supports 3D accel, I believe.
Desperate? (Score:2)
Ummm... Exactly how desperate does one have to be to attempt that???
Re: (Score:2)
Re: (Score:2)
The graphics cards do not have to support virtualization because all hardware in a virtual system is virtual. It doesn't really exist. The system is just emulating how a given virtual hardware device would react.
I read about one of the other big virtual systems that did allow you to use 3D hardware support, but that had to be assigned to a single virtual system
Re: (Score:2)
At least as of late last year, neither Xen nor VMware supported 3d acceleration. Although there was some discussion of it on the Xen mailing list, and with the release of Vista I suspect it has become somewhat more of a priority, so perhaps we can expect to see it soon. I think the primary problem was that writing 3d card drivers for Windows is actually pretty tricky.
I su
Re: (Score:2)
VMWare Fusion Beta 2 comes with "Experimental 3D Acceleration" [arstechnica.com]
OK, so it's only for Macs so far. But that's a step in the right direction...
VMGL [toronto.edu] -- this one won't work for Windows guests, but can be used for Linux guests. A similar approach could definitely work for Windows guests, but you'd need to write a DirectX-compatible driver that translates the DX API calls into paravirtualized OpenGL API calls. Tricky, but I imagine possible.
Re: (Score:3, Informative)
Nope. The problem is the virtualization itself. Other than with KVM and Xen, you can't dedicate the hardware, which means that the virtualization layer is unable to give direct access to the video hardware
Actually VMWare supposedly has direct video card access working on one of their workstation betas and Parallels has announced that they will be including that feature in their next public beta as well. I don't expect video card acceleration to be a major stumbling block at the end of 2007.
Re: (Score:2)
Re:the sad thing is how much we need virtualizatio (Score:2, Interesting)
Re: (Score:2)
We have at my work some old boxes that should be retired and would be great candidates for virtualization (a two-tier DB app on an old Xeon 500), but the vendor won't support the app in a virtual environment due to resource issues -- though they will support it on old crap hardware.
Re: (Score:2)
I just ditched my dual Opteron (Linux) + Shuttle (Windows) setup and replaced it with a single Core Duo box with Linux virtualized under WinXP. I'm running the free VMware Server software (http://www.vmware.com/products/free_virtualization.html [vmware.com]) and I have to say I'm impressed.
The only negatives I've found so far (aside from the obvious ones related to two systems in one computer) are some slowdown in mouse responsiveness in the virtualized Linux and th
Re: (Score:2)
I lose graphics acceleration, except for whichever OS acts as host. Anything else lost?
A little performance, again on everything but your host.
You gain flexibility and peak power: need more RAM in one of those three machines? Just shut one of the others down and change the VM config. Want a faster processor? No need to split your budget
This is FUD (Score:5, Insightful)
Examine that quote from the article closely. See anything there that indicates virtualization "doesn't work"? No, nor do I. What they are talking about here has nothing to do with how well virtualization works, what they're complaining about is that a particular tool requires competence to use well in various work environments. Well, no one ever said that virtualization would gift brains to some middle level manager, or teach anyone how to use an office suite, or imbue morals and ethics into those who would steal; virtualization lets you run an operating system in a sandbox, sometimes under another operating system entirely. And it does that perfectly well, or in other words, it works very well indeed. I call FUD.