Virtualization Is Not All Roses
An anonymous reader writes "Vendors and magazines are all over virtualization like a rash, like it is the Saviour for IT-kind. Not always, writes analyst Andi Mann in Computerworld." I've found that when it works, it's really cool, but it does add a layer of complexity that wasn't there before. Then again, having a disk image be a 'machine' is amazingly useful sometimes.
Yawn (Score:5, Insightful)
Is this for real? (Score:5, Insightful)
"Oh, so now more apps will be competing for that single HW NIC?" Wow. Computerworld, insightful as ever.
Waste of time... (Score:3, Insightful)
This just in... (Score:2, Insightful)
why are we reading this garbage? (Score:5, Insightful)
How about an article that makes some recommendations on how to mitigate the problems they identify with virtualization, or points out some non-obvious issues?
philo
it is all roses for Disaster Recovery (Score:3, Insightful)
excess power (Score:4, Insightful)
But it still is useful. Like terminals hooked up to big mainframes, it may make sense to run multiple virtual machines off a single server, or even have the same OS run for the same user in different spaces on a single machine. We have been heading to this point for a while, and now that we have the power, it makes little sense not to use it.
The next thing I am waiting for are very cheap machines, say $150, with no moving parts, only network drivers, that will link to a remote server.
Re:Virtualization (Score:3, Insightful)
Or, of course, you can use a faster network connection to the host, simplifying cabling. For many people it might not even be cost-effective to go to GigE at this point with one system per wire. A lot of workloads can't max that out (file serving and the like obviously can); those of you who have been there know what I mean. But if you're looking at multiple cables to each server and the attendant nightmares, that may be just the reason you need to justify that new switch purchase.
Home Use (Score:2, Insightful)
It's safer to browse the web through a VM that is set to not allow access to your main HDs or partitions. It's great for any internet activity really, like P2P or running your own server; if it gets hacked, they still can't affect the rest of your system or data outside the VM's domain. It's also much safer to try out new and untested software from within a VM, in case of virus or spyware infection, or just registry corruption or what have you. It can also be useful for code development within a protected environment.
Did I mention portability? Keep backups of your VM file and run it on any system you want after installing something like the free VMware Server:
http://www.vmware.com/products/server/ [vmware.com]
or VMware Player:
http://www.vmware.com/products/player/ [vmware.com]
And if your VM gets infected or something, just delete it and make a copy of the backup, rinse & run!
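A minimal sketch of that rinse-and-run cycle, assuming the VM lives in a directory of files; the paths here are invented, so adapt them to wherever your VM files and backups actually sit:

```python
# Sketch of the "delete the infected VM, restore the clean backup"
# workflow. Paths are hypothetical examples.
import shutil
from pathlib import Path

def restore_vm(backup_dir, live_dir):
    """Throw away a compromised VM directory and restore the clean backup."""
    live = Path(live_dir)
    if live.exists():
        shutil.rmtree(live)                # delete the infected VM
    shutil.copytree(backup_dir, live_dir)  # rinse & run
    return live

# e.g. restore_vm("backups/browse-vm", "vms/browse-vm")
```

Since the backup is just files, the same trick works for cloning a fresh sandbox per risky task.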
Re:Yawn (Score:5, Insightful)
Virtualization bad: DBs, memory-intensive, CPU-intensive.
We're starting to do the same. It looks like the article basically says "managing them is more complex, and you can overload the host." Well, duh! They're no harder to manage (or not much harder) than that many physical machines, but virtualization does make it a lot easier (and cheaper!) to create new ones. And you don't virtualize a machine that's already using 50% of a real system, or even 25%. Most of ours sit at 1%, though. Modern processors are way overkill for most of the things they're being used for.
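As a back-of-the-envelope check on those numbers, here is a hypothetical consolidation estimate; the 25% headroom figure and the sampled utilizations are arbitrary assumptions, not vendor guidance:

```python
# Rough consolidation math: guests idling at a few percent of a
# modern CPU can be stacked many-to-one. All figures are invented.
def consolidation_ratio(guest_utils, headroom=0.25):
    """Estimate how many guests like these fit on one host while
    keeping `headroom` of the host's capacity free for spikes."""
    budget = 1.0 - headroom
    avg = sum(guest_utils) / len(guest_utils)
    return int(budget / avg)

# Ten sampled servers, most sitting at about 1% utilization:
ratio = consolidation_ratio([0.01, 0.02, 0.01, 0.01, 0.03,
                             0.01, 0.02, 0.01, 0.01, 0.02])
```

In practice memory, not CPU, usually becomes the limit long before a ratio like that, which is why the real planning exercise looks at every resource.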
Re:Virtualization (Score:3, Insightful)
As far as current day performance goes: disk access is essentially close to if not at native speeds and CPU speed is generally 70-80% of what the native processor can do. Most instructions aren't touched by a virtual machine monitor at all. Memory is more or less untouched and you actually get memory savings. Say you have 4 VMs of Windows 2003 running. All of the pages of memory that are the same (say, core kernel pages and the like) get mapped to the same physical page. The guest operating systems never know. You can effectively scoop up a lot of extra memory if you have a lot of systems running the same software. All of those common libraries and Windows/Linux processes are only paid for once in memory. The technology is simply awesome. In a few years with more and more powerful multicore systems virtualization will make more and more sense, even on performance critical systems.
It has its problems, but I am a believer.
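The page-sharing idea described above can be sketched in a few lines. This is a toy model with invented page contents; real implementations hash candidate pages and then compare the actual contents before sharing, which this sketch skips:

```python
# Toy model of content-based page sharing: identical guest pages
# are mapped to a single physical copy. Page contents are made up.
PAGE_SIZE = 4096

def share_pages(vms):
    """vms: dict of vm_name -> list of page contents (bytes).
    Returns (physical_pages, total_guest_pages) so the savings
    are visible."""
    physical = {}  # content hash -> single stored copy
    total = 0
    for name, pages in vms.items():
        for page in pages:
            total += 1
            physical.setdefault(hash(page), page)
    return physical, total

# Four guests running the same OS share their two "kernel" pages:
kernel = [b"\x90" * PAGE_SIZE, b"\xcc" * PAGE_SIZE]
vms = {f"win2003-{i}": kernel + [bytes([i]) * PAGE_SIZE] for i in range(4)}
physical, total = share_pages(vms)
# 12 guest pages collapse to 6 physical pages; the guests never know.
```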
Virtualization != x86 (Score:5, Insightful)
It's not like these issues haven't existed on other platforms: mainframes, minis (AS/400), Unix (AIX/Solaris/HP-UX); heck, we've had it on non-computer platforms too (VLANs, anyone?).
And yes using partitions/LPARs on those platforms required *GASP* planning, but in the age of "click once to install DB and build website" aka "Instant gratification" we refuse to do any actual work prior to installing, downloading, deploying...
How about a few articles comparing AIX/HPUX/Solaris partitions to x86 solutions...
Author is completely uninformed (Score:5, Insightful)
No, no, no. First of all, in a real enterprise type solution (something this author seems unfamiliar with) the entire environment is redundant. "the" server? You don't run anything on "the" server, you run it on a server and you just move the virtual machine(s) to another server as needed when there is a problem or maintenance is needed. It is actually very easy to deal with hardware failures.. you don't ever have to schedule downtime, you just move the VMs, fix the broken node, and move on. For software maintenance you just snapshot the image, do your updates, and if they don't work out, you're back online in no time.
In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.
Uh... well, maybe you would just install more NICs? It seems the "expert" quoted in this article has played around with some workstation-level product and has no idea how enterprise-level solutions actually work.
The only valid point I find in this whole article is the mention of additional training and support costs. These can be significant, but the flexibility and reliability of the virtualized environment is very often well worth the cost.
He must. ESX set up properly avoids most pitfalls (Score:5, Insightful)
Teaming, hot-migrations, resource management, and lots of other great tools make modern x86 virtualization really enterprise caliber.
I think that the people that see it as a toy are people that have never used virtualization in the context of a large environment, being used properly with proper hardware. You can virtualize almost any server if you plan properly for it.
In the end, by going virtual you actually remove so much complexity from your systems that you'll wonder how you did it before. No longer does each server have its own drivers, quirks, OpenManage/hardware monitor, etc. You can create a new VM from a template in 5 minutes, ready to go. You can clone a server in minutes. You can snapshot the disks (and RAM, in ESX 3) and you can migrate them to new hardware without bringing them down. You can create scheduled copies of production servers for your test environment. So much simpler than all-hardware.
I'll admit that you shouldn't use virtual servers for everything (yet) but you will eventually be able to run everything virtual, so it's best to get used to it now.
Re:He must be talking about freeware (Score:3, Insightful)
You don't mention it, but consolidated backup just rocks. I have some external Linux-based NAS machines that use rsync to keep local copies of both our nightly backups and occasional image backups at both sites.
Thanks to VMware, it's like I've told management: "Our main facility could burn to the ground and I could have our infrastructure back up and running at our remote site before the remains stop smoldering, much less before we get a check from the insurance company."
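The sort of rsync invocation behind a setup like that might look as follows. The host names and paths are invented, and the helper only builds the command line so you can see the flags:

```python
# Build an rsync command that mirrors a backup tree to a remote NAS
# over ssh. Archive mode (-a) preserves permissions and timestamps;
# --delete drops files that vanished from the source.
def rsync_command(src, dest, over_ssh=True):
    cmd = ["rsync", "-a", "--delete"]
    if over_ssh:
        cmd += ["-e", "ssh"]
    return cmd + [src, dest]

# e.g. subprocess.run(rsync_command("/backups/nightly/",
#                                   "nas1:/mirror/nightly/"), check=True)
```

Because VM images are just big files, the same one-liner covers both the nightly backups and the occasional image copies.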
Re:Yawn (Score:4, Insightful)
Indeed, it has become a bit of an unqualified, blanket meme: "Don't put database servers on virtual machines!" we hear. I heard it just yesterday from an outsourced hardware rep, for crying out loud (they were trying to show that they "get" virtualization).
Ultimately, however, it's one of those easy bits of "wisdom" that people parrot because it's cheap advice, and it buys some easy credibility.
Unqualified, however, the statement is complete and utter nonsense. It is absolutely meaningless (just because something can superficially get called a "database" says absolutely nothing about what usage it sees, its disk access patterns, CPU and network needs, what it is bound by, etc).
An accurate rule would be "a machine that saturates one of the resources of a given piece of hardware is not a good candidate to be virtualized on that same piece of hardware" (e.g. your aforementioned database server). That really isn't rocket science, and I think it's obvious to everyone. It also doesn't rely upon some meaningless simplification of application roles.
Note that all of the above is speaking more towards the industry generalization, and not towards you. Indeed, you clarified it more specifically later on.
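Stated as code, the rule above might look like this; the 80% saturation threshold and the resource names are illustrative assumptions, not a real capacity-planning policy:

```python
# A workload is a poor candidate for a given host if it saturates
# any one of that host's resources. The threshold is arbitrary.
SATURATION = 0.80

def good_vm_candidate(utilization):
    """utilization: dict mapping a resource name (cpu, memory,
    disk_io, network) to the fraction of host capacity used."""
    saturated = [r for r, u in utilization.items() if u >= SATURATION]
    return len(saturated) == 0, saturated

# A "database" that mostly idles is a fine candidate...
ok, _ = good_vm_candidate({"cpu": 0.05, "memory": 0.30,
                           "disk_io": 0.10, "network": 0.02})
# ...while one that saturates its disks on that same hardware is not.
bad, reasons = good_vm_candidate({"cpu": 0.40, "memory": 0.50,
                                  "disk_io": 0.95, "network": 0.10})
```

Note that the check says nothing about whether the workload is "a database"; only the measured resource profile matters.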
Re:It's Marketing vs Technologists. (Score:3, Insightful)
The reason for this is that virtualization -simplifies- tech support in every way (except for real-time applications).
Load problems, especially in a virtualized environment are extremely easy to manage technically.
You can just add additional servers and move the virtual machine to the new machine while it's running.
It's the management who will be having a budget problem when this happens, while tech support is not having a technical problem.
Can't we just be Professionals Anymore? (Score:1, Insightful)
Re:Yawn (Score:3, Insightful)
There are some things I won't virtualize right now: two of our very busy database servers and the two main file servers. Part of it is for the performance of those systems, but part of it is also because I don't want those systems to chew up a high percentage of the overall virtual cluster.
I don't think x86 VMs are a new science anymore; they're just somewhat new to the enterprise. VMware released its first product in 1998, and in the computer world, nine years of development is a good deal of time for a product/technology.
Virtual machines have different performance characteristics than actual physical servers, and different hypervisors can change things as well. You do need to take special precautions when going virtual, but the effort is worth it for the amazing amount of control and ease of use your infrastructure will gain.
And the greatest part is when I need a new server, I just click a few buttons on the mouse and hit GO. The VM is ready in moments, joined to the domain and ready to go.
Re:Yawn (Score:3, Insightful)
Andi Mann, senior analyst with Enterprise Management Associates, an IT consultancy based in Boulder, Colo., says that virtualization's problems can include cost accounting (measurement, allocation, license compliance); human issues (politics, skills, training); vendor support (lack of license flexibility); management complexity; security (new threats and penetrations, lack of controls); and image and license proliferation.
How many times does the word "license" appear in there?
This is FUD (Score:5, Insightful)
Examine that quote from the article closely. See anything there that indicates virtualization "doesn't work"? No, nor do I. What they are talking about here has nothing to do with how well virtualization works, what they're complaining about is that a particular tool requires competence to use well in various work environments. Well, no one ever said that virtualization would gift brains to some middle level manager, or teach anyone how to use an office suite, or imbue morals and ethics into those who would steal; virtualization lets you run an operating system in a sandbox, sometimes under another operating system entirely. And it does that perfectly well, or in other words, it works very well indeed. I call FUD.
Re:Yawn (Score:3, Insightful)
Yep, it's wrong. Load average is defined as the number of processes that are runnable. Processes waiting for resources (at least, resources requested via system calls, like trying to talk to the disk, network, etc.) are put on the wait queue and are not runnable, so they do not contribute to load. Processes that have all the data they need and just need a CPU slice do contribute to load.
I've also seen load reported as N/NCPUs, and as N regardless of the number of CPUs.
The first one is wrong. Out of curiosity, where did you see this?
Even if the real meaning is the average number of processes in the run queue, that does not tell you much.
Yay! You're getting it!
Thinking of it as the number of processes waiting for some piece of hardware seems more accurate.
Oh wait, you're not getting it
To be precise, I would say "number of processes waiting for the CPU". Processes that are waiting for non-cpu hardware are placed in the WAIT queue (they aren't runnable) and do not contribute to the load. They will be placed back on the run queue once the data that they are waiting for is available (at that point, they will contribute to the load again). Going back to my other example, if a process is waiting for network data, or disk data, or [insert special whizbang hardware here] data, it will be in the WAIT state, and will not contribute to load. Instead, it will contribute to IOWAIT time. Hence the need for looking at more numbers than just load average.
Generally, processes that are just waiting for a CPU slice will do okay in a VM. CPUs are fast, and there isn't any competition for a virtual CPU slice (except from processes in the same virtualized OS). The VM host will (probably) ensure that each guest OS gets a fair share of time, so CPU-intensive processes in a guest will run slower, but predictably slower. Virtualizing processes that spend a lot of time on I/O is bad, because an I/O-bound process in a VM is really competing for the resource with processes in *other* VMs. That competition is very difficult to quantify or predict, because we're not used to thinking of systems in this manner yet. Remember that these I/O-bound processes are not contributing to the load average on their respective guest OS, because they sit on the guest's wait queue while waiting for hardware. Hence my original argument that load averages are wholly inaccurate, and it is a bad idea to rely on that measurement when deciding whether a system is virtualizable.
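A toy model of that accounting, using made-up process names. (One caveat: Linux's load average famously also counts processes in uninterruptible disk sleep, so this sketch follows the classic definition the comment is using, not Linux's.)

```python
# Only runnable processes count toward load; processes blocked on
# I/O sit on the wait queue and show up as iowait instead.
def load_sample(processes):
    """processes: list of (name, state) pairs, where state is one
    of 'running', 'runnable', 'io_wait', or 'sleeping'."""
    runnable = sum(1 for _, s in processes if s in ("running", "runnable"))
    io_wait = sum(1 for _, s in processes if s == "io_wait")
    return runnable, io_wait

procs = [
    ("db_writer", "io_wait"),   # blocked on disk: invisible in load
    ("cruncher", "running"),
    ("cruncher2", "runnable"),  # has its data, waiting for a CPU slice
    ("idle_daemon", "sleeping"),
]
load, iowait = load_sample(procs)
# The I/O-bound db_writer never shows up in the load number, which
# is exactly why load average alone can't tell you if a box is
# safe to virtualize.
```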
Reid
It's an egghead solution (Score:2, Insightful)
Most people forget there is also a term for "scaling out", that is, running your applications on multiple cheap machines. You might not even use clusters; try it, it can be really cheap and distributed.
Try eight desktops with no SAN running your mail, instead of one virtualized cluster (for about the same price). Sure, one may crash, but that only affects 1/8 of your users. Compare risk to cost efficiency and value, and try to work out your real costs; "no crashes at extreme cost" isn't a real solution, in my opinion. Cost is what should decide this. And in truth, most companies (although they claim they can't) can have a server down for an hour, and an hour is a long time for restoring 1/8 of your users' mail.
I'm focused on mail servers here, but the same applies to SQL or whatever else.
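The blast-radius arithmetic behind that argument, with invented figures:

```python
# How many users does one failure take down? Figures are made up.
def incident_impact(total_users, servers):
    """Users affected when one box out of `servers` fails."""
    return total_users / servers

# 800 mail users spread over eight cheap boxes vs. one consolidated host:
small_blast = incident_impact(800, 8)  # one failure hits 100 users
big_blast = incident_impact(800, 1)    # one failure hits everyone
```

Of course, eight boxes also mean eight times as many chances to fail, so total expected downtime depends on per-box reliability; the point is only that each individual incident is smaller and faster to recover from.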
In my opinion these are nightmare solutions, although they do give me a lot of work.
But thinking of how much money is spent on them makes me ashamed,
because there are better ways to spend it.
And I'm not alone in thinking this; other specialists are quietly saying the same, but are afraid to say it out loud. I think it should be said more often.