Virtualization Is Not All Roses 214

An anonymous reader writes "Vendors and magazines are all over virtualization like a rash, like it is the Saviour for IT-kind. Not always, writes analyst Andi Mann in Computerworld." I've found that when it works, it's really cool, but it does add a layer of complexity that wasn't there before. Then again, having a disk image be a 'machine' is amazingly useful sometimes.
This discussion has been archived. No new comments can be posted.

Virtualization Is Not All Roses

Comments Filter:
  • Yawn (Score:5, Insightful)

    by dreamchaser ( 49529 ) on Friday March 09, 2007 @01:45PM (#18291078) Homepage Journal
    This is the exact same pattern that almost every computing technology follows. First the lemmings all rush to sound smart by touting its benefits. Soon it is the be-all and end-all in "everyone's" mind. Then the honeymoon fades and people realise it's a useful tool, and toss it into the chest with all the other useful tools, to be used where it makes sense.
  • Is this for real? (Score:5, Insightful)

    by Marton ( 24416 ) on Friday March 09, 2007 @01:45PM (#18291082)
    One of the most uninformative articles ever to hit Slashdot.

    "Oh, so now more apps will be competing for that single HW NIC?" Wow. Computerworld, insightful as ever.
  • Waste of time... (Score:3, Insightful)

    by evilviper ( 135110 ) on Friday March 09, 2007 @01:46PM (#18291100) Journal
    I want those 2 minutes of my life back...
  • This just in... (Score:2, Insightful)

    by Anonymous Coward on Friday March 09, 2007 @01:48PM (#18291132)
    Really Cool Thing can have drawbacks. Popular computer technology shown not to be silver bullet. Film at 11.
  • by philo_enyce ( 792695 ) on Friday March 09, 2007 @01:52PM (#18291194)
    to sum up tfa: poor planning and execution are the cause of problems.

    how about an article that makes some recommendations on how to mitigate the problems they identify with virtualization, or points out some non-obvious issues?

    philo

  • by QuantumRiff ( 120817 ) on Friday March 09, 2007 @01:52PM (#18291196)
    If your servers become toast, for whatever reason, you can get a simple workstation, put a ton of RAM in it, and load up your virtual systems. Of course they will be slower, but they will still be running. We don't need to carry expensive 4-hour service contracts, just next-business-day contracts, saving a ton of money. The nice thing for me with virtual servers is that they are device-agnostic, so if I have to recover, worst case, I have only one server to worry about for NIC drivers, RAID settings/drivers, etc. After that, it's just loading up the virtual server files.
  • excess power (Score:4, Insightful)

    by fermion ( 181285 ) on Friday March 09, 2007 @01:53PM (#18291220) Homepage Journal
    I see virtualization as a means to use the excess cycles in modern microprocessors. Like overaggressive GUIs and DRM, it creates a need for ever more expensive and complex processors. I am continuously amazed that while I can run most everything I have on a sub-GHz machine, everyone is clamoring about the need for 3 and 4 GHz machines. And though my main machine runs at over a GHz, it still falters at decoding DRM-compressed video, even though a DVD plays fine on my 500 MHz machine.

    But it still is useful. Like terminals hooked up to big mainframes, it may make sense to run multiple virtual machines off a single server, or even have the same OS run for the same user in different spaces on a single machine. We have been heading to this point for a while, and now that we have the power, it makes little sense not to use it.

    The next thing I am waiting for is very cheap machines, say $150, with no moving parts, only network drives, that will link to a remote server.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday March 09, 2007 @01:59PM (#18291304) Homepage Journal

    Bandwidth concerns. You can have more than one NIC installed on the server and have one dedicated to each virtual machine.

    Or, of course, you can use a faster network connection to the host, simplifying cabling. For many people it might not even be cost-effective to go to GigE at this point with one system per wire. For a lot of things it's hard to max that out; obviously file serving and the like is not such an application, but those of you who have been there know what I mean. But if you're looking at multiple cables to each server and the attendant nightmares, that may be just the reason you need to justify that new switch purchase.

  • Home Use (Score:2, Insightful)

    by 7bit ( 1031746 ) on Friday March 09, 2007 @01:59PM (#18291316)
    I find Virtualization to be great for home use.

    It's safer to browse the web through a VM that is set to not allow access to your main HDs or partitions. Great for any internet activity really, like P2P or running your own server; if it gets hacked, they still can't affect the rest of your system or data outside of the VM's domain. It's also much safer to try out new and untested software from within a VM, in case of virus or spyware infection, or just registry corruption or what have you. It can also be useful for code development within a protected environment.

    Did I mention portability? Keep backups of your VM file and run it on any system you want after installing something like the free VMware Server:

    http://www.vmware.com/products/server/ [vmware.com]

    or VMware Player:

    http://www.vmware.com/products/player/ [vmware.com]

    And if your VM gets infected or something, just delete it and make a copy of the backup, rinse & run!
  • Re:Yawn (Score:5, Insightful)

    by vanyel ( 28049 ) * on Friday March 09, 2007 @02:05PM (#18291396) Journal
    Virtualization good: Webservers, middle tier stuff, etc.
    Virtualization bad: DBs, memory intensive, CPU intensive.


    We're starting to do the same. It looks like the article basically says "managing them is more complex, and you can overload the host." Well, duh! They're no harder to manage (or not much) than that many physical machines, but it does make it a lot easier (cheaper!) to create new ones. And you don't virtualize a machine that's already using 50% of a real system. Or even 25%. Most of ours sit at 1%, though. Modern processors are way overkill for most things they're being used for.
  • by jallen02 ( 124384 ) on Friday March 09, 2007 @02:15PM (#18291576) Homepage Journal
    I would say that every single one of those points in the article is being addressed in the enterprise VM arena. In the end, due to the raw extra control you get over virtual machines, it very much is the future. There is very little memory overhead. Once virtual infrastructure becomes fully developed and the scene plays out completely, I think it will actually make the things in the article easier, not harder. You have to pace yourself in how and where you use virtualization in your organization, but the benefits are huge for the right environments.

    As far as current-day performance goes: disk access is essentially close to, if not at, native speeds, and CPU speed is generally 70-80% of what the native processor can do. Most instructions aren't touched by a virtual machine monitor at all. Memory is more or less untouched, and you actually get memory savings. Say you have 4 VMs of Windows 2003 running. All of the pages of memory that are the same (say, core kernel pages and the like) get mapped to the same physical page. The guest operating systems never know. You can effectively scoop up a lot of extra memory if you have a lot of systems running the same software. All of those common libraries and Windows/Linux processes are only paid for once in memory. The technology is simply awesome. In a few years, with more and more powerful multicore systems, virtualization will make more and more sense, even on performance-critical systems.
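    The page-sharing idea described above (often called content-based or transparent page sharing) can be sketched in a few lines: hash each guest page, and map identical pages to a single physical copy. This is purely illustrative; the class and method names are made up, and real hypervisors also verify contents on hash collision and break sharing with copy-on-write when a guest writes.

    ```python
    # Toy sketch of content-based page sharing across VMs.
    # Illustrative only -- names are invented for this example.
    import hashlib

    PAGE_SIZE = 4096

    class Host:
        def __init__(self):
            self.physical_pages = {}   # content hash -> single shared copy
            self.mappings = {}         # (vm_id, guest_page_no) -> content hash

        def map_guest_page(self, vm_id, guest_page_no, content):
            assert len(content) == PAGE_SIZE
            digest = hashlib.sha256(content).hexdigest()
            # If an identical page already exists, reuse it instead of
            # allocating a new physical page; the guest never knows.
            self.physical_pages.setdefault(digest, content)
            self.mappings[(vm_id, guest_page_no)] = digest

    host = Host()
    kernel_page = bytes(PAGE_SIZE)       # identical "kernel" page in both guests
    host.map_guest_page("vm1", 0, kernel_page)
    host.map_guest_page("vm2", 0, kernel_page)
    assert len(host.mappings) == 2       # two guest pages...
    assert len(host.physical_pages) == 1 # ...one physical copy
    ```

    Two guests mapping the same kernel page cost one physical page instead of two, which is exactly where the "scooped up" memory comes from.
    
    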

    It has its problems, but I am a believer.
  • by HockeyPuck ( 141947 ) on Friday March 09, 2007 @02:16PM (#18291580)
    Why is it that all of a sudden, whenever someone says "virtualization," they imply it must be a VMware/Xen/Windows/x86 platform?

    It's not like these issues haven't existed on other platforms: mainframes, minis (AS/400), Unix (AIX/Solaris/HP-UX); heck, we've had it on non-computer platforms (VLANs, anyone?).

    And yes, using partitions/LPARs on those platforms required *GASP* planning, but in the age of "click once to install the DB and build the website," a.k.a. instant gratification, we refuse to do any actual work prior to installing, downloading, deploying...

    How about a few articles comparing AIX/HP-UX/Solaris partitions to x86 solutions...
  • by LodCrappo ( 705968 ) on Friday March 09, 2007 @02:28PM (#18291790)
    Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running. "The entire environment becomes as critical as the most critical application running on it," Mann explains. "It is also more difficult to schedule downtime for maintenance, because you need to find a window that's acceptable for all workloads, so uptime requirements become much higher."

    No, no, no. First of all, in a real enterprise-type solution (something this author seems unfamiliar with), the entire environment is redundant. "The" server? You don't run anything on "the" server; you run it on a server, and you just move the virtual machine(s) to another server as needed when there is a problem or maintenance is needed. It is actually very easy to deal with hardware failures: you don't ever have to schedule downtime, you just move the VMs, fix the broken node, and move on. For software maintenance you just snapshot the image, do your updates, and if they don't work out, you're back online in no time.

    In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.

    Uh... well, maybe you would just install more NICs? It seems the "expert" quoted in this article has played around with some workstation-level product and has no idea how enterprise-level solutions actually work.

    The only valid point I find in this whole article is the mention of additional training and support costs. These can be significant, but the flexibility and reliability of the virtualized environment are very often well worth the cost.
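    The snapshot-then-update-then-rollback maintenance workflow described above can be pictured as copy-on-write at the disk-image level. A toy sketch, with invented names (real hypervisors implement this with delta/redo-log disk files rather than in-memory dicts):

    ```python
    # Toy copy-on-write snapshot: writes land in a delta overlay,
    # reads fall through to the base image. Illustrative only.
    class DiskImage:
        def __init__(self, blocks):
            self.blocks = dict(blocks)          # block number -> data

    class Snapshot:
        def __init__(self, base):
            self.base = base
            self.delta = {}                     # blocks modified since snapshot

        def write(self, n, data):
            self.delta[n] = data                # base image stays untouched

        def read(self, n):
            return self.delta.get(n, self.base.blocks.get(n))

        def commit(self):                       # updates worked: merge delta down
            self.base.blocks.update(self.delta)
            self.delta.clear()

        def rollback(self):                     # updates broke: discard delta
            self.delta.clear()

    disk = DiskImage({0: "kernel-v1"})
    snap = Snapshot(disk)
    snap.write(0, "kernel-v2-broken")           # apply a bad update
    assert snap.read(0) == "kernel-v2-broken"
    snap.rollback()                             # back online in no time
    assert snap.read(0) == "kernel-v1"
    ```

    The point of the design: the bad update never touched the base image, so "undo" is just dropping the overlay, which is why snapshot-based maintenance is so much cheaper than restoring a physical box.
    
    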

  • by cbreaker ( 561297 ) on Friday March 09, 2007 @02:38PM (#18291910) Journal
    Indeed. If you have a proper ESX configuration (at least two hosts, SAN back-end, multiple NICs, supported hardware), you'll find that almost none of the points are valid.

    Teaming, hot-migrations, resource management, and lots of other great tools make modern x86 virtualization really enterprise caliber.

    I think that the people that see it as a toy are people that have never used virtualization in the context of a large environment, being used properly with proper hardware. You can virtualize almost any server if you plan properly for it.

    In the end, by going virtual you end up removing so much complexity from your systems that you'll wonder how you did it before. No longer does each server have its own drivers, quirks, OpenManage/hardware monitoring, etc. You can create a new VM from a template in 5 minutes, ready to go. You can clone a server in minutes. You can snapshot the disks (and RAM, in ESX3), and you can migrate VMs to new hardware without bringing them down. You can create scheduled copies of production servers for your test environment. So much simpler than all-hardware.

    I'll admit that you shouldn't use virtual servers for everything (yet) but you will eventually be able to run everything virtual, so it's best to get used to it now.
  • by div_2n ( 525075 ) on Friday March 09, 2007 @03:05PM (#18292314)
    I'm managing VI3 and we use it for almost everything. Ran into some trouble with one antiquated EDI application that just HAD to have a serial port. That is a long discussion, but for reasons I'm quite sure you could guess, I offloaded it to an independent box. We run our ERP software on it and the vendor has tried (unsuccessfully) several times to blame VMWare for issues.

    You don't mention it, but consolidated backup just rocks. I have some external Linux based NAS machines that use rsync to keep local copies of both our nightly backups and occasional image backups at both sites.

    Thanks to VMware, it's like I've told management: "Our main facility could burn to the ground, and I could have our infrastructure back up and running at our remote site before the remains stop smoldering, much less before we get a check from the insurance company."
  • Re:Yawn (Score:4, Insightful)

    by ergo98 ( 9391 ) on Friday March 09, 2007 @03:08PM (#18292350) Homepage Journal

    I know I'm not the only one as I have seen this advice from a number of top professionals that I know and respect.

    Indeed, it has become a bit of an unqualified, blanket meme: "Don't put database servers on virtual machines!" we hear. I heard it just yesterday from an outsourced hardware rep, for crying out loud (they were trying to demonstrate that they "get" virtualization).

    Ultimately, however, it's one of those easy bits of "wisdom" that people parrot because it's cheap advice, and it buys some easy credibility.

    Unqualified, however, the statement is complete and utter nonsense. It is absolutely meaningless (just because something can superficially get called a "database" says absolutely nothing about what usage it sees, its disk access patterns, CPU and network needs, what it is bound by, etc).

    An accurate rule would be "a machine that saturates one of the resources of a given piece of hardware is not a good candidate to be virtualized on that same piece of hardware" (e.g. your aforementioned database server). That really isn't rocket science, and I think it's obvious to everyone. It also doesn't rely upon some meaningless simplification of application roles.

    Note that all of the above is speaking more towards the industry generalization, and not towards you. Indeed, you clarified it more specifically later on.
  • by LinuxDon ( 925232 ) on Friday March 09, 2007 @03:13PM (#18292426)
    While I completely agree with you on this in many other areas, I don't in this case.
    The reason is that virtualization -simplifies- tech support in every way (except for real-time applications).

    Load problems, especially in a virtualized environment, are extremely easy to manage technically.
    You can just add additional servers and move the virtual machine to the new hardware while it's running.

    It's management who will be having a budget problem when this happens, while tech support is not having a technical problem.
  • by Anonymous Coward on Friday March 09, 2007 @03:27PM (#18292592)
    It's a toolbox. Regardless of whether you are in IT or a mechanic's shop, you need the right tools for the right job. I'm so tired of hearing at work how much better one language is than another, or how this technique is superior to that technique. It's all a matter of what you're trying to accomplish. You should ask yourself whether this best accomplishes the mission or not. Everything has its pluses and minuses; just be a man (or woman) and accomplish the mission. Get the job done, and use the tools available to you that best accomplish the tasks at hand in the most timely manner.
  • Re:Yawn (Score:3, Insightful)

    by cbreaker ( 561297 ) on Friday March 09, 2007 @03:36PM (#18292716) Journal
    It's moot because it's a specific issue with a specific hypervisor, not a problem inherent with virtualization itself.

    There are some things I won't virtualize right now: two of our very busy database servers and the two main file servers. Part of it is for the performance of those systems, but part of it is also because I don't want those systems to chew up a high percentage of the overall virtual cluster.

    I don't think x86 VMs are a new science anymore. They're just somewhat new to the enterprise. VMware released their first product in 1998, and in the computer world, nine years of development for a product/technology is a good deal of time.

    Virtual machines have different performance characteristics than actual physical servers, and different hypervisors can change things as well. You do need to take special precautions when going virtual, but the effort is worth it for the amazing amount of control and ease of use your infrastructure will gain.

    And the greatest part is when I need a new server, I just click a few buttons on the mouse and hit GO. The VM is ready in moments, joined to the domain and ready to go.
  • Re:Yawn (Score:3, Insightful)

    by ShieldW0lf ( 601553 ) on Friday March 09, 2007 @03:52PM (#18292932) Journal
    I thought this was really funny.

    Andi Mann, senior analyst with Enterprise Management Associates, an IT consultancy based in Boulder, Colo., says that virtualization's problems can include cost accounting (measurement, allocation, license compliance); human issues (politics, skills, training); vendor support (lack of license flexibility); management complexity; security (new threats and penetrations, lack of controls); and image and license proliferation.

    How many times does the word "license" appear in there?

  • This is FUD (Score:5, Insightful)

    by fyngyrz ( 762201 ) * on Friday March 09, 2007 @06:38PM (#18295052) Homepage Journal
    ...virtualization's problems can include cost accounting (measurement, allocation, license compliance); human issues (politics, skills, training); vendor support (lack of license flexibility); management complexity; security (new threats and penetrations, lack of controls); and image and license proliferation.

    Examine that quote from the article closely. See anything there that indicates virtualization "doesn't work"? No, nor do I. What they are talking about here has nothing to do with how well virtualization works, what they're complaining about is that a particular tool requires competence to use well in various work environments. Well, no one ever said that virtualization would gift brains to some middle level manager, or teach anyone how to use an office suite, or imbue morals and ethics into those who would steal; virtualization lets you run an operating system in a sandbox, sometimes under another operating system entirely. And it does that perfectly well, or in other words, it works very well indeed. I call FUD.

  • Re:Yawn (Score:3, Insightful)

    by giminy ( 94188 ) on Saturday March 10, 2007 @04:15AM (#18298202) Homepage Journal
    This may be wrong, but I've always looked at load as the number of processes waiting for resources (usually disk, CPU, or network).

    Yep, it's wrong. Load average is defined as the number of processes that are runnable. Processes waiting for resources (at least, resources requested via system calls, like trying to talk to the disk or network) are put on the WAIT queue and are not runnable. Thus, they do not contribute to load. Processes that have all the data they need and just need a CPU slice contribute to load.

    I've also seen where load was reported as N/NCPUs and N regardless of the number of CPUs.

    The first one is wrong. Out of curiosity, where did you see this?

    Even if the real meaning is the average number of processes in the run queue, that does not tell you much.

    Yay! You're getting it!

    Thinking of it as the number of processes waiting for some piece of hardware seems more accurate.

    Oh wait, you're not getting it :(. Thinking of load average in this way is very precisely incorrect.

    To be precise, I would say "number of processes waiting for the CPU". Processes that are waiting for non-cpu hardware are placed in the WAIT queue (they aren't runnable) and do not contribute to the load. They will be placed back on the run queue once the data that they are waiting for is available (at that point, they will contribute to the load again). Going back to my other example, if a process is waiting for network data, or disk data, or [insert special whizbang hardware here] data, it will be in the WAIT state, and will not contribute to load. Instead, it will contribute to IOWAIT time. Hence the need for looking at more numbers than just load average.

    Generally, processes that are just waiting for a CPU slice will do okay in a VM. CPUs are fast, and there isn't any competition for a virtual CPU slice (except from processes in the virtualized OS). The VM host will (probably) ensure that each guest OS gets a fair share of time. So CPU-intensive processes in a guest will run slower, but they will run predictably slower. Virtualizing processes that have a lot of I/O time is bad, because an I/O-bound process in a VM is really 'competing' for the resource with processes in *other* VMs. This competition is very difficult to quantify or predict, because we're not used to thinking of systems in this manner yet. Remember that these I/O-bound processes are not contributing to the load average on their respective guest OS, because they are on the guest OS wait queue while waiting for hardware. Hence my original argument that load averages are wholly inaccurate and it is a bad idea to rely on that measurement for deciding whether a system is virtualizable.
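    The run-queue vs. wait-queue distinction above can be observed directly on Linux, where /proc exposes both numbers. A minimal, Linux-specific sketch (with one caveat: Linux, unlike the classic Unix definition described above, also counts tasks in uninterruptible disk sleep toward load, which is yet another reason to read these numbers carefully):

    ```python
    # Read the run-queue load averages and the cumulative iowait time
    # on Linux. /proc/loadavg gives the 1/5/15-minute averages;
    # the first line of /proc/stat is "cpu user nice system idle iowait ..."
    # in USER_HZ ticks.
    def read_load_and_iowait():
        with open("/proc/loadavg") as f:
            one, five, fifteen = (float(x) for x in f.read().split()[:3])
        with open("/proc/stat") as f:
            cpu = f.readline().split()
        iowait_ticks = int(cpu[5])      # time CPUs sat idle with I/O pending
        return (one, five, fifteen), iowait_ticks

    loads, iowait = read_load_and_iowait()
    print("load averages:", loads, "iowait ticks:", iowait)
    ```

    A box can show a tiny load average while iowait climbs steadily, and that box is exactly the kind of deceptive consolidation candidate the parent is warning about.
    
    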

    Reid
  • by PermanentMarker ( 916408 ) on Saturday March 10, 2007 @04:59AM (#18298342) Homepage Journal
    Basically, you spend a lot, I mean a damn lot, of money on super-fast hardware and expensive SANs.

    Most people forget there is also a term called "scaling out": having multiple cheap machines running your applications. You might not even need clusters; try it, it's real cheap and distributed.

    Try 8 desktops with no SAN running your mail, instead of 1 virtualized cluster (for about the same price). Okay, one may crash, but that only affects 1/8 of your users. Compare risk to cost efficiency and value, and try to determine your costs. Zero crashes at an extreme cost isn't a real solution, in my opinion; it is costs that should decide this. And to be honest, most companies (although they claim they can't) can have a server down for an hour. And an hour is a long time for restoring 1/8 of your users' mail.

    I'm focused on mail servers, but it could be SQL or whatever too.

    In my opinion these are nightmare solutions, although they give me a lot of work.
    But thinking of how much money is spent on them makes me ashamed,
    as there are better ways to put your money to use.
    Oh, and I'm not alone in thinking this; there are more specialists quietly talking about it, but afraid to say it out loud. I think it should be said more often.
