Operating Systems Still Matter In a Containerized World

New submitter Jason Baker writes: With the rise of Docker containers as an alternative for deploying complex server-based applications, one might wonder, does the operating system even matter anymore? Certainly the question gets asked periodically. Gordon Haff makes the argument on Opensource.com that the operating system is still very much alive and kicking, and that a hardened, tuned, reliable operating system is just as important to the success of applications as it was in the pre-container data center.


  • by Urkki ( 668283 ) on Wednesday August 20, 2014 @03:00AM (#47710129)

    First, the assumption is that we're talking about the kind of virtual machines people run in VirtualBox and the like, executing instructions natively on the host CPU. In other words, we're not talking about full emulators like QEMU in its software-emulation mode.

    The host's per-VM RAM overhead is essentially fixed, while guest memory sizes keep growing along with memory sizes in general, so as a percentage the overhead asymptotically approaches 0%. For example, a fixed overhead of roughly 100 MB is about 10% of a 1 GB guest but well under 1% of a 64 GB guest.

    30% CPU? Just how do you get that number? Virtual memory page-table switches and the like may carry some overhead in a VM, but ordinary application code runs on the raw CPU just as it does under the host OS.

    And in the normal use cases there's no emulation of hardware, just virtualization of it. Hardware can also be connected directly to the VM at the lowest possible level, bypassing most of the host OS driver layers. (Beyond performance, this is very convenient with mice and keyboards in multi-monitor setups: each monitor can run a VM in full screen with a dedicated keyboard and mouse in front of it, so there's no more looking at one VM while focus is in another.)

  • by serviscope_minor ( 664417 ) on Wednesday August 20, 2014 @06:45AM (#47710843) Journal

    Yeah, but there's the memory penalty, and the conflicting CPU schedulers.

    If you have 20 VMs basically running the same code, then all of the code segments are going to be the same. So people are doing memory deduplication. Of course that's inefficient, so I expect people are looking at paravirtualizing that too.

    That way you'll be able to inform the VM system that you're loading an immutable chunk of code, and if anyone else wants to use it they're free to. So it becomes an object of some sort which is shared.

    And thus people will have inefficiently reinvented shared objects, and will probably index them by hash or something.

    The same will happen with CPU scheduling too. The guest and the host both have ideas about who wants CPU when. The guests can already yield; sooner or later they'll be able to inform the host that they want some CPU too.

    And thus was reinvented the concept of a process with threads.

    And sooner or later, people will start running apps straight on the VM, because the things it provides are basically enough to run a program, so why bother with the host OS? Or perhaps they won't.

    But either way, people will find that the host OS becomes a bit tied down to a particular machine (or not, and thus people reinvent portability layers), and that makes deployment hard. So wouldn't it be nice if we could somehow share just the essentials of the hardware between multiple hosts, to fully utilise our machines?

    Except that's inefficient, and there's a lot of guesswork, so if we allow the hosts and the host-hosts to share just a liiiiiiiitle bit of information, we'll be able to make things much more efficient.

    And so it continues.

  • by philip.paradis ( 2580427 ) on Wednesday August 20, 2014 @07:30AM (#47711005)

    Modern virtualization doesn't have the overhead the GP cited; the 20% RAM loss and 30% CPU capacity loss numbers cited by the AC you responded to are absurd fabrications. I use KVM on Debian hosts to power a large number of VMs running a variety of operating systems, and the loss of CPU bandwidth and throughput with guests is negligible due to hardware virt extensions in modern CPUs (where "modern" in fact means "most 64-bit AMD and Intel CPUs from the last few years, plus a small number of 32-bit CPUs"). Using the "host" CPU setting in guests can also directly expose all host CPU facilities, resulting in virtually no losses in capabilities for mathematically intensive guest operations.

    As far as memory is concerned, far from resulting in a 20% loss of available RAM, I gain a significant amount of efficiency in overall memory utilization using KSM [linux-kvm.org] (again, used with KVM). On a host running many similar guests, extremely large gains in memory deduplication may be seen. Running without KSM doesn't result in significant memory consumption overhead either, as KVM itself hardly uses any RAM.

    The only significant area of loss seen with modern virtualization is disk IO performance, but this may be largely mitigated through the use of correctly tuned guest VM settings and updated VirtIO drivers. The poster you replied to is ignorant at best, and trolling at worst.
