Operating Systems Still Matter In a Containerized World

New submitter Jason Baker writes: With the rise of Docker containers as an alternative for deploying complex server-based applications, one might wonder, does the operating system even matter anymore? Certainly the question gets asked periodically. Gordon Haff makes the argument on Opensource.com that the operating system is still very much alive and kicking, and that a hardened, tuned, reliable operating system is just as important to the success of applications as it was in the pre-container data center.
  • by Anonymous Coward

    Remember Matthew 7:26: A foolish man built his house on sand.

    • What does it say about condensed water vapor?

    • Remember Matthew 7:26: A foolish man built his house on sand.

      - and what is silicon made from? ;-)

    • Listen, lad. I've built this kingdom up from nothing. When I started here, all there was was swamp. All the kings said I was daft to build a castle in a swamp, but I built it all the same, just to show 'em. It sank into the swamp. So, I built a second one. That sank into the swamp. So I built a third one. That burned down, fell over, then sank into the swamp. But the fourth one stayed up. An' that's what you're gonna get, lad -- the strongest castle in these islands.

    • I spent several minutes reading "What is it?" on docker's website, and I still don't understand what it is. Is it like a JVM?

      • by Agares ( 1890982 )
        That's the impression I got of it. They're probably purposely vague, since apparently it isn't anything new and has already been done. Being vague and talking about how great something is without really explaining it will sometimes get people to believe it's something new when in fact it isn't.
  • I blame the cloud.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Servers are for techie losers. The Cloud is the hip shit, bro.

    • by DivineKnight ( 3763507 ) on Tuesday August 19, 2014 @11:35PM (#47709643)

      More along the lines of "they never knew what a server was, and would artfully dodge your phone calls, elevator meetings, and eye contact to avoid accidentally imbibing any knowledge that might furnish them with this understanding; all they know is that the slick salesman with the nice sports car and itemized billing said they'd magically do everything from their end and never bother them, and they believed them."

    • by frikken lazerz ( 3788987 ) on Wednesday August 20, 2014 @12:08AM (#47709745)
      The server is the guy who brings me my food at restaurants. I guess people aren't eating at restaurants anymore because the economy is tough.
    • Deal with client side developers all the time asking for 100MB of data "right now" across an internet pipe (which might be coming from Africa or some place with really bad service): why shouldn't we get all the data at the same time? It seems to me that a lot of the performance tuning knowledge is getting lost on a large percentage of devs: the solution is always to get someone to get you a fatter internet pipe, a bigger server, drop everything and try a new framework, etc. Server side developers do it too: "we h…"

      • by Lennie ( 16154 )

        One way or the other this is going to solve itself, right?

        Either pipes etc. get a lot bigger (things like silicon photonics in the data center and NVRAM will help), or people with more knowledge of the problem will find better jobs.

      • It seems to me that a lot of the performance tuning knowledge is getting lost on a large percentage of devs

        As a web developer I'd like to care about such things, but I spend all my time four or five layers of abstraction away from the server and all the performance-related backlogs are prioritized so far behind new revenue-producing features that they'll happen sometime between "six decades from now" and "heat death of the universe."

        • My experience is that that's generally the case. The problem with performance: you have to convince your boss to spend a month looking into something that "might" help vs. doing something they think will get customers. That said, often new features are pulled out of their (or distributors') hats in the hope that it's something the customer will actually care about. Web solutions should help with the monetization of performance a bit: you can directly relate people leaving your site to the latency they experienced, can r…

  • Of Course They Do! (Score:5, Interesting)

    by Anonymous Coward on Tuesday August 19, 2014 @11:27PM (#47709607)

    Stripped to the bone, an operating system is a set of APIs that abstract the real or virtual hardware to make applications buildable by mere mortals. Some work better than others under various circumstances, so the OS matters no matter where it's running.

    • by DivineKnight ( 3763507 ) on Tuesday August 19, 2014 @11:32PM (#47709629)

      I can't wait for programmers, sometime in 2020, to rediscover the performance boost they receive running an OS on 'bare metal'...

      • by Urkki ( 668283 )

        Except there will be no performance boost. There may be a blip in some benchmark.

        Additionally, programmers are already running *application code* on bare metal when that kind of performance matters, most commonly on GPUs.

      • by gmuslera ( 3436 )

        The point of Docker and containers in general is that they run at essentially native performance. There is no VM and no virtualized OS; you run under the main OS kernel, but it doesn't let you see the main OS filesystem, network, processes and so on, and doesn't let you do operations that are risky for the stability of the main system. There is some overhead in filesystem access (in the case of Docker, you may be running on AUFS, device mapper, or others that will have different kinds of impact on several operations).
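
        What's being described here is raw kernel machinery, and it's small enough to see directly. A minimal sketch of the PID-namespace part, assuming a Linux 3.8+ host and root privileges; this is the mechanism Docker configures, not Docker itself:

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            /* Ask the running kernel for a private PID namespace. No VM,
               no second kernel: every syscall below is served natively. */
            if (unshare(CLONE_NEWPID) != 0) {
                perror("unshare");
                return EXIT_FAILURE;
            }

            /* The *next* child becomes PID 1 inside the new namespace. */
            pid_t child = fork();
            if (child == 0) {
                printf("inside:  pid=%d\n", (int)getpid());  /* prints 1 */
                return EXIT_SUCCESS;
            }
            printf("outside: child's host pid=%d\n", (int)child);
            waitpid(child, NULL, 0);
            return EXIT_SUCCESS;
        }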

    • by Urkki ( 668283 )

      No, stripped to the bone, an operating system offers no APIs at all, and it will not run any user applications. It will just tend to itself. Then you add some possibilities for user applications to do things, and the fewer the better from a security and stability point of view. Every public API is a potential vulnerability, a potential window to exploit some bug.

      • No, stripped to the bone, operating system offers no APIs at all, and it will not run any user applications.

        Uh, what would be the point of such an operating system?

        • by Urkki ( 668283 )

          No, stripped to the bone, operating system offers no APIs at all, and it will not run any user applications.

          Uh, what would be the point of such an operating system?

          Point would be to have a stripped-to-the-bone OS.

          Actually it's kind of the same as having a stripped-to-the-bone animal (i.e. a skeleton): you can, for example, study it, put it on display, give it to the kids to play with... ;)

          • How would you even know if it's running?
            • by perpenso ( 1613749 ) on Wednesday August 20, 2014 @02:42AM (#47710297)

              How would you even know if it's running?

              The Morse code on an LED.

            • by Urkki ( 668283 )

              How would you even know if it's running?

              Well, for the totally barebone version, you could run it in a VM and examine its memory contents there.

              I think even a barebone OS would need *some* functionality. It would have to be able to shut itself down, on a PC probably via ACPI events. It would probably need to be able to start the first process/program, because I think an OS has to be able to do that, even if that process then wouldn't be able to do anything due to lack of APIs. Etc. So even barebone, it still needs to do something.

              More practical th…

              • It would have to be able to shut itself down, on PC probably by ACPI events.

                Oh, that's communication, then you can hack it.

                • by Urkki ( 668283 )

                  It would have to be able to shut itself down, on PC probably by ACPI events.

                  Oh, that's communication, then you can hack it.

                  I don't know, it could be made a one-time trigger which starts the shutdown. If there's no way to get altered input through, that will not allow hacking. It should be simple enough to be made bug-free.

                    It should be simple enough to be made bug-free.

                    I have a book that talks about reliable design. On one page, they demonstrate that they have a 4-line program without any bugs.

                    Then in the next paragraph, they admit that the first few versions had bugs.

        • Isn't that what a default OpenBSD installation is about?

    • Exactly. And the limitations of an OS can very much determine how an application can perform and what it can do. With Windows tablets, both RT and Pro, any application that can read files can automatically read shared network folders and OneDrive, because it's been abstracted away properly from the application.

      Contrast that with Android and iOS, where this functionality isn't abstracted away from the application, and any application that wants to access a network drive or the default cloud drive (Google Drive or iCloud) has to implement that support itself.
  • Advert? (Score:5, Insightful)

    by Anonymous Coward on Tuesday August 19, 2014 @11:30PM (#47709623)

    Is this just an advert for Docker?

    • Re:Advert? (Score:5, Interesting)

      by ShanghaiBill ( 739463 ) on Wednesday August 20, 2014 @12:35AM (#47709819)

      Is this just an advert for Docker?

      Yes. They refer to the "rise" of Docker, yet I had never heard of it before. Furthermore, Docker doesn't even fit with the main point of TFA that "the OS doesn't matter". Here is a complete, exhaustive list of all the OSes that Docker can run on:

      1. Linux

      • by jbolden ( 176878 )

        Docker is legit and important. There are a half-dozen of these containerized OSes. Docker is the most flexible (it runs on a wide range of Linuxes, while most of the others are specific to a particular cloud vendor). It is also the most comprehensive, though SoftLayer's and Azure's might pass it in that regard. A Docker container is thicker than a VM but thinner than a full Linux distribution running on a VM. It is more accurate to consider Docker an alternative to VMs plus Linux distributions running in each VM.

        • In what sense is a Docker container thicker than a VM? I always thought it was thinner/lighter - e.g. A host can allocate varying amounts of memory to a container (with optional limits). Whereas running a VM will always put you back that much memory on its host.
          • by jbolden ( 176878 )

            The Docker Engine is much thicker than a hypervisor, essentially containing the full suite of services of a guest OS.

            • by Anonymous Coward

              I don't understand what you mean.

              Docker is nothing more than a configuration interface for Linux Containers (a feature of the Linux kernel). The engine is not a hypervisor. A "dockerized" VM could be seen as a chrooted directory (or mountpoint) with its own PIDs, FDs, sub-mountpoints, network interfaces, etc.
              It shares the kernel of the "real machine", and it relies on the "real kernel" services for everything.

              I doubt there could be anything lighter.

              It just has its own init, so everything inside the VM is…
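
              A rough sketch of that "chrooted directory" view, assuming root privileges and a prepared minimal root filesystem at a hypothetical /srv/rootfs; a real runtime would use pivot_root and unshare the other namespaces too:

              #define _GNU_SOURCE
              #include <sched.h>
              #include <stdio.h>
              #include <sys/mount.h>
              #include <unistd.h>

              int main(void)
              {
                  /* Private mount namespace: mount changes stay invisible
                     to the host. */
                  if (unshare(CLONE_NEWNS) != 0) { perror("unshare"); return 1; }
                  if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
                      perror("mount"); return 1;
                  }

                  /* Confine the filesystem view to the container's root
                     (hypothetical path; must contain a /bin/sh). */
                  if (chroot("/srv/rootfs") != 0 || chdir("/") != 0) {
                      perror("chroot"); return 1;
                  }

                  /* Same "real machine" kernel; only the view has changed. */
                  execl("/bin/sh", "sh", (char *)NULL);
                  perror("execl");
                  return 1;
              }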

              • by jbolden ( 176878 )

                Containers play the role of VMs. They are competing paradigms for how to deploy services. The Docker engine is responsible for allocating resources to containers and for starting and terminating them, which is what a hypervisor does.

                • Docker has an engine? I haven't actually used Docker yet, because I've already been using LXC for some years and just haven't had a "free day" to play with it. But I've always been under the impression that Docker was just an abstraction around LXC, making containers easier to create. Is the Docker "engine" actually LXC?

                  Serious question, because the main reason I haven't invested in Docker is my perception that it won't really save (me) time if you already understand LXC well. Is there some other benefit be…

                  • by jbolden ( 176878 )

                    No, the engine uses LXC as a component. There is a lot more to Docker than just LXC. But this comes up a lot, so it's in the FAQ, and I'll just quote the FAQ: Docker is not a replacement for LXC. "LXC" refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations. On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities: …
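
                    To make the "control groups" half of that quote concrete, here's a minimal sketch assuming a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory and root privileges; Docker drives this same kernel interface through its own abstractions, and "my-sandboxed-app" below is just a placeholder:

                    #include <stdio.h>
                    #include <stdlib.h>
                    #include <sys/stat.h>
                    #include <unistd.h>

                    /* Write a single value into a cgroup control file. */
                    static void write_file(const char *path, const char *value)
                    {
                        FILE *f = fopen(path, "w");
                        if (!f) { perror(path); exit(EXIT_FAILURE); }
                        fputs(value, f);
                        fclose(f);
                    }

                    int main(void)
                    {
                        /* Create a group and cap it at 64 MiB of RAM. */
                        mkdir("/sys/fs/cgroup/memory/demo", 0755);
                        write_file("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes",
                                   "67108864");

                        /* Join the group; children inherit the limit. */
                        char pid[32];
                        snprintf(pid, sizeof pid, "%d", (int)getpid());
                        write_file("/sys/fs/cgroup/memory/demo/tasks", pid);

                        /* Placeholder binary; everything exec'd from here
                           on shares that 64 MiB budget. */
                        execlp("my-sandboxed-app", "my-sandboxed-app", (char *)NULL);
                        perror("execlp");
                        return EXIT_FAILURE;
                    }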

                    • by Anonymous Coward

                      You are mistaken with regard to the relationship Docker has (technically, had) with LXC. When Docker was originally created, it basically sat on top of LXC and used its capabilities for containers. Nowadays, it uses libcontainer underneath its abstractions and doesn't use LXC at all.

        • OK, so it's kind of like Crossover Games -- it creates a wrapper with everything that an app (game) needs from an OS, but doesn't require the entire OS.

          You are basically then allocating memory and feeding tasks to the wrapper without expending resources you don't need. To describe what it can and cannot do, all you can say is: "it depends."

          Is that about right?

          • by jbolden ( 176878 )

            Sort of. Using your analogy, the wrapper in this case is full-featured and has all the capabilities (plus a bit more) of a Linux for almost all applications. Each application only needs whatever libraries aren't provided by Docker. And of course Docker can be modified, so if you need the same library over and over... it can be added to the wrapper.

      • by Nimey ( 114278 )

        So because you personally have never heard of Docker before, this story must be a slashvertisement?

        That's some interesting logic.

        • So because you personally knew about Docker before, this means everybody should know about it too?

          That's some interesting logic.

          • by Nimey ( 114278 )

            https://yourlogicalfallacyis.c... [yourlogicalfallacyis.com]

            Here's a nickel, kid. Buy yourself a better argument.

            • It was the same argument as your own.

              • by Nimey ( 114278 )

                I suppose it might seem that way to an idiot.

                • I'm glad to know you found the problem with this argument: yourself.

                  • by Nimey ( 114278 )

                    You're one of those people who thrive on thinking they're right while being so, so wrong. No wonder you're a libertarian.

                    • And you're so ignorant that you think everyone is an American.

                      By the way, the Internet connects people from all over the world, not just the U.S.A.

                    • by Nimey ( 114278 )

                      I never said you were an American, and somehow you're not denying being a libertarian.

                      Do they not teach you how to read anything in libertopia, or is this level of idiocy an acquired skill?

                    • Will I need to deny anything I'm not to satisfy your curiosity?

                      Alright then. I'm not a rock, a computer, a meat popsicle, a desk, a cloud, a car, a building, a libertarian, a democrat, a republican, a republicrat, a demopublican, a popcorn kernel, a shoe, a plane, an iPod, a granola bar, a microwave oven, an Android phone, an A.I., a door, a truck... do I really need to list everything?

                      This is just getting annoying. Let's put each other on our "enemies" list and be done with it.

                    • by Nimey ( 114278 )

                      I don't believe you.

  • by starfishsystems ( 834319 ) on Tuesday August 19, 2014 @11:40PM (#47709657) Homepage
    "The operating system is therefore not being configured, tuned, integrated, and ultimately married to a single application as was the historic norm, but it's no less important for that change."

    What? I had to read this a couple of times. The historic norm was for a single operating system to serve multiple applications. Only with the advent of distributed computing did it become feasible, and only with commodity hardware did it become cost-effective, to dedicate a system instance to a single application. Specialized systems for special purposes came into use first, but the phenomenon didn't really begin to take off in a general way until around 1995.
    • by Nyder ( 754090 )

      "The operating system is therefore not being configured, tuned, integrated, and ultimately married to a single application as was the historic norm, but it's no less important for that change."

      What? I had to read this a couple of times. The historic norm was for a single operating system to serve multiple applications. Only with the advent of distributed computing did it become feasible, and only with commodity hardware did it become cost-effective, to dedicate a system instance to a single application. Specialized systems for special purposes came into use first, but the phenomenon didn't really begin to take off in a general way until around 1995.

      Going to point out that in the DOS days, you'd have different memory setups for different stuff. Plenty of apps (mostly games, though) required a reboot to get the correct memory manager set up. Granted, this was a 640k-barrier problem, and the main OS didn't actually load/not load anything different, just the memory managers and possibly 3rd-party programs.

      Even back in the C64 days you'd have to reboot the computer after playing a game, since you didn't normally have a way to exit it. Granted that w…

      • "Personal Computers" not "computers". We also had mainframe, minicomputers and workstations that were pretty good at running multiple programs in parallel.
      • OS/2 was really good at running older DOS apps/games.

      • by putaro ( 235078 )

        Despite the name, DOS was not an operating system.

      • DOS was just a niche, not even a real OS, and it was only around for a small fraction of the time operating systems have been around. Unless you mean mainframe DOS instead of the PC stuff. Even by the standards of its time it wasn't an OS.

      • To clarify a bit, I was referring to the period between 1960 and today, when multiprocessing systems established what could properly be called the "historic norm" for the industry. That's the lineage, starting with mainframes, which led directly to virtualization. In fact we were working on primitive virtualization and hypervisors even then, though for the sake of faster system restarts, live failovers and upgrades rather than anything like the cloud services of today. I hadn't thought to include hobbyist systems in this account because they're not really part of this lineage. It was a long time before they became powerful enough to borrow from it. What they did contribute was an explosion in commodity hardware, so that when networking became ubiquitous it became economical to dedicate systems to a single application. But that comes quite late in the story.
        • by Nyder ( 754090 )

          To clarify a bit, I was referring to the period between 1960 and today, when multiprocessing systems established what could properly be called the "historic norm" for the industry. That's the lineage, starting with mainframes, which led directly to virtualization. In fact we were working on primitive virtualization and hypervisors even then, though for the sake of faster system restarts, live failovers and upgrades rather than anything like the cloud services of today. I hadn't thought to include hobbyist systems in this account because they're not really part of this lineage. It was a long time before they became powerful enough to borrow from it. What they did contribute was an explosion in commodity hardware, so that when networking became ubiquitous it became economical to dedicate systems to a single application. But that comes quite late in the story.

          Ya, I ended up doing what I said in the last part of my post and didn't think of the fact that computers were around before my kid days. =(

  • by Anonymous Coward

    FreeBSD and Solaris et al. have been doing OS-level virtualization for years; let's ask them about host security and tuning and build on their experience.
    FreeBSD and illumos are also both open, with far more experience in this area.
    Singling out Linux as the operating system of choice makes him look like a tool.

  • Instead of trying to harden an OS, why not use a system designed to be secure from the start, one that supports multilevel security [wikipedia.org]? The technology was created in response to data processing demands during the Vietnam conflict, and perfected during the 70s and 80s.

  • Was anyone really wondering if operating systems no longer mattered? Might as well have gone with "Nothing is different" as your headline.
    • I question it. When you're running a database implemented in Java on a filesystem in an OS inside a VM on a filesystem inside another OS on virtual memory/paging hardware, that's 8 levels of largely redundant access control / containerization / indirection. It's a supreme mess and imposes a big burden of runtime cost and more importantly the burden of configuring all those layers of access control.
      • by putaro ( 235078 )

        Some people like nested virtual machines, some people like candy colored buttons. What else are you going to do with all those resources? :-)

  • Dear Docker, can you make it work on my Windows machine? Your scripts don't work.

  • by Anonymous Coward

    Even if you store your data in "the cloud", that data is stored on a server someplace, and that server has to have an operating system.

  • by Junta ( 36770 ) on Wednesday August 20, 2014 @10:06AM (#47712421)

    So to the extent this conversation does make sense (it is pretty nonsensical in a lot of areas), it refers to a phenomenon I find annoying as hell: application vendors bundling all their OS bits.

    Before, if you wanted to run vendor X's software stack, you might have had to mate it with a supported OS, but at least vendor X was *only* responsible for the code they produced. Now, increasingly, vendor X *only* releases an 'appliance' and is in practice responsible for the full OS stack despite having no competency to be in that position. Let's look at the anatomy of a recent critical update: OpenSSL.

    For the systems where the OS has applications installed on top, patches were ready to deploy pretty much immediately, within days of the problem. It was a relatively no-muss affair. Certificate regeneration was an unfortunate hoop to go through, but it's about as painless as it could have been given the circumstances.

    For the 'appliances', some *still* do not even have an update for *Heartbleed* (and many more didn't bother with the other OpenSSL updates). Some have updates, but only in versions that also include unwanted functional changes to the application, and the vendor refuses to backport the relatively simple library change. In many cases, applying an 'update' actually resembles a reinstall: you have to download a full copy of the new image and do some 'migration' work to maintain data continuity.

    Vendors have traded generally low amounts of effort in initial deployment for unmaintainable messes with respect to updates.

  • It runs on any machine as long as the core library is installed, right? Sounds a lot like Java to me.
