
Microsoft Creates a Docker-Like Container For Windows

angry tapir writes: Hoping to build on the success of Docker-based Linux containers, Microsoft has developed a container technology to run on its Windows Server operating system. The Windows Server Container can be used to package an application so it can be easily moved across different servers. It uses a similar approach to Docker's, in that all the containers running on a single server share the same operating system kernel, making them smaller and more responsive than standard virtual machines.
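
In practice, the portability the summary describes boils down to building an image once and running it on any host with a compatible kernel. A minimal sketch of that workflow, driving the stock docker CLI from Python (the image tag and port numbers are hypothetical):

```python
import subprocess

def docker(*args: str) -> str:
    """Run one docker CLI command and return its stdout."""
    result = subprocess.run(["docker", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# Build the application into an image once (expects a Dockerfile in the
# current directory; "myshop/web:1.0" is a hypothetical tag).
docker("build", "-t", "myshop/web:1.0", ".")

# Push it to a registry the target servers can reach...
docker("push", "myshop/web:1.0")

# ...then any host with a compatible kernel runs it unchanged: no per-host
# install step, because the image already carries the app's dependencies.
docker("run", "--detach", "--publish", "8080:80", "myshop/web:1.0")
```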
  • Finally ... (Score:3, Interesting)

    by nbvb ( 32836 ) on Thursday April 09, 2015 @04:46AM (#49436469) Journal

    Solaris Zones comes to Windows.

    Welcome to 2005.

    Why am I the only one completely unimpressed with Docker? It feels like a hacked together Solaris to me .... no thanks, I'll take the real deal.

    • by aliquis ( 678370 )

      New file system for 2020? ;)

      Then again Windows did some other things better earlier.

      • I think it's interesting to see Microsoft continuing to be inspired by Linux. For a while, it was the other way around.
        • by MrDoh! ( 71235 )
          I recall Xenix systems with Microsoft copyrights all over them before Linux was even a glint in anyone's eye. MS has always looked at Unix to figure out how to do it right in the end.
          • by Dunkirk ( 238653 ) *

            I always felt that the domain model was basically a copy of NIS (or yp, depending on your age).

    • I too liked Solaris Zones, but Docker does have some essential differences.

      In any event, Docker has the advantage of running under Linux. Solaris isn't as popular as it used to be.

    • by Anonymous Coward

      Solaris Zones comes to Windows.

      Welcome to 2005.

      Why am I the only one completely unimpressed with Docker? It feels like a hacked together Solaris to me .... no thanks, I'll take the real deal.

      FreeBSD jails user since 2000 [1]: what took you guys so long? :)

      [1] https://www.freebsd.org/releases/4.0R/notes.html

    • When you can run Docker inside Solaris Zones, and the datacenter is viewed as an elastic Docker host, things start to become interesting...

      Thanks to Joyent Triton [joyent.com]

    • by jbolden ( 176878 )

      Well, first off, moving technology down-market is an improvement. All sorts of cool features that are available on mainframes become a big deal when they are introduced for PCs. Specifically, Zones are more like LXC than Docker. Docker itself has a much wider and more complete ecosystem than Zones ever had.

      The two main things that allowed Linux to displace Solaris:
      1) Lower cost of hardware
      2) It frequently requires fewer man-hours to get X working on Linux than on Solaris (the ecosystem).

      Moreover the Linux e

    • That's great if you're either:
      A. doing startup type work where you don't necessarily care about an enterprise support agreement so you can use one of the illumos derivatives. And when I say enterprise support agreement, I mean things like having the OS on your storage vendor support matrix/application support matrix/HBA support matrix/etc.

      B. you can tolerate dealing with the devil and using Solaris proper with *shudder* Oracle "support".

      For everyone else, while I agree this is a sad, sad imitation o
  • by silviuc ( 676999 ) on Thursday April 09, 2015 @04:47AM (#49436471) Homepage
    Docker is mostly a set of tools to allow simple management of containers. It's not itself a container technology. On Linux, Docker leverages LXC and a bunch of other things. On Windows, the same functionality will be available but using Microsoft's container technology. MS and Docker are actually working on getting the Docker toolset onto Windows.
    • by Richard_at_work ( 517087 ) on Thursday April 09, 2015 @04:50AM (#49436481)

      Yes, this is in actuality Docker for Windows, as this line from the article says:

      Both Windows Server Containers and Hyper-V Containers can be controlled through the Docker engine, allowing administrators to manage both Docker and Microsoft containers in the same environment.
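
      If that holds up, day-to-day management would look the same on either platform. A minimal sketch of "managing both in the same environment", again driving the stock docker CLI from Python (the --format string is standard docker template syntax):

```python
import subprocess

# "docker ps" doesn't care which isolation backend created a container,
# so the same listing would cover Windows Server Containers, Hyper-V
# Containers, and Linux containers alike.
out = subprocess.run(
    ["docker", "ps", "--all", "--format", "{{.ID}}\t{{.Image}}\t{{.Status}}"],
    check=True, capture_output=True, text=True,
).stdout

for line in out.splitlines():
    container_id, image, status = line.split("\t")
    print(f"{container_id}  {image:<30}  {status}")
```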

    • by cshay ( 79326 )

      Right, the headline should replace "Docker-like containers" with "Linux-like containers" and then it would be correct.

  • by jellomizer ( 103300 ) on Thursday April 09, 2015 @05:10AM (#49436533)

    The way to solve the problem is simple: keep the apps self-contained. No shared libraries or DLLs.
    To move the package, you just move the directory containing the app to another location.
    Some will say that is how Macs do it, but I would go further and say that is how it was done in DOS.
    The shared library is an out-of-date concept; while it sounded good when storage was expensive, today we are virtualizing full platforms just to prevent version incompatibility.
    A little bonus would be application/process-level networking settings, so you can virtually network your app from the OS.
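
    A toy sketch of that relocatable layout in Python, where every path the app touches resolves relative to the app's own directory (the file names and layout here are hypothetical):

```python
from pathlib import Path

# Everything the app needs lives under its own directory, so moving the
# directory moves the whole installation.
APP_ROOT = Path(__file__).resolve().parent

CONFIG = APP_ROOT / "etc" / "app.conf"   # no registry, no global /etc
LIBS   = APP_ROOT / "lib"                # private copies, not shared
DATA   = APP_ROOT / "var"                # state travels with the app

def load_config() -> dict[str, str]:
    """Parse simple key=value settings shipped alongside the executable."""
    settings: dict[str, str] = {}
    for line in CONFIG.read_text().splitlines():
        if line.strip() and not line.startswith("#"):
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```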

    • by NoNonAlphaCharsHere ( 2201864 ) on Thursday April 09, 2015 @05:21AM (#49436565)
      Great idea. I'm going to stop using subroutines, too.
        • You are confusing software deployment with coding.
          You can use separate libraries. However, they are deployed with, and bound to, your particular application instance.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Man, you really know what you're talking about!

      DLL files can be distributed with and loaded from the same directory as EXE files.
      You completely forgot Registry settings, buuuuudddy.

      I got a simple solution! Let's statically link everything and use config files everywhere. That's how things were really done in DOS. Now be honest: you've never actually used DOS, have you?

      • Actually that was one of the "great new things" for .NET applications - you could deploy your application using the DOS copy command!!! Fantastic...

        except then they added the GAC and the crap in the various .NET framework folders like installutil etc., and now they've come up with a new 'great new thing' to let you deploy your applications using the copy command... I wait with anticipation for it to get a dependency on the Hyper-V administration console and various Windows Server features and updates.

      • How is having your config file in a config subdirectory off your application so horrible?

        The registry is a bad design, prone to being a single point of failure.

        So you want to configure your Apache settings? Go to /usr/apache/etc.
        To run the program, go to /usr/apache/bin.

        You can use proper conventions to prevent a lot of the issues we had in the DOS days, where settings were mixed in different areas.

        • by jbolden ( 176878 )

          How is having your config file in a config subdirectory off your application so horrible?
          The registry is a bad design, prone to being a single point of failure.

          So you want to configure your Apache settings? Go to /usr/apache/etc.
          To run the program, go to /usr/apache/bin.

          Because configuration isn't often that simple. So I want to change the behavior of a program X that uses PHP on Apache. Are the settings in /usr/apache/etc or /usr/apache/PHP/etc or /usr/PHP/etc or /usr/apache/PHP/X/etc or ....

          See TeX distributions

    • by Anonymous Coward on Thursday April 09, 2015 @05:33AM (#49436605)

      Yeah, right. Every app has its own copy of OpenSSL. With its own set of vulnerabilities. Thankyouverymuch.

      Why does this "industry" have to repeat every error every 5-10 years? Don't we have memories?

      • by DarkOx ( 621550 ) on Thursday April 09, 2015 @07:37AM (#49437161) Journal

        No, we don't. The hands-on vo-tech schools don't teach industry history, and if you look at that Stack Overflow poll from a few days ago, it looks like the majority of people only spend about 15 years in software development. So once every 10 years or so, the majority of developers are too young to know better when a hard-learned lesson is called into question by one of the rockstars.

      • by jellomizer ( 103300 ) on Thursday April 09, 2015 @08:22AM (#49437515)

        Versus a bunch of people afraid to apply the OpenSSL patch, in fear that it would break all the applications in one swoop, or having to update OpenSSL on every virtual system that is running a different application?

        Compared to a few decades ago, package update systems have gotten much better. So the package system will know that applications A, B, C, and D use OpenSSL, that there is an OpenSSL patch, and that applications A, B, and C have verified that it works, so A, B, and C will get the push to fix those parts.

        I said no shared libraries or DLLs... I didn't say we can't use libraries. They are just not shared across the entire OS.
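
        A toy model of that selective-rollout idea in Python: the package system tracks which apps bundle which library version and pushes a patched copy only to the apps whose vendors have verified it (all names here are hypothetical):

```python
# Which apps bundle which private library copy (hypothetical inventory).
BUNDLED_LIBS = {
    "app_a": {"openssl": "1.0.1f"},
    "app_b": {"openssl": "1.0.1f"},
    "app_c": {"openssl": "1.0.1f"},
    "app_d": {"openssl": "1.0.1f"},
}

# Which apps have verified against a given patched library build.
VERIFIED_AGAINST = {"openssl-1.0.1g": {"app_a", "app_b", "app_c"}}

def rollout(lib: str, new_version: str) -> list[str]:
    """Return the apps that both bundle `lib` and verified the patch."""
    patch = f"{lib}-{new_version}"
    return [app for app, libs in BUNDLED_LIBS.items()
            if lib in libs and app in VERIFIED_AGAINST.get(patch, set())]

print(rollout("openssl", "1.0.1g"))  # ['app_a', 'app_b', 'app_c']; app_d waits
```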

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      And then every single application has to run its own security updates.

    • by Misagon ( 1135 ) on Thursday April 09, 2015 @06:03AM (#49436701)

      Shared libraries are also shared so that you can update the library without updating all the applications that use it.

      By the way, when virtualizing servers you could also create file system instances using a copy-on-write filesystem, in which case you would be able to get self-contained instances with the least amount of copying necessary.
      Under Linux, you could use FUSE to get CoW on top of an underlying filesystem that doesn't support it.
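
      Not actual FUSE plumbing, but the copy-up rule such an overlay implements can be sketched in a few lines of Python (the directory layout is whatever the caller chooses):

```python
import shutil
from pathlib import Path

class CowOverlay:
    """Minimal copy-on-write rule: reads fall through to a shared base
    image; the first modification copies the file up into a private
    layer, so instances share everything they never touch."""

    def __init__(self, base: Path, layer: Path):
        self.base, self.layer = base, layer

    def read(self, name: str) -> bytes:
        private = self.layer / name
        source = private if private.exists() else self.base / name
        return source.read_bytes()

    def append(self, name: str, data: bytes) -> None:
        private = self.layer / name
        if not private.exists():                  # first write: copy up
            private.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(self.base / name, private)
        with private.open("ab") as f:             # then modify privately
            f.write(data)
```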

    • by swb ( 14022 )

      I think static linking makes a lot of sense, but you will get a lot of resistance from people who say that it makes patching harder, because some vulnerabilities will now require more patching (e.g., SSL) since every application will have its own copy.

      I think this is debatable in some ways, because it assumes every security issue affects a shared library and not part of the core executable. It also ignores the applications that merge shared library code or provide a system available function internally (o

    • "It's Simple. All You Have To Do Is..." FacePalm.

      Whether your app is one big monolithic ugly or a sordid collection of shared resources, if you want to toss it around between machines, you still need to "install" it on each machine. Installation means not only the code, but whatever data files need to be set up, config files, database connections and possibly even nastier things. In the Internet era, few apps live in total isolation.

      So it's not so simple.

      What Docker offers is a way of essentially taking a m

    • by serviscope_minor ( 664417 ) on Thursday April 09, 2015 @07:26AM (#49437079) Journal

      The way to solve the problem is simple: keep the apps self-contained. No shared libraries or DLLs.

      That's unnecessary. One look in /opt and I can see plenty of packages which have .so files. They install just fine anywhere.

      The shared library is an out-of-date concept; while it sounded good when storage was expensive, today we are virtualizing full platforms just to prevent version incompatibility.

      Nope, they're still a good idea, which is why VMs are working on memory deduplication. If every tool on Linux were statically linked, you'd massively bloat the RAM footprint. It makes sense for programs outside the package manager to be self-contained, but having the managed ones abandon .so files would be a massively regressive step.
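
      The RAM argument is easy to eyeball on a Linux box: count how many binaries map the same shared objects, each of which would otherwise carry a private copy. A rough sketch, assuming ldd is installed and pointed only at trusted system binaries:

```python
import subprocess
from collections import Counter
from pathlib import Path

# Count how many executables in /usr/bin map each shared object. A
# statically linked equivalent of each would carry its own copy instead
# of sharing one set of mapped pages. Linux-only sketch.
counts: Counter[str] = Counter()
for exe in sorted(Path("/usr/bin").iterdir())[:200]:
    try:
        out = subprocess.run(["ldd", str(exe)], capture_output=True,
                             text=True, timeout=5).stdout
    except (OSError, subprocess.TimeoutExpired):
        continue
    for line in out.splitlines():
        parts = line.split()
        if parts and ".so" in parts[0]:
            counts[parts[0]] += 1

for lib, n in counts.most_common(5):
    print(f"{n:4d} binaries map {lib}")   # one shared mapping vs. n copies
```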

    • by DarkOx ( 621550 )

      This will come to a bad end. It's one thing to think about containers and VMs as their own little hosts, with everything, like patching, that goes along with that.

      Thinking of them as 'application bundles' will lead to a nightmarish security situation. With the exception of applications that don't really handle external data, you don't get the isolation from containers or VMs that many people seem to think you do. Suppose you bundle up your CMS server and all the customizations written for it; it accesses a bac

      • No, updating A will cause B to break. Turn off all Windows updates and freeze time at March 2013. We will just hire more MCSEs to clean infections as they come, as this is too critical to break, etc.

        One company a coworker interviewed with hasn't run an update in 5 years because it breaks some add-on for Exchange. They still run XP too?!

        The more painful you make dependency changes, the greater the resistance to change. Look at IE 6 as an example.

    • by organgtool ( 966989 ) on Thursday April 09, 2015 @07:58AM (#49437317)
      How was this modded insightful?!

      The shared library is an out-of-date concept

      No, it is used by every major system today for very good reasons.

      Some will say that is how Macs do it

      Macs do have shared libraries - the files have a .dylib file extension.

      it sounded good when storage was expensive

      Statically linked apps don't just take up orders of magnitude more storage, but also significantly more memory. Not only that, but a critical security update to one library requires recompiling and redeploying ALL of the apps that use that library.

      today we are virtualizing full platforms just to prevent version incompatibility

      There are tons of reasons to virtualize that have nothing to do with version compatibility or network security.

      Since you seem so committed to statically linking apps, I suggest you go through the Linux From Scratch project and statically compile everything. Then, deploy it to an enterprise environment that requires five-nines uptime as well as all security updates. Be sure to set up a video camera so that I can watch with a big bucket of popcorn.

      • by jbengt ( 874751 )
        There are advantages to shared libraries, as others have pointed out, but the GP never said anything about statically linked libraries, let alone recommending them.
    • I think it depends on at what level the libraries are shared. Not every Windows application should come with its own copy of the entire .NET libraries.

    • by ravyne ( 858869 )
      It's not nearly that simple -- LXC and the Windows container technology put applications into their own private namespace -- they can't even see other applications or any resources the underlying OS hasn't given the container access to. This isn't just about isolating software dependencies; it allows you to do things like running two apps on the same host OS that have mutually exclusive dependencies, or, say, running one version of the app on a newer version of the same library, where in the past you mig
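
      A concrete illustration of that point: two builds with mutually exclusive dependency stacks can coexist on one host, because each container sees only its own filesystem namespace. A sketch driving the docker CLI from Python (the image tags are hypothetical):

```python
import subprocess

# Each container gets a private namespace, so conflicting dependency
# stacks never see each other. The "vendor/app:*" tags are hypothetical.
for name, image in [("app-oldlib", "vendor/app:1.0-libfoo1"),
                    ("app-newlib", "vendor/app:2.0-libfoo2")]:
    subprocess.run(["docker", "run", "--detach", "--name", name, image],
                   check=True)
```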
  • Yet again (Score:1, Insightful)

    by Anonymous Coward

    Microsoft copies someone else. In Microsoft language,

    copying==innovation

    To be fair, every company copies to some extent. It's just that nobody spins it as much as Microsoft.

    • Re: (Score:2, Informative)

      You obviously didn't read the article - it's Docker for Windows. The main management system for this is Docker; it's just using the existing Hyper-V virtualisation system rather than expending effort porting Docker's virtualisation subsystem to Windows. Portability doesn't really matter here, because of the way Docker works (sharing kernels, virtual filesystems, etc.) - so you will rather run a Unix container on a Unix host, and a Windows container on a Windows host. The benefit here is that you can manage both

      • This is actually more than just using Hyper-V. It's extending Hyper-V to offer lightweight containers, like LXC does for Linux.

    • Re:Yet again (Score:4, Interesting)

      by RabidReindeer ( 2625839 ) on Thursday April 09, 2015 @06:39AM (#49436847)

      Microsoft copies someone else. In Microsoft language,

      copying==innovation

      To be fair, every company copies to some extent. It's just that nobody spins it as much as Microsoft.

      Before we got all hung up on patents and copyrights, computer software technologies were freely copied/stolen right and left. Often gaining interesting and useful new capabilities in the process.

      Back then, it was common to repeat Newton's quote that he saw further because he stood on the "shoulders of giants". And to sourly observe that programmers more often stood on each other's feet.

      These days, you often have to, lest lawyers descend upon you and pick your bones.

      It's one reason open-source software is now so popular. For whatever illusory indemnification protection closed-source products might offer, the open-source ones at least won't sue you. Unless you violate the basic terms of sharing, anyway.

      • Indeed... Apple didn't make the first MP3 player... they just ended up making a new one that obsoleted all the others with features.

    • by Anonymous Coward

      You mean like Linux copied UNIX?
      Like GNU copied UNIX?

      Yeah, those were terrible, terrible ideas. Sharing ideas is bad! Always start from scratch!

  • by Viol8 ( 599362 ) on Thursday April 09, 2015 @05:54AM (#49436667) Homepage

    After all, VMs were really only required for Windows, where separation of programs and libraries and process filesystem access restrictions were especially problematic compared to *nix. Now that Windows looks like it's finally dragged itself into the 1990s, could VMs become a solution for niche edge-case problems once more?

    • Virtual machines might still offer a valuable feature in the future, since they more strongly compartmentalize running code against exploit-based escalation of privileges. Using chroot blocks processes from accessing files outside the jail, but does not prevent a running process from attacking the shared kernel space and gaining access to the real root filesystem. An honest-to-goodness virtual machine offers additional layers of protection.

      Given the increasing value in gaining unauthorized exclusive
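
      The gap the parent describes is easy to see in code: a chroot jail remaps the filesystem for a process, but every system call it makes still lands in the one shared host kernel. A minimal sketch (requires root; "/srv/jail" is a hypothetical pre-populated jail directory):

```python
import os

# Remap this process's filesystem root into the jail. Requires root
# privileges, and the jail directory must already contain everything
# the process needs.
os.chroot("/srv/jail")
os.chdir("/")   # otherwise "." still points outside the jail

# From here every path resolves inside the jail, yet each syscall the
# process makes is still handled by the host kernel, so a kernel bug is
# a full escape. A VM adds the hypervisor boundary on top of this.
```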

  • This IS Docker for Windows. Misleading headline and article. I like Docker, especially since it gained SELinux support.
  • Long before Docker, there was Thinstall/ThinApp [vmware.com]
  • As an ASP.NET developer I am really, really excited about this.

    In the past few years, nothing new has come from Microsoft that was really a big deal. MVC and Razor were great and a pretty big deal, but everything else didn't really affect my day-to-day job of developing apps.

    Deploying ASP.NET apps has always been a real pain in the neck. Sure, in theory it's as easy as xcopy, but once your apps start growing and your configuration grows it rapidly becomes a bigger thing to maintain. It takes a lot
    • Deploying ASP.NET apps has always been a real pain in the neck. Sure, in theory it's as easy as xcopy, but once your apps start growing and your configuration grows it rapidly becomes a bigger thing to maintain. It takes a lot of time, there's lots of stuffing around, it's very fiddly and generally a PITA.

      Why aren't you using a build and deployment system? We use TeamCity and Octopus Deploy to deploy ASP.NET websites, and once it's set up (5 minutes for TC, a few minutes for OD), deployment is a zero-friction issue - it just works.

  • I can guarantee you that MS advocates such as Gates and Ballmer will be pointing to this technology too (just like others), saying it exemplifies the famous "Microsoft innovation".

  • "Hoping to build on the success of Docker-based Linux containers, Microsoft has developed a container technology to run on its Windows Server operating system."

    I'm confused, did Microsoft originally develop Docker-based Linux containers and is now cloning the technology to run natively under Windows?
  • I'll admit that I'm too lazy to read TFA, but I think it's portable-app tech. With the acquisition of Softricity, Microsoft bought SoftGrid, which it rebranded as Microsoft Application Virtualization for Windows desktops and TS/RDS, then App-V for Servers. In reality it is COM/DCOM namespace virtualization under the covers, as I helped develop it. VMware responded by acquiring Thinstall, now ThinApp, which is also a portable-app tech.
  • This is a bit misleading.

    What Microsoft is offering is virtual machines with a cut-down 64-bit Windows kernel, without the 32-bit support, without the user interface, and without all the other guff that bloats a normal Windows install.

    It's still a VM running on a hypervisor.

    Containers, on the other hand, all share the same running kernel (the Linux kernel) and just include different application or system files. Well, at least that's what it has meant for the last few years with LXC, Docker, Rocket, et al.

    • Whoops, looks like I was wrong. It looks like Nano Server (announced the same day) is the cut-down Windows Server SKU, while the Microsoft containers use the Hyper-V engine to enforce isolation while sharing the kernel.
