
CoreOS Announces Competitor To Docker

New submitter fourbadgers writes: CoreOS, the start-up behind the CoreOS Linux distribution, has announced Rocket, a container management system positioned as an alternative to Docker. CoreOS is derived from Chrome OS and focuses on lightweight virtualization based on Linux containers. The project has been a long-time supporter of Docker, but saw the need for a simpler container system after what it regards as scope creep in what Docker provides.
  • by houstonbofh ( 602064 ) on Tuesday December 02, 2014 @12:17AM (#48504203)
    It is not about scope, or features, or development. It is all about who has the most things to install. Unless it runs Docker apps (which will be hard if it does not keep up with the feature creep), it is already starting way behind.
    • by Anonymous Coward on Tuesday December 02, 2014 @01:45AM (#48504489)

      As someone who has used Linux-VServer for over 10 years and is currently running LXC containers for our business, I would say that Docker is a very confusing project to put into practice. We decided that we could do a better job and have more control over the deployment and operations of our applications.

      We evaluated CoreOS and decided against it because it used Docker.

      I think this is a good move from CoreOS, and it makes complete sense if you have had to use this stuff for anything in the real world.

      • If you have been using LXC for over 10 years and have a custom application already tuned to it, you are not Docker's target use case, and that is fine. What Docker is about is being able to rapidly download and deploy entire enterprise stacks, with each piece of the stack being totally isolated and thus easily maintainable and upgradeable, making the whole thing easily automated. Want to swap from PostgreSQL 8 to PostgreSQL 9? Swap out the container that someone else has already made and tested... done. It is a very u
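
        (A minimal sketch of that kind of swap, assuming the stock postgres image from the public registry and data kept on a host volume; a real 8-to-9 move would also need a dump/restore or pg_upgrade step for the data itself:)

          $ docker stop mydb && docker rm mydb
          $ docker pull postgres:9.4
          $ docker run -d --name mydb -v /srv/pgdata:/var/lib/postgresql/data postgres:9.4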

        • by DUdsen ( 545226 ) on Tuesday December 02, 2014 @07:33AM (#48505379)
          Docker's problem, along with most of the Web 2.0 stuff, is that it might look easy now, but how are you going to ensure that everything is kept patched once the installation/development team has fled the building and handed the task of running it over to some "outsourced operations center"?

          This is the question that never gets asked, and it is why a lot of this new fancy smart stuff isn't that widely deployed by largish shops whose core business is something other than IT, and where a 5-year life cycle is considered short and agile.

          Forking a filesystem is smart, but remember you're also forking and freezing yourself into all of the undiscovered bugs, so you need a way to resync and retest every Docker container ever deployed for every update in the platform you based it on. And this is something I haven't really seen anyone cover in depth yet.
          • by jbolden ( 176878 )

            but how are you going to ensure that everything is kept patched. When the installation/development team have fled the building and handed the task of running it over to some "outsourced operations center".

            The outsourced operations center is responsible for most of the core containers and your development team is responsible for the others. The operations center knows which containers need to be replaced every day. Because automated test environments need to be in place, they just roll out the new container and

          • by nr ( 27070 )

            Yes, all Linux/Unix admins know there is a constant stream of security patches coming every week. One will need to swap the whole container out for a small updated binary or shared library. Seems a bit inefficient to me to go through the whole dev-staging-test-deploy pipeline every week, or even several times a week, for one or more containers.

            Or do you Docker users just skip security updates and leave the holes wide open?
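
            (For reference, the workflow whose efficiency is being questioned is a rebuild rather than an in-place patch; a sketch, assuming the image is built from a Dockerfile kept in version control:)

              $ docker pull debian:wheezy                   # refresh the base image
              $ docker build --no-cache -t myapp:2014-12-02 .
              $ docker stop myapp && docker rm myapp
              $ docker run -d --name myapp myapp:2014-12-02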

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          He said VServer for over 10 years, NOT LXC.

          Believe it or not, Linux has had out-of-tree container technology for ages, in the form of VServer and OpenVZ. We have also used VServer for at least the last decade at my work.

          I was most surprised about the "using LXC in production" bit. We keep looking at it, but LXC still does not offer the isolation or stability to use in production (we had unprivileged LXC containers working, then they were not). Semi-privileged haven't been broken in quite a while, though. Still not convinced l

    • by jbolden ( 176878 )

      Rocket is a specification that the Docker project could include support for.

      But absolutely it is way behind. This is only going to make sense for a system that wants a genuine next-generation Docker and is willing to give up all the tools on top of Docker to get there. So far I'm not seeing much advantage except for the fact that a system wouldn't need a registry. Their analogy of DNS for executables could be the killer feature of Rocket, but even still that's a lot of different containers before that mad
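
      (The "DNS for executables" idea: instead of pulling from a central registry, an image name like example.com/myapp is resolved over HTTPS to a signed image. A hypothetical sketch of how that could look with the announced rkt prototype -- the name and the resolution details here are made up for illustration:)

        $ rkt fetch example.com/myapp-1.0.0
        # resolves the name to an image hosted at example.com, verifies its signature, caches it
        $ rkt run example.com/myapp-1.0.0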

  • Fuck me. (Score:5, Funny)

    by Anonymous Coward on Tuesday December 02, 2014 @12:21AM (#48504227)

    Oh, fuck. I'm going in to work tomorrow, and our resident Ruby on Rails weeny will have a fat, stiff boner over this technology. He'll be ranting and raving about how amazing it is, even if he's never used it. The old FreeBSD and Solaris admins will scowl at this rube and how he's getting excited over technology they had available to them 15 to 20 years ago. The even older guys who worked on mainframes will chuckle, thinking back fondly to the days when they first encountered this type of technology, so many decades ago. Six months from now, after the Ruby on Rails dickweed has convinced our manager to use this technology in production, I'll get a call in the middle of the night because it's all fucked up and broken. I'll have to waste my night fixing it, while our resident Ruby on Rails pecker is reading all about the next fad he'll use to fuck over the rest of us with.

  • by mveloso ( 325617 ) on Tuesday December 02, 2014 @12:45AM (#48504307)

    So, Rocket for Android would be called Pocket Rocket?

  • Where Docker failed (Score:5, Interesting)

    by d3xt3r ( 527989 ) on Tuesday December 02, 2014 @12:50AM (#48504339)

    Containers are an interesting beast. Solaris has had Zones (aka containers) [wikipedia.org] since 2005. In Solaris, these Zones are more akin to virtual machines, except much more efficient. All zones share a single kernel; they just have their own virtual network interfaces and storage, and can be managed independently. Now, in 2014, Docker brings the same simplicity of Solaris Zones to Linux.

    Sure, we've had cgroups [kernel.org] in Linux since the mid-2000s, but Docker finally brought Linux up to speed with a simple-to-use capability for creating isolated containers. Only, the implementation brings with it the same flawed approach as Solaris Zones. Do we really need a full OS image running in a container? I don't think so. Docker images are based on a Linux distro (Ubuntu or CentOS, etc.). So we look at this and say, "cool, virtualization without the overhead of interrupts for everything from writing to disk to sending packets over the wire." But is that really the best we can do?
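
    (For contrast, this is the raw kernel interface Docker is wrapping; a sketch against the cgroup v1 filesystem as mounted on a typical 2014 distro:)

      # mkdir /sys/fs/cgroup/memory/demo
      # echo $((256*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
      # echo $$ > /sys/fs/cgroup/memory/demo/tasks    # move the current shell into the group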

    I think what Rocket really represents is a way to do containers right. Containers should run a single process. We shouldn't look at containers as a more efficient VM. We should see containers as a way to increase security and reduce overhead. Do you really want to have to run apt-get or yum inside every container? No. Containers should provide process isolation and application management capabilities. They shouldn't include the OS and the kitchen sink of userland utilities.
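
    (The single-process style in today's Docker, for comparison: the daemon runs in the foreground as PID 1, with no init, no sshd, and nothing else in the process table:)

      $ docker run -d --name web -p 80:80 nginx
      $ docker top web    # one nginx master plus its workers, nothing more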

    This is where Docker has failed. Instead of simplifying administration and deployment, it's introduced its own nuanced approach to system management. The reason we need a Docker competitor (replacement?) is because Docker has failed to live up to its hype.

    • by steveha ( 103154 ) on Tuesday December 02, 2014 @02:05AM (#48504565) Homepage

      Disclaimer: I'm not super experienced in this stuff. I am open to correction if I have any of these points wrong.

      Do we really need a full OS image running in a container?

      I think we probably do.

      One of the key selling points of Docker is that the container is load-and-go. Do you have some wacky old software that has a hard dependency on particular versions of some libraries? You can build a container with just the right libraries and get your software to work... and, after you do that work, the container is just another container. It may have been a pain for you to get it working, but then anyone can run it on any Docker host as easily as any other container. This seems kind of powerful to me.
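
      (A sketch of that in Dockerfile form -- the package name and version pin here are hypothetical:)

        FROM debian:squeeze
        RUN apt-get update && apt-get install -y libfoo2=2.1-3
        COPY legacy-app /usr/local/bin/legacy-app
        CMD ["/usr/local/bin/legacy-app"]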

      Do you need to see how your software runs on CentOS and Debian? You can set up a container for each, and run the tests on a single host system.
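
      (Concretely, assuming the test suite is mounted in from the host:)

        $ docker run -v $PWD:/src centos:6 /src/run-tests.sh
        $ docker run -v $PWD:/src debian:wheezy /src/run-tests.sh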

      And if you want maximum security, it's kind of neat that each Docker container can use just its own private file system, and containers can't affect each other's running state.

      So, if you are content with running an up-to-date system, always running the latest versions of everything, and upgrading everything together, you could make a security isolation system lighter weight than Docker, but you would be trading off some of Docker's simplicity and flexibility. You might think that's a good choice, but I don't think you can reasonably claim that it's better in all ways.

      Containers should run a single process. We shouldn't look at containers as a more efficient VM.

      As I understand it, it is considered best practice in Docker to run a single process per container. Some people do use Docker as a sort of lightweight VM [github.io] but not everyone likes it [github.com].

      Are you arguing that Docker is flawed because it doesn't enforce one process per container? Because I'm not seeing it. I would rather have the flexibility; if I want to use Docker as a lightweight VM, the option is there, and I don't see that as a bad thing.

      Do you really want to have to run apt-get or yum inside every container?

      Please correct me if I'm wrong, but my understanding is that you don't have to run a package manager inside every container. You would have a "base system" image, and you would update that image from time to time; then you build your specific containers as layers on top of the base image.

      I believe a container could simply be a script that starts up a service, and config files that configure the service, with the actual packages for the service in the "base system" image. I'm not sure if that is standard practice or what.
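
      (Roughly, in Dockerfile terms -- "mybase" here is a hypothetical in-house image that gets its security updates in one place:)

        FROM mybase:latest
        COPY myservice.conf /etc/myservice/
        CMD ["myservice", "--config", "/etc/myservice/myservice.conf"]

      Rebuilding the thin service images after the base is refreshed picks up the patched layers.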

      I'm hoping that with Docker I could make micro-servers, like a Docker container with just a web server in it, not even a Bash shell. If someone cracks my server I want him in a desert, with no tools to help him escalate his privileges. I'm not sure how feasible that is now, but I think Docker is at least headed in that direction.
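
      (Something close to that is already possible with a statically linked binary and an empty base image; a sketch, assuming "webserver" is such a binary:)

        FROM scratch
        COPY webserver /webserver
        CMD ["/webserver"]

      There is no shell, no package manager, and nothing for an intruder to work with except the one binary.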

      I'm not opposed to this new Rocket thing, but I'm still not clear on its actual advantages over Docker.

      • by vux984 ( 928602 ) on Tuesday December 02, 2014 @03:28AM (#48504787)

        Do you have some wacky old software that has a hard dependency on particular versions of some libraries? You can build a container with just the right libraries and get your software to work... and, after you do that work, the container is just another container.

        On the flipside, the security of that container has to be managed separately. The operating system and libraries have to be managed separately. Yes, you get the advantages you state... but if the software isn't "wacky old software with weird dependencies" then it's a lot of overhead to set up and maintain.

        A more lightweight approach to the common case makes sense, and you can always fall back to a full VM for the wacky ones.

    • by Anonymous Coward

      > Now, in 2014, Docker brings the same simplicity of Solaris Zones to Linux.

      Wrong. Do your homework. The Linux container tooling is called LXC. Docker is just a bureaucracy layer on top of that, to abstract the different dialects of containers (among others, the above-mentioned LXC), very much like libvirt on top of Linux KVM abstracts the VM stuff and can be used on top of other VM implementations.

      If you plan on using several different flavors of containers and want to treat them uniformly, Docker might make

      • by brunes69 ( 86786 )

        Docker does a lot more than this. The whole point of Docker is to take the LXC stack and use it to build micro-services that can layer on top of each other seamlessly, and to create and maintain a repository of these containers that can be swapped in and out for upgrades with zero hassle. Think of Docker like apt-get on lots of steroids.

    • by brunes69 ( 86786 )

      You are looking at things through an overly simplistic viewpoint. Many applications do not run as just one process or daemon. Even simple applications like MySQL need many processes that are synchronized to the same version. An application I am working on Docker-ifying right now has about 40 processes in total, all with their own init scripts and other things to manage. I doubt this application could even be deployed in Rocket at all, the way it is described in this link.

    • by unrtst ( 777550 )

      Please correct me if I'm wrong (I've read loads of docs on Docker, but have not used it yet).
      From what I've read, the problem you describe is not a technical limitation/implementation detail of Docker, but is simply a symptom of how it is generally being used.

      Only, the implementation brings with it the same flawed approach as Solaris Zones. Do we really need a full OS image running in a container? ...

      I think what Rocket really represents is a way to do containers right. Containers should run a single process. We shouldn't look at containers as a more efficient VM. We should see containers as a way to increase security and reduce overhead. ...

      From what I've read, a Docker container can have as few things in it as you want (or as much as you want, up to everything but the kernel). If you were doing an Apache container, you might put in apache, mod_ssl, the ssl libs, mod_php, perl, libperl,
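
      (In Dockerfile terms, the trimming is just a matter of what you choose to install; a sketch along those lines, using Debian package names:)

        FROM debian:wheezy
        RUN apt-get update && apt-get install -y apache2 libapache2-mod-php5 \
            && apt-get clean && rm -rf /var/lib/apt/lists/*
        CMD ["apache2ctl", "-D", "FOREGROUND"]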

    • by znrt ( 2424692 )

      Do you really want to have to run apt-get or yum inside every container? No. Containers should provide process isolation and application management capabilities. They shouldn't include the OS and the kitchen sink of userland utilities.

      isolation from what? one of the outstanding applications of docker is precisely the ability to recreate the exact execution environment your process needs, including all its dependencies, in a snap. i wonder how you would do that if you had to separately manage all the dependencies your "isolated" process needs to run.

      if you have to run apt-get or yum inside your container it's probably time to recreate it.

  • Just like Docker. Epic fail. Next!

  • by Anonymous Coward

    We need yet another incompatible re-implementation of a major subsystem to fragment and distract the user base, because we're such masochists and need 7+ different package-managing systems, 10+ desktop window managers, 4 different audio stacks, some piled on top of the others, 2 different replacements for Xorg, etc. Someone tries to fix that problem, but they were too successful, so now we gotta have two different implementations of it because some competing corporation wants some street cred. But hey, at least we only have one dominant SSH server, because we borrowed that from OpenBSD and writing another SSH server is too boring for Mark Shuttleworth.

    • We need yet another incompatible re-implementation of a major subsystem to fragment and distract the user base, because we're such masochists and need 7+ different package-managing systems, 10+ desktop window managers, 4 different audio stacks, some piled on top of the others, 2 different replacements for Xorg, etc. Someone tries to fix that problem, but they were too successful, so now we gotta have two different implementations of it because some competing corporation wants some street cred. But hey, at least we only have one dominant SSH server, because we borrowed that from OpenBSD and writing another SSH server is too boring for Mark Shuttleworth.

      And people wonder why companies don't release software for Linux.

    • by znrt ( 2424692 )

      7+ different package managing systems,
      10+ desktop window managers,
      4 different audio stacks some piled onto the other,
      2 different replacements for Xorg

      psst! relax, buddy. you only have to pick *one* of each!

      if you want someone else to decide for you, just grab a distro at random and don't fiddle with it, or go windows or osx ...

      you're welcome.

  • by Anonymous Coward

    Just in case anybody wanted to pick political sides: CoreOS sponsored the development of networkd [coreos.com] for systemd. So the systemd perspective is the one they're taking when they criticize Docker for not being a tool that does one thing well.

  • They are expecting people to trade the happy whale for an ugly rocket. That's not the way the Go community rolls. Everybody knows that cute mascots are the most important part of good software.
  • Needless confusion (Score:3, Informative)

    by Anonymous Coward on Tuesday December 02, 2014 @07:29AM (#48505369)

    There is significant confusion about Linux containers. The Linux container project (LXC) has been baking since 2009, is perfectly usable, and the extensive Linux toolset for supporting servers and clustered environments works with LXC.

    There is little need to reinvent unless some value is being added, but in many cases it's just complexity added by funded companies who muddy the waters relentlessly to extract value.

    LXC has been supported by Ubuntu since 2012 and is mainly developed by Stephane Graber and Serge Hallyn of Ubuntu. LXC gives you Linux containers with a complete Linux environment, a wide choice of container OS templates, and advanced features like unprivileged containers that let non-root users run containers. Tools like Docker took this base LXC container and introduced a restrictive container OS environment to give you an app delivery platform, with added complexity, suitable for PaaS-type scenarios, yet promote themselves as "easier to use". And in their marketing they are vague about their genesis and their value-add on top of LXC, which would require them to be open about the exact nature of LXC; instead they refer to the project as "low-level kernel capabilities" and allow the misconception to spread that Docker is Linux containers.
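
    (For reference, the base LXC workflow being described, as of LXC 1.0's download template -- the same commands work for unprivileged containers when the host is configured for them:)

      $ lxc-create -t download -n web -- -d ubuntu -r trusty -a amd64
      $ lxc-start -n web -d
      $ lxc-attach -n web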

    Because of the LXC project's low profile and the marketing resources of funded companies, a lot of folks' first introduction to Linux containers is via tools like Docker, and they do not know much about LXC beyond the misconceptions. And even stranger, a lot of other funded projects spring up around tools like Docker that try to break through the self-imposed restrictions of Docker containers to make them more like LXC!

    The end result is unbecoming confusion even on better-informed discussion forums like Slashdot, SoylentNews, Hacker News, and Phoronix. So imagine the level of confusion among normal users. All Docker has to do now is lose the needless restrictions on their containers and LXC is basically dead. This looks like embrace, extend, and extinguish.

    Here is a timeline of LXC

    2007 - Cgroups patches to the Linux kernel by a couple of Google coders

    2009 - The LXC project is started by Daniel Lezcano and Serge Hallyn, supported by IBM, with kernel patches and userland tools to create and manage containers. The kernel pieces were merged by 2.6.32, with userland tools available.

    2012 - The project is now supported by Ubuntu, with Stephane Graber and Serge Hallyn at Ubuntu working on it. It was a pretty low-profile project.

    2013 - Dotcloud was using LXC containers for their internal PaaS platform, and experimented with LXC's support for overlay filesystems like aufs and overlayfs.

    2013 - Based on this they released a tool called Docker. They managed to get a lot of funding and marketed themselves aggressively to the devops community.

    2014 - LXC 1.0 stable is released with a lot of new and exciting features.

    2014 - Docker sees huge adoption and, with the 0.9 release, drops its dependence on the LXC project, announcing a new tool called libcontainer that uses cgroups and namespaces in the kernel directly.

    2014 - Ubuntu finally wakes up to the potential of its own supported LXC project and announces LXD, which will use unprivileged containers by default and manage containers across LXC hosts.

    2014 - CoreOS, a Linux distribution based around systemd and deploying apps in Docker containers, with multi-host orchestration tools like etcd and fleet, decides to make a competing container format to Docker, apparently because Docker was going to step into the multi-host orchestration business itself. Of course, containers being like VMs, they don't need any container-specific orchestration, and tools that work with bare metal and VMs work with LXC containers. But if you intentionally make restricted container formats, then you need to create an ecosystem of tools that support your restricted format. So you can't use normal orchestration tools as you would with LXC, but need to find a Docker way or CoreOS way of doing things. That sounds fun.

    • by g4sy ( 694060 )
      this. if one more dipshit tells me i should use docker instead of lxc, i'm going to try harder to find out what real value it adds. lxc already did all the heavy lifting. i pray to god that docker folding like a house of cards is the trigger that pops this bubble. there are honest ways to make money in open source, but this sure ain't it
    • There are many "container" architectures sprouting up. LXC, with the release of v1.x and its introduction of unprivileged containers, nested containers, overlayfs support & snapshots, and now recently CRIU... is a great toolset.
      Recently Stephane Graber et al. announced LXD (lex-dee), and Stephane put out the following email describing its purpose:

      https://lists.linuxcontainers.... [linuxcontainers.org]

      The GitHub site has a directory for specifications which is a really interesting read because it covers things l
  • by Ritz_Just_Ritz ( 883997 ) on Tuesday December 02, 2014 @09:09AM (#48505831)

    I don't understand the need to inject all the platform bloat into Docker. Why not just fold Docker functionality into an existing platform such as OpenStack to handle all those "extras" that are being contemplated? The work to integrate the two is already in progress:

    https://wiki.openstack.org/wik... [openstack.org]

    Best,

  • I can't shake the feeling that I've seen this movie before. I think it was called "statically linked executables", where all the code needed to run the application resided in one place. Then, as executables got more complex, they got much larger, consumed more resources, and large parts of each executable were redundant with the others. Hence static executables were superseded by "dynamically linked executables", which pulled the redundancies out into general-purpose libraries that existed in only one plac
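
    (The trade-off in miniature, assuming a trivial hello.c:)

      $ gcc hello.c -o hello-dynamic
      $ gcc -static hello.c -o hello-static
      $ ls -lh hello-dynamic hello-static    # the static copy is far larger, and freezes in its own copy of libc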
