Software

Canonical Introduces High-Availability Micro-Kubernetes (zdnet.com) 24

An anonymous reader quotes a report from ZDNet: If you've been hiding under a rock -- and who could blame you these days? -- you may have missed how totally Kubernetes now dominates container orchestration. One way to quickly get up to speed on Kubernetes is with Canonical's MicroK8s, an easy-to-install-and-run mini version of Kubernetes. And now Canonical has added autonomous high availability (HA) clustering to it. [...] Now, with HA, MicroK8s is ready to move from Internet of Things (IoT) implementations, workstation testing, and simply learning Kubernetes to bigger, better cloud jobs.

With the new MicroK8s release, HA is enabled automatically once three or more nodes are clustered, and the datastore migrates automatically between nodes to maintain a quorum in the event of a failure. Designed as a minimal conformant Kubernetes, MicroK8s installs and clusters easily on Linux, macOS, or Windows. To work, an HA Kubernetes cluster needs three elements. Here's how it works in MicroK8s:

- There must be more than one worker node. Since MicroK8s uses every node as a worker node, there is always another worker available so long as there's more than one node in the cluster.
- The Kubernetes API services must run on one or more nodes so that losing a single node would not render the cluster inoperable. Every node in the MicroK8s cluster is an API server, which simplifies load-balancing and means we can switch instantaneously to a different API endpoint if one fails.
- The cluster state must be in a reliable datastore. By default, MicroK8s uses Dqlite, a high-availability SQLite, as its datastore.

  • by Berkyjay ( 1225604 ) on Thursday October 15, 2020 @09:05PM (#60612990)

    But I still didn't understand anything that this post said. Next time don't assume everyone is up to date on some esoteric technology. Provide us with some context.

    • by steveha ( 103154 ) on Thursday October 15, 2020 @10:05PM (#60613122) Homepage

      To understand Kubernetes you must first understand Docker.

      The idea of Docker is to use a standard container format to hold an application and all of its needed support libraries. Using Linux kernel features such as "cgroups" (control groups) and namespaces, each container runs as a sort of very lightweight virtual machine. It's not a true VM, because all Docker containers share the same Linux kernel, and a kernel exploit could possibly allow one container to peek at others. Also, Docker purists will hate me for using the VM analogy, because best practice is to run one thing in each container: you would put a web server in its own container, a mail server in its own container, etc. But as long as the kernel remains secure, each container gets really good isolation.

      That isolation means you can run one wacky old app that has a hard dependency on one specific version of one library right next to all your other apps that want recent libraries; again, all the libraries live inside the containers and run in isolation. The host system can treat all your apps the same; it runs them at the standard container level.

      Standard shipping containers revolutionized transporting goods. Standard software containers are revolutionizing running software on "cloud" servers.
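
      To make the "one thing per container" idea concrete, here is a minimal sketch of a Compose file that runs a web server and a mail server side by side, each in its own isolated container. The image names and versions here are just illustrative, not anything from the article:

          # docker-compose.yml -- each service gets its own container,
          # with its own image and its own libraries inside it
          version: "3.8"
          services:
            web:
              image: nginx:1.25          # hypothetical web server image
              ports:
                - "8080:80"              # expose the web server on the host
            mail:
              image: example-postfix     # hypothetical mail server image
              restart: unless-stopped    # restart this container if it dies

      Each service runs from its own image, so the web server and the mail server can carry completely different library versions without ever seeing each other's files.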

      So, now that Linux has standardized containers, Google made Kubernetes to manage them. Kubernetes (often written "K8s", since it's lazier to write an 8 than the 8 letters in the middle of the word) manages services at the container level. You can write a K8s config that specifies "Run 5 copies of Container A, and also 1 copy of Container B and 2 copies of Container C". A group of containers that runs together is called a "pod", and the config tells K8s how many replicas of each to keep running. You can also specify how K8s should detect whether a container is running and healthy; if a container fails its status checks, K8s will terminate it and spin up a new one.

      So K8s not only starts your pod, but also knows how to guarantee that your desired services are running as intended. If one of your containers hits a bug and stops working, the container will be terminated and a fresh one started up, automatically. This is sometimes called "self-healing". Linux has had this for a long time at the single-service level: System V init, upstart, and systemd can all restart a service that dies. But K8s can manage a more complex "pod" setup and use actual readiness checks (HTTP requests against specified port numbers), so it can restart a service that hasn't died yet but has stopped responding. And K8s can manage services on multiple servers; if one server dies and all the containers on it die with it, K8s can bring up those services on another server. This high-level management of services at scale is called "orchestration".
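
      For a rough picture of what such a config looks like, here is a minimal sketch of a K8s Deployment. The service name and image are hypothetical, and the readiness check is the kind of HTTP probe described above:

          # deployment.yaml -- ask K8s to keep 3 replicas of a container running
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: example-api              # hypothetical service name
          spec:
            replicas: 3                    # K8s keeps this many copies running
            selector:
              matchLabels:
                app: example-api
            template:
              metadata:
                labels:
                  app: example-api
              spec:
                containers:
                  - name: example-api
                    image: example-api:1.0       # hypothetical image
                    ports:
                      - containerPort: 8080
                    readinessProbe:              # HTTP check against a port
                      httpGet:
                        path: /healthz
                        port: 8080
                      periodSeconds: 10

      If a replica stops answering /healthz, K8s pulls it out of service and replaces it -- the "self-healing" described above.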

      I work for a company that uses K8s, and the services I'm responsible for run under K8s. What's really neat about it is how painless releases are now. When it's time to do a release, K8s spins up new containers with the newer service, and only once they are responding to their readiness checks does it start shutting down the old containers. After all the new containers are running and the last of the old containers is terminated, the release is complete. There is no downtime and little risk; if something is wrong with the new release, the new containers will fail their readiness checks, K8s won't kill the old containers, and we can safely roll back the release.
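
      That release behavior is configurable; roughly, it is the strategy block under the Deployment's spec from the sketch above. The values here are illustrative, not our actual settings:

          # goes under the Deployment's spec: from the sketch above
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxUnavailable: 0   # never remove an old pod before its replacement is ready
              maxSurge: 1         # start one extra new pod at a time during a release

      With maxUnavailable set to 0, old pods only go away after new ones pass their readiness checks, which is what makes the zero-downtime rollout and the safe rollback possible.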

      Another nifty feature is that our pods are set up for auto-scaling. We run a minimal number of containers, but if a whole bunch of customers hit our web site at the same time, K8s will spin up additional containers to handle the increased load; then later in the day, when the spike dies down, K8s will terminate the extra containers and go back to the smaller number. Before K8s, we ran enough instances of my services to handle any possible demand; now we trust the auto-scaling and it's saving money.
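
      Auto-scaling like that is typically driven by a HorizontalPodAutoscaler. A minimal sketch, again with hypothetical names and thresholds (the exact apiVersion depends on your cluster version):

          apiVersion: autoscaling/v2
          kind: HorizontalPodAutoscaler
          metadata:
            name: example-api
          spec:
            scaleTargetRef:                # which Deployment to scale
              apiVersion: apps/v1
              kind: Deployment
              name: example-api
            minReplicas: 3                 # baseline during quiet periods
            maxReplicas: 12                # ceiling during traffic spikes
            metrics:
              - type: Resource
                resource:
                  name: cpu
                  target:
                    type: Utilization
                    averageUtilization: 70   # add pods when average CPU passes 70%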

      So now there is a mini version of K8s suitable for playing around with on a desktop, for learning how to use K8s; or I guess it might be used on lightweight hardware.

      • by Joce640k ( 829181 )

        To understand Kubernetes you must first understand Docker.

        I never heard of Docker either.

        • by sfcat ( 872532 )

          To understand Kubernetes you must first understand Docker.

          I never heard of Docker either.

          Please turn in your geek card at the door and don't let it hit you on the way out.

        • So you *have* been living under a rock.

          Seriously, how could you miss Docker? It has been the most controversial introduction to the FOSS world since systemd.

        • by WDubois ( 806116 )
          Thank you for writing the summary that I've been waiting for for years. While I don't personally have a need for this, I didn't really understand what I was missing until now. Well done!
      • MicroK8s is just one of quite a few micro Kubernetes distributions that have popped up in the past year - alternatives are minikube and k3s, both of which are very good.

        • by Anrego ( 830717 )

          Yup, minikube was my gateway from Docker.

          I already babbled on about this in another comment, but I feel like a lot of these technologies are fairly intuitive once you dive in. It can seem like a huge wall if you have been maintaining a C++ monolith for the last 10 years, and all these "get up and running quickly" distributions are trying (and I think succeeding) to solve this. Run a few commands and you've got a mostly functioning environment. Over time, you can figure out what is actually going on under there. In

          • by Shaiku ( 1045292 )

            I think you're mixing orthogonal concepts. It's entirely possible to still have to maintain and operate a C++ monolith inside deployable containers.

            • by Anrego ( 830717 )

              Well sure, in fact the ability to package up some of our legacy stuff (and importantly the way we built said legacy stuff) into a box that we could easily work with was one of our main use cases.

              That said, I think it's not uncommon for folks maintaining a legacy project to become somewhat disconnected from the cutting edge of technology. That was really the sort of image I was going for.

      • To add to this wonderful answer, Kubernetes can also be run as a multi-tenant cluster. You can create "namespaces" inside it and let devs and testers loose in their own tenancies.

        Massively speeds up development, because developers can self-service all the containers they need. Our OpenShift clusters (which make Kubernetes actually usable for the masses) are even faster for our devs and devops engineers to use than public cloud.
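
        A sketch of what that tenancy looks like in config terms -- a namespace per developer plus a quota so nobody hogs the cluster. The names and limits are made up for illustration:

            apiVersion: v1
            kind: Namespace
            metadata:
              name: dev-alice                # hypothetical per-developer tenancy
            ---
            apiVersion: v1
            kind: ResourceQuota
            metadata:
              name: dev-alice-quota
              namespace: dev-alice
            spec:
              hard:
                pods: "20"                   # cap on pods this tenant can run
                requests.cpu: "4"            # total CPU this tenant can request
                requests.memory: 8Gi         # total memory this tenant can request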

        • by Anrego ( 830717 )

          This was actually one of the first big "oh that's nifty" moments for me when I jumped from Docker.

          Prior to that we had been doing stuff like prefixing our containers with our usernames, but it was clunky and easy to accidentally screw something up for others. We had piles of scripts to automate setting up the appropriate networks/volumes with the right aliases and all that (and yes, we used docker-compose as well). It was a royal mess that is totally solved by what seems like a really obvious feature.

          Docker

      • by rastos1 ( 601318 )

        I partially manage a service where we have a handful of physical servers at a colo center, and each server runs a handful of Docker containers, each running a computational engine developed in-house. All together it is somewhere between 100 and 200 containers. The setup is quite static, with occasional updates of the binaries running in the containers, a need to examine the logs produced by the containerized applications, or restarting a container if its application crashes.

        This is all currently managed by hand and it is clunky.

        • by sfcat ( 872532 )

          After a few days I gave up. It was nowhere near the clean & lean KISS software that I prefer. Would MicroK8s help with that? Do you have any advice?

          Use a managed k8s cloud service. k8s on prem is kinda useless unless you are already a cloud provider. That's how this stuff is meant to be run anyway. MicroK8s is just for development as far as I'm concerned.

          • by rastos1 ( 601318 )
            We already did an analysis, and the result was that moving everything to the cloud would cost more. What do you mean by "cloud provider"? We provide a web service for math-crunching some specific data for our customers in the textile industry. Are we a "cloud provider"? And back to MicroK8s: how many dependencies does MicroK8s require, and what are they?
        • by kiwipom ( 920352 )
          It would be nice to have some (Web?) GUI for that, monitoring resource usage etc.

          Use Prometheus and Grafana; they will give you everything you need, including alerting.
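
          For the monitoring side, a minimal sketch of a Prometheus scrape config; the job name and targets are placeholders, and in a cluster you would normally use Kubernetes service discovery instead of static targets:

              # prometheus.yml -- scrape container metrics every 15 seconds
              global:
                scrape_interval: 15s
              scrape_configs:
                - job_name: engines            # hypothetical job name
                  static_configs:
                    - targets:
                        - 10.0.0.11:9100       # placeholder metrics endpoints
                        - 10.0.0.12:9100

          Grafana then just points at Prometheus as a data source, and alerting rules can be layered on top.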
      • Docker is garbage; it can still exhaust the resources of the host OS unless config/tuning is done... and of course that might need to be done again and again. A fad inferior to virtual machines on a hypervisor. Just let this fad pass; the users are the ones under the rock.

      • This is an awesome explanation. I had a general idea of how Docker/Kubernetes worked before, but this gave a lot of detail I was missing. Thanks!
    • by UPi ( 137083 )

      Kubernetes and Docker-related news have been showing up on Slashdot for some time now. I was cheerfully unaware of their importance and function until I started using them in the latest project at work.

      As with all technologies, there's a lot of good and bad. The pitch is that you can describe the shape of your application's deployment in a functional, domain-specific language with a YAML syntax. Then you let the system figure out how to actually deploy and maintain your cluster. That is just awesome. The

  • A word of advice if you or your company are just starting to use Kubernetes and containers.
    Stay away from the Docker runtime -> use CRI-O instead.
    Docker - a large monolithic daemon running as root. Devs can spin up containers with root privileges and do stupid stuff.
    CRI-O - containers run as a normal user. Devs can only do stuff requiring higher privileges IF allowed.
    Red Hat OpenShift changed its default container runtime from Docker to CRI-O
    VMware Tanzu is also going CRI-O
    Diamanti ( hardw
