Canonical Introduces High-Availability Micro-Kubernetes (zdnet.com)
An anonymous reader quotes a report from ZDNet: If you've been hiding under a rock -- and who could blame you these days? -- you may have missed how totally Kubernetes now dominates container orchestration. One way to quickly get up to speed on Kubernetes is with Canonical's MicroK8s, an easy-to-install, easy-to-run mini-version of Kubernetes. And now Canonical has added autonomous high availability (HA) clustering to it. [...] Now, with HA, MicroK8s is ready to move beyond Internet of Things (IoT) deployments, workstation test setups, and Kubernetes learning exercises to bigger, better cloud jobs.
With the new MicroK8s release, HA is enabled automatically once three or more nodes are clustered, and the datastore migrates automatically between nodes to maintain a quorum in the event of a failure. Designed as a minimal conformant Kubernetes, MicroK8s installs and clusters easily on Linux, macOS, or Windows. To work, an HA Kubernetes cluster needs three elements. Here's how it works in MicroK8s:
-There must be more than one worker node. Since MicroK8s uses every node as a worker node, there is always another worker available so long as there's more than one node in the cluster.
-The Kubernetes API services must run on more than one node, so that losing a single node does not render the cluster inoperable. Every node in a MicroK8s cluster is an API server, which simplifies load-balancing and means we can switch instantaneously to a different API endpoint if one fails.
-The cluster state must be in a reliable datastore. By default, MicroK8s uses Dqlite, a high-availability SQLite, as its datastore.
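For illustration, standing up such a cluster looks roughly like this (the IP address and token below are placeholders; MicroK8s ships as a snap):

    # On each of three machines:
    sudo snap install microk8s --classic

    # On the first node, generate a join invitation:
    microk8s add-node
    # ...which prints a command like: microk8s join 10.0.0.1:25000/<token>

    # Run that join command on the second and third nodes, then verify:
    microk8s status
    # With three nodes clustered, it should report: high-availability: yes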
I haven't been living under a rock (Score:5, Insightful)
But I still didn't understand anything that this post said. Next time don't assume everyone is up to date on some esoteric technology. Provide us with some context.
Re:I haven't been living under a rock (Score:5, Informative)
To understand Kubernetes you must first understand Docker.
The idea of Docker is to use a standard container format to hold an application and all of its needed support libraries. Using Linux "cgroups" (control groups), Docker runs each container as a sort of very lightweight virtual machine. It's not a true VM because all Docker containers share the same Linux kernel, and a kernel exploit could possibly allow one container to peek at others. Docker purists will also hate me for using the VM analogy, because best practice is to run one thing in each container: you would put a web server in its own container, a mail server in its own container, etc. But as long as the kernel remains secure, each Docker image has really good isolation.

That means you can run one wacky old app that has a hard dependency on one specific version of one library right next to all your other apps that want recent libraries; again, all the libraries are in the containers and run in isolation. The host system can treat all your apps the same; it runs them at the standard container level.
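As a rough illustration (the image names and versions here are made up), that side-by-side isolation looks like this in practice:

    # A legacy app pinned to ancient libraries, and a current app, side by side;
    # each container carries its own libraries, so they never conflict:
    docker run -d --name legacy-app legacy-app:1.0
    docker run -d --name modern-app modern-app:latest
    docker ps    # the host treats both the same, at the container level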
Standard shipping containers revolutionized transporting goods. Standard software containers are revolutionizing running software on "cloud" servers.
So, now that Linux has standardized containers, Google made Kubernetes to manage the containers. Kubernetes (often written "K8s" since it's lazier to write an 8 than the 8 letters in the middle of the word) manages services at the container level. You can write a K8s config that specifies "Run 5 copies of Container A, and also 1 copy of Container B and 2 copies of Container C". (Strictly speaking, a "pod" is a group of containers that get scheduled together; a config like that tells K8s how many replicas of each pod to run.) And you can specify how K8s detects whether a container is running and healthy; if a container fails its health checks, K8s will terminate it and spin up a new one.

So K8s not only starts your pods, but also knows how to guarantee that your desired services are running as intended. If one of your containers hits a bug and stops working, the container will be terminated and a fresh one started up, automatically. This is sometimes called "self-healing". Linux has had this for a long time at the single-service level: System V init, upstart, and systemd can all restart a service that dies. But K8s can manage a more complex multi-container setup and use actual health checks (HTTP requests against specified port numbers), so it can restart a service that hasn't died yet but has stopped responding. And K8s can manage services on multiple servers; if one server dies and all the containers on it die with it, K8s can bring those services up on another server. This high-level management of services at scale is called "orchestration".
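To make that concrete, here's a rough sketch of such a config (the names, image, and port are invented) asking for five replicas with HTTP health checks:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: container-a
    spec:
      replicas: 5                          # "run 5 copies of Container A"
      selector:
        matchLabels:
          app: container-a
      template:
        metadata:
          labels:
            app: container-a
        spec:
          containers:
          - name: container-a
            image: example/container-a:1.0   # placeholder image
            ports:
            - containerPort: 8080
            readinessProbe:                  # not ready -> taken out of load balancing
              httpGet:
                path: /healthz
                port: 8080
            livenessProbe:                   # not alive -> restarted ("self-healing")
              httpGet:
                path: /healthz
                port: 8080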
I work for a company that uses K8s, and the services I'm responsible for run under K8s. What's really neat about it is how painless releases are now. When it's time to do a release, K8s spins up new containers with the newer service, and only once they are responding to their readiness checks does it start shutting down the old containers. After all the new containers are running and the last of the old containers is terminated, the release is complete. There is no downtime and little risk; if something is wrong with the new release, the new containers will fail their readiness checks, K8s won't kill the old containers, and we can safely roll back the release.
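In Deployment terms (the names below are assumed), that zero-downtime behavior is the RollingUpdate strategy plus the rollout commands:

    # In the Deployment spec:
    #   strategy:
    #     type: RollingUpdate
    #     rollingUpdate:
    #       maxSurge: 1          # bring up a new pod first...
    #       maxUnavailable: 0    # ...and never drop below full capacity

    kubectl set image deployment/my-service app=example/my-service:2.0
    kubectl rollout status deployment/my-service   # waits for new pods to be Ready
    kubectl rollout undo deployment/my-service     # roll back if something is wrong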
Another nifty feature is that our pods are set for auto-scaling. We run a minimal number of containers, but if a whole bunch of customers hit our web site at the same time, K8s will spin up additional containers to handle the increased load; then later in the day, when the spike dies down, K8s will terminate the extra containers and go back to the smaller number. Before K8s we ran enough instances of my services to handle any possible demand; now we trust the auto-scaling, and it's saving money.
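That kind of auto-scaling is typically a HorizontalPodAutoscaler; a one-line sketch (the deployment name and numbers are invented):

    kubectl autoscale deployment my-service --min=3 --max=20 --cpu-percent=70
    kubectl get hpa    # watch it add replicas under load and shed them afterward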
So now there is a mini version of K8s suitable for playing around with on a desktop, for learning how to use K8s; or I guess it might be used on lightweight hardware.
Re: (Score:2, Insightful)
To understand Kubernetes you must first understand Docker.
I never heard of Docker either.
Re: (Score:3)
To understand Kubernetes you must first understand Docker.
I never heard of Docker either.
Please turn in your geek card at the door and don't let it hit you on the way out.
Re: (Score:3)
So you *have* been living under a rock.
Seriously, how could you miss Docker? It has been the most controversial introduction to the FOSS world since systemd.
Re: (Score:3)
MicroK8s is just one of quite a few micro Kubernetes distributions that have popped up in the past year - alternatives are minikube and k3s, both of which are very, very good.
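For comparison, their quick starts are similarly small (the k3s URL is the project's published installer):

    minikube start                        # local single-node cluster
    curl -sfL https://get.k3s.io | sh -   # single-node k3s install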
Re: (Score:2)
Yup, minikube was my gateway from Docker.
I already babbled on about this in another comment, but I feel like a lot of these technologies are fairly intuitive once you dive in. It can seem like a huge wall if you have been maintaining a C++ monolith for the last 10 years, and all these "get up and running quickly" tools are trying (and I think succeeding) to solve that. Run a few commands and you've got a mostly functioning environment. Over time, you can figure out what is actually going on under there.
Re: (Score:2)
I think you're mixing orthogonal concepts. It's entirely possible to still have to maintain and operate a C++ monolith inside deployable containers.
Re: (Score:2)
Well sure, in fact the ability to package up some of our legacy stuff (and importantly the way we built said legacy stuff) into a box that we could easily work with was one of our main use cases.
That said, I think it's not uncommon for folks maintaining a legacy project to become somewhat disconnected from the cutting edge of technology. That was really the sort of image I was going for.
Re: (Score:3)
To add to this wonderful answer, Kubernetes can also be run as a multi-tenant cluster. You can create "namespaces" inside it and let devs and testers loose in their own tenancies.
This massively speeds up development, because developers can self-service all the containers they need. Our OpenShift clusters (OpenShift being what makes Kubernetes actually usable for the masses) are even faster for our devs and devops engineers to use than public cloud.
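A sketch of that self-service flow (the namespace and quota values are invented):

    kubectl create namespace dev-alice
    kubectl -n dev-alice apply -f my-app.yaml   # devs deploy into their own tenancy
    # Optionally cap each tenancy with a ResourceQuota:
    kubectl -n dev-alice create quota dev-quota --hard=pods=20,requests.cpu=4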
Re: (Score:2)
This was actually one of the first big "oh, that's nifty" moments for me when I jumped from Docker.
Prior to that we had been doing stuff like prefixing our containers with our usernames, but it was clunky and easy to accidentally screw something up for others. We had piles of scripts to automate setting up the appropriate networks/volumes with the right aliases and all that (and yes, we used docker-compose as well). It was a royal mess that is totally solved by what seems like a really obvious feature.
Re: (Score:3)
I partially manage a service where we have a handful of physical servers at a colo center, and each server runs a handful of docker containers, each running a computational engine developed in-house. All together it is somewhere between 100 and 200 containers. The setup is quite static, apart from occasional updates of the binaries running in the containers, the need to examine logs produced by the containerized applications, or restarting a container if its application crashes.
This is all currently managed by hand and it is clunky.
Re: (Score:3)
After a few days I gave up. It was nowhere near the clean & lean KISS software that I prefer. Would MicroK8s help with that? Do you have any advice?
Use a managed k8s cloud service. k8s on-prem is kinda useless unless you are already a cloud provider. That's how this stuff is meant to be run anyway. microk8s is just for development as far as I'm concerned.
Re: (Score:2)
Docker is garbage; it can still exhaust the resources of the host OS unless configuration/tuning is done... and of course that might need to be done again and again. A fad, inferior to virtual machines on a hypervisor. Just let this fad pass; the users are the ones under the rock.
Re:I haven't been living under a rock (Score:5, Interesting)
As someone who made the jarring leap, I'll say it isn't as daunting as it seems on the surface.
The core concepts driving all these technologies are fairly easy to wrap your head around once you make some initial inroads, and the tooling has become very intuitive. The driving factor in a lot of these recent trends has been simplification and re-use, with the goal of letting you focus on the parts you actually need to touch and having the rest more or less just work.
Personally, I went (at a very high level) docker -> podman -> k8s -> istio. This still seems like a common progression. Docker is falling out of fashion, but it is very intuitive and user-friendly. I view it as kind of the MySQL of the container world: not as powerful, but you can get going with it really quickly, and it is a good way (imo) to learn the basic principles. Podman and k8s are built on the same ideas (and use a lot of the same underlying tools), but are more flexible. Istio looks really daunting at first, but it is actually pretty straightforward once you dive into it.
There is a ton of terminology around this stuff too, which I think can also make it seem like a huge barrier if you've been hacking away on a C++ monolith for the last 10 years, but once you start using it, it kinda just clicks (or at least that was my experience).
I actually think that is the problem stuff like this tool is trying to solve. Get users up and doing something so they don't feel like they are staring at a wall with no clue where to even start. "Hey, download this, run this, congratulations you have a service mesh!"
Re: (Score:2)
Kubernetes and Docker-related news have been showing up on Slashdot for some time now. I had been cheerfully unaware of their importance and function until I started using them in the latest project at work.
As with all technologies, there's a lot of good and bad. The pitch is that you declaratively describe, in a YAML-syntax domain-specific language, the shape of your application's deployment, and then let the system figure out how to actually deploy and maintain your cluster. That is just awesome.
Docker -- CRI-O (Score:2)
A word of advice if you or your company are just starting using Kubernetes & Containers.
Stay away from running containers under Docker -> use CRI-O instead.
Docker - large monolithic daemon running as root. Devs can spin up containers with root privileges and do stupid stuff.
CRI-O - containers run as a normal user. Devs can only do stuff requiring higher privileges IF allowed.
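Whatever the runtime, you can also have Kubernetes refuse root outright in the pod spec; a minimal sketch:

    # In the pod's spec:
    securityContext:
      runAsNonRoot: true   # kubelet rejects containers that would run as root
      runAsUser: 1000      # arbitrary unprivileged UID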
RedHat OpenShift changed its default container runtime from Docker to CRI-O.
VMware Tanzu is also going CRI-O
Diamanti ( hardw