Living with CoreOS
Containers are the future
It’s clear the industry is still trying to figure out exactly what to do with them, but the sheer number of uses means they’re invaluable. Even Microsoft, which historically adopts its own similar-yet-incompatible technologies, has gotten behind Docker with native Server 2016 support. If that’s not a glowing endorsement, I don’t know what is. Any modern OS can now run Docker, but several have been designed specifically for containers, two of the most popular being CoreOS and RancherOS, though their designs differ slightly. Since the only way to really become familiar with an OS is to use it, I’ve personally been kicking the tires on CoreOS for a while now.
CoreOS makes a lot of design decisions that deviate sharply from what most Linux users are familiar with. For starters, it leverages the same build system used by Google’s ChromeOS, which in turn leverages build technology from Gentoo Linux. There is no package manager, so adding or removing packages is not possible without recompiling your own release with the SDK. Since the installed packages do not change, it is possible to mount /usr read-only. Updates are released via four selectable channels – master, alpha, beta, and stable. Several services handle monitoring the channels, pulling updates, and updating the system. Updates only happen during reboots, by swapping the partition used for /usr, and reboot behavior is configured by policy. The disk layout is hardwired into the install and not configurable, since the update mechanism relies on it. Obviously the expectation is that you are running clustered, ephemeral instances and that all your applications are properly designed container services.
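To make the channel selection concrete, here is a minimal sketch of the update configuration file; the channel name shown is just an example, not a recommendation:

```ini
# /etc/coreos/update.conf — tells update_engine which release channel to follow
GROUP=beta

# After editing, restarting the update engine picks up the new channel:
#   sudo systemctl restart update-engine
```

Because updates are only applied by swapping the /usr partition at reboot, changing this file never touches the running system; the new channel simply takes effect at the next update cycle.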
Enterprise Support and Scale
The primary purpose of CoreOS is to run Docker. To facilitate this use case they’ve created an ecosystem of support tools. The primary services leveraged by any CoreOS install are etcd, fleet, rkt, and the usual myriad of systemd services. To support this ecosystem, a number of additional projects have been created.
Some of the more interesting ones are:
- flannel for networking
- ignition for system config
- clair for container security analysis
- torus for distributed storage
- omaha for updates
- coreos-baremetal for provisioning
They also have their enterprise platform, Tectonic, which adds a number of features such as UIs, dashboards, and a consistent container & VM API for ease of management.
Unsurprisingly, Kubernetes support is ingrained, making it fairly simple to get a cluster up and running. Suffering from the same problem as the rest of the world, the large number of install approaches promotes both flexibility and confusion. If you know what you are trying to achieve, picking the right approach is not a problem, but for a novice it can be overwhelming. Their installation approach uniquely leverages rkt to bootstrap the Docker hyperkube containers which run the Kubernetes services.
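As a sketch of what that rkt bootstrapping looks like, CoreOS ships a kubelet-wrapper script that uses rkt to fetch and run the hyperkube image; a kubelet unit along these lines might invoke it (the image tag, API server address, and flags below are placeholders, not a tested configuration):

```ini
# kubelet.service — hypothetical sketch; kubelet-wrapper runs the hyperkube
# container image (which bundles the Kubernetes binaries) via rkt
[Service]
Environment=KUBELET_IMAGE_TAG=v1.6.1_coreos.0   # placeholder version tag
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=https://master.example.com \
  --register-node=true
Restart=always

[Install]
WantedBy=multi-user.target
```

The point of the wrapper is that the host never needs Kubernetes binaries installed; rkt pulls the pinned hyperkube image and the read-only /usr stays untouched.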
There are a couple of things worth considering related to all this.
- Anything I deploy to a cloud is built on a stripped-down OS and exists only until the next application or operating system update. Conversely, the extra /usr partition of CoreOS and its update approach seem geared toward long-lived systems. This makes it much more interesting running on bare metal than in a cloud, in my opinion.
- The OS running a container doesn’t need any virtualization under it. Certainly CPUs are quite good at running VMs these days, so there’s not much overhead, but if you are trying to squeeze every last CPU cycle from your system, why not just lose the overhead? Losing the under-cloud means losing its node management, but it’s clear the supporting tools are attempting to mitigate this.
- Kubernetes doesn’t need any virtualization underneath it either. Running multiple tenant networks on top of each other while simultaneously supporting two different network models and technologies can be a nightmare. The downsides? In addition to the node management lost, there are also the native load balancer and shared storage functions that Kubernetes leverages from the cloud.
Service configuration is typically done via cloud-init. Cloud-init configurations can be loaded from a remote web server at boot time, so it’s never necessary to modify a running system. Updates will always take place at the next reboot. Native systemd services are limited to what’s included in the basic stock install; it’s recommended that additional services run as Docker containers defined via systemd or fleet. Management tools are also quite limited. The nspawn feature of systemd is wrapped into a toolbox utility that puts all the tools required by admins into a custom container. That container can optionally be executed on login, so operators will rarely need to access the base OS image directly.
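For illustration, a minimal cloud-config along these lines can set the update policy and define a containerized service in one file; the service name, image, and ports here are hypothetical:

```yaml
#cloud-config
coreos:
  update:
    group: stable
    reboot-strategy: etcd-lock   # coordinate reboots cluster-wide via etcd
  units:
    - name: myapp.service        # hypothetical service name
      command: start
      content: |
        [Unit]
        Description=Example web app run as a container
        After=docker.service
        Requires=docker.service

        [Service]
        ExecStart=/usr/bin/docker run --name myapp -p 8080:80 nginx
        ExecStop=/usr/bin/docker stop myapp
```

Serving a file like this from a web server at boot is the whole configuration story: the base image stays pristine and every host can be rebuilt from the same declaration.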
So how does it all work in practice? The cloud-init configuration makes managing the setup pretty easy if it’s adopted. Updates are applied silently on reboots, which also makes changing channels trivial. The downside is that reboots are inevitable, and your applications need to tolerate this. Since the focus is on running containers, using something like qemu to run VMs is possible but not really a straightforward exercise. Fleet is an interesting way to extend systemd into clusters, but it now seems somewhat unnecessary with more full-featured technologies such as Docker Swarm and Kubernetes available. The built-in container runtime rkt has an interesting feature set, but container technologies besides Docker (LXD, Clear Containers, etc.) have yet to gain much market interest.
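For context on what fleet adds over plain systemd, fleet units are ordinary systemd units plus an [X-Fleet] section of scheduling hints that fleet (not systemd) reads; a sketch with a made-up unit name:

```ini
# myapp@.service — template unit submitted to fleet with fleetctl;
# the [X-Fleet] section controls where in the cluster instances land
[Unit]
Description=Example containerized app scheduled by fleet

[Service]
ExecStart=/usr/bin/docker run --rm --name myapp-%i nginx
ExecStop=/usr/bin/docker stop myapp-%i

[X-Fleet]
Conflicts=myapp@*.service   # never co-locate two instances on one machine
```

This is essentially a thin clustering layer over systemd, which is exactly why richer schedulers like Kubernetes have largely displaced it.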
I think ultimately my opinion has become: if the CoreOS design philosophy aligns with how you want your infrastructure to behave, then you will be quite happy with it. Attempting to shoehorn it into another space would be an exercise in futility and pain.
Author: Wil Reichert
Solinea services help enterprises build step-by-step modernization plans to evolve from legacy infrastructure and processes to modern cloud and open source infrastructure driven by DevOps and Agile processes.
Better processes and tools equals better customer (and employee) satisfaction, lower IT costs, and easier recruiting, with fewer legacy headaches.
Solinea specializes in 3 areas:
- Containers and Microservices – Now that enterprises are looking for ways to drive even more efficiencies, we help organizations with Docker and Kubernetes implementations – containerizing applications and orchestrating the containers in production.
- DevOps and CI/CD Automation – Once we build the infrastructure, the challenge is to gain agility from the environment, which is the primary reason people adopt cloud. We work at the process level and tool chain level, meaning that we have engineers that specialize in technologies like Jenkins, Git, Artifactory, Cliqr and we build these toolchains and underlying processes so organizations can build and move apps more effectively to the cloud.
- Cloud Architecture and Infrastructure – We are design and implementation experts, working with a variety of open source and proprietary technologies, and have built numerous private, public, and hybrid cloud platforms for globally recognized enterprises for over three years.