What is container orchestration?

Container orchestration is the process of managing containers in an automated way, freeing engineers from tasks like (re)creating, scaling, and upgrading containers. On top of that, container orchestration also helps with managing networking and storage for containers.

Containers have become a popular way of building software. For many, containers are a go-to choice for not only new, modern software development but also for migrating older applications.

Part of the reason containers are so popular is their ease of use. It’s very simple to create and run containers, but there’s a catch: the more containers you have, the more time you spend managing them. In a microservices architecture, even small applications can consist of dozens of containers. Container orchestration platforms reduce the time you spend managing the life cycle of all those containers.

Container orchestration platforms

Container orchestration is just a concept. In order to actually implement it, you need a container orchestration platform. These are the tools that you can use for container management and for reducing your operational workload.

Imagine that you have 20 containers and you need to gradually upgrade all of them. Doing this manually, while possible, would take you quite some time. Instead, you can instruct a container orchestration tool, via a simple YAML configuration file, to do it for you. That’s just one example. Container orchestration platforms can do pretty much everything that’s needed to keep your containerized application up and running.
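
To make that concrete, here’s a minimal sketch of such a configuration file in Kubernetes (the application name and image are made up): a Deployment that keeps 20 replicas running and, whenever the image tag changes, replaces them a couple at a time.

```yaml
# Hypothetical Kubernetes Deployment: upgrade 20 containers gradually.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # made-up name
spec:
  replicas: 20
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2   # take down at most 2 containers at a time
      maxSurge: 2         # start at most 2 extra containers during the rollout
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:2.0   # bump this tag to trigger the rolling upgrade
```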

Container orchestration platforms will restart crashed containers automatically, scale them based on load, and spread containers evenly across all available nodes. If one container starts misbehaving, for example, eating all the RAM available on the node, the platform will move the other containers to different nodes. Do you need to implement service discovery or make sure that a specific storage volume stays accessible to specific containers even if they move to another node? The container orchestration platform will do it for you. Do you need to prevent some containers from talking to the internet while only allowing others to talk to specific endpoints? You’re covered here as well.
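
That last scenario, in Kubernetes for example, maps to a NetworkPolicy object. Here’s a hedged sketch, where the labels and the allowed address are hypothetical:

```yaml
# Hypothetical Kubernetes NetworkPolicy: pods labeled app=backend may only
# reach one specific endpoint; all other egress traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-backend-egress
spec:
  podSelector:
    matchLabels:
      app: backend        # made-up label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # the one allowed external endpoint (example address)
      ports:
        - protocol: TCP
          port: 443
```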

Managed vs. self-built (homegrown)

You can create a container orchestration platform from scratch all on your own, or start from one of the open-source platforms; either way, you’ll need to install and configure the platform yourself. This, of course, gives you full control over the platform and allows you to customize it to your needs. Alternatively, you can use one of the managed platforms.

With a managed container orchestration platform, the cloud provider takes care of installation and operations while you simply use its capabilities. In most cases, you don’t need to know how these platforms work under the hood; you only want them to manage your containers. In such cases, managed offerings come in handy and save you a lot of time. Some examples of managed container orchestration platforms include Azure AKS, Google GKE, Amazon EKS, Red Hat OpenShift, Platform9, and IBM Cloud Kubernetes Service.

Depending on your choice of platform, the list of features will differ. However, on top of the platform itself, you’ll also need to think about a few additional components to create a full infrastructure. A container orchestration platform, for example, isn’t responsible for storing your container images, so you’ll need an image registry for that.

Depending on your underlying infrastructure and whether you’re running in the cloud or in an on-premises data center, you may need to implement a load balancer separately. In some cases, it can be managed by the platform itself. For example, most managed container orchestration platforms will automatically manage cloud load balancers or other cloud services for you.
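
In Kubernetes, for instance, declaring a Service of type LoadBalancer is usually all it takes for a managed platform to provision a cloud load balancer on your behalf. A minimal sketch with made-up names:

```yaml
# Hypothetical Kubernetes Service: on most managed platforms, this single
# object makes the cloud provider provision and wire up a load balancer.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app           # routes traffic to pods with this label
  ports:
    - port: 80            # port exposed by the load balancer
      targetPort: 8080    # port the containers listen on
```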

"DevOps center"

Because container orchestration platforms can manage every aspect of the container life cycle, and some of the extra components can be integrated and managed by the platform itself, they have become the default center of day-to-day DevOps work. A well-integrated container orchestration platform can abstract away the infrastructure, so DevOps engineers no longer need to touch it directly. This is good because they can focus on one platform and spend less time switching between different panels and UIs to achieve the same results.

Container orchestration and microservices

Now that you know how container orchestration platforms work, let’s take a step back and talk about microservices. It’s important to understand the concept of microservices because container orchestration platforms won’t work very effectively with applications that don’t follow basic microservices principles. This doesn’t mean that you can only use a container orchestration platform with top-notch microservices-based applications. It will still do its job, but it won’t be as effective, and some features may not be available.

So let’s start from the beginning. What are microservices? Traditional software is built as one piece: a monolith. If you need to make even a very simple change, say, to the color of one of the buttons in your application, you have to redeploy the whole application. The idea behind microservices is to split this monolith into smaller pieces that work together as one application. To do that, the pieces need to talk to each other, which is usually done via REST APIs. With microservices, whenever you need to change the application, you only need to test and redeploy one of these small pieces. This lets you make changes quicker and easier. And that’s just one of the many advantages of microservices.

[Image: diagram of a CMS application split into microservices]

Caption: Example of a content management system (CMS) application running on a microservices architecture. “User management” is one microservice, “Twitter publisher” is another microservice, and so on.

Advantages of microservices

The concept of splitting your application into many small individual pieces brings many more advantages. One application no longer needs to be written in a single language; each microservice can be built in the language that suits its team best. Teams can implement features and bug fixes faster since they don’t need to wait for each other. You can easily test new features ad hoc by replacing just one microservice. Scaling is way easier and more effective since you can scale only the individual pieces of your application that need it, as in the sketch below. And load can be distributed more evenly across your infrastructure by placing microservices properly.
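
To make the scaling point concrete, here’s a sketch of a Kubernetes autoscaler attached to a single, hypothetical microservice; the rest of the application is left untouched:

```yaml
# Hypothetical autoscaler: scales only the "image-resizer" microservice,
# leaving every other microservice in the application at its current size.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: image-resizer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: image-resizer    # made-up microservice name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU use exceeds 70%
```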

But with all these benefits comes complexity, too. Instead of focusing just on the code, you need to start taking care of networking and communication between microservices. You also need to build each microservice separately, and if you choose different languages and frameworks, the build process won’t be the same for all of them. And you need to spend extra effort on creating good integration tests. But you can keep the benefits of microservices and offload most of these extra tasks by using a container orchestration platform.

Docker containers

Packaging your microservices into Docker containers is a popular way of containerizing your application. A microservice is just a concept; it relates to the way you write the code for your application. Building your application as small pieces and wrapping each piece in a REST API is what we call a microservices architecture.

But in order to run a microservice, you still need everything that an application normally needs: a kernel, some system libraries, a runtime, and perhaps a server application to run it. Traditionally, you would install all of these dependencies on your server. Docker containers let you package all of that into a container instead. It’s worth mentioning, however, that containers are not like virtual machines: they all share the same underlying host OS kernel (and some libraries too), so the kernel itself stays outside the container.
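
For example, a minimal, hypothetical Dockerfile for a small Python microservice could look like this. The base image provides the system libraries and runtime; the rest of the file layers in the dependencies and the application itself:

```dockerfile
# Hypothetical Dockerfile for a small Python microservice.
# The base image supplies the OS libraries and the Python runtime.
FROM python:3.12-slim
WORKDIR /app
# Bake the dependencies into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy in the application code itself (app.py is a made-up entry point).
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```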

Unlike virtual machines (VMs), containers are very small, which gives you great portability, and (re)starting them takes seconds. You should also know that Docker isn’t the only tool for running containers. In fact, many container orchestration platforms are now migrating from Docker to alternatives like containerd or Podman. They all, however, work in almost the same way.

Kubernetes

Now that you know the basics, let’s look at some of the most popular container orchestration platforms. We’ll start with Kubernetes, since it’s the most commonly used one. Kubernetes was created by Google in 2014 but got really popular in the last few years. It’s an open-source platform for deploying and managing containers. Its main job is to maintain the desired state that you define via YAML configuration files. It not only keeps your containers up and running but also provides advanced networking and storage features, and it monitors the health of the cluster. It’s a complete platform for running modern applications.
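
As a small illustration of desired state, the following hypothetical manifest declares “three healthy replicas of this container, always”; Kubernetes then works continuously to make reality match it:

```yaml
# Hypothetical desired state: three healthy replicas of one container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3             # Kubernetes keeps exactly 3 copies running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27          # example image
          livenessProbe:             # health monitoring: restart the container if this check fails
            httpGet:
              path: /healthz         # made-up health endpoint
              port: 80
            periodSeconds: 10
```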

How it works

Kubernetes follows the controller/worker model. Its core components are deployed on controller nodes, which are responsible only for managing the system, while the actual containers run on worker nodes. Controller nodes run a few Kubernetes components, such as the API server, which is the “brain” of everything, and the scheduler, which is responsible for scheduling containers. Controller nodes also host etcd, the database where Kubernetes stores all its data. Worker nodes run small components called kubelet and kube-proxy, which are responsible for receiving and executing orders from the controller nodes as well as managing containers.

Extending Kubernetes

Kubernetes comes with many built-in object types that you can use to control the behavior of the platform. But you can also extend it with Custom Resource Definitions (CRDs). This makes Kubernetes very flexible and allows you to customize it to your needs. On top of that, Kubernetes allows you to build operators, which give you an almost unlimited ability to implement your own logic in Kubernetes. Operators and CRDs are part of the reason for the huge popularity of Kubernetes: the ability to extend the platform according to your business needs is very powerful. It’s worth mentioning that both CRDs and operators can be used even on managed Kubernetes clusters.
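
A CRD is itself just another YAML object. The hypothetical example below teaches the cluster a new Backup resource type, which an operator could then watch and act on:

```yaml
# Hypothetical CRD: teaches Kubernetes a new "Backup" resource type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com               # made-up API group
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string     # e.g., a cron expression an operator would act on
```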

Docker Swarm

Next on our list is Docker’s own Swarm mode, built directly into the Docker Engine. It allows you to enable basic container orchestration on a single machine, as well as connect more machines to create a Docker Swarm cluster. Since it’s built into Docker, it’s very easy to start with: you only need to initialize Swarm mode and then, optionally, add more nodes to it.

It works similarly to Kubernetes, following a manager/worker model. All the management and decision-making is done by the swarm manager, and containers run on the nodes that have joined the cluster. The main benefit of Swarm mode over plain Docker is high availability and load balancing: you no longer have one node where all your Docker containers run. Instead, you have multiple nodes, and the swarm manager ensures that all the containers are spread evenly among them.

Docker Swarm is a good, easy option when you’re just starting with Docker, and you can keep using docker-compose files to manage your Swarm. But it offers much less than Kubernetes, and there aren’t many managed Swarm offerings.
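
For instance, a hypothetical docker-compose file like the one below can be deployed to a Swarm cluster with docker stack deploy, and the swarm manager takes care of spreading the replicas across nodes:

```yaml
# Hypothetical docker-compose file for Swarm mode;
# deploy it with: docker stack deploy -c docker-compose.yml my-stack
version: "3.8"
services:
  web:
    image: nginx:1.27              # example image
    ports:
      - "80:80"
    deploy:
      replicas: 3                  # the swarm manager spreads these across nodes
      restart_policy:
        condition: on-failure      # restart containers that crash
      update_config:
        parallelism: 1             # upgrade one replica at a time
```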

OpenShift

Created by Red Hat, OpenShift is another container orchestration platform. Its main focus is to provide a Kubernetes-like platform for running your containers on-premises or in hybrid cloud environments. In fact, under the hood, OpenShift is based on Kubernetes, and it shares many of the same components.

There are, however, many differences, too. The main one is the concept of build-related artifacts, which OpenShift implements as first-class Kubernetes resources. Another difference is that Kubernetes doesn’t really care about the underlying operating system, while OpenShift is tightly coupled to Red Hat Enterprise Linux. On top of that, OpenShift bundles many components that are optional extras in vanilla Kubernetes, for example, Prometheus for monitoring or Istio for service mesh capabilities.
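
To illustrate the build-artifact difference: OpenShift has a BuildConfig resource that describes, inside the cluster, how to build an image from source. A hedged sketch with made-up names:

```yaml
# Hypothetical OpenShift BuildConfig: a build-related artifact as a first-class resource.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app-build
spec:
  source:
    git:
      uri: https://example.com/my-org/my-app.git   # made-up repository
  strategy:
    dockerStrategy: {}             # build the image from the repo's Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest          # push the result to an OpenShift image stream
```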

Overall, while Kubernetes leaves all the control and choices to the user, OpenShift tries to be a more complete package for enterprises.

Managed Kubernetes services

Remember when we talked about managed Kubernetes? These are a few examples of Kubernetes-as-a-service. Microsoft’s Azure Kubernetes Service (AKS), Amazon Web Services’ (AWS’s) Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and IBM Cloud Kubernetes Service are well-known Kubernetes offerings from public cloud providers.

They all work the same as a normal Kubernetes cluster; however, you don’t have access to the controller nodes, as the cloud provider manages them. This can be good or bad, depending on how you look at it and what your needs are. On one hand, it offloads the installation and operation of Kubernetes itself, so you can focus more on your containers. On the other hand, if your company requires some very customized Kubernetes options, you’ll be limited: without access to the controller nodes, you won’t be able to change every Kubernetes option.

Summary

The point of containers was to make developing and running applications easier. Instead of worrying about all the libraries and other dependencies, you can just create a container that will run anywhere without any changes or adjustments. This is, in fact, true, and we wouldn’t need any extra tools and platforms to help us manage containers if we weren’t moving to microservices at the same time.

The concept of splitting an application into many smaller pieces brings many advantages but also forces us to take care of a large number of containers. And the more containers you have, the more time is required to manage them. That’s why container orchestration platforms have become a must. They solve one problem but also bring an extra advantage: by integrating heavily with the underlying infrastructure, they help us with not only containers but whole infrastructure management. Especially in cloud environments, container orchestration platforms can take care of networking, storage, and even provisioning new VMs for the cluster based on load.

Therefore, container orchestration platforms are much more than what their name suggests.

* * *

Development teams use LaunchDarkly feature flags to migrate their architecture from a monolith to microservices. Feature flags give teams a great deal of control when performing these migrations. They allow you to gradually move parts of your application from the old system to the new one, rather than making the transition in one large, sweeping motion. Moreover, LaunchDarkly feature flags provide a mechanism by which you can disable a faulty microservice during the migration in real time, without having to redeploy the service.

If your team is familiar with the technique of feature flagging, then you may be interested to know that LaunchDarkly enables you to seamlessly use feature flags across multiple microservices. With homegrown or open-source feature flag management systems, it can be challenging to orchestrate feature flags across multiple microservices. For example, you may have a feature spanning multiple microservices, and you want to disable this feature across all the different services at once. With a homegrown system, it’s not uncommon for developers to have to manually toggle a flag for the same feature in each service. With LaunchDarkly, by contrast, you can toggle a single flag, which will, in turn, change the behavior of the feature in question across all the different services within 200 milliseconds.

All told, LaunchDarkly’s feature management platform is an ideal solution for exerting fine-grained control over all the features in your application within the context of container-based and microservices architectures.