Contact Us Today! 301-337-8141 | [email protected]
10 Oct 2019
| By AlphaBravo Engineering | Categories: Kubernetes

What Is Kubernetes?

This post, “What Is Kubernetes?”, is a primer on Kubernetes and why it is an important technology for your organization to evaluate.

We recently announced that AlphaBravo is now a Linux Foundation and CNCF member and posted a blog about our top 5 CNCF programs that we love. At the top of that list is Kubernetes.

In future blogs we will go a bit deeper into specific Kubernetes use cases and even provide scenarios and labs.

Kubernetes History

Kubernetes is based on Borg, Google’s internal container management system. Released by Google in 2014, Kubernetes (or K8s) is an open source platform for managing containerized workloads and services on public, private, or hybrid clouds. Google has since transferred management of the Kubernetes project to the CNCF and continues to be a major contributor to the codebase and community.

Kubernetes, one of the CNCF’s most popular projects, has quite a large and rapidly growing ecosystem. There are many services and tools available that support Kubernetes. But what exactly is it? What does it do?

Container Orchestration

Kubernetes, at its core, is a container orchestration system. If you are unfamiliar with containers, we suggest you check out our blog on “What is a container?”, then come back here.

Originally, containers were a way to package all of the requirements of a piece of software into a single image and run it on a single machine without worrying about conflicts with software versions already installed on that machine. Containers also separated out the processes running inside them, using Linux namespaces and cgroups for resource isolation and management.
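The namespace isolation mentioned above can be observed directly on a Linux host (this sketch assumes a Linux machine with a /proc filesystem; no container runtime is required):

```shell
# Every Linux process belongs to a set of namespaces, visible under /proc.
# Container runtimes create fresh entries here (pid, net, mnt, ...) for each
# container, so its processes are isolated from those of the host.
ls /proc/self/ns
```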

The use of containers was greatly enhanced when Docker developed an entire toolset around building, storing, and running containers using the Docker Engine. Docker also created Docker Swarm, an extension to the standard Docker install that allowed for multi-machine orchestration. The majority of Kubernetes deployments today still run the Docker Engine as the underlying container runtime.

The issue arises when you need to run multiple containers across more than one machine. How do you manage ingress, port mappings, container-to-container communication, scheduling, and a whole host of other issues that come up? This is where container orchestration engines like Swarm and Kubernetes come in.

Core Components of Kubernetes

To understand how Kubernetes helps solve some of the problems noted above, let’s talk about some of its core components.

Management Plane

Master Nodes

Master components provide the cluster’s control plane. Master components make global decisions about the cluster (for example, scheduling), and they detect and respond to cluster events (for example, starting up a new pod when a deployment’s replicas field is unsatisfied).

  • kube-apiserver – The API server is a component of the control plane that exposes the Kubernetes API. The API server is the front end for the K8s control plane.
  • etcd – Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.
  • kube-scheduler – Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.
  • kube-controller-manager – Component on the master that runs controller processes.
  • cloud-controller-manager – cloud-controller-manager runs controllers that interact with the underlying cloud providers.

Worker Nodes

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

  • kubelet – An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
  • kube-proxy – kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
  • Container Runtime – The container runtime is the software that is responsible for running containers.
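As a concrete illustration of what the kubelet actually manages, here is a minimal Pod manifest (the names and image below are placeholders, not from this post): the scheduler assigns the Pod to a node, and that node’s kubelet ensures its container keeps running.

```yaml
# A minimal Pod: the kube-scheduler picks a node for it, and that node's
# kubelet starts the container through the container runtime.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.17  # placeholder image
      ports:
        - containerPort: 80
```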

Key Kubernetes Components

In addition to the management plane components that handle the actual orchestration, below are a few of the many components you should be aware of; these represent the parts of your actual workload running in K8s.

  • Pods – A Pod is the basic execution unit of a Kubernetes application–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your Cluster.
  • Controllers – There are a number of controllers, each with its own use case. The most common is a Deployment, but there are also ReplicaSets, StatefulSets, and DaemonSets.
  • Services – In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector.
  • Storage – K8s supports both ephemeral and persistent volumes, depending on the requirements of the Pod that is using them. This, coupled with the broad support for underlying storage systems, both built in and via CSI, makes storage a very flexible and robust part of the Kubernetes infrastructure.
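To tie these pieces together, here is a hedged sketch of a Deployment paired with a Service (all names and the image are illustrative): the Deployment controller keeps three replica Pods running, and the Service selects them by label to provide a single, stable access point.

```yaml
# Deployment: a controller that keeps 3 replicas of the Pod template running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment          # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.17     # placeholder image
          ports:
            - containerPort: 80
---
# Service: a stable virtual IP and DNS name in front of the matching Pods
apiVersion: v1
kind: Service
metadata:
  name: web-service             # illustrative name
spec:
  selector:
    app: web                    # targets the Pods labeled above
  ports:
    - port: 80
      targetPort: 80
```

If a Pod crashes, the Deployment controller replaces it, and the Service automatically routes traffic to the replacement; this is the kind of manual work Kubernetes takes off your plate.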

There is a great deal more to learn and understand about Kubernetes, but we wanted to give you a primer on the inner workings of this powerful platform.

Why Should You Learn About Kubernetes?

When deploying applications, there are countless manual processes involved in managing each individual service at scale. And for enterprise containerized applications running in highly available configurations, these services can be made up of hundreds, if not thousands, of containers.

Kubernetes eliminates many of these manual processes by giving you the ability to deploy and run your containers at scale. Kubernetes orchestration allows you to build application services that span multiple containers, manage those containers, and schedule them across a cluster of machines.

Of course, Kubernetes container orchestration doesn’t come without its own overhead. As soon as you scale your applications to a production environment or begin to deploy multiple applications, you will end up with a high volume of containers that need to work together to deliver individual services, adding layers of complexity in the process.

Sounds easy, right?

To learn more about how Kubernetes can help your organization manage its infrastructure, contact us today!