In previous installments in this series, we looked at containers, the origins of Kubernetes, and DevOps. In this installment, we’ll take a deep dive into the building blocks of a Kubernetes application.
Kubernetes Containers and Pods
While the first building block of a Kubernetes application is a container, I wouldn’t call it the fundamental building block. You don’t run a container on Kubernetes. A container always runs in a pod, which is the fundamental building block.
The word “pod” is a collective noun that refers to a group of things, like a pod of dolphins or a pea pod. In Kubernetes, a pod is a group of one or more containers that should always run together. All of the containers in a pod are always scheduled, or placed, on the same container host or physical host.
Are you wondering why you would have multiple containers in a pod? Think about an application that performs some type of calculation, with the calculator code running in one container. Another container might log the output or transform that output in some way. Whenever one container is calculating, the others are transforming or logging its output, and vice versa.
These functions always go together. But because I’m practicing microservices, I want to keep each function separate, running in its own container so it can be updated independently. So I put both containers into a single pod. A pod is the basic unit of the application that I’m going to deploy to a server. All of the containers in a pod land on the same server, and they always run together.
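As a sketch, a two-container pod like the calculator example could be declared in a manifest along these lines (the names and images are hypothetical, chosen only for illustration):

```yaml
# A pod grouping two containers that always run together on the same host.
apiVersion: v1
kind: Pod
metadata:
  name: calculator
spec:
  containers:
    - name: calc             # hypothetical container running the calculation code
      image: example.com/calc:1.0
    - name: logger           # hypothetical container logging/transforming the output
      image: example.com/logger:1.0
```

Because both containers are listed in one pod spec, Kubernetes schedules them together and they share the pod’s network namespace.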
Kubernetes Deployments and Services
A Kubernetes application is never composed of just one pod. It’s a distributed application, so it’s made up of multiple components (pods, containers, etc.), and those components run across multiple servers. You need to organize the components before you can deploy your application.
There are several ways you can do that within Kubernetes. One of the most popular approaches is to use a deployment, which describes what you want Kubernetes to run. Kubernetes has the concepts of a desired state and an actual state. When I tell Kubernetes to run my application, I know that my application is built up of multiple pods. Some of those pods might represent a database with a primary and a replica, and I always want one primary and one replica running at any given time. I specify that desired state as part of my deployment: this application should always have a primary and a replica running.
What happens next is really the magic of Kubernetes. It’s going to monitor the environment and make sure that the desired state is always implemented. So if it’s scanning and sees that a replica isn’t running anymore, it will redeploy a pod that serves as that replica. It’s like an operations person who knows which app should be running and where. When they see a server failure, they log in and redeploy that application. Kubernetes does this automatically.
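A minimal deployment sketch shows how desired state is declared; the label and image names here are assumptions for illustration:

```yaml
# A deployment declaring a desired state of two identical pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                # desired state: two copies running at all times
  selector:
    matchLabels:
      app: web               # the deployment manages pods carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # hypothetical application image
```

If a pod crashes or its node fails, the controller notices that the actual state (one pod) no longer matches the desired state (two pods) and creates a replacement automatically.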
A service is another concept within Kubernetes that works hand in hand with a deployment. A deployment describes a group of pods and how they relate to one another, for example how many copies should be running. A service is how all of those pods talk to each other, and to the rest of the cluster, over a network.
When you have a deployment and a service, you specify the desired state of your application. Kubernetes will make that state happen no matter what occurs. If pods or containers are crashing, pods get deleted, servers fail, or networks are partitioned, Kubernetes is always going to make sure that the deployment and service are available.
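A service is also declared as desired state. This sketch, assuming the same hypothetical `app: web` label as above, gives the pods one stable network name:

```yaml
# A service providing a stable address for a set of pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # traffic is routed to any pod carrying this label
  ports:
    - port: 80        # port other components use to reach the service
      targetPort: 8080  # port the container actually listens on (assumed)
```

Other pods can then reach the application at the name `web`, regardless of which individual pods are alive behind it at any moment.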
These are the fundamental building blocks of a Kubernetes application: containers embedded in pods, which are described by deployments and services.
Kubernetes Ingress Controllers
The next building block involves how a user can access the application. If the application is Netflix, for example, users need to access the movie catalog, update their billing, and so on. These are different microservices, which themselves are described as deployments.
How can you make sure that users can access the application they need?
There are several ways to do this within Kubernetes. In traditional apps, you would set up a load balancer that would control traffic into your application. Kubernetes has the same concept, but one of the more popular ways to get traffic into an app is through an ingress controller.
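As a sketch of how traffic rules look, an ingress resource might route a hypothetical hostname to the service from the earlier example (the hostname, service name, and port are all assumptions):

```yaml
# An ingress rule mapping an external hostname to an internal service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: app.example.com      # hypothetical public hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # service that receives the traffic
                port:
                  number: 80
```

An ingress controller (such as one based on NGINX) watches resources like this and configures itself to forward outside traffic accordingly.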
Kubernetes Namespaces
Kubernetes is used to manage large-scale systems, not just one application. If you’re running hundreds of applications, you need a way to organize them. This is where namespaces come in.
A namespace is a way of organizing. You can think of it as a folder within Kubernetes, similar to Google Drive. You can dump all of your files at the My Drive level—pictures, tax documents, car repair bills, etc. You can scroll through the files in the drive or run a search to find what you want. Creating separate folders based on the type of file makes it easier to organize them and find what you’re looking for. You can also easily share an entire folder, like one with photos of a recent vacation.
A namespace is similar. This organizing principle within Kubernetes allows you to gather similar applications. You can define what the similarities are—for example, everything running in staging versus everything running in production. You could organize it by business units within your company. How you use namespaces is completely up to you. Namespaces allow you to perform important operations—like migrations, backup, recovery, or deployments—in a one-to-many way. This significantly improves the efficiency and productivity of ops teams.
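Creating a namespace is itself a small piece of declared state; the name here is an assumption matching the staging example above:

```yaml
# A namespace grouping related applications, e.g. everything in staging.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Once resources are created inside it, operations can be scoped to the whole group at once, for example listing every pod in staging with `kubectl get pods --namespace staging`.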
Kubernetes Configuration
Configuration is another important concept within Kubernetes. It involves using declarative operations—the idea that an application has a desired state. You describe how your application should run, how many copies should be running, and which pods should be running at any given time. You lay out whether some pods should run on the same host, whether other pods should never share a host, and how much memory and CPU each pod needs. You declare the desired state of your application. This declaration isn’t part of the container itself. And it isn’t code in the way that your application code is.
This configuration is stored in a YAML file, which holds all of the configuration information for the application. The configuration includes whether there should always be a primary database and a replica. The username and password used to log into your database are also part of the configuration. Defining a particular storage class or backup policy is part of the configuration as well.
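Sensitive configuration such as the database credentials mentioned above is typically kept in a secret object rather than in the container image. A sketch, with purely illustrative values:

```yaml
# A secret holding database credentials as part of the app's configuration.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: admin        # illustrative value only
  password: change-me    # illustrative value only
```

Pods can then reference this secret by name, so the credentials travel with the configuration rather than being baked into the container.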
When you deploy a Kubernetes application, it includes the containers and the software that they run. If it’s a stateful application or an application with data, it has a data volume, but it also has a configuration. You might hear configuration described in terms of Kubernetes objects: persistent volume claims (PVCs), persistent volumes (PVs), custom resource definitions (CRDs), or the custom resources managed by operators. All of these are configuration.
They refer to the configuration within Kubernetes that defines how your application should run. This is extremely important not only on day one when things are happening normally but also on day two when you need to back up or restore an application. You always need to make sure that your configuration is part of the backup. For example, you can’t just take a snapshot of a server and the data volume attached to that server and use it to recover your Kubernetes application. You need the configuration for your app.
Building a Powerful Kubernetes Solution
Those are the main building blocks of Kubernetes. There are a lot more, but if you understand these elements, you understand the most important parts of Kubernetes. The value of Kubernetes for enterprises is the automation—a developer can define what they want an app to do, what level of performance and availability they want, how many different web front ends they want to run, and how they’ll scale them. That all needs to be defined as part of the configuration in a YAML file. Hand the file over to Kubernetes, and Kubernetes will do the rest. This enables an enterprise to run hundreds of applications with a very small operational team. That’s the power of Kubernetes.
Next in the series: We’ll explore the Kubernetes ecosystem and diverse options enterprises have when it comes to running Kubernetes.