Websites used to be little more than text files placed in a folder that was shared with the world. They'd get downloaded and displayed in a web browser, and that was the extent of the transaction. Things got more complex, and websites became running instances of code. This enabled programmers to set up what could only be called "web applications," such as shopping carts, login portals, online email clients, image editors, and more.
Once the web was running applications, systems engineers and architects started to realize that the network itself was essentially a supercomputer. And just as resources must be managed on your laptop when you're running several applications at once, resources must be managed on the nodes of the supercomputer that is the internet.
Back in the mainframe days, computing time was relatively predictable. People literally scheduled time with the computer, so resources were managed on a first-come, first-served basis. That doesn't scale to billions of users, any number of whom might want computing power, on demand, at any given moment. Later, the solution was thought to be scaling out: more servers running more instances of an application meant more people could use it at any given time. Unfortunately, it gets expensive to run 100 servers on the off chance that you get a traffic spike one afternoon. So the problem isn't just how to manage the resources of the server running an application; it's also how to manage those resources dynamically, depending on how many people show up.
What is Kubernetes?
Kubernetes ("K8s" for short) is an open source solution for automating the deployment and dynamic scaling of containerized online applications. Kubernetes uses containers, a system in Linux that groups applications into logical units for centralized and secure management. Containers are designed to be ephemeral. A container can crash or die without losing user data because the data is kept outside the container. Because containerized applications are considered disposable, a cluster of servers can launch new instances of an application when lots of users need it or run only a few instances of an application when only a few hundred people need it. A sysadmin could do this manually or write a series of scripts to monitor traffic and respond accordingly, but Kubernetes makes it automatic.
You can install Kubernetes for testing or for production. If you just want to learn Kubernetes or experiment with building a cloud-based application, then you can run Kubernetes, OpenShift, or OKD locally on your workstation or laptop. For serious projects, you can build your own cluster, and for large-scale production, you can install OKD or Kubernetes.
Kubernetes is designed for big deployments. The fact is, you probably don't need Kubernetes unless you have so many users visiting your web applications that performance suffers. However, there are many good reasons to run Kubernetes at home, and it's getting easier to do as Kubernetes becomes more popular.
While Kubernetes makes managing containers easy, it is mostly a do-it-yourself toolkit. You can (and many do) download and run Kubernetes exactly as it's developed, but that often means you will eventually have to build your own tools to support your Kubernetes cluster. There are some really good tools out there, including Kubash (an interactive shell for your clusters that helps build images and provision hosts), Konveyor (a whole toolchain to help migrate and optimize your cluster), Helm (a package manager for Kubernetes), and more.
What are OpenShift and OKD?
OpenShift is a Kubernetes application platform built for the cloud. OKD is its open source upstream project.
Modern infrastructure is moving to multicloud and hybrid cloud, but sysadmins and developers need a stable and predictable platform to target. OKD gives architects flexibility in choosing the cloud their infrastructure runs on, gives administrators easy access to user and resource management, and lets developers build "knative" (native to Kubernetes) and cloud-ready applications. It's a control panel, a layer of abstraction, and a toolchain all at once.
OpenShift and OKD are distributions of Kubernetes, so admins can run them and still use all of the most common Kubernetes commands (kubectl, kubeadm, and so on). Because Kubernetes runs on and interacts directly with Linux and Linux containers, admins can use all their familiar Linux commands to manage the operating system underneath it all.
How does Kubernetes work?
Applications on the cloud run within containers. Containers aren't physical objects, of course; they're a software construct that groups running processes on a server together so they can share certain resources with one another while staying isolated from the rest of the system. Today, containers are usually run by CRI-O, Podman, LXC, or Docker.
When running applications, the main unit of organization in Kubernetes is the pod. A pod is a group of one or more containers that are administered together on the same node (a physical or virtual machine) and are designed to communicate with one another easily.
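To make the idea less abstract, here is a minimal sketch that defines and launches a single-container pod with the Kubernetes client library for Python. The pod name, label, image, and namespace are illustrative assumptions, not anything your cluster requires.

```python
# Minimal sketch: define a one-container pod and ask the cluster to run it.
# The name, label, image, and namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hello-pod", labels={"app": "hello"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="hello", image="nginx:latest"),
        ]
    ),
)

# Kubernetes chooses a node for the pod and starts its container there.
core.create_namespaced_pod(namespace="default", body=pod)
```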
Pods can be organized into a service, a group of pods that work together behind a single point of access. Services find their pods through labels, key-value metadata that Kubernetes stores on objects such as pods.
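Continuing the sketch above, a service can be created that forwards traffic to every pod carrying the label app=hello; the service name, label, and port numbers are again illustrative.

```python
# Minimal sketch: a Service that finds its pods by label.
# The service name, label selector, and ports are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="hello-service"),
    spec=client.V1ServiceSpec(
        selector={"app": "hello"},          # match pods labeled app=hello
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

core.create_namespaced_service(namespace="default", body=service)
```

Because the service matches pods by label rather than by name, pods can come and go (or be scaled up and down) and the service keeps routing traffic to whatever matching pods currently exist.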
All of these parts can be orchestrated in a consistent and predictable way through the Kubernetes API, through OpenShift or OKD, through predefined declarative configuration files, or with terminal commands.
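As a small illustration of the API route, the following sketch asks the cluster what is running and prints each pod's namespace, name, and IP address, roughly the information a terminal command such as kubectl get pods --all-namespaces -o wide would show. It only assumes a working kubeconfig.

```python
# Minimal sketch: query the cluster through the Kubernetes API.
# Assumes only a reachable cluster and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for pod in core.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.pod_ip)
```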
How can I learn more?
You might want to learn Kubernetes and OpenShift because you're planning on finding work in IT, or you're volunteering for a non-profit that's keen to expand, or you're just a hobbyist who loves knowing the latest tech. This is open source, so all the information you need is online. However, there is a lot of information to process. Here's some suggested reading to help you learn:
- What is Kubernetes?
- Kubernetes: Everything you need to know
- Developing applications on Kubernetes
- How to run a Kubernetes cluster on your laptop
- Understanding Kubernetes for enterprises
Want to master microservices? Learn how to run OpenShift Container Platform in a self-paced, hands-on lab environment.