What is the best way to introduce a new technology into your employer's ecosystem? You'd probably start by scheduling a meeting. But what if you're asked what the benefits are, if it will save money, and how it will make developers more efficient?
The answers may be obvious to you, but you need to be prepared to relay this information in a way that makes business sense. It's much easier to explain these benefits when you have a proof of concept.
Why use Kubernetes?
Unfortunately, "because it's cool" isn't a good enough reason to adopt a new technology. That said, Kubernetes is really cool.
There are a ton of use cases, from hosting your own function-as-a-service (FaaS) to a full-blown application (both the microservices and monolith flavors). Or sometimes you just need a cron job to run once a day—throw the script into a container, and you've got yourself a perfect candidate for the K8s cron job object.
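To make that cron job use case concrete, here is a minimal sketch of the CronJob object. The name, image, and schedule are placeholders for illustration, and on older clusters the apiVersion may be batch/v1beta1 rather than batch/v1:

```yaml
# Run a containerized script once a day at 03:00
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: registry.example.com/report-script:latest
          restartPolicy: OnFailure
```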
The real question: Will Kubernetes bring business value? As always, it depends. If your main application is already microservice-ish, you can make a good argument that some of the services could be broken off into containers managed by Kubernetes to better utilize those precious CPU cycles. It gets a little tougher when you attempt to shove a monolith into a container—but it is possible.
Another thing to consider is performance. There is a lot of complex networking involved with containerized services in K8s. Your application may suffer a response time increase if you're used to it running all on one machine.
Ok, let's say you've decided it will fit into your use case. What now?
Building the proof of concept
I'm not going to go over the details of deploying a cluster here; there are plenty of guides out there already. Instead, we'll focus on getting something up and running quickly to prove your case. I should also note that there are services available to provide a K8s cluster with minimal hassle: Google Cloud's GKE, Microsoft Azure's AKS, and Red Hat's OpenShift. As of this writing, Amazon's service—EKS—is not available to most folks, but it might be the best option in the future if your company is heavily invested in AWS.
If none of those options are feasible for your PoC, you can accomplish a lot with Minikube and a laptop.
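If you take the Minikube route, a local single-node cluster is only a couple of commands away. This assumes Minikube (and a supported hypervisor) is already installed:

```shell
# Start a local single-node cluster and confirm kubectl can see it
minikube start
kubectl get nodes
```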
What to include in your PoC
So you've got a cluster. What sort of things should you start showing off? Ideally, you'd be able to operationalize a microservice or small app that your team manages. If time is a limiting factor, it's still possible to give a great presentation of an example application being deployed, scaled, and upgraded. In my opinion, the following is a strong list of features to display in your PoC:
- A deployment with replicas
- A service that exposes a set of pods internally to the cluster
- An ExternalName service that creates an internal endpoint for a service outside of the cluster
- Scaling those deployments up and down
- Upgrading a deployment with a new container image tag
Bonus points if any or all of that can be automated with a CI/CD pipeline that builds and deploys containers with few manual steps. Let's look at some config files that will help you accomplish this.
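Before we get to those configs, here's a rough sketch of what such an automated build-and-deploy step might look like. The registry address and names are placeholders, not something from this article's repo:

```shell
# Hypothetical CI step: build an image tagged with the commit SHA,
# push it, and roll the deployment to the new tag.
TAG="$(git rev-parse --short HEAD)"
docker build -t "registry.example.com/poc-nginx:${TAG}" .
docker push "registry.example.com/poc-nginx:${TAG}"
kubectl set image deploy/poc-nginx-deploy poc-nginx="registry.example.com/poc-nginx:${TAG}"
```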
For all of these examples, I'll be using the official nginx container image. It would be sufficient to use this in your PoC to demonstrate the functionality of Kubernetes. Of course, if your company already has a containerized service, use that.
Also, a quick note: I'm assuming you have installed kubectl and configured it to communicate with your new cluster or Minikube install. Minikube will actually set up your kube config and context for you.
I'll include the source code of all my example configs in this repo. I have tested these on a Minikube install.
We'll start with the YAML, and then we'll dissect the various parts of it. The FILENAME indicator in my code snippets indicates the filename in the repository.
# FILENAME: k8s-configs/nginx-deploy-v1.12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: poc-nginx-deploy
  labels:
    app: poc-nginx
    version: 1.12-alpine
spec:
  replicas: 2
  selector:
    matchLabels:
      app: poc-nginx
  template:
    metadata:
      labels:
        app: poc-nginx
        version: 1.12-alpine
    spec:
      containers:
      - name: poc-nginx
        image: nginx:1.12-alpine
        ports:
        - name: http
          containerPort: 80
Let's talk about the metadata—specifically, labels. Label keys are arbitrary, and you can set them to whatever you want. For instance, you could have objects with labels for the application version number, application name, or application tier. Here we just give app—for the name of our app—and version—where we'll track the currently deployed version of the app. Labels allow various parts of Kubernetes to find out about other parts by matching against their labels. For instance, if we have some other pods already running that are labeled app: poc-nginx, when we apply this deployment for the first time, the spec.selector.matchLabels section tells the deployment to bring any pods with those labels under the control of the deployment object.
The spec.template.spec section is where we create the pod definition that this deployment should manage. The containers list can define more than one container for the pod, but in most cases, pods control only one container. If there are multiple, you are saying that those containers should always be deployed together on the same node. If one of the containers fails, though, the whole pod will be relaunched—meaning the healthy container will be relaunched along with the unhealthy one. You'll find a full list of the pod spec variables on the Kubernetes website.
One last note on the deployment config: In the container ports section above, I gave port 80 the name http. This is also arbitrary and optional. You can name the ports anything you want, or exclude the name config altogether. Giving a name to a port allows you to utilize the name instead of the port number in other configs, such as Ingresses. This is powerful because now you can change the port number your pod container listens on by changing one config line instead of every other config line that references it.
Add this deployment to the Kubernetes cluster by running:
kubectl create -f k8s-configs/nginx-deploy-v1.12.yaml
The service config for your nginx deployment would look something like this:
# FILENAME: k8s-configs/nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: poc-nginx-svc
spec:
  type: NodePort
  selector:
    app: poc-nginx
  ports:
  - port: 80
    targetPort: http
As a quick rundown, a service object sets up a single endpoint for inter-cluster communication to a set of pods. With the type set to NodePort, however, it also allows access to the service from outside the cluster if your worker machines are available to your company network. NodePort chooses a random high-level port number between 30000 and 32767 (unless you specify the port it should use), and then every machine in the cluster will map that port to forward traffic to your pods.
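Once the service exists, you can look up the assigned port and build a reachable URL from the command line. This sketch assumes a Minikube cluster and the poc-nginx-svc service from this article:

```shell
# Look up the NodePort Kubernetes picked and build a reachable URL
NODE_IP="$(minikube ip)"
NODE_PORT="$(kubectl get svc poc-nginx-svc -o jsonpath='{.spec.ports[0].nodePort}')"
echo "http://${NODE_IP}:${NODE_PORT}"
```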
Notice in the spec.selector section above, you are using the label of your pods (created by your deployment) to tell the service object where traffic should be sent.
Add this service by running:
kubectl create -f k8s-configs/nginx-svc.yaml
This portion of your PoC is optional, and I haven't created a config for this in my example repo. But let's say you have a database cluster in Amazon RDS that you want multiple apps to interact with. To make this easier, you can create a service object of the type ExternalName. All this does is create a CNAME record in your Kube DNS that points the svc endpoint to whatever address you give it. The CNAME can be hit from any namespace with <service_name>.<namespace>.svc.<cluster_name>. Here's an example config (the externalName value is a placeholder for your real RDS endpoint):

apiVersion: v1
kind: Service
metadata:
  name: poc-rds-db
spec:
  type: ExternalName
  externalName: <your_rds_endpoint>

Now when something inside the cluster looks for poc-rds-db.default.svc.minikube over DNS (assuming a Minikube cluster), it will find a CNAME pointing to whatever address you set as the externalName.
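The naming scheme can be sketched as a tiny shell helper (the function name is mine, purely for illustration):

```shell
# Build the in-cluster DNS name for a service, following the
# <service_name>.<namespace>.svc.<cluster_name> scheme.
svc_fqdn() {
  echo "${1}.${2}.svc.${3}"
}

svc_fqdn poc-rds-db default minikube
# → poc-rds-db.default.svc.minikube
```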
Accessing the nginx deployment
Now you have a deployment up, with a service allowing you to talk to it, at least within the cluster. If you're using Minikube, you can reach your nginx service like so:
# Take note of the IP address from this command
minikube status
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
# Take note of the port number from this command
kubectl get svc/poc-nginx-svc
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
poc-nginx-svc   NodePort   10.103.171.66   <none>        80:32761/TCP   5s
Using the above examples, you could use your browser to hit http://192.168.99.100:32761 to see your service, which is just the nginx welcome screen at this point.
Scaling and upgrading
The exciting topics of scaling and upgrading are going to be the bread and butter of your PoC. They're so easy to do that they may even seem anticlimactic. Here is how I would scale up our deployment in a pinch:
# This will take us from 2 replicas to 5
kubectl scale deploy/poc-nginx-deploy --replicas=5
Yeah, that's it. I did say this is how I would do it in a pinch. This is how you can temporarily update a deployment to have more capacity, but if you are permanently changing the number of replicas, you should update the YAML files and run kubectl apply -f path/to/config.yml. And of course, keep all your YAML configs in source control.
Now for upgrading: the default upgrade strategy is a rolling update. This ensures no downtime in your application, as Kubernetes brings up a pod with the new version (or whatever configuration was changed) before any containers are taken offline.
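The rollout behavior can also be tuned in the deployment spec. Here is a minimal sketch of the relevant fields; the values are just examples, not something the repo configs set:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod above the desired count during a rollout
      maxUnavailable: 0  # never dip below the desired replica count
```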
Let's make a quick adjustment to your deployment in a new YAML file to bump the image version up to 1.13 instead of 1.12. I also keep replicas at 5 for this version instead of 2:

# FILENAME: k8s-configs/nginx-deploy-v1.13.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: poc-nginx-deploy
  labels:
    app: poc-nginx
    version: 1.13-alpine
spec:
  replicas: 5
  selector:
    matchLabels:
      app: poc-nginx
  template:
    metadata:
      labels:
        app: poc-nginx
        version: 1.13-alpine
    spec:
      containers:
      - name: poc-nginx
        image: nginx:1.13-alpine
        ports:
        - name: http
          containerPort: 80
Before you upgrade, open another terminal window and keep an eye on your pods:
watch kubectl get po --show-labels -l app=poc-nginx
Notice I'm making use of labels by passing the -l flag to limit the output to any pod with the label app=poc-nginx. Your output should look similar to this:
NAME                                READY     STATUS    RESTARTS   AGE       LABELS
poc-nginx-deploy-75c8f68dd6-86js8   1/1       Running   0          7m        app=poc-nginx,pod-template-hash=3174924882,version=1.12-alpine
poc-nginx-deploy-75c8f68dd6-pvh2p   1/1       Running   0          7m        app=poc-nginx,pod-template-hash=3174924882,version=1.12-alpine
poc-nginx-deploy-75c8f68dd6-sfkvl   1/1       Running   0          15m       app=poc-nginx,pod-template-hash=3174924882,version=1.12-alpine
poc-nginx-deploy-75c8f68dd6-stcqk   1/1       Running   0          7m        app=poc-nginx,pod-template-hash=3174924882,version=1.12-alpine
poc-nginx-deploy-75c8f68dd6-z6bgz   1/1       Running   0          15m       app=poc-nginx,pod-template-hash=3174924882,version=1.12-alpine
Now, just run the following in another terminal window and watch the magic happen:
kubectl apply -f k8s-configs/nginx-deploy-v1.13.yaml
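You can also follow the rollout from the command line, and step back if the new version misbehaves, with the built-in rollout commands:

```shell
# Watch the rolling update progress until it completes
kubectl rollout status deploy/poc-nginx-deploy

# Roll back to the previous revision if anything looks wrong
kubectl rollout undo deploy/poc-nginx-deploy
```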
This is a good start for your PoC, but it is just scratching the surface. I hope this article piques your interest and you dive in. The Kubernetes documentation is a great place to continue reading.