How to 'Kubernetize' an OpenStack service

Kuryr-Kubernetes provides networking for Kubernetes pods by using OpenStack Neutron and Octavia.

Kuryr-Kubernetes is an OpenStack project, written in Python, that serves as a container network interface (CNI) plugin that provides networking for Kubernetes pods by using OpenStack Neutron and Octavia. The project stepped out of its experimental phase and became a fully supported OpenStack ecosystem citizen in OpenStack's Queens release (the 17th version of the cloud infrastructure software).

One of Kuryr-Kubernetes' main advantages is that you don't need multiple software-defined networks (SDNs) to manage networking in OpenStack and Kubernetes. It also eliminates the double encapsulation of network packets that otherwise occurs when running a Kubernetes cluster on an OpenStack cloud. Imagine using Calico for Kubernetes networking and Neutron for networking the Kubernetes cluster's virtual machines (VMs). With Kuryr-Kubernetes, you use just one SDN, Neutron, to provide connectivity for both the pods and the VMs where those pods are running.

You can also run Kuryr-Kubernetes on a bare-metal node as a normal OpenStack service. This way, you can provide interconnectivity between Kubernetes pods and OpenStack VMs—even if those clusters are separate—by just putting Neutron-agent and Kuryr-Kubernetes on your Kubernetes nodes.

Kuryr-Kubernetes consists of three parts:

  • kuryr-controller observes Kubernetes resources, decides how to translate them into OpenStack resources, and creates those resources. Information about the OpenStack resources is saved into annotations on the corresponding Kubernetes resources (see the example after this list).
  • kuryr-cni is an executable run by the CNI that passes the calls to kuryr-daemon.
  • kuryr-daemon should be running on every Kubernetes node. It watches the pods created on the host and, when a CNI request comes in, wires the pods according to the Neutron ports included in the pod annotations.
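
For illustration, the metadata of a pod that Kuryr has wired might look roughly like the fragment below. This is a hypothetical, heavily abbreviated sketch: the annotation key and the exact layout of the serialized VIF data depend on the Kuryr-Kubernetes version in use.

metadata:
  name: example-pod
  annotations:
    # Serialized description of the Neutron port (VIF) created for this pod;
    # the real value is a much longer JSON document.
    openstack.org/kuryr-vif: '{"versioned_object.name": "PodState", ...}'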

In general, the control part of a CNI plugin (like Calico or Nuage) runs as a pod on the Kubernetes cluster where it provides networking, so, naturally, the Kuryr team decided to follow that model. But converting an OpenStack service into a Kubernetes app wasn't exactly a trivial task.

Kuryr-Kubernetes requirements

Kuryr-Kubernetes is just an application, and applications have requirements. Here is what each component needs from the environment and how it translates to Kubernetes' primitives.

kuryr-controller

  • There should be exactly one active instance of kuryr-controller (although more replicas can run in standby with the active/passive high-availability feature implemented in OpenStack Rocky). This is easy to achieve using Kubernetes' Deployment primitive.
  • Kubernetes ServiceAccounts can provide access to the Kubernetes API with a granular set of permissions.
  • Different SDNs provide access to the OpenStack API in different ways. In any case, the API's SSL certificates should be provided to the pod, for example by mounting a Secret.
  • To avoid a chicken-and-egg problem, kuryr-controller should run with hostNetworking so that it does not depend on Kuryr itself to get an IP address.
  • Provide a kuryr.conf file, preferably by mounting it as a ConfigMap (a sketch of such a ConfigMap follows the Deployment manifest below).

In the end, we get a Deployment manifest similar to this:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    name: kuryr-controller
  name: kuryr-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kuryr-controller
      name: kuryr-controller
    spec:
      serviceAccountName: kuryr-controller
      automountServiceAccountToken: true
      hostNetwork: true
      containers:
      - image: kuryr/controller:latest
        name: controller
        volumeMounts:
        - name: config-volume
          mountPath: "/etc/kuryr/kuryr.conf"
          subPath: kuryr.conf
        - name: certificates-volume
          mountPath: "/etc/ssl/certs"
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: kuryr-config
      - name: certificates-volume
        secret:
          secretName: kuryr-certificates
      restartPolicy: Always
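
The Deployment above refers to a kuryr-config ConfigMap and a kuryr-certificates Secret. A minimal sketch of both follows; every value is a placeholder, the few kuryr.conf options shown are purely illustrative, and the kuryr-cni.conf key is the variant mounted by the DaemonSet later in this article. Consult the Kuryr-Kubernetes documentation for a real configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kuryr-config
  namespace: kube-system
data:
  kuryr.conf: |
    [DEFAULT]
    debug = false
    [kubernetes]
    api_root = https://127.0.0.1:6443
    [neutron]
    auth_url = https://keystone.example.com/identity/v3
    # Credentials and other deployment-specific options go here.
  kuryr-cni.conf: |
    [DEFAULT]
    debug = false
    # Options for kuryr-daemon go here.
---
apiVersion: v1
kind: Secret
metadata:
  name: kuryr-certificates
  namespace: kube-system
type: Opaque
data:
  ca.crt: ""  # base64-encoded CA bundle for the OpenStack API endpoints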

kuryr-daemon and kuryr-cni

Both of these components should be present on every Kubernetes node. When the kuryr-daemon container starts on a node, it injects the kuryr-cni executable and updates the host's CNI configuration to use it (an example of the resulting configuration follows the manifest below). Let's break that down into requirements.

  • kuryr-daemon should run on every Kubernetes node. This means it can be represented as a DaemonSet.
  • It should be able to access the Kubernetes API. This can be implemented with ServiceAccounts.
  • It also needs a kuryr.conf file. Again, the best way is to use a ConfigMap.
  • To perform networking operations on the node, it must run with hostNetworking and as a privileged container.
  • As it needs to inject the kuryr-cni executable and the CNI configuration, the Kubernetes nodes' /opt/cni/bin and /etc/cni/net.d directories must be mounted on the pod.
  • It also needs access to the Kubernetes nodes' netns, so /proc must be mounted on the pod. (Note that you cannot use /proc as a mount destination, so it must be named differently and Kuryr needs to be configured to know that.)
  • If it's running with the Open vSwitch Neutron plugin, it must mount /var/run/openvswitch.
  • To identify the pods running on its node, the node name (spec.nodeName) should be passed into the pod. This can be done using environment variables. (The same is true of the pod name, which will be explained below.)

This produces a more complicated manifest:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kuryr-cni
  namespace: kube-system
  labels:
    name: kuryr-cni
spec:
  template:
    metadata:
      labels:
        name: kuryr-cni
    spec:
      hostNetwork: true
      serviceAccountName: kuryr-controller
      containers:
      - name: kuryr-cni
        image: kuryr/cni:latest
        command: [ "cni_ds_init" ]
        env:
        - name: KUBERNETES_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: KURYR_CNI_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        securityContext:
          privileged: true
        volumeMounts:
        - name: bin
          mountPath: /opt/cni/bin
        - name: net-conf
          mountPath: /etc/cni/net.d
        - name: config-volume
          mountPath: /etc/kuryr/kuryr.conf
          subPath: kuryr-cni.conf
        - name: proc
          mountPath: /host_proc
        - name: openvswitch
          mountPath: /var/run/openvswitch
      volumes:
        - name: bin
          hostPath:
            path: /opt/cni/bin
        - name: net-conf
          hostPath:
            path: /etc/cni/net.d
        - name: config-volume
          configMap:
            name: kuryr-config
        - name: proc
          hostPath:
            path: /proc
        - name: openvswitch
          hostPath:
            path: /var/run/openvswitch
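
Once kuryr-daemon is running, kubelet finds a CNI configuration in /etc/cni/net.d that points at the injected kuryr-cni executable. The snippet below is only an approximation; the file name and the exact fields vary between Kuryr-Kubernetes versions:

{
    "cniVersion": "0.3.1",
    "name": "kuryr",
    "type": "kuryr-cni",
    "kuryr_conf": "/etc/kuryr/kuryr.conf",
    "debug": true
}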

Injecting the kuryr-cni executable

This part took us the longest time. We went through four different approaches until everything worked. The task was to inject a Python application from the container onto the container's host, along with the CNI configuration files (the latter part is trivial). Most of the issues stemmed from the fact that Python applications aren't binaries, but scripts.

We first tried making our kuryr-cni script a binary using PyInstaller. Although this worked fairly well, it had serious disadvantages. For one thing, the build process was complicated: we had to create a container with PyInstaller and Kuryr-Kubernetes to generate the binary, then build the kuryr-daemon container image with that binary. Also, due to PyInstaller quirks, we ended up with misleading tracebacks in the kubelet logs; when an exception was raised, the traceback written to the logs could point to the wrong place. The deciding factor was that PyInstaller changed the paths of the included Python modules, which caused some checks in the os-vif library to fail and broke our continuous integration (CI).

We also tried injecting a Python virtual environment (venv) containing a CPython binary, the kuryr-kubernetes package, and all its requirements. The problem is Python venvs aren't designed to be portable. Even though there is a --relocatable option in the virtualenv command-line tool, it doesn't always work. We abandoned that approach.

Then we tried what we think is the "correct" way: injecting the host with an executable script that does docker exec -i on a kuryr-daemon container. Because the kuryr-kubernetes package is installed in that container, it can easily execute the kuryr-cni binary. All the CNI environment variables must be passed through the docker exec command, which has been possible since Docker API v1.24. Then, we only needed to identify the Docker container where it should be executed.

At first, we tried calling the Kubernetes API from the kuryr-daemon container's entry point to get its own container ID. We quickly discovered that this causes a race condition, and sometimes the entry point runs before the Kubernetes API is updated with its container ID. So, instead of calling the Kubernetes API, we made the injected CNI script call the Docker API on the host. Then it's easy to identify the kuryr-daemon container using labels added by Kubernetes.
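
To make the approach concrete, here is a heavily simplified sketch of what such an injected wrapper could look like. It is not the script Kuryr-Kubernetes actually ships: the Kubernetes-added Docker labels, the kuryr-cni invocation, and the config-file path are assumptions for illustration, and error handling is omitted.

#!/usr/bin/env python
# Hypothetical sketch of a kuryr-cni wrapper injected onto the host.
# It proxies the CNI call into the kuryr-daemon container with
# `docker exec -i`, forwarding the CNI_* environment variables and
# keeping stdin/stdout attached (the CNI runtime passes the network
# configuration on stdin and expects the result on stdout).
import os
import subprocess
import sys


def find_kuryr_container():
    # Locate the kuryr-daemon container through labels added by Kubernetes
    # (label names here assume the Docker container runtime).
    out = subprocess.check_output([
        'docker', 'ps', '--quiet',
        '--filter', 'label=io.kubernetes.container.name=kuryr-cni',
        '--filter', 'label=io.kubernetes.pod.namespace=kube-system',
    ])
    containers = out.decode().split()
    if not containers:
        sys.exit('kuryr-daemon container not found on this node')
    return containers[0]


def main():
    cmd = ['docker', 'exec', '-i']
    for key, value in os.environ.items():
        if key.startswith('CNI_'):
            cmd += ['--env', '%s=%s' % (key, value)]
    # Assumed invocation; the real entry point and config path may differ.
    cmd += [find_kuryr_container(), 'kuryr-cni',
            '--config-file', '/etc/kuryr/kuryr.conf']
    sys.exit(subprocess.call(cmd))


if __name__ == '__main__':
    main()

The point stays the same as in the paragraphs above: the host-side script is tiny, and all the real work happens inside the kuryr-daemon container, which is located through the Docker API rather than the Kubernetes API.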

Lessons learned

In the end, we've got a working system that is easy to deploy and manage because it's running on Kubernetes. We've proved that Kuryr-Kubernetes is just an application. While it took a lot of time and effort, the results are worth it. A "Kubernetized" application is much easier to manage and distribute. 


Michał Dulko will present "How to make a Kubernetes app from an OpenStack service" at the OpenStack Summit, November 13-15 in Berlin.

Michał is a software engineer working at Red Hat, engaged in OpenStack-related activities since the Folsom release. Through the Newton, Ocata, and Pike cycles he served the OpenStack community as a core reviewer in Cinder, where he focused on control-plane availability, scalability, and upgradability. Now he's cracking similar problems in the Kuryr project.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.