How to enable serverless computing in Kubernetes

Knative is a faster, easier way to develop serverless applications on Kubernetes platforms.


In the first two articles in this series about using serverless on an open source platform, I described how to get started with serverless platforms and how to write functions in popular languages and build components using containers on Apache OpenWhisk.

Here in the third article, I'll walk you through enabling serverless in your Kubernetes environment. Kubernetes is the most popular platform for managing serverless workloads and microservice application containers, and its fine-grained deployment model lets you process those workloads more quickly and easily.

Keep in mind that serverless not only reduces infrastructure management while charging you only for actual service use, but it also provides many of the capabilities the cloud platform offers. There are many serverless and FaaS (Function as a Service) platforms, but Kubernetes is the first-class citizen for building a serverless platform: more than 13 serverless or FaaS open source projects are based on Kubernetes.

However, Kubernetes doesn't natively let you build, serve, and manage app containers for serverless workloads. For example, if you want a CI/CD pipeline on Kubernetes to build, test, and deploy cloud-native apps from source code, you need to bring your own release-management tool and integrate it with Kubernetes.

Likewise, it's difficult to use Kubernetes for serverless computing unless you add an independent serverless or FaaS platform built on Kubernetes, such as Apache OpenWhisk, Riff, or Kubeless. More importantly, it's still hard for developers to learn how Kubernetes handles the serverless workloads of cloud-native apps.


Knative was created so developers can build serverless experiences natively on Kubernetes, without depending on extra serverless or FaaS frameworks and many custom tools. Knative has three primary components (Build, Serving, and Eventing) that address common patterns and best practices for developing serverless applications on Kubernetes platforms.

To learn more, let's go through the usual development process for using Knative to increase productivity and solve Kubernetes' difficulties from the developer's point of view.

Step 1: Generate your cloud-native application from scratch using Spring Initializr or the Thorntail Project Generator. Implement your business logic following the 12-factor app methodology, then run unit and integration tests with your local testing tools to verify that the function works correctly.

Spring Initializr screenshot
Thorntail Project Generator screenshot

Step 2: Build container images from your source code repositories via the Knative Build component. You can define multiple steps, such as installing dependencies, running integration tests, and pushing container images to your secured image registry, all using existing Kubernetes primitives. More importantly, Knative Build takes the "boring but difficult" parts out of developers' daily work. Here's an example of the Build YAML:

apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: docker-build
spec:
  serviceAccountName: build-bot
  source:
    git:
      url: https://github.com/example/app-repo   # replace with your source repository
      revision: master
  steps:
  - name: docker-push
    image: gcr.io/kaniko-project/executor        # Kaniko builds and pushes the image
    args:
    - --context=/workspace/java/springboot
    - --dockerfile=/workspace/java/springboot/Dockerfile
    - --destination=quay.io/example/greeter:latest   # replace with your target image
    env:
    - name: DOCKER_CONFIG
      value: /builder/home/.docker
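For the build to push to a secured registry, the build-bot service account needs credentials. A minimal sketch, assuming a Docker Hub-style registry (the secret name and credentials here are placeholders), following Knative Build's basic-auth convention:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials          # hypothetical name
  annotations:
    build.knative.dev/docker-0: https://index.docker.io/v1/  # registry this credential applies to
type: kubernetes.io/basic-auth
stringData:
  username: myuser                    # placeholder credentials
  password: mypassword
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
- name: registry-credentials
```

Because the Build references serviceAccountName: build-bot, the build steps pick up these credentials when pushing the image.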

Step 3: Deploy and serve your container applications as serverless workloads via the Knative Serving component. This step shows the beauty of Knative: it automatically scales your serverless containers up on Kubernetes, then scales them down to zero if there are no requests to the containers for a specific period (e.g., two minutes). More importantly, Istio automatically handles the ingress and egress network traffic of serverless workloads in multiple, secure ways. Here's an example of the Serving YAML:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: dev.local/rhdevelopers/greeter:0.0.1
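The autoscaling behavior is tunable per revision through annotations on the revision template. A sketch of how that could look for the greeter service (the annotation values here are illustrative, not recommendations):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
spec:
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          annotations:
            autoscaling.knative.dev/target: "10"    # target concurrent requests per pod
            autoscaling.knative.dev/maxScale: "5"   # upper bound on replicas
        spec:
          container:
            image: dev.local/rhdevelopers/greeter:0.0.1
```

With these annotations, the Knative autoscaler adds pods as concurrency rises toward the target and still scales the service down to zero when traffic stops.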

Step 4: Bind your running serverless containers to a variety of eventing platforms, such as SaaS, FaaS, and Kubernetes, via the Knative Eventing component. In this step, you define event channels and subscriptions that deliver events to your services via a messaging platform such as Apache Kafka or NATS Streaming. Here's an example of the event source YAML:

apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob-source
spec:
  schedule: "* * * * *"
  data: '{"message": "Event sourcing!!!!"}'
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: ch-event-greeter
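To route events from that channel to the greeter service, you also need a Subscription. A minimal sketch, assuming the channel and service names above (the subscription name itself is hypothetical):

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: event-greeter-subscription    # hypothetical name
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: ch-event-greeter
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: greeter
```

Once this is applied, each cron-triggered event flows from the source into the channel and on to the greeter service, which scales up from zero to handle it.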


Developing with Knative will save you a lot of time when building serverless applications in a Kubernetes environment. It also makes developers' jobs easier by letting them focus on developing serverless applications, functions, or cloud-native containers.

Technical Marketing, Developer Advocate, CNCF Ambassador, Public Speaker, Published Author, Quarkus, Red Hat Runtimes


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.