WebAssembly (also referred to as Wasm) has gained popularity as a portable binary instruction format with an embeddable and isolated execution environment for client and server applications. Think of WebAssembly as a small, fast, efficient, and very secure stack-based virtual machine designed to execute portable bytecode that doesn't care what CPU or operating system it runs on. WebAssembly was initially designed for web browsers to be a lightweight, fast, safe, and polyglot container for functions, but it's no longer limited to the web.
On the web, WebAssembly uses the existing APIs provided by browsers. The WebAssembly System Interface (WASI) was created to fill the void between WebAssembly and systems running outside the browser. This enables non-browser systems to leverage the portability of WebAssembly, making WASI a good choice for portable distribution and isolated execution of workloads.
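To make that portability concrete, here is a sketch of compiling a trivial program once to a WASI target and running the same binary with a standalone runtime. It assumes the Rust toolchain with the wasm32-wasi target and wasmtime are installed; the file names are illustrative:

```shell
# One-time: add the WASI compilation target (assumes rustup is installed)
rustup target add wasm32-wasi

# Compile to a portable .wasm module -- no OS- or CPU-specific output
rustc --target wasm32-wasi hello.rs -o hello.wasm

# Run the same module on any machine with a WASI runtime
wasmtime hello.wasm
```

The resulting hello.wasm runs unchanged on any operating system and architecture where a WASI runtime is available.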
WebAssembly offers several advantages. Because it is platform neutral, a single binary can be compiled once and executed across a variety of operating systems and architectures, with a very low disk footprint and startup time. Useful security features include module signing and security knobs controllable at the runtime level rather than depending on the host operating system's user privileges. Sandboxed memory can still be managed by existing container tooling and infrastructure.
In this article, I will walk through a scenario for configuring container runtimes to run Wasm workloads from lightweight container images.
Adoption on cloud infrastructure and blockers
WebAssembly and WASI are fairly new, so the standards for running Wasm workloads natively on container ecosystems have not been set. This article presents only one solution, but there are other viable methods.
Some solutions involve replacing native Linux container runtime components with Wasm-compatible ones. For instance, Krustlet v1.0.0-alpha1 allows users to introduce Kubernetes nodes where Krustlet replaces the standard kubelet. The limitation of this approach is that users have to choose between a Linux container runtime and a Wasm runtime.
Another solution is using a base image that bundles a Wasm runtime and manually invoking the compiled binary. However, this bloats container images with a runtime that is not necessary if the Wasm runtime is invoked natively at a lower level than the container runtime.
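For contrast, the bundled-runtime approach described above might look like the following Containerfile. The base image and package name are assumptions; the point is that a full Wasm runtime ships inside every image:

```dockerfile
# Bloated approach: every image carries its own Wasm runtime
FROM fedora:latest
RUN dnf install -y wasmtime   # assumption: a packaged Wasm runtime is available
COPY hello.wasm /
ENTRYPOINT ["wasmtime", "/hello.wasm"]
```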
I will describe how you can avoid this by creating a hybrid setup where existing Open Containers Initiative (OCI) runtimes can run both native Linux containers and WASI-compatible workloads.
Using crun in a hybrid setup of Wasm and Linux containers
Some of the problems discussed above can be easily addressed by allowing an existing OCI runtime to invoke both Linux containers and Wasm containers at a lower level. This avoids issues like depending on container images to carry Wasm runtime or introducing a new layer to infrastructure that supports only Wasm containers.
One container runtime that can handle the task: crun.
Crun is a fast, low-memory-footprint, fully OCI-compliant container runtime that can be used as a drop-in replacement for your existing container runtime. Crun was originally written to run Linux containers, but it also offers handlers capable of running arbitrary extensions inside the container sandbox in a native manner.
Here is an informal way of replacing an existing runtime with crun, just to showcase that crun is a complete replacement for your existing OCI runtime:
$ mv /path/to/existing-runtime /path/to/existing-runtime.backup
$ cp /path/to/crun /path/to/existing-runtime
One such handler is crun-wasm-handler, which delegates specially configured container images (Wasm compat images) to existing Wasm runtimes natively inside the crun sandbox. This way, end users do not need to maintain Wasm runtimes themselves.
Crun has native integration with wasmer to support this functionality out of the box. Crun detects whether the configured image contains a Wasm/WASI workload and dynamically invokes the relevant parts of the runtime, while still supporting native Linux containers.
For details on building crun with Wasm/WASI support, see the crun repository on GitHub.
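The build can be sketched as follows, under the assumption that Wasm support is enabled through a configure flag; the exact flag names and build dependencies vary by version, so verify against the crun README:

```shell
# A sketch of building crun with Wasm support -- the configure flag is an
# assumption; check the crun repository for the exact option for your version
git clone https://github.com/containers/crun
cd crun
./autogen.sh
./configure --with-wasmer
make
sudo make install
```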
Building and running Wasm images using Buildah on Podman and Kubernetes
Users can create and run platform-agnostic Wasm images on Podman and Kubernetes using crun as an OCI runtime under the hood. Here's a tutorial:
Creating Wasm compat images using Buildah
Wasm/WASI compatible images are special. They contain a magic annotation that helps an OCI runtime like crun classify whether it is a Linux-native image or an image with a Wasm/WASI workload, and invoke a handler if needed.
Creating these Wasm compat images is extremely easy with any container image build tool, but for this article, I will demonstrate using Buildah.
1. Compile your code to a .wasm module, for example hello.wasm.
2. Prepare a Containerfile with your compiled .wasm binary:

FROM scratch
COPY hello.wasm /
ENTRYPOINT ["/hello.wasm"]
3. Build a Wasm image using Buildah with the magic annotation:

$ buildah build --annotation "module.wasm.image/variant=compat" -t mywasm-image .
Once the image is built and the container engine is configured to use crun, crun will automatically detect the annotation and run the provided workload with the configured Wasm handler.
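To double-check that the annotation actually landed on the image, inspecting it and grepping for the variant key is a quick, non-authoritative sanity check:

```shell
# Inspect output is JSON; grep for the magic annotation
buildah inspect --type image mywasm-image | grep "module.wasm.image/variant"
```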
Running a Wasm workload with Podman
Crun is the default OCI runtime for Podman. Podman exposes the knobs and handles needed to utilize most crun features, including the crun Wasm handler. Once a Wasm compat image is built, it can be used by Podman just like any other container image:
$ podman run mywasm-image:latest
Podman runs the requested Wasm compat image mywasm-image:latest using crun's Wasm handler and returns output confirming that the workload was executed:

hello world from the webassembly module !!!!
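If crun is not already the default runtime on your system, Podman can be pointed at it explicitly with the global --runtime flag; the binary path below is an assumption, so adjust it to where crun is installed:

```shell
podman --runtime /usr/bin/crun run mywasm-image:latest
```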
Kubernetes-supported and tested container runtime interface (CRI) implementations
Here's how to configure two popular container runtimes:
- Configure CRI-O to use crun instead of runc by editing the config at /etc/crio/crio.conf. Red Hat OpenShift documentation contains more details about configuring CRI-O.
- Restart CRI-O with sudo systemctl restart crio.
- CRI-O automatically propagates pod annotations to the container spec.
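The CRI-O side of this configuration can be sketched as the following TOML excerpt; the section names follow CRI-O's config format, but verify them against the crio.conf documentation for your version:

```toml
# /etc/crio/crio.conf -- excerpt (a sketch; verify against your CRI-O version)
[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
```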
- Containerd supports switching the container runtime via its custom configuration file.
- Configure containerd to use crun by making sure the run-time binary points to crun. More details are available in the containerd documentation.
- Configure containerd to allowlist Wasm annotations so they can be propagated to the OCI spec by setting pod_annotations in the configuration: pod_annotations = ["module.wasm.image/variant.*"].
- Restart containerd with sudo systemctl restart containerd.
- Now containerd should propagate Wasm pod annotations to containers.
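Putting the containerd steps together, the relevant excerpt of the configuration might look like this. The section names follow containerd's CRI plugin layout, and the crun binary path is an assumption for your install; check the containerd documentation for the exact schema of your version:

```toml
# containerd configuration -- excerpt (a sketch)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
  runtime_type = "io.containerd.runc.v2"
  # Allowlist the Wasm annotation so it propagates to the OCI spec
  pod_annotations = ["module.wasm.image/variant.*"]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
    BinaryName = "/usr/bin/crun"
```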
The following is an example of a Kubernetes pod spec that works with both CRI-O and containerd:

apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
  annotations:
    module.wasm.image/variant: compat
spec:
  containers:
  - name: wasm-container
    image: mywasm-image:latest
Known issues and workarounds
Complex Kubernetes infrastructure contains pods and, in many cases, pods with sidecars. That means crun's Wasm integration is not useful when a deployment contains sidecars whose containers do not have a Wasm entry point, such as infrastructure setups with a service mesh like Linkerd, Gloo, or Istio, or a proxy like Envoy.
You can solve this issue by adding two smart annotations for Wasm handlers: compat-smart and wasm-smart. These annotations serve as a smart switch that only toggles the Wasm runtime when it is necessary for a container. Hence, while running deployments with sidecars, only containers that contain valid Wasm workloads are executed by Wasm handlers. Regular containers are treated as usual and delegated to the native Linux container runtime.
Thus, when building images for such a use case, use the annotation module.wasm.image/variant=compat-smart instead of module.wasm.image/variant=compat.
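For sidecar deployments, the earlier build step can simply be repeated with the smart annotation value (same command, different variant):

```shell
buildah build --annotation "module.wasm.image/variant=compat-smart" -t mywasm-image .
```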
You can find other known issues in crun documentation on GitHub.