Introduction to runC, a lightweight universal container runtime

runC: The little container engine that could


runC, a lightweight universal container runtime, is a command-line tool for spawning and running containers according to the Open Container Initiative (OCI) specification. That's the short version. The long version: the OCI, a governance umbrella created by Docker, Google, IBM, Microsoft, Red Hat, and many other partners to produce a common, standardized runtime specification, maintains both a readable spec document for the runtime elements of a container and a usable implementation based on code contributed to the OCI by Docker. That implementation includes libcontainer, the lower-layer library interface originally used in the Docker engine to set up the operating system constructs we call a container.

Given that runC is an open source project with a regular release cadence, you can find the code and its releases on GitHub. If you download or build the runC binary, you will have everything you need to start using runC as a simple container executor based on the runtime spec elements: a JSON container configuration and a root filesystem bundle. Note that if you have Docker 1.11 or above installed, you will automatically have a recent copy of runC on your system as well. It is most likely named docker-runc, installed in /usr/bin, and can be used outside of Docker just like any normal installation of runC.
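To make the two spec elements concrete, here is a minimal sketch of laying out an OCI bundle by hand. The directory name and the stripped-down config.json fields are illustrative assumptions; in practice you would run `runc spec` inside the bundle directory to generate a complete default configuration.

```shell
#!/bin/sh
# Sketch: hand-build the skeleton of an OCI runtime bundle.
# A real config.json (e.g., from `runc spec`) has many more fields
# (mounts, capabilities, namespaces, etc.); this subset is illustrative.
set -e

mkdir -p mycontainer/rootfs   # rootfs/ holds the root filesystem bundle

cat > mycontainer/config.json <<'EOF'
{
  "ociVersion": "1.0.0",
  "process": {
    "terminal": true,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "sh" ],
    "cwd": "/"
  },
  "root": { "path": "rootfs" }
}
EOF

# Sanity-check that the configuration is valid JSON.
python3 -m json.tool mycontainer/config.json > /dev/null && echo "bundle ready"
```

Once rootfs/ is populated (for example, by extracting a container image's filesystem into it), running `runc run <container-id>` from inside the bundle directory starts the container.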

Benefits of using runC

Even before the OCI and runC existed, many core Docker engine developers used a runC precursor, nsinit, which offered a simplified entry point for running and debugging low-level container features without the overhead of the entire Docker daemon interface. Now that runC exists, this remains a key use case, especially for someone exposing a new Linux isolation feature. For example, the checkpoint/restore capability based on the Linux Checkpoint/Restore In Userspace (CRIU) project was first made available via runC, and only now is being prepared for addition to the Docker daemon at the layer above runC. Of course, as runC and the OCI expand beyond Linux, the same will be true for other operating system (OS) isolation primitives, such as Solaris zones or Microsoft Windows-based containers, both of which are expected to gain support via the OCI runtime spec and the runC implementation.

Beyond new feature development at the operating system layer, runC is a useful debug platform for finding hard-to-solve bugs that are trickier to debug with the entire Docker stack above the container process.

Challenges to getting started with runC

Developers have probably gotten used to the low-friction entry point to containers offered by the overall Docker ecosystem, including Docker Hub (or private registries) for images and simple docker run commands to enable and disable various features and configurations for their containers. With runC, developers must construct or export filesystem bundles from other systems to create their own starting point for a container. They also need to put together the JSON configuration file, which has "knobs" similar to the various docker run flags, but these must be codified directly in the JSON file because the runC binary itself exposes a simple start, stop, pause, etc. interface with no such flags.
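As an illustration of those "knobs" (the field names follow the OCI runtime spec; the specific values here are assumptions, not defaults), settings a developer would pass as docker run flags become config.json entries instead: hostname corresponds roughly to --hostname, process.env to -e, process.terminal to -t, and the linux.namespaces list to namespace-related flags such as --net and --pid.

```json
{
  "ociVersion": "1.0.0",
  "hostname": "demo",
  "process": {
    "terminal": true,
    "env": [ "PATH=/usr/sbin:/usr/bin:/sbin:/bin" ],
    "args": [ "sh" ],
    "cwd": "/"
  },
  "root": { "path": "rootfs" },
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "mount" }
    ]
  }
}
```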

Overall container strategy

How runC fits into a developer's overall strategy depends on that developer's intentions and desired outcomes. For a developer looking for a simpler model of container execution without the broader Docker daemon capabilities, runC paired with containerd, another Docker open source project used in the Docker 1.11 and above engines, may be a good fit. After my talk at DockerCon in Seattle, several developers came up and shared full container cloud architectures they had built on containerd, runC, or both, handling interesting workload and container lifecycles. In many cases, however, runC will remain a lower-layer detail that may or may not be of general interest to a developer.

Broadening the discussion a bit beyond runC itself, one use model we haven't covered is the pluggability of runC within the Docker engine or other future OCI-compliant engines. Already in the OCI community there are projects like runv, runz, and others implementing the common OCI runtime specification with Solaris zones or a lightweight hypervisor (see Intel Clear Containers as an example) as the OS-level isolation technology. In this way, runC and runC-like implementations are also of interest to developers of other isolation techniques or operating system containment capabilities.

Mapping container features, such as seccomp and user namespaces

Because libcontainer, the operating-system-layer library that does the real work of setting up the container isolation primitives for your OS, is at the heart of runC, any OS-layer features, such as seccomp and user namespaces, must be implemented in runC before they can be exposed to higher layers like the Docker engine. This ability to try out new features in runC before they surface at higher layers is another attractive draw: several of the latest features exposed in Docker were available in libcontainer and runC well before they made their way into Docker. It also means that during the development of these isolation features or enhanced security capabilities, runC is a great tool for testing and trying out unique configurations via the JSON configuration file.

During his talk at ContainerCon, Phil will demonstrate this use case, and he will show how you can try turning on/off the ability to use certain syscalls using seccomp entries in the JSON configuration file and immediately observe the impact on an application. He also will show a workflow with existing open source tools to ease developer startup time with runC, using current Docker containers and images as an input to create runC-ready configurations and root filesystem bundles.
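As a sketch of what such a seccomp entry looks like (the field names follow the OCI runtime spec of that era; the choice of chmod as the blocked syscall is an arbitrary example), the linux.seccomp section of config.json sets a default action and then overrides it for specific syscalls, so a container started with this configuration would see chmod calls fail with an error:

```json
{
  "linux": {
    "seccomp": {
      "defaultAction": "SCMP_ACT_ALLOW",
      "architectures": [ "SCMP_ARCH_X86_64" ],
      "syscalls": [
        {
          "name": "chmod",
          "action": "SCMP_ACT_ERRNO"
        }
      ]
    }
  }
}
```

Flipping the action back to SCMP_ACT_ALLOW (or removing the entry) and rerunning the container is exactly the kind of quick on/off experiment described above.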

About the author

Phil Estes - Phil is a Distinguished Engineer & CTO, Container and Linux OS Architecture Strategy for the IBM Watson and Cloud Platform division. Phil is currently an OSS maintainer in the Docker (now Moby) engine project, the CNCF containerd project, and is a member of both the Open Container Initiative (OCI) Technical Oversight Board and the Moby Technical Steering Committee. Phil is a long-standing member of the Docker Captains program and has enjoyed a long relationship with key open source...