Creating a reproducible build system for Docker images

Last weekend, I presented at Texas Linux Fest 2016 about how the Fedora Project has been working to deliver a reproducible build system for Docker images. I addressed some of the new challenges that newer Linux container technologies are bringing to teams as well as how we in the Fedora community have been working to meet those challenges with solutions for our users and contributors.

As the population of DevOps practitioners grows, so does the Linux container userbase; the two often go hand in hand. Among Linux container implementations, Docker is certainly the most popular for server-side application deployments as of this writing. Docker is a powerful tool that provides a standard build workflow, an image format, a distribution mechanism, and a runtime. These attributes have made it very attractive to developer and operations teams alike, as it helps lower the barrier between these groups and establishes common ground.

As with all new technology, new advantages bring new challenges, and one challenge that has come as a side effect of Docker's popularity is build sanitization. This stems from the fact that Docker's build process is effectively equivalent to a shell script in terms of flexibility: it is extremely flexible, and the sky is more or less the limit. While many see this as an advantage, it poses a challenge for the part of the technical community known as release engineering. Release engineering's job is to make sure that the act of "manufacturing software" is reproducible, auditable, definable, and deliverable: given the same set of inputs, we can deliver the same set of outputs in a standard, well-defined way that can be audited if necessary.

In the world of Docker image building, these attributes are rarely found in build pipelines. It's extremely difficult to trace the origin of any given Docker image listed on Docker Hub, because often someone built it on their laptop, or on some server we have no insight into, and then pushed it out to the world. The build itself is a black box, potentially with no persistence of logs and no safeguards in place. Maybe that build pulled some binary artifact from the Internet into the filesystem, and that file has since been removed from its place of origin. Examples of such concerns go on and on.
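
As a purely hypothetical sketch (the URL and tool name below are made up), a Dockerfile like the following is perfectly legal, yet becomes unreproducible the moment the remote file changes or disappears:

    # Hypothetical Dockerfile: the download URL and binary are illustrative.
    # If example.com removes or silently changes "mytool", this image can
    # never be rebuilt or audited against its original inputs.
    FROM fedora:24
    RUN curl -o /usr/local/bin/mytool https://example.com/downloads/mytool \
        && chmod +x /usr/local/bin/mytool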

All that being said, we don't want the power of Linux containers or Docker to be passed up just because there are new challenges. Instead, we aim to meet those challenges with solutions so that we can deliver software for the future in a way that is manageable.

Something to note is the difference between "base images" and "layered images" in the Docker world. A base image is generated from a pre-composed root filesystem; this is normally provided by your upstream Linux distribution of choice, such as Fedora, Debian, Arch, openSUSE, Ubuntu, or Alpine. Base images are traditionally already created in a way that aligns with release engineering philosophies, as in many cases they originate from the same build or compose process that produces the distribution itself.
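
To make that concrete, here is a minimal sketch of how a base image comes to be; the tarball name is illustrative, but docker import genuinely turns a root filesystem archive into an image:

    # Import a distribution-produced root filesystem tarball as a base image.
    # The tarball name is made up; distributions generate these archives as
    # part of their own compose/release process.
    cat fedora-24-rootfs.tar.gz | docker import - fedora:24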

A layered image, however, is anything built on top of one of those base images, and layered images are generally where the enhanced flexibility of Docker builds becomes challenging. Layered images are effectively a stack of images, each of which can inherit from images in lower layers of the stack; often the lower layers provide functionality required by the upper layers.
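
A trivial layered image might look like the following Dockerfile: the base image provides the package manager and userland, and each instruction stacks a new layer on top of it.

    # Build a layered image on top of the Fedora base image.
    FROM fedora:24
    # Each instruction creates a new layer inheriting everything beneath it.
    RUN dnf install -y httpd && dnf clean all
    EXPOSE 80
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]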

This leads us to Fedora's Layered Docker Image Build Service, Fedora's implementation of an upstream project called the OpenShift Build Service (OSBS), paired with a plugin to Fedora's Koji build system. Together these give Fedora contributors a uniform workflow: they maintain RPMs and layered Docker images using the same set of tools they always have, simply producing a new output artifact. The system itself takes advantage of the OpenShift build primitive, a built-in feature of the OpenShift Container Platform.
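
For a Fedora contributor, submitting a layered image build looks much like a regular package build. As a sketch (the repository name here is illustrative), a build can be handed off to Koji, which in turn drives OSBS:

    # Clone the container's dist-git repository (name illustrative) and
    # submit a layered image build; Koji hands the actual build off to OSBS.
    fedpkg clone docker/cockpit
    cd cockpit
    fedpkg container-build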

What OSBS does is create what is known as a "custom strategy" build definition within OpenShift and then present this newly defined component to users and developers as both a command-line utility and a Python API. The OSBS client then requires whoever or whatever initiates a build (a person, an automation task, CI, CD, etc.) to provide the inputs from a git repository, so that the inputs to the system can be audited. The build itself then occurs inside a limited environment known as a "buildroot": an unprivileged container with sanitized inputs, meaning that software from unknown or unvetted sources is programmatically disallowed by the system, with all activity within the build process centrally logged.
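
For instance, submitting a build with the osbs-client command-line tool looks roughly like the following; the flag spellings here are an assumption from memory, so treat this as a sketch and consult osbs build --help for the authoritative set:

    # Sketch of submitting a build with osbs-client; the git URL, component,
    # and username are illustrative, and the flags are assumptions.
    osbs build -g https://example.com/containers/cockpit.git \
        -c cockpit -u myusername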

This activity is performed using a utility called atomic-reactor, from the upstream Project Atomic. Using atomic-reactor inside the buildroot affords certain luxuries, such as pushing images to a registry once they are successfully built, injecting arbitrary yum/dnf repositories into the Dockerfile (changing the source of your packages for input sanitization and gating), rewriting the base image (FROM) in your Dockerfile to match the registry available inside the isolated buildroot, running simple tests after the image is built, and many more.
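
To illustrate one of those conveniences, the following is a conceptual before-and-after of the FROM rewrite; the registry hostname is made up, and the exact rewrite atomic-reactor performs may differ:

    # Dockerfile as committed by the maintainer:
    #   FROM fedora:24
    #
    # Conceptually, what atomic-reactor builds inside the buildroot
    # (registry hostname illustrative):
    #   FROM registry.buildroot.example.com/fedora:24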

Once a build completes, the image can immediately be pulled from a registry of your choosing. This is where we then execute automated tests and eventually "promote" images to a "production" or "stable" registry, gating the release of newly built software in Docker layered images.
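
A promotion flow might look like the following sketch; the registry hostnames and image name are illustrative:

    # Pull the freshly built image from the candidate registry.
    docker pull candidate-registry.example.com/fedora/cockpit:latest
    # ...run automated tests against the image here...
    # If the tests pass, re-tag the image and push it to the stable registry.
    docker tag candidate-registry.example.com/fedora/cockpit:latest \
        stable-registry.example.com/fedora/cockpit:latest
    docker push stable-registry.example.com/fedora/cockpit:latest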

With that, we're done: we can reproduce, audit, define, and deliver our software via this system.

Adam Miller is a member of the Ansible Engineering Team at Red Hat focusing on the Ansible Core runtime, Linux system administration automation, container orchestration integrations, and Information Security automation. Adam holds a Bachelor of Science in Computer Science and a Master of Science in Information Assurance and Security, both from Sam Houston State University.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.