Docker (previously dotCloud) made a big splash this year when they open-sourced their software for creating "lightweight, portable, self-sufficient containers" that powers their Platform-as-a-Service offering.
Developers are excited because Docker offers an easier-to-use alternative to Chef and Puppet for managing server environments. Instead of wrangling with configuration files, Docker lets developers simply take an image of their system and share it with their team. When a team member makes a change to their local environment, they just commit their Docker container as a new image and share it with the team. It's like git for disk images.
Recently, Red Hat and Docker announced they'd be working together on several projects. One of them is packaging Docker for the Fedora Project: a Red Hat-sponsored, volunteer-driven, rock-solid Linux operating system. I asked Alexander Larsson, a principal software engineer at Red Hat with an interest in application container technologies, a few questions about the collaboration.
What does the Fedora community hope to do with Docker?
We strive to bring interesting new technologies to Fedora users, early and often. Also, Docker is getting a lot of interest in the community at large, especially within the DevOps community. Many of these people are running Fedora and would like to deploy Docker on Fedora servers and/or base Docker containers on Fedora releases. It's important to allow these people to continue to use Fedora as they wish.
Making Docker work on Fedora is also the first step towards making it work on RHEL, and RHEL support is the most requested issue in the Docker issue tracker.
What incompatibilities were resolved to allow Docker to work with Fedora?
The main issue was that Docker relied on the AUFS union filesystem, which is not (and is not expected to be) included in the upstream kernel. (It currently ships in the Ubuntu kernel, but it is deprecated and will eventually be removed.) AUFS allows copy-on-write instantiation of Docker images when a container is launched, which is central to Docker because it makes starting containers fast and cheap.
We looked at the various copy-on-write technologies available in Linux and eventually settled on device-mapper as the only currently stable, widely available solution. Device-mapper is the kernel part of the logical volume manager, and its "thin provisioning" module supports copy-on-write block devices. So, I wrote a new Docker backend based on it, which is the main new feature in the upcoming 0.7 release.
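To give a feel for the mechanism underneath that backend, here is a rough sketch of creating copy-on-write block devices with device-mapper thin provisioning, using the standard dmsetup tool. The device paths, sizes, and ids are illustrative assumptions, not Docker's actual setup:

    # Create a thin pool: <start> <length> thin-pool <metadata dev>
    #                     <data dev> <data block size> <low water mark>
    dmsetup create pool \
        --table "0 20971520 thin-pool /dev/loop1 /dev/loop0 128 32768"

    # Create a thin volume with id 0 inside the pool and activate it.
    dmsetup message /dev/mapper/pool 0 "create_thin 0"
    dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"

    # Snapshot volume 0 as volume 1: a cheap copy-on-write clone, which is
    # what keeps instantiating a container from an image fast and cheap.
    dmsetup suspend /dev/mapper/thin
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/thin
    dmsetup create thin-snap --table "0 2097152 thin /dev/mapper/pool 1"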
There was also a host of minor issues, like packaging new dependencies, fixing some bugs in our Go packages, and handling a name conflict with an existing "Docker" package. We are also working on some things that are not strictly incompatibilities, but rather make Docker integrate better with Fedora. For instance, Josh Poimboeuf is working on a libvirt-lxc backend for Docker, and we're adding support for capability flags in Docker images (which many Fedora packages use). We are also working on Docker base images based on Fedora releases.
Tell us about a Linux container and how it compares to a virtual machine.
"Container" is a broad concept that can be used in several ways, but at its core a container is a form of kernel-based process isolation that is very cheap. The kernel sets up a set of new namespaces for the container's processes, so that from inside the container it looks as if it is the only thing running on the computer. You then essentially run a separate operating system (sharing only the kernel) inside the container.
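You can see the namespace part of this in action with a sketch like the following, using the unshare tool from util-linux (run as root; the hostname is made up for illustration):

    # Give a shell its own PID, mount, UTS, and network namespaces.
    unshare --pid --mount --uts --net --fork /bin/bash

    # Inside the new shell: changing the hostname only affects the
    # new UTS namespace, not the host.
    hostname container-demo

    # Remount /proc so process listings reflect the new PID namespace;
    # afterwards, ps shows only the processes in our "container".
    mount -t proc proc /proc
    ps aux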
It is possible to run a full Linux distribution inside a container, just as with virtualization, but that is not what you typically do with containers. Instead, you run just a single process inside the container, putting only the bare minimum the program needs into it. For instance, a container typically has no init/cron/udev, no hardware support, and no disk or network configuration; all of that is handled by the host operating system.
How does Docker improve working in the cloud?
Docker makes it very easy to create and deploy container images. Developers can very quickly, on their own machines, create and configure a container based on one of the widely available base images. They can then test the container locally and, when it works, create a standardized image that they can deploy anywhere in the cloud, running the exact same set of bits that they tested locally.
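That workflow might look something like the following sketch using the Docker command line (the image name, container id, and port are hypothetical):

    # Start from a base image and configure it interactively.
    docker run -i -t fedora /bin/bash

    # Snapshot the configured container as a reusable image
    # (the container id comes from `docker ps`; the name is invented).
    docker commit 3a09b2588478 myteam/webapp

    # Deploy the exact same bits anywhere Docker runs, detached
    # and with a container port exposed.
    docker run -d -p 8080 myteam/webapp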
Docker also makes it easier to share images with others in a standardized fashion, similar to git. A developer commits their container locally to an image and then pushes it to a remote repository, where everyone can pull it with a single command and use it as the basis for their work.
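In command form, that git-like exchange is roughly as follows (repository and image names are again hypothetical):

    # Publish the committed image to a remote repository.
    docker push myteam/webapp

    # Teammates fetch it with a single command and build on top of it.
    docker pull myteam/webapp
    docker run -i -t myteam/webapp /bin/bash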
Do you expect Docker to help create more tools for Linux platforms?
Hopefully we will see a wide variety of preconfigured services packaged as Docker images, making it easy to set up new services. I also think that having a standardized format for services will lead to more standardization around the higher-level orchestration of services: service discovery, connecting services to each other, configuring services, and so on.
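For example, standing up a new service from a preconfigured image could be as simple as this sketch (the image name is invented for illustration):

    # Fetch a hypothetical ready-made Redis image and start it;
    # the service runs with no manual configuration.
    docker pull example/redis
    docker run -d -p 6379 example/redis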