It's no secret: application containers have seen an enormous surge in interest and popularity over the past year or two. While Docker has been one driver of this trend, there are other contenders as well. Perhaps chief among them is Rocket.
To learn more about Rocket, and the App Container spec which underlies it, we caught up with Jonathan Boulle. Boulle is an engineer at CoreOS who is leading the development of Rocket and doing a lot of the coordination work around the App Container spec. Before working at CoreOS, Boulle worked on a similar project at Twitter that never quite saw the light of day, and he has been able to apply some of those ideas and experiences to his current work on Rocket.
Boulle is giving a talk on Rocket and the Application Container spec at this year's Southern California Linux Expo (SCALE 13X). In this interview, we asked Jonathan to tell us a little bit more about Rocket, containers and where they're all headed.
For those not familiar with the topic, how do you explain what a container is? Why is there a sudden interest in containers as an alternative to traditional virtualization?
First, I will give you my quick definition of what a container is, and then I'll explain why it's a slightly tricky question to answer!
The basic idea of a container is to package applications as self-contained units: instead of relying on any libraries or tools provided by the underlying operating system, the container has all of the dependencies of the application included alongside it. In a way, it's similar to the idea of a statically linked binary: the software in the container is self-contained and doesn't require anything else at runtime.
The other important aspect to containers is that they are generally constrained in some way from interacting with the outside world (i.e. the host operating system). For example, one common use case would be to apply a memory limit to a container so that if the application within it exceeds some limit then it won't affect the host on which the container is running.
Now, the historical problem with the term "container" is that it's a bit of a nebulous word and everyone has a subtly (or sometimes significantly) different definition. For example, in the Linux kernel there is actually no such concept as a container, and when people use the word it's usually to describe somewhat arbitrary combinations of underlying technologies like chroots, cgroups and namespaces. This ambiguity is one of the big motivating factors in creating the App Container spec: we really want to write down in specific detail what a container is and have the community at large be able to agree on it as a reference point.
The reason that containers are receiving so much attention lately is down to both efficiency and usability. Compared to traditional virtualization, containers are much more lightweight: they don't incur the same performance hit, and they don't need to deploy or manage an entire operating system. But perhaps even more importantly, containers make it really easy for developers to rapidly iterate and develop in "clean room" environments without the overhead of spinning up virtual machines. And since containers are so portable and self-contained, they can be integrated with continuous integration systems and deployed really easily to production. This simplified workflow and greater level of portability makes it much easier to create distributed, reliable, reproducible software architectures.
What is Rocket? How does it differ from Docker?
Rocket is a new runtime for application containers, and in particular it's an implementation of the "App Container spec," which is what we're proposing as an open and interoperable standard for containers. Rocket is designed foremost for simplicity, composability and security, and while it is at an early stage of development it is ultimately targeted for server environments with rigorous security and production requirements.
One of the key architectural differences from Docker is that Rocket only exists in the form of a CLI tool, `rkt`. There is no long-running monolithic daemon or API; instead, all operations are performed through independent invocations of rkt. This design means that we can run application containers directly under the process tree of rkt itself, instead of them being forked by a different daemon. The really important implication here is that any isolation or process management applied to rkt is applied to the applications in the container. This gives us first-class integration with init systems; for example, on a systemd host, any unit file constraints like memory limits are transitively applied to the applications that rkt is actually running. And when systemd tears down a rkt process, it is guaranteed to clean up the entire container.
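As an illustration of this transitive constraint model, here is a hypothetical systemd unit file (the service name, image name, and limit are invented for this example; they are not from the interview) showing how a resource limit placed on the rkt unit would also govern the applications inside the container:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=Example application container run via rkt

[Service]
# Because rkt execs the container's applications directly under its
# own process tree (no intermediate daemon), constraints on this unit
# apply transitively to those applications as well.
MemoryLimit=512M
ExecStart=/usr/bin/rkt run example.com/myapp

[Install]
WantedBy=multi-user.target
```

When systemd stops this unit, it tears down the whole cgroup, which is what guarantees the entire container is cleaned up.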
The other key ability this model unlocks is that we can provide easy in-place upgrades without interrupting existing containers: there's no Rocket daemon to restart, so an upgrade can be performed without needing to kill running containers. Of course even though there's no daemon it's still necessary to track some state, but we leverage the filesystem and process tree which means we can rely on the kernel to track and enforce this for us. For example, we use file locking of container directories to guarantee that rkt can be run multiple times simultaneously without different invocations stepping on each other.
And finally, one other important benefit of the CLI model is that we can start to tease apart the privileges required for different operations, rather than running everything through a daemon running as the superuser.
Another important architectural difference—which goes back to the design of the spec itself—is the idea that the first-class citizen in a container is a group of applications, not just a single one. This is the pattern that the Kubernetes team at Google did a great job of explaining with their concept of a pod. By defining the basic deployable unit as a group rather than a single application it allows us to support a lot of common use cases really easily within a single Rocket container, so users don't have to worry about setting up different links between containers and so forth.
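To make the pod idea concrete, here is a rough sketch of a manifest declaring a group of applications as the deployable unit, in the style of the early appc draft. The field names and values are illustrative only; consult the spec itself for the authoritative schema:

```json
{
  "acVersion": "0.1.0",
  "acKind": "ContainerRuntimeManifest",
  "apps": [
    { "app": "example.com/webserver", "imageID": "sha256-..." },
    { "app": "example.com/log-shipper", "imageID": "sha256-..." }
  ]
}
```

Both applications run together inside one container, sharing its lifecycle and isolation boundary, so no inter-container links need to be configured.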
Why is it important for containers to have an "App Container spec?" Who gets to decide what that spec looks like?
One of the really important things about having a specification is that until now there hasn't been a standardized and agnostic definition of exactly what a container is. Different container runtimes—like Docker and LXC—have had their own ideas and implementations, which in a sense are de facto standards, but there has never been something designed from scratch or formally described in an open way. This means that not only is it very difficult for these different systems to work together, but that anyone building against existing tools is at risk of having their code break at any time when the upstream software changes. By decoupling a container specification from implementation, codifying it in a canonical form, and choosing the right abstractions, we can create something that's truly portable, composable and interoperable, which is very powerful.
At CoreOS we are big believers in open source, and we absolutely want the specification to be something that's community owned and driven. While first crafting the spec, we sought and received a lot of very valuable input from engineers at companies like Google, Mesosphere and Pivotal. Since announcing and releasing it publicly we've received dozens of contributions from these and other developers around the world. This is a highly collaborative project. As the spec stabilizes and more implementations emerge, we are going to be looking to create a more formal structure around the appc organization to guide its future.
Do you think that it's important that the open source community converge around a single specification for containers? Is competition a good thing in this space?
Competition is absolutely a great thing in the open source community; it drives a lot of new features and can prevent engineers from getting complacent as they're developing software. But as I expressed earlier, a specification which the community at large can agree upon and build around is very powerful because it allows a whole ecosystem to emerge around portable applications. We are not expecting the industry to converge overnight, but from the very beginning of the App Container spec it has been foremost in our minds that we want and hope to see varied and alternative implementations.
Something else I want to mention here is that interoperability with existing common implementations is also very important to us. You can definitely expect to see some interesting integrations in the near future.
Tell us a little bit about the Rocket community. Who is contributing code, and what does the collaboration process look like?
Rocket development is done exclusively in the open, through GitHub and the mailing list. In terms of contributions, we've had around fifty developers external to CoreOS submitting patches and improvements—in some cases really major functionality—which is great to see.
We encourage people to use GitHub issues for tracking most things, where we have a lot of healthy discussion about new features and changes. The general process is that someone will submit a feature request or patch, various other interested parties from the community will comment and engage in discussion, and then we will come to a consensus about whether it is something we want to do. Since Rocket is such a young project, this has been happening very quickly so far. For larger proposals we find that the commenting system on Google Docs offers a better experience for tracking feedback and discussion, so in those cases we will use GitHub and the mailing list to announce the document and then open it up for everyone to post comments.
Rocket is just a few months old; what do you see as the roadmap going forward? What is the status of the project now, and how fast are things maturing?
The project is still at a very early stage, but development is proceeding rapidly and we're really happy with how it's maturing. Rocket is already fully capable of running applications in basic configurations, and more advanced features—for example, our networking plugin system—are shaping up nicely and will be available in releases soon.
We have a lot of the key pieces in place like image discovery, signature validation, and the file-based locking framework to coordinate multiple instances of Rocket. On the near term roadmap, we expect to land a few important features like a more efficient on-disk filestore, robust indexing of images, and overlayfs support for container filesystems.
Outside of some of these specific technical features, one of the key things steering Rocket's development right now is the evolution of the appc spec itself. As it develops towards a stable release we will continue making regular updates to Rocket to keep it in sync with the latest changes in the specification.
How can people learn more and get involved with Rocket?
We would love to have more people from the community involved with Rocket! The best place to start is the GitHub repository, where we have instructions for how to get started with Rocket and a growing pool of documentation. We also have an open mailing list dedicated to Rocket at firstname.lastname@example.org, and for real-time discussions we generally use the #coreos IRC room on Freenode.
For those eager to start hacking, I encourage them to check out the "Help Wanted" tag for issues on GitHub; we have a variety of issues there, from simple bug fixes to bigger features, that we'd love to have help with.
For those interested in a little more context about why we created Rocket and the appc spec, check out the initial blog post.