The Open Container Project and what it means

Shipping containers stacked in a yard. (Image: Lucarelli via Wikimedia Commons, CC BY-SA 3.0)

Yesterday saw the announcement of the Open Container Project in San Francisco. It is a Linux Foundation project that will hold the specification and basic run-time software for using software containers. This is all "A Good Thing™."

The list of folks signing up to support the effort contains the usual suspects and this too is a good thing: Amazon Web Services, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Intel, Joyent, the Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat, and VMware. (Disclosure: I work for HP.)

A quick way to sort the space in your head: first there were virtual machines. Virtual machines are a way to stack more compute workloads onto computers that have excess capacity. While VMware made the process easy on Intel architectures, the idea goes back to IBM mainframes in the 1970s and 1980s. Virtual machines in a datacenter need to be managed and orchestrated. Think of management as a per-machine (real or virtual) process for provisioning (what's running) and for starting and stopping individual machines, and orchestration as a way to work with a collection or cluster of machines (real or virtual) together.

Now, bootstrapping entire operating systems takes time and space. So, what if there were a way to reduce the footprint and time the process requires? Essentially, what if we could pack application workloads onto compute resources more efficiently and more quickly? Enter containers. The container is an apt metaphor. We saw what container standardization did for the transportation and goods distribution industry: a container could be stacked on ships, trains, and trucks (and stored in warehouses) depending upon the speed, cost, and access needs of its contents.
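To make the efficiency point concrete, here is a minimal sketch of a container image definition in Docker's Dockerfile format (the base image tag and the `myapp` binary are illustrative, not from the announcement): instead of bootstrapping a whole operating system, the image layers a single application onto a small shared base.

```dockerfile
# Illustrative sketch: package one application, not an entire OS.
# 'myapp' is a hypothetical statically linked application binary.

# Start from a small base image (a few megabytes) rather than a full OS install.
FROM alpine:3.3

# Copy just the application binary into the image.
COPY myapp /usr/local/bin/myapp

# Document the port the application is assumed to listen on.
EXPOSE 8080

# The single process the container runs when started.
CMD ["/usr/local/bin/myapp"]
```

Building and running an image like this (`docker build -t myapp .` then `docker run -p 8080:8080 myapp`) typically takes seconds, versus the minutes a full virtual machine needs to boot, which is the packing efficiency described above.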

Google has been exploring and using containers for a decade now. It released Kubernetes a while ago as a way to collaborate on innovation in the space. The Docker project was launched a couple of years ago, and the company formed around the open source licensed project continues to grow rapidly, but it is shifting its container definition as it explores business models. CoreOS started in a similar timeframe and handles container management and orchestration in a slightly different way. CoreOS began lobbying for agreement on a smaller container definition in late 2014, and put a stake in the ground with appc. Cloud Foundry (now housed by the Cloud Foundry Foundation) has a different container orchestration plan (Warden) and is evolving that platform (to Garden).

And this is where things get messy. With lots of investment pouring into the container space (e.g., Docker has raised US$150 million, CoreOS US$20 million), new micro-Linux distributions built for containers popping up all over the place (CoreOS, RancherOS, Photon, Clear Linux), and vendors new and old loudly marketing their solutions for virtualization, cloud, and containerization, serious fragmentation becomes a real possibility.

The cloud computing space must also participate in the discussion. Cloud computing demonstrated that one could blur the lines of virtual machines (compute, software-defined storage, and software-defined network architectures) across collections of machines within the data center (private clouds) and outside it (public clouds). Virtual machines are still considered a more secure solution than containers today, but containers as first-order participants in a cloud isn't a stretch, and there is already a lot of experimentation.

Declaring a standard container format, and providing reference software for running such a standardized container, becomes an urgent and important step. Parking the intellectual property for the specification and runtime with a trusted non-profit well understood by all participants is even more important. As long as such open source licensed property is held by a single vendor, it is at risk, regardless of the owner's best intentions to collaborate. Investors will drive small companies to make proprietary decisions. Large entrenched vendors often make such decisions. (All we would need is a fairly large proprietary company to come along and buy one of the smaller key players and its intellectual property as its "container strategy," and we would once again be stuck with a now wealthy ex-CTO complaining that their community has been done wrong, and with the complete fragmentation of this corner of the industry at a time when it wants to innovate quickly.)

There is still a lot of work to be done, but the Linux Foundation is in the ideal position to care for and feed the efforts. Monday's announcement is good for the industry.

If you're interested in the container world, I encourage you to check out some of the following resources to learn more:

I am a technical executive, a founder, a consultant, a writer, an international business person, a systems developer, a software construction geek, and a standards diplomat. I love to build teams and products that make customers ecstatic. I have worked in the IT industry since 1980 as both customer and vendor.


actually the container idea goes way back too, as does the VM. we had chroots ages ago; together with the grsec GNU/Linux patch it was a container-like thing. there's also OpenVZ, and on BSD there's jails.

i find it a bit... disturbing that everyone nowadays thinks "hey, that's new". it is not, and there are a lot of existing projects that could really use some sponsoring too, as they already have solid solutions, and not all of them are proprietary.

Yes. Completely true. I'm a little uncomfortable with the meme that somehow Docker invented containers. Far from the truth. Same for the [W|G]arden container orchestration in Cloud Foundry. There's lots of foundational work here.

In reply to Oliver Leitner (not verified)

What about Solaris Containers, or is that a different thing?

In reply to Oliver Leitner (not verified)

"A container is a nice metaphor." - So to picture them, just don't use barrels that conjure up images of toxic waste instead (even if their color can't be seen)...

Agreed. I imagine the editors found an image they could use (CC licensed appropriately) and it was as close as they could get.

In reply to Dime (not verified)

How is this any different to OpenStack?

OpenStack is a way (presently) to manage compute, storage, and network resources. For the compute resources, it works at a machine level (virtual or real). Containers pack workloads more tightly. There is a project within the OpenStack world (Magnum) to treat containers as first-class citizens.

In reply to pd (not verified)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.