Senior software engineer Jérôme Petazzoni on the breathtaking growth of Docker

For those of us who are veterans of the open source software (OSS) community, certain technologies come along in our lifetime that revolutionise how we consume and manage our technology. During the early 2000s, high availability (HA) and clustering allowed Linux to really stack up in the datacentre.

In the same way that power management and virtualisation have allowed us to get the maximum engineering benefit from our servers, the problem of how to push virtualisation further has remained with us. Docker's open sourcing in 2013 aligns itself with these pivotal moments in the evolution of open source, providing the extensible building blocks that allow us as engineers and architects to extend distributed platforms like never before, while managing and securing the underlying technology to provide defence in depth, keeping in mind Dan Walsh's mantra: containers do not contain.

Jérôme Petazzoni, a senior software engineer at Docker, is speaking at OSCON 2014, and I had the opportunity to put to him some questions that I thought would make interesting reading for the Opensource.com audience. Thanks to Jérôme for taking the time to answer them, and I urge as many of you as possible to attend his talk, along with the other keynotes and breakout sessions throughout OSCON.

The transition from dotCloud to Docker, and the breathtaking growth curve to the release of 1.0, has seen Docker really demonstrate how you can take good engineering and great code and deliver it. Openly. You worked hard in the run-up to 1.0 to get best practices in place to ensure the code you put out was stable. Many projects go through a curve where they move from a release tree to suddenly having to start thinking about development methodologies and testing environments.

How have you found this in the short evolution that Docker has gone through to get to 1.0? What have you learnt from this that will help you in future releases?

You said it: "testing environments." We didn't wait for Docker 1.0 to have testing at the core of the development process! The Docker Engine had unit tests very early (as early as the 0.1.0 version in March 2013!), and integration tests followed shortly. Those tests were essential to the overall quality of the project. One important milestone was the ability to run Docker within Docker, which simplified QA by leveling the environment in which we test Docker. That was in Docker 0.6, in August 2013, almost one year ago. That shows how much we care about testing and repeatability.


It doesn't mean that the testing environment is perfect and won't ever be changed. We want to expand the test matrix, so that every single code change gets to run on multiple physical and virtual machines, on a variety of host distributions, using all the combinations of runtime options of the Docker Engine. That's a significant endeavor, but a necessary one. It fits the evolutionary curve of Docker, going from being "mostly Ubuntu only" to the ubiquitous platform that it is today.
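
As a rough illustration of the Docker-in-Docker approach Jérôme describes, here is a minimal Python sketch of the idea: start a disposable, privileged container running its own Docker daemon, run checks against it, then throw the whole thing away so every run starts level. The docker:dind image, the container name, and the placeholder test command are assumptions for the sake of the example, not a description of Docker's actual test harness.

```python
import subprocess
import time


def run_in_disposable_docker(test_cmd, image="docker:dind", name="dind-test"):
    """Run a command against a throwaway Docker-in-Docker daemon."""
    # Start a privileged container that runs its own, nested Docker daemon.
    subprocess.run(
        ["docker", "run", "--privileged", "-d", "--name", name, image],
        check=True,
    )
    try:
        # Wait until the nested daemon answers before using it.
        for _ in range(30):
            probe = subprocess.run(
                ["docker", "exec", name, "docker", "info"],
                capture_output=True,
            )
            if probe.returncode == 0:
                break
            time.sleep(1)
        else:
            raise RuntimeError("nested Docker daemon never came up")

        # Run the checks inside the nested environment; whatever they
        # create or break stays inside this one container.
        subprocess.run(["docker", "exec", name] + test_cmd, check=True)
    finally:
        # Discard the whole environment so the next run starts clean.
        subprocess.run(["docker", "rm", "-f", name], check=True)


if __name__ == "__main__":
    # Placeholder check: confirm the nested daemon can run a container.
    run_in_disposable_docker(["docker", "run", "--rm", "alpine", "true"])
```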

The announcement of the Official Repository program really added massive value and was a vote of confidence in how partners can work with you. When building the program, how did you ensure that contributing partners could engage and maintain quality control over their contributions openly?

In order to participate in the Official Repository program, partners had to host their source code in a public GitHub repository along with the associated Dockerfile. This allowed the Docker team to review it ahead of declaring it "Official." In addition, participating partners must provide instructions on how users can submit pull requests to continuously improve the quality of the Official Repositories in the program.

Over the last decade and more, the community and the supporting companies involved in open source have shown enterprise the value of adopting Linux as a mainstream workload for applications and engineering solutions. Docker is now a staple part of a sea change in how we take the next steps in service and application provision, probably more so than at any time in recent history. Do you think that enough enterprises understand the difference between the more lightweight and agile container paradigm shift Docker brings, compared to, say, heavyweight (and expensive) technologies such as VMware?

We definitely see an increasing number of enterprises understanding this shift. Don't get me wrong: they are not abandoning virtual machines to replace them with containers. Both approaches (heavyweight and lightweight virtualization) are complementary. Embracing containers can be a huge paradigm shift, but it doesn't have to be. When discussing Docker best practices with developers and operations teams, we often present two approaches side by side: evolutionary and revolutionary. The former requires minor changes to existing processes, yet provides significant improvements; the latter, as the name implies, is more drastic, both in implementation cost (for existing platforms) and in realized savings. Our users and customers appreciate both possibilities and pick the one that best suits their requirements.

Docker has been very open and responsible in making sure that you provide your users with complete transparency around the security concepts in the Docker architecture. Do you see an opportunity, in future releases, for a "Super Docker" container that allows you to enforce mandatory controls, reducing your attack surface, making the most of the advantages of namespaces, and also limiting (and auditing) access to a control socket?

It's a possibility. I personally believe that security is not binary. There is no single feature that will grant you absolute protection. You have to deploy multiple additional layers: security modules (like SELinux or AppArmor), kernel isolation features (like the user namespace), common UNIX best practices (like "don't run stuff as root when it doesn't need it"), and of course you have to stay on top of the security advisories for the libraries and frameworks that you are using in your application components.

A new "Super Docker" container would probably not add a lot of security; however, it could be used to group multiple related containers in the same security context, if they need advanced or efficient ways to share files or data.
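
To make the layering idea concrete, here is a hedged sketch, again in Python driving the plain docker CLI, of what stacking several independent restrictions on a single container might look like. The particular flags and the alpine image are illustrative assumptions (and some of these options arrived in Docker releases after this interview); the point is simply that each one is a separate layer rather than a single on/off switch.

```python
import subprocess

# Each flag below is one independent layer of restriction on top of the
# namespace isolation a container already gets; none of them alone is
# "the" security feature.
hardened_run = [
    "docker", "run", "--rm",
    "--user", "1000:1000",                       # don't run as root inside the container
    "--cap-drop", "ALL",                         # drop every Linux capability...
    "--cap-add", "NET_BIND_SERVICE",             # ...then add back only what is needed
    "--read-only",                               # root filesystem mounted read-only
    "--security-opt", "no-new-privileges:true",  # block privilege escalation via setuid binaries
    "alpine", "id",                              # illustrative image and command
]

subprocess.run(hardened_run, check=True)
```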

One of the reasons I personally am excited about Docker is not just the opportunities it affords us in our ability to offer stable services to customers and to build the platforms we want, but in taking Linux securely to new areas that we already "tickle" but where Solaris and AIX still remain entrenched. The use of Linux in the classified and government spaces is tied down by Common Criteria certification to determine Evaluation Assurance Levels. Docker is a game changer; it is actually one of the most prominent and important coming-of-age moments in the Linux story, and it has an opportunity to tear up the rule book. Are governments aware of the opportunity Docker gives them, and if not, is this something that you're going to engage with in the next steps of Docker as an organisation?

The government market is very aware of Docker and has already reached out to us. We have been in touch with government organizations, including those within the intelligence community.

You personally come from a service provider background, having helped organisations with their hosting and private cloud needs at your own company, Enix. You therefore know that many managed hosting companies now looking to the cloud have already built clouds that people don't want to consume services on, simply because they felt they had to have a cloud. In particular, those companies have already spent a lot of their budgets on proprietary technologies to help them get to the cloud. Do you see many of them now knocking on your door, realising that their customers have a need to look to Docker?

Many channel partners recognize the portability benefits of Docker and are actively developing Docker-based practices to help their customers abstract away technological dependencies on any specific service provider.

Their past investments in private clouds are still relevant. If they have deployed (for instance) OpenStack, they can already leverage the multiple integrations available between OpenStack and Docker, through Nova, Heat, and soon Solum. Even if they built their own in-house IaaS solution, they can still deploy Docker on top of it, to use it for themselves or to offer it to their customers. Of course, native approaches (i.e. selling "Docker-as-a-Service") will require additional integration work, but Docker doesn't reduce the value of their platform: it complements it and unlocks new revenue channels.

Moving forward, what are your hopes and ambitions for Docker, not taking into account Solomon or Ben knocking on your door with new features that always tend to upend the whole development environment?

The Docker platform offers a new approach for IT teams to build, ship, and run distributed apps. Our ambition is to grow a great, sustainable business that nurtures an active community ecosystem and provides great solutions to customers who are moving to this new world of microservices.

View the complete collection of OSCON speaker interviews.

Richard Morrell
I am a Red Hat staffer and an OSS veteran: I first started working with Red Hat in 1997 and am a former Linuxcare, Linux.com, and VA Linux staffer, part of the founding crew at Zimbra, and founder of SmoothWall, the evergreen Linux firewall distro. Podcaster, writer, author, and dad to two amazing boys. Personal time is spent off grid, designing solar equipment for technical use on the road and doing bushcraft.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.