
Why the operating system matters in a containerized world

Posted 19 Aug 2014 by Gordon Haff (Red Hat)

Applications running in Linux containers are isolated within a single copy of the operating system running on a physical server. This approach stands in contrast to hypervisor-based virtualization in which each application is bound to a complete copy of a guest operating system and communicates with the hardware through the intervening hypervisor. As a result, containers consume very few system resources such as memory and impose essentially no performance overhead on the application.

One of the implications of using containers is that the operating system copies running in a given environment tend to be relatively homogeneous, because they essentially act as a common, shared substrate for all the applications running above them. Specific dependencies can be packaged with the application (within an isolated process in userspace), but the kernel is shared among all the containers running on a system.
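
As a minimal, hedged sketch of that split, the following Go program (Linux-only, and it must be run as root) launches a shell in fresh UTS, PID, and mount namespaces. The child gets a private view of hostname, process IDs, and mounts, yet it still runs on the host's one shared kernel; the shell and the particular flag set are illustrative choices, not how any specific container runtime is configured.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Run a shell in new UTS, PID, and mount namespaces.
	// The child still shares the host kernel; only its view
	// of hostname, process IDs, and mounts is isolated.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```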

The operating system is therefore not being configured, tuned, integrated, and ultimately married to a single application as was the historic norm, but it's no less important for that change. In fact, because the operating system provides the framework and support for all the containers sitting above it, it plays an even greater role than it did under hardware server virtualization, where the host was a hypervisor. (Of course, in the case of KVM for example, the hypervisor makes use of the operating system for the operating system-like functions that it needs, but there's nothing inherent in the hypervisor architecture requiring that.)

All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation than in the case where a hypervisor is handling some of those tasks. We’re also moving toward a future in which the operating system explicitly deals with multi-host applications, serving as an orchestrator and scheduler for them. This includes modeling the app across multiple hosts and containers and providing the services and APIs to place the apps onto the appropriate resources. In other words, Linux is evolving to support an environment in which the “computer” is increasingly a complex of connected systems rather than a single discrete server.
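
As one concrete, hedged illustration of that responsibility: resource isolation on Linux is implemented by kernel control groups. The sketch below assumes a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory and uses a hypothetical group named "demo" to cap the current process at 64 MB (root required).

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Hypothetical cgroup named "demo"; assumes the v1 memory
	// controller is mounted at /sys/fs/cgroup/memory. Needs root.
	cg := "/sys/fs/cgroup/memory/demo"
	must(os.MkdirAll(cg, 0755))
	// Cap memory at 64 MB for every task placed in the group.
	must(os.WriteFile(filepath.Join(cg, "memory.limit_in_bytes"), []byte("67108864"), 0644))
	// Move the current process into the group.
	must(os.WriteFile(filepath.Join(cg, "tasks"), []byte(fmt.Sprintf("%d", os.Getpid())), 0644))
	fmt.Println("process", os.Getpid(), "is now limited to 64 MB")
}
```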

In such an environment, it's also increasingly important to have a mechanism to portably compose applications. The general concept is nothing particularly new. Throughout the aughts, as an industry analyst, I spent a fair bit of time writing research notes about the various virtualization and partitioning technologies available at the time. One such set of technologies was "application virtualization." As a category, application virtualization remained something of a niche, but it's been re-imagined of late. Technologies including Docker are taking advantage of the container model to create something that looks an awful lot like what application virtualization was intended to accomplish: compose applications as a set of layers and move them around an environment with low overhead.
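
To make that "set of layers" idea concrete: container images are typically built from stacked filesystem layers, and a union filesystem such as overlayfs is one common way to stack them. The hedged sketch below (all four directory names are hypothetical and must exist beforehand; Linux-only, root required) mounts a writable application layer over a read-only base layer.

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Stack a writable "app" layer over a read-only "base" layer.
	// All four directories are hypothetical and must exist first.
	opts := "lowerdir=/tmp/base,upperdir=/tmp/app,workdir=/tmp/work"
	if err := syscall.Mount("overlay", "/tmp/merged", "overlay", 0, opts); err != nil {
		panic(err)
	}
	fmt.Println("layered view mounted at /tmp/merged")
}
```

After the mount, reads fall through to the base layer while writes land only in the upper layer, which is what lets many applications share one base image cheaply.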

Yes, there is absolutely an ongoing abstraction of the operating system; we're moving away from the handcrafted and hardcoded operating system instances that accompanied each application instance, just as we previously moved away from operating system instances lovingly crafted for each individual server. And, yes, applications that depend on this sort of extensive operating system customization to work are not a good match for a containerized environment. One of the trends that makes containers so interesting today, in a way that they were not (beyond a niche) a decade ago, is the wholesale shift toward more portable and less stateful application instances. The operating system's role remains central; it's just that you're using a standard base image across all of your applications rather than taking that standard base image and tweaking it for each individual one.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, and much more lightweight. Their placement and provisioning becomes more automated. But they still need to run on something. Something solid. Something open. Something that's capable of evolving for new requirements and new types of workloads. And that something is a (Linux) operating system.


7 Comments

storix
Newbie

The importance of the Operating System cannot be overstated. Very well written article.

Curious George

Goodness gracious, what a load of drivel! Most applications are not running on hand-crafted or seriously tweaked operating systems.

mimmus

A great many real-world applications are heavily tied to the operating system or seriously tuned via their config files.
Simply running Tomcat with default parameters in a Docker container (as you can see in a lot of examples around) is too simplistic an approach.

ghaff
Open Minded

It's really a mix today and a matter of what type of workloads you deal with. There are a fair number of different kernel versions, different configurations/tunings, and different OSs among traditional enterprise workloads. For new styles of workloads (microservices, etc.), not so much. That's why I expect a mix of hardware virtualization and containers for the foreseeable future.

stites
Open Source Champion

What is a container? Is it the same as a chroot jail?

------------------------------
Steve Stites

ghaff
Open Minded

BSD jails are often cited as the first example of a container. I describe containers in more detail in this post: http://bitmason.blogspot.com/2013/09/what-are-containers-anyway.html
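
For anyone curious about the distinction, here's a rough, hedged sketch in Go (the /tmp/rootfs path is hypothetical and would need to hold a minimal root filesystem; root required). A chroot jail remaps only a process's view of the filesystem; containers layer kernel namespaces and cgroups on top of that old idea, isolating process IDs, hostname, networking, and resource usage as well.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// chroot changes only the filesystem root; processes, the
	// network stack, and the hostname stay shared with the host.
	if err := syscall.Chroot("/tmp/rootfs"); err != nil {
		panic(err)
	}
	if err := os.Chdir("/"); err != nil {
		panic(err)
	}
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```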

JFK

As with all emerging technologies, people will find fault with explanations of the movement. Thank you for helping to start the education process, which helps light the way for new ways of thinking and eventually drowns out the negative.


Gordon Haff is Red Hat’s cloud evangelist, is a frequent and highly acclaimed speaker at customer and industry events, and helps develop strategy across Red Hat’s full portfolio of cloud solutions. He is the author of Computing Next: How the Cloud Opens the Future in addition to numerous other publications. Prior to Red Hat, Gordon wrote hundreds of research notes, was frequently quoted in publications like The New York Times on a wide range of IT topics, and advised clients on product and
