Why the operating system matters even more in 2017

The operating system isn't going away any time soon.

Operating systems don't quite date back to the beginning of computing, but they go back far enough. Mainframe customers wrote the first ones in the late 1950s, with operating systems that we'd more clearly recognize as such today—including OS/360 from IBM and Unix from Bell Labs—following over the next couple of decades.

An operating system performs a wide variety of useful functions in a system, but it's helpful to think of those as falling into three general categories.

First, the operating system sits on top of a physical system and talks to the hardware. This insulates application software from many hardware implementation details. Among other benefits, this provides more freedom to innovate in hardware because it's the operating system that shoulders most of the burden of supporting new processors and other aspects of the server design—not the application developer. Arguably, hardware innovation will become even more important as machine learning and other key software trends can no longer depend on CMOS process scaling for reliable year-over-year performance increases. With the increasingly widespread adoption of hybrid cloud architectures, the portability provided by this abstraction layer is only becoming more important.

Second, the operating system—specifically the kernel—performs common tasks that applications require. It manages process scheduling, power management, root access permissions, memory allocation, and all the other low-level housekeeping and operational details needed to keep a system running efficiently and securely.
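
To make those kernel services concrete, here is a minimal sketch in Go, assuming a Linux system (the program and values are purely illustrative): an ordinary process asks the kernel for its own process ID, a lower scheduling priority, and a block of anonymous memory.

```go
// A minimal sketch (Linux) of an application leaning on the kernel for
// everyday services: querying its process ID, lowering its scheduling
// priority, and mapping anonymous memory. Illustrative only.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// The kernel assigns and tracks process IDs as part of process management.
	fmt.Println("running as PID", os.Getpid())

	// Ask the scheduler to deprioritize this process (nice value 10).
	if err := syscall.Setpriority(syscall.PRIO_PROCESS, 0, 10); err != nil {
		fmt.Fprintln(os.Stderr, "setpriority:", err)
	}

	// Request 1 MiB of anonymous memory; the kernel handles the actual
	// page allocation and bookkeeping behind this mapping.
	mem, err := syscall.Mmap(-1, 0, 1<<20,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
	if err != nil {
		fmt.Fprintln(os.Stderr, "mmap:", err)
		return
	}
	defer syscall.Munmap(mem)

	mem[0] = 42 // touching the mapping faults in a page via the kernel
	fmt.Println("mapped", len(mem), "bytes; first byte =", mem[0])
}
```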

Finally, the operating system serves as the interface to both its own "userland" programs—think system utilities such as logging, performance profiling, and so forth—and applications that a user has written. The operating system should provide a consistent interface for apps through APIs (application programming interfaces) based on open standards. Commercially supported operating systems also bring with them business and technical relationships with third-party application providers, as well as channels for adding other trusted content to the platform.
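
As a small illustration of using an OS-provided userland service through a standard interface, the following sketch (assuming a Linux or Unix system with a running syslog daemon, and a hypothetical application tag) sends a message to the system logger using Go's standard library.

```go
// A small sketch of an application using an OS userland service through a
// standard library interface: sending a message to the system logger.
// Assumes a Unix-like system with a syslog daemon; names are illustrative.
package main

import (
	"log"
	"log/syslog"
)

func main() {
	// Connect to the local system logger with an informational priority
	// and a tag identifying this (hypothetical) application.
	w, err := syslog.New(syslog.LOG_INFO|syslog.LOG_USER, "example-app")
	if err != nil {
		log.Fatal("could not reach the system logger: ", err)
	}
	defer w.Close()

	// The message lands wherever the OS routes it (journald, /var/log/...),
	// without the application needing to know those details.
	if err := w.Info("example-app started"); err != nil {
		log.Fatal("write to syslog failed: ", err)
	}
}
```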

The computing technology landscape has changed considerably over the past couple of years. This has had the effect of shifting how we think about operating systems and what they do, even as they remain as central as ever. Consider changes in how applications are packaged, the rapid growth of computing infrastructures, and the threat and vulnerability landscape.

Containerization

Applications running in Linux containers are isolated within a single copy of the operating system running on a physical server. This approach stands in contrast to hypervisor-based virtualization, in which each application is bound to a complete copy of a guest operating system and communicates with the hardware through the intervening hypervisor. In short, hypervisors virtualize the hardware resources, whereas containers virtualize the operating system resources. As a result, containers consume fewer system resources, such as memory, and impose essentially no performance overhead on the application.

Containerization leans heavily on familiar operating system concepts. Containers build on the Linux kernel's process model as augmented by additional operating system features, such as namespaces (e.g., process, network, user), cgroups, and permission models to isolate containers while giving the illusion that each is a full system.
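
The sketch below, which assumes a Linux system and root privileges (or the addition of a user namespace), illustrates that kernel plumbing directly: it launches a shell in new UTS and PID namespaces so that a hostname change inside it never affects the host. It demonstrates the mechanism containers build on, not how any particular container runtime is implemented.

```go
// A minimal sketch (Linux; requires root or user namespaces) of the kernel
// primitives containers build on: running a shell in its own UTS and PID
// namespaces so a hostname change inside it never touches the host.
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Change the hostname inside the new UTS namespace, then show it along
	// with the shell's PID, which is 1 inside the new PID namespace.
	cmd := exec.Command("/bin/sh", "-c",
		"hostname demo-container && echo inside: $(hostname), pid $$")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	// Ask the kernel for fresh UTS and PID namespaces for the child process.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}

	if err := cmd.Run(); err != nil {
		log.Fatal("namespaced command failed: ", err)
	}
}
```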

What has made containers so interesting recently is the addition of mechanisms to portably compose applications as a set of layers and to move them around an environment with low overhead. In this respect, containers are the realization of a general concept that's been around for a while in various guises but never really went mainstream. (Think application virtualization, for example.) One important change today is the greatly increased role of open source and open standards. For example, the Open Container Initiative, a collaborative project under the Linux Foundation, is focused on creating open industry standards around the container format and runtime.

Also significant is that container technology, together with software-defined infrastructure (such as OpenStack), is being built into and engineered together with Linux. The history of computer software clearly shows that integrating technologies into the operating system tends to lead to much wider adoption and a virtuous cycle of ecosystem development around those technologies—think TCP/IP in networking or any of a wide range of security-related features.

Scale

Another significant shift is that we increasingly think in terms of computing resources at the scale of the datacenter rather than the individual server. This transition has been going on since the early days of the web, of course. Today, however, we're seeing high-performance computing "grid" technologies reimagined both for traditional batch workloads and for newer services-oriented styles.

Dovetailing neatly with containers, applications based on loosely coupled "microservices" (running in containers)—with or without persistent storage—are becoming a popular cloud-native approach. This approach, although reminiscent of Service Oriented Architecture (SOA), has demonstrated a more practical and open way to build composite applications. Through a fine-grained, loosely coupled architecture, microservices allow each part of an application to reflect the needs of a single, well-defined function. Rapid updates, scalability, and fault tolerance can all be addressed individually in a composite application, whereas in traditional monolithic apps it's much more difficult to keep changes to one component from having unintended effects elsewhere.
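
As a rough sketch of what such a single-purpose service can look like, the following Go program (with hypothetical endpoint names) exposes one well-defined function over HTTP plus a health check, so a scheduler can update, scale, or restart it independently of the rest of a composite application.

```go
// A minimal sketch of a single-purpose "microservice": one small, well-defined
// function exposed over HTTP, intended to be packaged in a container image and
// scheduled independently. Endpoint names and port are hypothetical.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// The one business function this service owns.
	mux.HandleFunc("/greeting", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"message": "hello"})
	})

	// A health endpoint so a scheduler (e.g., Kubernetes) can restart or
	// route around this instance, giving per-service fault tolerance.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	log.Println("greeting service listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```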

One important aspect of this shift from the perspective of the operating system is that it increasingly makes sense to talk about a "computer" as an aggregated set of datacenter resources. Of course, there are still individual servers under the hood, and they still must be operated and maintained—albeit in a highly automated and hands-off way. However, container scheduling and management, rather than the individual server, effectively becomes the relevant abstraction for where workloads run and how multi-tier applications are composed.

The Cloud Native Computing Foundation (CNCF), also under the Linux Foundation, was created to "drive the adoption of a new computing paradigm that is optimized for modern distributed systems environments capable of scaling to tens of thousands of self-healing multi-tenant nodes." One project under the CNCF is Kubernetes, an open source container cluster manager originally designed by Google, but now with a wide range of contributors from Red Hat and elsewhere.

Security

All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. In fact, the operating system shoulders a greater responsibility for providing security and resource isolation in a containerized and software-defined infrastructure world than it does where dedicated hardware or other software handles some of those tasks. Linux has benefited from a comprehensive toolbox of security-enforcing functionality built using the open source model, including SELinux for mandatory access controls, a wide range of userspace and kernel-hardening features, identity management and access control, and encryption.

Today, however, information security must also adapt to a changing landscape. Whether it's providing customers and partners with access to certain systems and data, allowing employees to use their own smartphones and laptops, using applications from Software-as-a-Service (SaaS) vendors, or taking advantage of pay-as-you-go utility pricing models from public cloud providers, there is no longer a single perimeter.

The open development model allows entire industries to agree on standards and encourages their brightest developers to continually test and improve technology. The groundswell of companies and other organizations providing timely security feedback for Linux and other open source software provides clear evidence of how collaborating within and among communities to solve problems is the future of technology. Furthermore, the open source development process means that when vulnerabilities are found, the entire community of developers and vendors can work together to update code, security advisories, and documentation in a coordinated manner.

These same processes and practices apply across hybrid cloud infrastructures as the role of the operating system evolves and expands to include new capabilities such as Linux containers. Furthermore, when components are reused in the form of microservices and other loosely coupled architectures, maintaining trust in the provenance of those components and their dependencies when composing applications becomes more important, not less.

Some things change, some don't

Priorities associated with operating system development and operation have certainly shifted. The focus today is far more on automating deployments at scale than on customizing, tuning, and optimizing single servers. At the same time, threats to a security perimeter that is no longer clearly defined are increasing in both pace and pervasiveness, requiring a systematic understanding of the risks and of how to mitigate breaches quickly.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, much more robust, and much more lightweight. Their placement, provisioning, and securing must become more automated. But they still need to run on something. Something solid. Something open. Something that's capable of evolving for new requirements and new types of workloads. And that something is a (Linux) operating system.

Gordon Haff is a technology evangelist at Red Hat and a frequent and highly acclaimed speaker at customer and industry events. He focuses on areas including Red Hat Research, open source adoption, and emerging technologies.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.