What are cloud-native applications?

A decade or so into the cloud revolution, we finally have some solid ideas about the best ways to take full advantage of new types of infrastructure.

As cloud computing was starting to hit its stride six or seven years ago, one of the important questions people were struggling with was: "What do my apps have to look like if I want to run them in a public, private, or hybrid cloud?"

There were a number of attempts to answer this question at the time.

One popular metaphor came from a presentation by Bill Baker, then at Microsoft. He contrasted traditional applications ("pets") with cloud apps ("cattle"). In the first case, you name your pets and nurse them back to health when they get sick. In the second, you give them numbers and, if something happens to one of them, you eat hamburger and get a new one.


The metaphor was imperfect—as well as being perhaps a bit culturally insensitive—but it did capture an essential distinction between long-lived unique instances on the one hand and large numbers of essentially disposable instances on the other.


There were other attempts to codify the distinction. "Twelve-factor apps" is an explicit methodology for building software-as-a-service apps. Looking at the question from more of a business angle, industry analyst firm Gartner used Mode 1 and Mode 2 to distinguish classic IT (focused on attributes like stability, robustness, cost-effectiveness, and vertical scale) from cloud-native IT (emphasizing adaptability and agility).

These remain useful perspectives. Many modern, dynamic, and scale-out workloads run as virtual machines in both public clouds and private clouds like OpenStack. They're clearly developed and operated according to a different philosophy than traditional scale-up, long-running apps on a "Big Iron" server.

However, cloud-native has increasingly come to mean something more specific, especially in the context of application architecture and design patterns. It's the intersection of containerized infrastructure and applications composed from fine-grained, API-driven services, a.k.a. microservices. The combination has been fortuitous. Companies like Netflix were promoting the microservices idea as a way to make effective use of cloud computing. Containers, first as implemented through early platform-as-a-service offerings and then as part of a broader, standardized ecosystem, came along as a great way to package, deploy, and manage those microservices.

Don't get too hung up on the microservices term, by the way. What's important is the overall agility and maintainability of applications. As delivered through a DevOps process using continuous integration and continuous delivery, this tends to lead to modular and loosely coupled services whose dependencies are explicitly defined.

However, not everything needs to be decomposed into single-function services that only communicate through exposed, stable APIs, if that doesn't make sense for the nature of the application and the size of the team.

Containers, for their part, maintain the resource and security isolation between services. They provide a fast and resource-efficient way to spin up additional services as needed and retire them when demand drops and they're no longer needed. Containers are also a great productivity tool from a developer standpoint because they package up content as a series of layers and can be rapidly and consistently updated if patches are needed.
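Layering is what makes that fast, consistent updating possible: each instruction in a container image build adds a layer, and unchanged layers are cached and reused. A minimal, purely illustrative Dockerfile sketch (the base image, paths, and file names are placeholders, not from the article):

```dockerfile
# Illustrative only: image names, paths, and commands are placeholders.
FROM registry.example.com/python:3.11-slim   # base layer, shared across many images
COPY requirements.txt /app/                  # dependency list as its own layer
RUN pip install -r /app/requirements.txt     # dependencies, cached unless the list changes
COPY . /app/                                 # application code, the layer that changes most often
CMD ["python", "/app/service.py"]
```

If only the application code changes, only the final layers need to be rebuilt and shipped; a patched base image likewise updates every service built on it consistently.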

Perhaps the biggest change for containers over the past few years has been the increase in the number and maturity of the tools available to manage them. Kubernetes is the best known; it automates Linux container operations and eliminates many of the manual processes involved in deploying and scaling containerized applications.
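As a concrete (and purely illustrative) sketch of what that automation looks like, a Kubernetes Deployment declares the desired state, such as which image to run and how many replicas to keep alive, and Kubernetes converges on it; the names and image below are placeholders:

```yaml
# Illustrative sketch of a Kubernetes Deployment; names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                   # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        image: registry.example.com/example-service:1.0
        ports:
        - containerPort: 8080
```

If a container crashes or a node dies, Kubernetes replaces the lost replicas without manual intervention, which is exactly the "cattle, not pets" operational model.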

However, Kubernetes is just the start when it comes to open source projects in the container ecosystem. There's monitoring with Prometheus and distributed tracing with Jaeger. The Istio service mesh connects, manages, and secures microservices. Another developing area is functions-as-a-service (often called serverless), which executes functions (i.e., code that does something) in response to an event (a trigger of some sort); a primary driver is to further simplify how programmers create new services.
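The functions-as-a-service model can be sketched in a few lines of Python. This is a generic illustration, not any particular platform's API: the event shape and the `handle_event` name are assumptions for the example. The programmer writes only the function; the platform wires triggers to invocations and manages any server processes.

```python
# A minimal sketch of the functions-as-a-service idea: one function,
# invoked once per event, with no long-lived server written by the
# programmer. The event format here is hypothetical.

def handle_event(event):
    """Run in response to a single trigger, then exit."""
    if event.get("type") == "image.uploaded":
        # Do something with the event payload.
        return {"status": "processed", "object": event.get("object")}
    return {"status": "ignored"}

# The platform, not the programmer, decides when this runs:
print(handle_event({"type": "image.uploaded", "object": "cat.jpg"}))
```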

Many, even most, workloads can run in a cloud or in a hybrid combination of multiple clouds. But the "cloud-native" label gets at the idea that, a decade or so into the cloud revolution, we've got some solid ideas about the best ways to take full advantage of new types of infrastructure. And, in turn, we're continuing to improve those infrastructure technologies to give app developers the tools they need. That's really what cloud-native means: flexible, scalable, reusable apps using the best container and cloud technology available to them.

Gordon Haff is a Red Hat technology evangelist, a frequent and highly acclaimed speaker at customer and industry events, and is focused on areas including Red Hat Research, open source adoption, and emerging technologies broadly.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
