Running storage services on Kubernetes

Learn how containers are changing the way software-defined storage is managed in the cloud.

With the advent of containers, it's a good time to rethink storage completely. As applications now consist of volatile sets of loosely coupled microservices running in containers, ever-changing in scale, constantly being updated and re-deployed, and always evolving, the traditional mindset of serving storage and data services must change.

Kubernetes paved the way for these types of applications by making it inexpensive and manageable to run the hundreds of container instances that make up an application. Likewise, software-defined storage (SDS) made it viable and manageable to run the dozens of systems that make up a highly elastic storage system serving hundreds of volumes/LUNs to many clients. Now is a perfect time to combine these two systems.

Background

Kubernetes runs containers at scale, and GlusterFS runs storage at scale. Both are entirely software-defined scale-out approaches providing the ideal foundation for next-generation workloads. With GlusterFS running on top of Kubernetes, you have a universal infrastructure stack that is available in bare-metal deployments, on-premise virtualization environments, as well as private and public clouds—basically, everywhere Linux runs.

GlusterFS is software-defined, scale-out, POSIX-compatible file storage. Gluster runs on industry-standard servers, VMs, or containers, and virtualizes the locally attached storage into a single elastic pool that usually is accessed via a native network protocol, or alternatively SMB3 or NFS. For container workloads, iSCSI block storage and S3 object access protocols have been added.

Kubernetes is a container orchestration platform at heart, featuring automated deployment, scaling, and management of containerized applications. Kubernetes can turn a cluster of Linux systems into a flexible application platform that provides compute, storage, networking, deployment routines, and availability management to containers. Red Hat's OpenShift Container Platform builds upon Kubernetes to provide a powerful PaaS, ready to turn developers' code into applications running in test, development, and production.

Kubernetes with traditional storage

Kubernetes supports the use of existing legacy scale-up storage systems, such as SAN arrays. Going this route can lead to a couple of challenges:

  1. The existing storage system is not API-driven but built entirely for human administration. In this case, properly integrating with the dynamic storage provisioning capabilities of Kubernetes, in which application instances (and thus storage) need to be provisioned on the fly, is not possible. Users, and Kubernetes on their behalf, cannot request storage ad hoc from a backend using the Kubernetes API, because there is no storage provisioning API, and provisioning on these legacy systems is typically a static and lengthy process. This is usually the case for NFS-, iSCSI-, or FC-based storage. (See the StorageClass sketch after this list for what the API-driven flow looks like.)

    In the absence of automation, an administrator instead must guess storage demand upfront on a per-volume basis, provision and expose these volumes to Kubernetes manually, and then constantly monitor how actual demand matches the supply estimate. That will not scale, and it almost certainly guarantees unhappiness for developers and operators alike.

  2. The other scenario is that existing storage is somehow tied into Kubernetes-automated provisioning but was not designed to scale to the number of storage consumers typically run on Kubernetes. The storage system may have been optimized for a hypervisor use case, in which you usually end up serving dozens (or, in rare cases, hundreds) of volumes of predictable sizes as datastores. With Kubernetes, this can and will easily go into the hundreds to thousands of volumes, because they are usually consumed on a per-container basis. Most storage systems cap the number of volumes at a level that is not suited to the parallelism and consumption patterns to be expected from Kubernetes.
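
To make the contrast concrete, here is a minimal sketch of what dynamic provisioning looks like on the Kubernetes side, assuming a GlusterFS backend managed through heketi, the REST service gluster-kubernetes uses for volume management (the endpoint URL and class name here are placeholders):

    # A StorageClass describes to Kubernetes how to provision volumes on demand.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: glusterfs-storage
    provisioner: kubernetes.io/glusterfs        # in-tree GlusterFS provisioner
    parameters:
      resturl: "http://heketi.example.com:8080" # placeholder heketi endpoint

With such a class in place, users request storage through the Kubernetes API alone; no administrator needs to pre-provision anything.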

Kubernetes with software-defined storage

Using traditional storage with Kubernetes is at best a short-term option to bridge the time to the next procurement cycle. At that stage, you will enter the promised land of software-defined storage, with systems that have been built with scale and automation in mind. Here you will find many solutions that carry the "ready for containers" label. Many of these solutions, however, are not truly suitable for containers, because under the hood they have a legacy implementation aimed at virtual machines instead of containers.

In the open source space, GlusterFS is a well-known project built with scale in mind and has been running in production systems all over the world for years.

GlusterFS on Kubernetes

GlusterFS not only provides storage to workloads on Kubernetes with proper dynamic provisioning, but also runs on top of Kubernetes alongside other workloads. This is what gluster-kubernetes is about—it puts the GlusterFS software binaries in containers and runs them as Kubernetes pods (the Kubernetes abstraction of containers) with access to host networking and storage on all or a subset of the cluster nodes to provide file, block, and even object storage. This is entirely controlled by the Kubernetes control plane.

Using GlusterFS this way with Kubernetes comes with a couple of distinct advantages. First, storage—software-defined or not—when running outside of Kubernetes, is always something that you must drag along with the platform. Storage is a system that comes with its own management, must be supported in the environment in which Kubernetes is deployed, must be installed on additional systems, and has its own installation procedure, packaging, availability management, and so on.

At this point, what you are effectively doing is learning yet another system to administer while duplicating many of the functions Kubernetes already provides out of the box to scale-out applications, just to run software-defined storage, which is merely another scale-out application.

With GlusterFS on Kubernetes, Kubernetes takes over scheduling your SDS processes in containers and ensures that enough instances are always running, while also providing them with access to sufficient network, CPU, and disk resources. For this purpose, a GlusterFS image has been made available to run in an OCI-compatible container runtime.
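
A natural way to express this in Kubernetes is a DaemonSet that runs one GlusterFS pod on every node designated for storage duty. The following is a simplified sketch; the node label and image name are assumptions for illustration, not exact gluster-kubernetes artifacts:

    # Run one GlusterFS pod on every node carrying the storage label.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: glusterfs
    spec:
      selector:
        matchLabels:
          app: glusterfs
      template:
        metadata:
          labels:
            app: glusterfs
        spec:
          nodeSelector:
            storagenode: glusterfs      # assumed label marking storage nodes
          hostNetwork: true             # glusterd serves on the host's network ports
          containers:
          - name: glusterfs
            image: gluster/gluster-centos  # illustrative image name
            securityContext:
              privileged: true          # needed for direct block-device access

If a storage node fails, Kubernetes notices; if a new node is labeled for storage, a GlusterFS pod appears on it automatically.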

Second, when installed inside a VM, cloud instance, or bare-metal server, most SDS technologies require special setup procedures, which differ across all those environments. Many solutions are not even supported on all these deployment options, which hinders your Kubernetes adoption in situations where you want to use more than one (such as hybrid cloud, multiple public clouds, etc.).

GlusterFS can run on all three flavors plus all public cloud providers, and it is abstracted at a high enough layer (Linux OS, TCP/IP stack, LVM2, block devices) that it hardly cares what is actually beneath the surface.

Putting GlusterFS on top of Kubernetes makes things even easier: the installation and configuration procedure is exactly the same, no matter where you deploy, because it's completely abstracted by Kubernetes. Updates become a breeze when it's simply a matter of a new version of the container image being launched, updating the existing fleet in a rolling fashion.
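
With a DaemonSet like the sketch above, such a rolling update can be declared in a single fragment of the spec (this uses Kubernetes' standard DaemonSet update strategy):

    # Fragment of the DaemonSet spec above: roll out a new image one pod at a time.
    spec:
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1   # never take down more than one storage pod at once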

Third, especially on cloud providers, GlusterFS on top of Kubernetes gives you a common feature set no matter what the underlying platform looks like. In the cloud, compute, network, and storage resources are actually localized in several availability zones. This is not always obvious to application developers. Cloud storage, especially when based on block devices, typically is not replicated across those availability zones.

If you rely on such storage, Kubernetes' flexible scheduling will simply restart your pods in another zone after the complete loss of an availability zone, but your cloud provider's storage will not travel with them, leaving your application without its data.

GlusterFS-integrated replication works well across availability zones (it is supported at up to 5ms round-trip time; the average latency between popular cloud providers' availability zones is much lower). You can instruct GlusterFS on Kubernetes to run at least one GlusterFS container instance in each zone, thereby distributing redundant copies of the data across an entire region on every write operation. A failure of one site or zone is then transparent to the storage consumers, and once the zone is brought back up (again, with the help of Kubernetes scheduling), automated self-healing commences in the background.
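
One way to express the "at least one instance per zone" placement is pod anti-affinity keyed on the zone label that cloud-aware Kubernetes clusters attach to nodes. This fragment of the GlusterFS pod template is a sketch, using the zone label name current at the time of writing:

    # Fragment of the GlusterFS pod template: forbid two GlusterFS pods
    # from landing in the same availability zone.
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: glusterfs
          topologyKey: failure-domain.beta.kubernetes.io/zone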

Finally, as described earlier, you may well want to leverage existing on-premises SAN storage for workloads on Kubernetes. GlusterFS can help you make it container-ready. GlusterFS deals with locally available storage on a Linux host, which can come from a local SATA/SAS interface as well as from a Fibre Channel LUN. When you present the latter to your Kubernetes nodes, you have all you need for GlusterFS on Kubernetes.

In this way, GlusterFS can bridge the gap and let containers use those storage systems effectively. The ideal cost footprint is, of course, eventually achieved by using local storage.

Implications and caveats

Of course, this approach has not only upsides but also some challenges, some of which will be overcome over time.

Currently, the implementation of GlusterFS on Kubernetes consists of pods that run the glusterd daemon, which provides its storage services over well-known network ports. These ports are configurable neither by the installer nor by the clients, so one pod running on a host occupies them exclusively, effectively making it impossible to run another GlusterFS pod on that system.

Also, Kubernetes does not yet have the ability to present block storage directly to containers. GlusterFS, however, needs direct access to block storage, even when running in a container. Hence, GlusterFS pods currently need to run as super-privileged containers with elevated permissions to access the block devices directly.
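
In pod-spec terms, these elevated privileges look roughly like the following sketch (the image name is illustrative):

    # A privileged GlusterFS pod that can see the host's block devices.
    apiVersion: v1
    kind: Pod
    metadata:
      name: glusterfs-node
    spec:
      hostNetwork: true
      containers:
      - name: glusterfs
        image: gluster/gluster-centos  # illustrative image name
        securityContext:
          privileged: true             # super-privileged: full device access
        volumeMounts:
        - name: dev
          mountPath: /dev              # hand the host's block devices to glusterd
      volumes:
      - name: dev
        hostPath:
          path: /dev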

Also, the minimum number of nodes for a production cluster is three. More nodes are possible, but fewer will put your data at risk: a single copy of your data, held by a single GlusterFS pod on a single node, is a single point of failure. Two copies in two pods on two nodes are prone to split-brain situations, because GlusterFS requires (and will enforce) quorum to ensure data consistency; when you lose one of the two pods, your data remains intact but becomes read-only. Only with three or more pods is your data replicated three times, making it immune to loss or inconsistency due to outages. So you will need at least three Kubernetes nodes to start with.
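
For illustration, the in-tree GlusterFS provisioner exposes the replication factor as a StorageClass parameter, so three-way replication can be requested per class. Again, the heketi endpoint is a placeholder:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: glusterfs-replica3
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi.example.com:8080" # placeholder heketi endpoint
      volumetype: "replicate:3"                 # three copies, tolerating the loss of one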

Beyond the merely technical side, operating such a platform also presents new perspectives to operations: storage teams, or ops teams in general, need to become familiar not only with how to deploy and run Kubernetes, but also with how to run infrastructure software on top of it. Fortunately, as it turns out, many of the classic administrative operations are already taken care of by Kubernetes.

There is also a certain degree of relinquished control over "who gets how much and when" in the world of Kubernetes, where developers get self-service access to compute resources and networking, and, along the way, to storage capacity on demand. Instead of a classic, ticket-based approval process for additional capacity (which can take hours to days), there is now a dynamic provisioning sequence with no human interaction (which takes just seconds).
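
In practice, that self-service request is nothing more than a PersistentVolumeClaim against a StorageClass such as the one sketched earlier (the names here are assumed):

    # A developer asks for 10 GiB of shared file storage; provisioning is automatic.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-app-data
    spec:
      storageClassName: glusterfs-storage  # the class sketched earlier
      accessModes:
      - ReadWriteMany   # GlusterFS file volumes can be shared by many pods
      resources:
        requests:
          storage: 10Gi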

Finally, the storage world in Kubernetes is fairly new and exciting. Many features that classic platforms expect from storage, such as snapshots, replication, or tiering, are not yet available in Kubernetes' simple abstraction model of PersistentVolumes (the main logical entity by which Kubernetes represents storage to users and containers) and PersistentVolumeClaims (the way to request storage in Kubernetes). So while GlusterFS is certainly capable of providing these features (e.g., snapshots, geo-replication), they are not yet available through the Kubernetes API. The GlusterFS community is working closely with the Kubernetes Storage SIG to steadily implement these concepts.

Should you be interested?

If you are looking to adopt the benefits of containers, introduce and support a DevOps culture in your organization, run microservices, or in general get corporate IT to provide more immediate value to the business by shortening time to market, you will at least be evaluating Kubernetes. Once you adopt it, it won't be long until stateful applications find their way into the cluster, and with them the need for robust, persistent storage. Will databases be among those applications? Very likely. Or workloads that share large content repositories, or that consume object storage? In any of those cases, you should definitely take a look at gluster-kubernetes.

Conversely, if you have only stateless applications, or all your applications manage storage and its consistency entirely themselves, gluster-kubernetes will not give you additional benefits. It is, however, very unlikely that you have a cluster like this.

Verdict

GlusterFS on Kubernetes provides a Swiss Army knife approach to modern computing, which requires robust storage now more than ever, at much higher scale and velocity. It integrates perfectly with Kubernetes and Red Hat's OpenShift Container Platform and provides a consistent feature set of storage services no matter where and how Kubernetes is deployed. It insulates you from the different implementations and limitations of different environments while naturally complementing the compute and networking services Kubernetes already provides. At the same time, it follows the same design principle Kubernetes encourages: software-defined scale-out, based on services distributed in containers across a cluster of commodity systems.

To learn more, attend the talk You Have Stateful Apps - What if Kubernetes Would Also Run Your Storage? at KubeCon + CloudNativeCon, which will be held December 6-8 in Austin, Texas.

Daniel is the Technical Marketing Manager for Storage Products at Red Hat.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.