Virtual Kubernetes clusters: A new model for multitenancy

Try vcluster, an open source implementation of virtual clusters that addresses the shortcomings of the typical namespace- and cluster-based isolation models.

If you speak to people running Kubernetes in production, one of the complaints you'll often hear is how difficult multitenancy is. Organizations use two main models to share Kubernetes clusters with multiple tenants, but both present issues. The models are:

  • Namespace-based multitenancy
  • Cluster-based multitenancy

The first common multitenancy model is based on namespace isolation, where individual tenants (a team developing a microservice, for example) are limited to using one or more namespaces in the cluster. While this model can work for some teams, it has flaws. First, restricting team members to specific namespaces means they can't administer cluster-wide objects, such as custom resource definitions (CRDs). This is a big problem for teams that work with CRDs as part of their applications or depend on tools built around them (for example, building on top of Kubeflow or Argo Workflows).

Second, a much bigger long-term maintenance issue is the need to constantly add exceptions to the namespace isolation rules. For example, when using network policies to lock down individual namespaces, admins will likely find that some teams eventually need to run multiple microservices that communicate with each other across namespaces. The cluster administrators then need to add exceptions for these cases, track them, and manage the growing list of special rules. Of course, the complexity grows as time passes and more teams onboard to Kubernetes.
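
To make that burden concrete, below is a minimal sketch of the kind of per-namespace lockdown this model relies on (the namespace name team-a is hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-cross-namespace
      namespace: team-a
    spec:
      podSelector: {}          # select every pod in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}  # allow ingress only from pods in this same namespace

Applied with kubectl apply -f, a policy like this blocks all cross-namespace traffic into team-a, so every legitimate cross-namespace dependency that appears later becomes another exception to write, review, and track.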

The other standard multitenancy model, using isolation at the cluster level, is even more problematic. In this scenario, each team gets its own cluster, or possibly even multiple clusters (dev, test, UAT, staging, etc.). The immediate problem with using cluster isolation is ending up with many clusters to manage, which can be a massive headache. And all of those clusters need expensive cloud computing resources, even if no one is actively using them, such as at night or over the weekend. As Holly Cummins points out in her KubeCon 2021 keynote, this explosion of clusters has a dangerous impact on the environment.

Until recently, cluster administrators had to choose between these two unsatisfying models, picking the one that better fits their use case and budget. However, there is a relatively new concept in Kubernetes called virtual clusters, which is a better fit for many use cases.

What are virtual clusters?

A virtual cluster runs on top of a shared Kubernetes cluster but appears to the tenant as a dedicated cluster. In 2021, our team at Loft Labs released vcluster, an open source implementation of virtual Kubernetes clusters.

With vcluster, engineers can provision virtual clusters on top of shared Kubernetes clusters. These virtual clusters run inside the underlying cluster's regular namespaces. So an admin could spin up virtual clusters and hand them out to tenants. Alternatively, in an organization that already uses namespace-based multitenancy with users restricted to a single namespace, tenant users could spin up these virtual clusters themselves inside their own namespaces.
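
As a rough sketch of what this looks like in practice (the commands come from the vcluster CLI, but exact flags and defaults vary between versions, and the namespace team-a is hypothetical):

    # Create a virtual cluster inside the host namespace "team-a"
    vcluster create my-vcluster -n team-a

    # Connect to it; depending on the CLI version, this port-forwards the
    # virtual API server and writes a kubeconfig file (often ./kubeconfig.yaml)
    vcluster connect my-vcluster -n team-a &

    # kubectl now talks to the virtual cluster instead of the host cluster
    kubectl --kubeconfig ./kubeconfig.yaml get namespaces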

This combines the best of both multitenancy approaches described above: tenants are restricted to a single namespace, and no isolation exceptions are needed, because they have full control inside the virtual cluster and very restricted access outside of it.

Like a cluster admin, the user has full control inside a virtual cluster. This allows them to do anything within the virtual cluster without impacting other tenants on the underlying shared host cluster. Behind the scenes, vcluster accomplishes this by running a Kubernetes API server and some other components in a pod within the namespace on the host cluster. The user sends requests to that virtual cluster API server inside their namespace instead of the underlying cluster's API server. The cluster state of the virtual cluster is also entirely separate from the underlying cluster. Resources like Deployments or Ingresses created inside the virtual cluster exist only in the virtual cluster's data store and are not persisted in the underlying cluster's etcd.
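
Continuing the hypothetical session above, you can observe this separation from both sides:

    # From the host cluster's view, the virtual cluster is just a pod (plus a
    # few synced resources) running inside the namespace
    kubectl get pods -n team-a

    # Inside the virtual cluster, create a Deployment as usual
    kubectl --kubeconfig ./kubeconfig.yaml create deployment nginx --image=nginx

    # The Deployment object exists only in the virtual cluster's data store...
    kubectl --kubeconfig ./kubeconfig.yaml get deployments

    # ...while the host cluster sees only the pods synced down to run it
    kubectl get deployments -n team-a
    kubectl get pods -n team-a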

This architecture offers significant benefits over the namespace isolation and cluster isolation models:

  1. Since the user is an administrator in their virtual cluster, they can manage cluster-wide objects like CRDs, which overcomes that big limitation of namespace isolation (see the CRD sketch after this list).
  2. Since users communicate with their own API servers, their traffic is more isolated than in a normal shared cluster. Spreading API requests across many servers this way can also help with scaling in high-traffic clusters.
  3. Virtual clusters are very fast to provision and tear down again, so users can benefit from using truly ephemeral environments and potentially spin up many of them if needed.
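
As an example of the first point, a tenant can register a CRD entirely inside their virtual cluster, with no host-cluster permissions involved. This minimal Widget resource is hypothetical, purely for illustration:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object

Applying this with kubectl apply -f against the virtual cluster's API server makes Widget objects available to that tenant only; the host cluster's list of CRDs is untouched.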


How to use virtual clusters

There are many use cases for virtual clusters, but here are a few of the most common ones we've seen vcluster users adopt.

Development environments

Provisioning and managing dev environments is currently the most popular use case for vcluster. Developers writing services that run in Kubernetes clusters need somewhere to run their applications while they're in development. While it's possible to use tools like Docker Compose to orchestrate containers for dev environments, developers who code against a real Kubernetes cluster get an experience much closer to how their services run in production.

Another option for local development is using a tool like Minikube or Docker Desktop to provision Kubernetes clusters, but that has some downsides. Developers must own and maintain that local cluster stack, which is a burden and a huge time sink. Those local clusters can also demand a lot of computing power, which local dev machines may struggle to supply. We all know how hot laptops can get during development, and it may not be a good idea to add Kubernetes to the mix.

Running virtual clusters as dev environments in a shared dev cluster addresses those concerns. In addition, as mentioned above, vclusters are quick to provision and delete. Admins can remove a vcluster just by deleting the underlying host namespace with a single kubectl command, or by running the vcluster delete command provided by the command-line interface tool. The speed of infrastructure and tooling in dev workflows is critical because improving cycle times for developers increases their productivity and happiness.
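
Using the names from the earlier sketch, both cleanup paths are one-liners:

    # Option 1: use the CLI
    vcluster delete my-vcluster -n team-a

    # Option 2: delete the host namespace, which removes the virtual cluster
    # along with everything synced into that namespace
    kubectl delete namespace team-a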

CI/CD pipelines

Continuous integration/continuous delivery (CI/CD) is another strong use case for virtual clusters. Typically, pipelines provision systems under test (SUTs) to run test suites against. Often, teams want those to be fresh systems with no accumulated cruft that may interfere with testing, and teams running long pipelines with many tests may provision and destroy SUTs multiple times in a single run. If you've spent much time provisioning clusters, you've probably noticed that spinning up a Kubernetes cluster is often a time-consuming operation. Even in the most sophisticated public clouds, it can take more than 20 minutes.

Virtual clusters are fast and easy to provision with vcluster. When you run the vcluster create command to provision a new virtual cluster, all that happens behind the scenes is installing a Helm chart, which starts a few pods. The whole operation usually takes just a few seconds. Anyone who runs long test suites knows that any time shaved off the process can make a huge difference in how quickly the QA team and engineers receive feedback.
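
A CI job might wrap this pattern in a few lines of shell. The sketch below makes some assumptions: BUILD_ID, manifests/, and run-tests.sh are placeholders, and the exact connect behavior depends on your vcluster version:

    #!/usr/bin/env bash
    set -euo pipefail

    # Provision a fresh system under test for this pipeline run
    vcluster create "sut-${BUILD_ID}" -n "sut-${BUILD_ID}"
    vcluster connect "sut-${BUILD_ID}" -n "sut-${BUILD_ID}" &
    sleep 5   # crude wait for the port-forward; a real pipeline would poll

    # Deploy the application and run the test suite against the throwaway cluster
    export KUBECONFIG=./kubeconfig.yaml
    kubectl apply -f manifests/
    ./run-tests.sh

    # Tear it all down again in seconds
    vcluster delete "sut-${BUILD_ID}" -n "sut-${BUILD_ID}"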

In addition, organizations could use vcluster's speed to improve any other processes where lots of clusters are provisioned, like creating environments for workshops or customer training.

Testing different Kubernetes versions

As mentioned earlier, vcluster runs a Kubernetes API server in the underlying host namespace. It uses the K3s (Lightweight Kubernetes) API server by default, but you can also use k0s, Amazon's EKS Distro, or the regular upstream Kubernetes API server. When you provision a vcluster, you can specify which Kubernetes version it runs, which opens up many possibilities (a configuration sketch follows the list below). You could:

  • Run a newer Kubernetes version in the virtual cluster to get a look at how an app will behave against the newer API server.
  • Run multiple virtual clusters with different versions of Kubernetes to test an operator in a set of different Kubernetes distros and versions while developing or during end-to-end testing.
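
With the default K3s distro, one way to pin a version is through the chart values that vcluster create accepts. The image tag below is illustrative, and the flag and values layout may differ between vcluster versions:

    # values.yaml (illustrative): pin the virtual cluster to a specific K3s release
    vcluster:
      image: rancher/k3s:v1.23.5-k3s1

Passing this file at creation time, with something like vcluster create my-vcluster -n team-a -f values.yaml, starts the virtual cluster on that Kubernetes release.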

Learn more

There may not be a perfect solution for Kubernetes multitenancy, but virtual clusters address many issues with the current tenancy models. Vcluster's speed and ease of use make it a great candidate for many scenarios where you would prefer to use a shared cluster but also want to give users the flexibility to administer their own clusters. There are many use cases for vcluster beyond the ones described in this article.

To learn more, head to vcluster.com, or if you'd like to dive right into the code, download it from the GitHub repo. The Loft Labs team maintains vcluster, and we love hearing ideas for it; we have added many features based on user feedback. Please feel free to open issues or PRs. If you'd like to chat with us about your ideas first, or if you have questions while exploring vcluster, we also have a vcluster channel on Slack.

Lukas Gentele is the CEO of Loft Labs, Inc., a startup that builds open-source developer tooling for Kubernetes and helps companies with their transition from traditional to cloud-native software development. Before moving to San Francisco to start Loft Labs, Lukas founded a Kubernetes-focused consulting company in his home country, Germany.
Rich Burroughs is a Senior Developer Advocate at Loft Labs where he's focused on improving the happiness of teams using Kubernetes. He's the creator and host of the Kube Cuddle podcast where he interviews members of the Kubernetes community. He was one of the founding organizers of DevOpsDays Portland, and he's helped organize other community events.

