Automated provisioning in Kubernetes

Learn how Automation Broker can help simplify management of Kubernetes applications and services.

When deploying applications in a Kubernetes cluster, certain types of services are commonly required. Many applications require a database, a storage service, a message broker, identity management, and so on. You have enough work on your hands containerizing your own application. Wouldn’t it be handy if those other services were ready and available for use inside the cluster?

The Service Catalog

Don’t get stuck deploying and managing those other services yourself; let the Service Catalog do it for you. The Kubernetes Service Catalog README states:

“The end-goal of the service-catalog project is to provide a way for Kubernetes users to consume services from brokers and easily configure their applications to use those services, without needing detailed knowledge about how those services are created or managed.”

Anyone can make a broker that advertises one or more services to the Service Catalog by implementing the Open Service Broker API. But today we are looking at the Automation Broker, which enables you to easily make your application or service deployable from the Service Catalog.

The Service Bundle

At a basic level, all you need to do is provide the Automation Broker with a specially crafted container that knows how to provision and de-provision your service. We call this container a Service Bundle. Inside this container, you employ any means necessary to provision your service, but most examples so far utilize Ansible.

Writing an Ansible role to create resources in a cluster feels very familiar if you have ever created a Kubernetes resource directly from YAML. Using a general automation tool such as Ansible means you are free to integrate with resources both inside and outside the cluster. For example, your Service Bundle may deploy a web application inside the cluster that utilizes a database outside the cluster.
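As a sketch of what a task in such a role might look like, here is a hypothetical provision task using Ansible's k8s module. The resource names, image, and variables are illustrative, not taken from an actual bundle:

```yaml
# Hypothetical task from a Service Bundle's provision role.
# The k8s module applies a Kubernetes resource definition,
# much like `kubectl apply -f` on the equivalent YAML.
- name: Create a Deployment for the web application
  k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-webapp
        namespace: "{{ namespace }}"
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: my-webapp
        template:
          metadata:
            labels:
              app: my-webapp
          spec:
            containers:
              - name: my-webapp
                image: example/my-webapp:latest
                env:
                  # An external database, as in the example above,
                  # can be wired in through a variable.
                  - name: DATABASE_URL
                    value: "{{ external_db_url }}"
```

Because the definition is ordinary Kubernetes YAML embedded in an Ansible task, anyone who has written a Deployment by hand can read and extend it.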

Lastly, each Service Bundle has a standard set of attributes that end users will see, including a name, a description, and what parameters a user can specify at provision time. This metadata, combined with the logic you implemented with Ansible or otherwise, forms a complete application definition.
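For an APB, that metadata lives in an apb.yml file at the root of the bundle. A minimal sketch might look roughly like this; the service and plan names are invented for illustration:

```yaml
# Hypothetical apb.yml describing a bundle to the broker.
version: 1.0
name: my-service-apb
description: Deploys My Service into the cluster
bindable: true
async: optional
metadata:
  displayName: My Service
plans:
  - name: default
    description: A single-instance deployment
    free: true
    # Parameters become the form fields a user fills in
    # at provision time.
    parameters:
      - name: admin_password
        title: Admin Password
        type: string
        required: true
```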

For more information on creating an Ansible Playbook Bundle (APB), including a look at the tooling and base image that make it easy, see the Getting Started Guide.

Putting it all together

An end user of a Kubernetes cluster can view the Service Catalog to see what services are available. The Automation Broker may be one of several brokers advertising services in the catalog. When a user selects your Service Bundle, they have an opportunity to provide any arguments that the bundle accepts.

The user experience varies by platform. On pure Kubernetes, you can use the svcat command-line tool. On OpenShift, the web console provides a graphical experience.
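In either case, what ultimately gets created is a ServiceInstance resource in the Service Catalog API. A sketch of one, with made-up class, plan, and parameter names:

```yaml
# Hypothetical ServiceInstance; svcat and the web console
# generate a resource like this on the user's behalf.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-postgresql
  namespace: my-app
spec:
  clusterServiceClassExternalName: postgresql-apb
  clusterServicePlanExternalName: dev
  # Arguments the bundle accepts, supplied by the user.
  parameters:
    postgresql_user: admin
```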

With user input complete, the Service Catalog then tells the Automation Broker to provision the chosen service. The broker sets up a secure namespace within the cluster and launches your Service Bundle as a running container inside it. At that point, it is up to your bundle to do whatever it takes. For example, the PostgreSQL bundle creates three Kubernetes resources: a DeploymentConfig, a Service, and a PersistentVolumeClaim. A more advanced Service Bundle could deploy an entire stack of related services and tie them together.

Once a service is provisioned, you can create Bindings, a standardized construct for connecting other applications to your service. Look for a future blog post on how applications consume provisioned services.
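A Binding is likewise expressed as a Service Catalog resource. A sketch, assuming the ServiceInstance is named my-postgresql as in a typical provision request:

```yaml
# Hypothetical ServiceBinding; the credentials the bundle
# returns land in the named Secret, ready for an
# application to consume.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-postgresql-binding
  namespace: my-app
spec:
  instanceRef:
    name: my-postgresql
  secretName: my-postgresql-credentials
```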

Ready to see it in action? “Up and Running with the OpenShift Ansible Broker” is an easy, step-by-step guide to starting an OpenShift cluster and interacting with the Automation Broker. (Astute readers will notice that OpenShift’s documentation refers to the “OpenShift Ansible Broker,” which is just its name for the Automation Broker.)

Give it a try, and let us know what you think.

Michael Hrivnak will be presenting at SCaLE16x this year, March 8-11 in Pasadena, California. To attend and get 50% off your ticket, register using promo code OSDC.

Want to master microservices? Learn how to run OpenShift Container Platform in a self-paced, hands-on lab environment.

Michael Hrivnak is a Principal Software Engineer at Red Hat. After leading development of early registry and distribution technology for container images, he became involved with solving real-world orchestration problems on Kubernetes.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.