OpenStack Summit interview with Scott McCarty

The future of development with OpenShift and OpenStack


At OpenStack Summit Austin, Scott McCarty is giving a talk titled OpenShift and OpenStack: Delivering applications together.

In this interview, Scott shares his thoughts on the benefits of using OpenShift and OpenStack together to develop and deliver applications. He also speculates about the advances we might see in the next year of OpenShift and OpenStack development, and shares some features he would like to see in the future.

What are the benefits of using OpenShift and OpenStack together?

So, I think there is a lot of confusion around exposition and consumption of resources. Fundamentally, the operating system has been the glue between applications and hardware. A few years ago, people were saying OpenStack is the new operating system. Now, people are saying it's the container platform. The goal of my presentation is to help people understand that just because you break the operating system into two parts, it doesn't mean that these fundamental roles of exposition and consumption completely change.

In a traditional Linux operating system running a traditional application, the operating system handles exposition and consumption of CPU, RAM, disk, and network. In the new world, I think of it as programmable at scale, but it's fundamentally the age-old problem. Hardware and operating system expose resources; that's OpenStack. Software applications and operating system consume resources; that's OpenShift. Each solves a different piece of the problem.

What are some of the challenges for developers moving from a more traditional development model? How do OpenStack and OpenShift help facilitate the transition?

Moving to a distributed systems computing environment can be painful for traditional developers (and systems administrators). In particular, I think there is still a lot of confusion about who owns orchestration and how they will do it. Developers can't just hand their application off to operations to figure out how to run it in a distributed environment. They need to be involved in and drive the architecture and how their application will be orchestrated at runtime (consumption of resources). That means developers have to fundamentally understand some of the pitfalls of distributed systems.

Without OpenStack and OpenShift, developers essentially have to hard-code their application for a given cluster. That application owns the cluster and must be able to make decisions about how it will run. As a developer, I want to focus on building my application. I really don't want to build a distributed systems computing environment for each application. OpenStack and OpenShift provide a standardized platform so developers can ask for resources they need and code their app. While this is not new in the HPC world, it is fairly new in enterprise IT.

Though OpenStack and OpenShift don't remove the need to understand how your application will run when distributed, they do provide a standard that can be learned over time. As developers gain skill running their distributed applications on OpenStack and OpenShift, they can gain much greater agility. Also, if they build their containers right, they can hand off almost all of the runtime logic and maintenance to the operations team (so they don't get paged in the middle of the night).

If you were to give this same presentation at the Spring 2017 OpenStack Summit, what do you think you would have to change in your talk? What improvements to both projects do you foresee?

I foresee a lot of improvement in clarity between the exposition and consumption of resources. As an example, look at the dynamic storage provisioning plugins in OpenShift. When run on OpenStack with Cinder, this is a killer combination. Storage administrators no longer need to pre-provision small, medium, and large chunks of storage. Instead, the exposition of the storage is customized at runtime: if a developer wants 5GB of storage, that's exactly what they get from Cinder. Before dynamic provisioning, storage had to be pre-provisioned, and a developer might get an 80GB chunk of storage simply because that was all that was left in the cluster.
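To make the idea concrete, here is a minimal sketch of what such a request could look like in OpenShift/Kubernetes terms, assuming a StorageClass backed by the in-tree Cinder provisioner (the names `cinder-standard` and `app-data` are illustrative, not from the talk):

```yaml
# Hypothetical StorageClass backed by OpenStack Cinder.
# The name and any parameters are illustrative.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cinder-standard
provisioner: kubernetes.io/cinder
---
# The developer asks for exactly 5GB; with dynamic provisioning,
# Cinder carves out a volume of that size at claim time instead of
# handing back a larger pre-provisioned chunk.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder-standard
  resources:
    requests:
      storage: 5Gi
```

The point is the contract: the claim expresses what the application needs, and the platform decides how to expose matching storage at runtime.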

I think more and more convergence will happen around CPU, RAM, and network. If a new VM is needed, it will be able to spin up quicker and quicker from a pool (like ClearContainers or vFork). Imagine asking for 1,000 new containers, and OpenStack spins up 1,000 VMs of the exact size needed in milliseconds. Imagine that I need to create a new project in OpenShift, and Neutron automatically provisions a new overlay network just for that project and wires it into the VMs that the project has access to. The combination of OpenStack and OpenShift is the new distributed operating system.

Are there any features you'd like to see added to OpenStack or OpenShift that are not currently under development?

I would love to see something like ClearContainers or vFork added to OpenStack. This would allow for dynamic provisioning of compute resources in milliseconds. I would also like to see more coordination between Neutron and projects in OpenShift. I think dynamic exposition of compute and network would allow the combination of containers and VMs to really succeed.

Which OpenStack Summit sessions do you look forward to attending? What do you hope to learn?

I am really interested in seeing how others are thinking about OpenStack and containers together (even competitors). I think we have to work together to bring about a better world of computing.