OpenStack Summit interview with Ian Lewis of Google

How do we keep track of ephemeral containers?

Image credit: Lucarelli via Wikimedia Commons, CC BY-SA 3.0

Cloud-native computing relies on ephemeral containers instead of pinned servers. Executing applications within ephemeral containers solves resource scarcity challenges, but also creates a dynamic environment that requires new practices and tooling. To address these concerns, Ian Lewis of Google is giving a talk at this month's OpenStack Summit in Tokyo, Japan entitled "In a world of ephemeral containers, how do we keep track of things?"

We caught up with Ian to learn how DevOps teams are applying ephemeral containers in practice, adopting new architectural patterns, and migrating applications into containers. Ian shares tips on where to store data, why service discovery is necessary, and which new open source projects, both within OpenStack and outside it (in Kubernetes), help teams make the shift to ephemeral containers.

Why should containers be ephemeral? What problem are we trying to solve with ephemeral containers?

One of the main benefits of running containers is that they can be easily run and managed in a cluster irrespective of the actual machines they are running on. This allows you to move running applications to different hardware, for instance when maintenance is required, transparently to your users. The flexibility to move containers around means that you cannot store state locally on a specific machine, because the process could be moved at any time. You need to be able to store state in a way that's accessible no matter where the application is running.

What does ephemeral mean in practice?

In general, it means that your app should not write data to local storage. This includes things from application data to log data. Your app should be able to restart and work with a fresh container image. Data will need to be stored outside the container using services specifically meant for storage.
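To make this concrete, here is a minimal sketch (not from Ian's talk) of what "no local state" looks like in a containerized Python app: logs go to stdout for the container runtime to collect, and the datastore location comes from the environment rather than local disk. The `DATABASE_URL` variable name is a hypothetical convention, not something set automatically.

```python
import logging
import os
import sys

# Log to stdout so the container runtime's logging driver collects output;
# nothing is written to the container's local filesystem.
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("app")

# Read the external datastore location from the environment instead of
# caching data locally. DATABASE_URL is a hypothetical variable name;
# the fallback value here is just for illustration.
DATABASE_URL = os.environ.get("DATABASE_URL",
                              "postgres://db.internal:5432/app")

log.info("connecting to %s", DATABASE_URL)
```

Because all state lives behind `DATABASE_URL` and logs leave via stdout, the container can be killed and restarted from a fresh image on any machine without losing anything.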

Disposable, ephemeral containers sound cool: just launch, shift, and retire as needed. But what about the contents stored within a container and the endpoints it exposes, such as session state, API endpoints, and database connections? Do session state and database connections just disappear as well?

Containers themselves are not a silver bullet for managing state, API endpoints, and connections. You still need to orchestrate the containers and networking in a way that makes them highly available. This is one of the reasons Google created Kubernetes: to help with some of these issues.

Traditional deployments usually rely on everlasting, persistent servers. How should teams refactor servers and applications to handle shifting containers?

I think there are two large changes that need to be made. The first is to store state outside of the host and outside of the container; this is the subject of my talk. The second is that applications should have a method of service discovery. Because containers move, clients need a way to connect to services no matter where they are actually running.
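As a sketch of the second change, a client can look up a service by name rather than by a pinned address. Kubernetes, for example, injects `<NAME>_SERVICE_HOST` and `<NAME>_SERVICE_PORT` environment variables for each service (and also offers cluster DNS); the defaults below are assumptions that keep the sketch runnable outside a cluster.

```python
import os

def discover(service, default_host="localhost", default_port=8080):
    """Resolve a service endpoint by name instead of a fixed address.

    Kubernetes injects <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT
    environment variables for each service in the cluster; the defaults
    are fallbacks for running this sketch outside any cluster.
    """
    prefix = service.upper().replace("-", "_")
    host = os.environ.get(prefix + "_SERVICE_HOST", default_host)
    port = int(os.environ.get(prefix + "_SERVICE_PORT", default_port))
    return host, port

# The client asks for "user-db" by name; whichever node the database
# container lands on, the discovery layer supplies the current address.
host, port = discover("user-db")
```

The same idea applies to DNS-based discovery: the client connects to a stable name, and the cluster keeps that name pointing at wherever the container currently runs.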

Do you recommend any good architectural patterns?

I would recommend a service-oriented architecture when using containers. The benefits of containers are much more easily felt when the services you run are small and can be scaled up and down independently of each other.

How can I get started building an ephemeral container environment instead of an everlasting container environment? What infrastructure and development frameworks should teams use to safely deploy, shift, and dispose containers?

Trying out cluster managers such as Kubernetes (or Container Engine) is a good start towards building such an environment. Kubernetes is a container orchestrator that schedules containers in a cluster of servers where they fit best at the time they are run, which may not be the same server each time. Using Kubernetes will help you get a taste of best practices in this area.

How are containers being implemented within OpenStack?

Container orchestration engines such as Docker Swarm and Kubernetes will be available as first-class resources in OpenStack through an API service called Magnum. Magnum will create clusters of servers as VMs with a container orchestrator installed. You can then use Magnum to deploy containers into them very easily.

In the latest OpenStack release and on the roadmap, what progress is occurring in container orchestration, scheduling, and composition?

There is a lot of development in this space going on and many of the features are very new. Magnum will be included in OpenStack Liberty, released this month. Other tools such as Kolla, which allows you to deploy OpenStack in containers, and Murano, which provides easy deployment of apps from an app catalog, are worth checking out as well.


This article is part of the Speaker Interview Series for OpenStack Summit Tokyo, a four-day conference for developers, users, and administrators of OpenStack cloud software.

About the author

Chris Haddad @cobiacomm
Chris Haddad (aka cobiacomm) helps reshape IT delivery by introducing disruptive open source projects, refreshing technology platforms (think Cloud-Native), rebuilding team interactions (think DevOps), and reinventing opportunity (think APIs).