Why OpenStack is living on the edge

The OpenStack Summit shows how open source clouds can do more and cost less while bringing us to new computing frontiers.

In the early days of OpenStack, much of the media coverage seemed fixated on whether the project would "win" the cloud computing marketplace, and on which company would "win" OpenStack, as if the future of technology were a zero-sum game. The keynotes at this week's OpenStack Summit highlight just how narrow a view that is.

What has emerged isn't a need for a one-size-fits-all generic cloud, but instead many competing needs across nearly every industry you can think of, for which cloud provides part of the answer.

And OpenStack seems uniquely positioned to answer this call. OpenStack isn't a single solution; it's a collection of many related projects and tools that can be mixed and matched to build a cloud computing environment tailored to just about any need.

On Monday and Tuesday of this week, the keynotes from various OpenStack and industry leaders covered a variety of subjects, but the common theme running through them was that OpenStack is doing things we perhaps never imagined it would a few years ago, and that its many pieces are working together better than ever.

Reaching for new horizons

One of the more exciting parts of the OpenStack Summit for me is hearing from the user community and learning how OpenStack is being used in new ways; one such area is edge computing.

The idea behind edge computing is simple: We are producing more data than ever before, and we need the cloud to sit as close as possible to where that data originates. One slide from an Intel presentation highlighted just how much data that might be: a smart hospital, 3 TB per day; a self-driving car, 4 TB per day; a connected plane, 5 TB per day. And all of these numbers are rising. A connected factory might produce a full petabyte per day, and by 2020 there may be 1.5 GB of data trafficked per person, per day.

Networks and central clouds simply can't scale to handle all of this data, so the processing is going to have to take place at the network edge, with a new kind of cloud.
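
A quick back-of-the-envelope calculation makes the point. This Python sketch, using the decimal terabyte figures from the Intel slide, estimates the sustained uplink each source would need just to ship its daily output to a central cloud:

    # Sustained bandwidth needed to move each source's daily data
    # volume to a central cloud in real time.
    SECONDS_PER_DAY = 24 * 60 * 60

    sources_tb_per_day = {
        "smart hospital": 3,
        "self-driving car": 4,
        "connected plane": 5,
        "connected factory": 1000,  # roughly 1 PB/day
    }

    for name, tb in sources_tb_per_day.items():
        bits_per_day = tb * 1e12 * 8  # decimal terabytes to bits
        mbps = bits_per_day / SECONDS_PER_DAY / 1e6
        print(f"{name}: ~{mbps:,.0f} Mbit/s sustained uplink")

A single connected factory alone works out to roughly 90,000 Mbit/s of sustained bandwidth, which is why it makes far more sense to bring the processing to the data than the other way around.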

OpenStack and telecoms

The classic users of this approach to cloud computing are telecoms, and both AT&T and Verizon spoke on Monday about how they are using OpenStack to build out their networks. AT&T showed how it is using OpenStack to power its live TV services across multiple devices. Verizon showed off an OpenStack-in-a-box, not much bigger than a shoebox, that can put compute, networking, and storage resources at the edge of its network to help power tomorrow's 5G services.
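
Verizon didn't walk through its tooling on stage, but the placement idea behind a box like that is easy to sketch. Here is a minimal, hypothetical example using the openstacksdk Python library; the cloud name, availability zone, and resource IDs are illustrative assumptions, not anything Verizon showed:

    import openstack

    # Assumes an "edge-cloud" entry in clouds.yaml and an edge site
    # exposed as the (hypothetical) availability zone "edge-az-1".
    conn = openstack.connect(cloud="edge-cloud")

    # Land the workload on compute nodes physically near the
    # subscribers rather than in a distant central data center.
    server = conn.compute.create_server(
        name="5g-edge-service",
        image_id="IMAGE_UUID",            # placeholder
        flavor_id="FLAVOR_UUID",          # placeholder
        networks=[{"uuid": "NET_UUID"}],  # placeholder
        availability_zone="edge-az-1",
    )
    conn.compute.wait_for_server(server)

The appeal is that an edge box like this speaks the same APIs as a full data center, so existing tooling doesn't have to change.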

OpenStack and robotics

Daniela Rus of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), which has been using OpenStack in production since 2012, presented some of her research into several robotics topics. MIT researchers are exploring ways to make the design, creation, and testing of robots easier and faster than ever. One example is self-folding printed robots, which might be applied to save lives as an alternative to surgery in some situations. A rapid proliferation of robots and sensors will require a cloud nearby to process and analyze the huge amounts of data they collect, simply because there will be too much data to move around.

But not every workload in an organization's cloud needs to be limited to computing on the edge, or even to that organization's own data center. One of the advantages of standardizing on an open source cloud platform with many different vendors providing services is that it's easy to scale out to other clouds as capacity, price, or other needs change.

OpenStack and Kubernetes

Following up on the interoperability demo at the last summit, we watched a database deployment using OpenStack and Kubernetes quickly scale from a single cloud to 15 clouds around the world, each run by a different provider, in a matter of minutes, as operators added the application to their clouds in real time on stage.
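
We couldn't inspect the demo's actual orchestration from the audience, but the pattern it relied on, one application definition fanned out across many interchangeable OpenStack clouds, can be sketched with the same openstacksdk as above. Assume each participating provider is simply a named entry in clouds.yaml (the names here are made up):

    import openstack

    # Hypothetical clouds.yaml entries, one per participating provider.
    CLOUDS = ["provider-eu", "provider-us", "provider-apac"]

    def deploy_worker(cloud_name: str):
        """Launch one database worker on a given OpenStack cloud."""
        conn = openstack.connect(cloud=cloud_name)
        return conn.compute.create_server(
            name=f"db-worker-{cloud_name}",
            image_id="IMAGE_UUID",            # placeholder
            flavor_id="FLAVOR_UUID",          # placeholder
            networks=[{"uuid": "NET_UUID"}],  # placeholder
        )

    # The same request works against every provider because the API
    # is the same OpenStack API everywhere.
    servers = [deploy_worker(cloud) for cloud in CLOUDS]

Because every provider exposes the same API, adding a sixteenth cloud is just one more entry in the list.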

A healthy ecosystem

The other takeaway that was clear from this summit is that other cloud computing projects aren't so much competitors as parts of a larger, healthy ecosystem, one that helps OpenStack and related projects reach the broadest possible audience.

What's perhaps most fascinating is how well each part of the stack works together now. The future is not going to be Kubernetes or OpenStack, containers or virtual machines, public cloud or private cloud. For most organizations operating computing infrastructure, the answer is going to be and, and, and. What may vary from company to company are the precise needs of their different cloud components, whether optimized for storage, compute, or graphics processing; how each of these components will talk to one another; and, for each area of their cloud, when it makes sense to keep applications in-house and when it makes sense to move them to a public cloud.

What's old is new again. The march toward cloud computing has certainly solved many problems of modern computing infrastructure, particularly around scale, but it doesn't eliminate the need for IT teams to manage different parts of their infrastructure in ways that best meet the needs of their individual clients. This multi-cloud future is going to take some smart minds and great interoperability tools, but if what we've seen at the summit this week holds true, the OpenStack community is producing a lot of both.

Jason was an Opensource.com staff member and Red Hatter from 2013 to 2022. This profile contains his work-related articles from that time. Other contributions can be found on his personal account.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.