The best software prioritizes the needs of its users. Listening to users and involving them more closely in all aspects of design, development, and documentation has been a key focus of this year's OpenStack Summit, which is wrapping up here in Atlanta today.
Users are important to any open source project, but for a big project with lots of different overlapping use cases, understanding them is even more important.
To learn more about those deploying OpenStack in the wild, the OpenStack user committee conducts a user survey every release cycle, and this week the results of that survey were revealed to the public. Some new questions were added this year, including an open-ended comment section to better understand users' likes and dislikes about OpenStack.
I'd encourage you to take a look at the extended version of the results below, but here are some of my immediate takeaways, based on the data and the presentation which accompanied it.
The community is truly global. North America, Europe, and Asia are all seeing adoption, but there are deployments on six continents. (C'mon, Tux, get your penguin brethren to work in Antarctica.)
OpenStack is being used by organizations of all sizes. Those with 1-20 employees led the pack, but there were healthy numbers across the board.
There are still a lot of deployments of older versions out there, and production clouds lag, on average, one to two years behind the current release. This may take a while to overcome, as a clear upgrade path didn't exist in older versions, but fortunately from Icehouse onward migration has gotten considerably easier.
Dev/QA, proof of concept, and production uses all continue to grow. Adoption is still clearly on the upswing.
Documentation has gotten significantly better. Not only were there fewer negative comments about documentation in this survey, but quite a few positive ones stood out as well.
Want to see the full results from the user survey? The complete slide deck is embedded below. As a data guy, I'd love to dive into the numbers at a finer granularity and look at crosstabs of some of the responses, but the user committee made a conscious choice to limit the depth of results to protect the anonymity of respondents. Particularly for private clouds, operators may be reluctant to share key details of their systems without being assured of anonymity. And given that trade-off—finer detail on a small portion of the community, or sanitized statistics on a broad set of users—I think the user committee made the right choice.