How 2018 learned to do the cloud right
Forecast: Cloudy with a chance of more clouds.
Over the last five years, I've worked in various roles bringing high-performance computing (HPC) into the public cloud. It's been fascinating to watch the industry change from saying "You can't do HPC in the cloud" to "How can I do HPC in the cloud?" to "Why wouldn't I do HPC in the cloud?"
Of course, HPC is just one narrow slice of the tech sector, but you can see that same pattern (albeit on different timelines) across the industry. In 2018, it's pretty clear that the cloud model—whether public, private, or hybrid—is the way to go.
You can see that reflected in the content on Opensource.com, too. Last year, editor Jason Baker recapped eight resources for understanding the open source cloud. The articles from 2017 largely center on explaining cloud concepts and making a case for having your head in the clouds. In 2018, we see a shift toward that second phase: articles focused on the how because the why is already assumed. Let's take a look at some of the highlights from this year in the open source cloud.
How to connect to a remote desktop from Linux
The thing about cloud-based desktops is that you can't exactly plug your keyboard and monitor into them. There are a lot of reasons you might want to run a desktop in a cloud environment, but how do you connect to it? Kedar Vijay Kulkarni wrote a step-by-step guide to using Remmina for remote desktop access. With support for both VNC (common on Linux) and RDP (used by Windows), Remmina is a great tool for connecting to desktops in the cloud or anywhere else.
A sysadmin's guide to containers
One way to take better advantage of the scale and flexibility of the cloud is to use containers. Containers provide a convenient way to package and isolate applications on a host without the overhead of full virtualization. But it helps to understand how they work before you try to use them. Dan Walsh takes a look at containers from a sysadmin's point of view. In this article, he examines process namespaces, storage, registries, and more.
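As a minimal illustration of the namespace concept Walsh examines, you can inspect a process's namespace memberships under procfs. This is a sketch assuming a Linux host; the `/proc/<pid>/ns` layout is standard procfs, not anything specific to the article:

```python
import os

# Every Linux process belongs to a set of namespaces (pid, mnt, net, ...).
# Procfs exposes them as symlinks whose targets encode the namespace type
# and an inode number identifying the namespace instance.
def namespaces(pid="self"):
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for name, ident in namespaces().items():
        print(f"{name}: {ident}")
```

Two processes share a namespace exactly when these identifiers match; processes inside a container report different inode numbers than the host, which is what gives containers their isolation without full virtualization.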
7 open source platforms to get started with serverless computing
Containers are nice and all, but what if you went even more abstract? "Serverless" computing still requires servers, of course, but you no longer have to care about them. While cloud service providers such as Amazon Web Services and Microsoft Azure provide their own serverless (also called "Functions-as-a-Service") offerings, you can also create a serverless environment on your own, er, servers. In this article, Daniel Oh introduces seven open source serverless platforms and explains how to get started.
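The programming model is broadly similar across these platforms: you write a small handler that receives an event and returns a response, and the platform takes care of deployment and scaling. Here's a toy sketch of that shape; the event structure and handler signature are illustrative, not the API of any particular platform:

```python
import json

# A minimal "function" in the Functions-as-a-Service style: it receives an
# event dict and returns a response dict. Each open source platform defines
# its own exact calling convention, but the pattern looks like this.
def handler(event):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    print(handler({"name": "cloud"}))
```

Because the unit of deployment is a single stateless function rather than a long-running service, the platform can spin instances up and down on demand, which is the point of the model.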
Host your own cloud with Raspberry Pi NAS
Part of the decision to use a public cloud provider is the belief that they can run the service better than you can. That may mean cheaper, more secure, faster upgrade cycles, or anything else that makes sense for your needs. But sometimes you might want or need to run your own private cloud service. In that case, there's Nextcloud and a Raspberry Pi. This article caps off a three-part series by Manuel Dewald that takes you through getting the hardware and software set up to turn a Raspberry Pi into network-attached storage (NAS), configuring backups, and finally installing Nextcloud.
An introduction to Ansible Operators in Kubernetes
One of the great things about containers is they make it easy to quickly scale out many instances of a service (for example, a fleet of web frontends for when your project gets a big spike in traffic after an Opensource.com article). Of course, that leads to the problem of how you manage those containers. Kubernetes is a leading choice for container orchestration; its Operator pattern automates deploying and managing a service or application. In this article, Michael Hrivnak introduces using Ansible to build a Kubernetes Operator.
How Kubernetes became the solution for migrating legacy applications
Containers may be how applications are developed these days, but what about the legacy applications you still depend on? They're often monolithic, proprietary, and surrounded by years or decades of cruft. Swapnil Bhartiya suggests putting those applications in a container and building new functionality around them. Over time, you can begin tearing down the monoliths in favor of modern designs.
What are cloud-native applications?
The modern notion of cloud computing has been around for over a decade. While mainstream adoption isn't quite that old, it's been around long enough that practitioners have started to figure out how to best run services in the cloud. But more than that, they've figured out the best way to design those services with a cloud model in mind. Gordon Haff says cloud-native means the "intersection of containerized infrastructure and applications composed using fine-grained API-driven services, aka, microservices." While this answers the immediate question, it's worth reading the rest of the article for the historical context and how this definition applies to modern applications.
Getting started with openmediavault: A home NAS solution
The days of a single computer shared by the family are long gone. These days, you're likely to have several computers, where computer means any device that can access digital files. Maybe you have a desktop and a laptop, a tablet or two, your phone, your set-top streaming box, your video game system, et cetera, et cetera. This means you'll want a place to centrally store files for sharing and backup. Community Moderator Jason van Gumster introduces openmediavault and describes how he set it up to be a network-attached storage (NAS) solution in his home.
Get started with REST services with Apache Camel
There's no rest for the wicked, but the modern computing landscape has a lot of REST services. If you want to build middleware to connect your REST services, Apache Camel is one option to help you achieve that goal. Mary Cochran and Krystal Ying shared their tips for getting started based on their poster at this year's Grace Hopper Celebration of Women in Computing.
Running integration tests in Kubernetes
Code tests—you do that, right? Of course you do! But doing integration tests to ensure external operations work is challenging. You want a fresh environment every time, but you need it quickly and you want it to be identical between tests. Using Kubernetes to manage your test pipeline is a great way to meet those goals. Balazs Szeti gives a thorough tutorial for creating a Jenkins build environment on OpenShift.