Using Ansible for deploying serverless applications

Serverless is another step in the direction of managed services and plays nice with Ansible's agentless architecture.

Ansible is designed as the simplest deployment tool that actually works. What that means is that it's not a full programming language. You write YAML templates that list whatever tasks you need to automate your job.

Most people think of Ansible as a souped-up version of "SSH in a 'for' loop," and that's true for simple use cases. But really Ansible is about tasks, not about SSH. For a lot of use cases, we connect via SSH but also support things like Windows Remote Management (WinRM) for Windows machines, different protocols for network devices, and the HTTPS APIs that are the lingua franca of cloud services.
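
To make that concrete on the non-SSH side, here's a minimal inventory sketch for a Windows host managed over WinRM (the host name and credentials are just placeholders):

# inventory.yml: a Windows host reached over WinRM instead of SSH
all:
  children:
    windows:
      hosts:
        win01.example.com:
          ansible_connection: winrm
          ansible_user: Administrator
          ansible_password: ChangeMe
          ansible_winrm_server_cert_validation: ignore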

In a cloud, Ansible can operate on two separate layers: the control plane and the on-instance resources. The control plane consists of everything not running on the OS. This includes setting up networks, spawning instances, provisioning higher-level services like Amazon's S3 or DynamoDB, and everything else you need to keep your cloud infrastructure secure and serving customers.
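
Control-plane tasks run from wherever Ansible itself runs and talk straight to the cloud APIs, so they target localhost. Here's a minimal sketch, assuming your AWS credentials come from the environment and with a placeholder bucket name:

- hosts: localhost
  connection: local
  tasks:
    # Credentials are picked up from the environment or ~/.aws/credentials
    - name: Make sure a reports bucket exists
      s3_bucket:
        name: example-prod-reports
        state: present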

On-instance work is what you already know Ansible for: starting and stopping services, templating config files, installing packages, and everything else OS-related that you can do over SSH.
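
For comparison, a bare-bones sketch of that on-instance layer looks like the Ansible most people already know (the group name, package, and file paths are placeholders):

- hosts: web
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present
    - name: Template the site config
      template:
        src: templates/site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: restart nginx
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted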

Now, what about serverless? Depending on who you ask, serverless is either the ultimate extension of the continued rush to the public cloud or a wildly new paradigm where everything is an API call, and it's never been done before.

Ansible takes the first view. Before "serverless" was a term of art, users had to manage and provision EC2 instances, virtual private cloud (VPC) networks, and everything else. Serverless is another step in the direction of managed services and plays nice with Ansible's agentless architecture.

Before we go into a Lambda example, let's look at a simpler task for provisioning a CloudFormation stack:

- name: Build network
  cloudformation:
    stack_name: prod-vpc
    state: present
    template: base_vpc.yml

Writing a task like this takes just a couple minutes, but it brings the last semi-manual step involved in building your infrastructure—clicking "Create Stack"—into a playbook with everything else. Now your VPC is just another task you can call when building up a new region.
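
Because it's just a task, you can also parameterize it per region. Here's a sketch that assumes base_vpc.yml declares a VpcCidr parameter (both the parameter name and the CIDR are hypothetical):

- name: Build network in another region
  cloudformation:
    stack_name: prod-vpc
    state: present
    region: eu-west-1
    template: base_vpc.yml
    template_parameters:
      VpcCidr: 10.20.0.0/16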

Since cloud providers are the real source of truth for what's happening in your account, Ansible has a number of ways to pull that information back and use the IDs, names, and other parameters to filter and query running instances or networks. Take, for example, the cloudformation_facts module, which we can use to get the subnet IDs, network ranges, and other data back out of the stack we just created.

- name: Pull all new resources back in as a variable
  cloudformation_facts:
    stack_name: prod-vpc
  register: network_stack
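
From there, the stack's outputs can feed later tasks. The exact keys depend on what your template exports; this sketch assumes it exports a PrivateSubnetId output:

- name: Reference a subnet ID exported by the stack
  debug:
    msg: "{{ network_stack.ansible_facts.cloudformation['prod-vpc'].stack_outputs.PrivateSubnetId }}"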

For serverless applications, you'll definitely need a complement of Lambda functions in addition to any DynamoDB tables, S3 buckets, and whatever else. Fortunately, the lambda module lets you create Lambda functions in the same way as the stack from the previous task:

- lambda:
    name: sendReportMail
    zip_file: "{{ deployment_package }}"
    runtime: python3.6
    handler: report.send
    memory_size: 1024
    role: "{{ iam_exec_role }}"
  register: new_function
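
The deployment_package variable is expected to point at a zip of your code and its dependencies. One way to build it ahead of the task above, sketched here with a placeholder source path, is the archive module:

- name: Zip up the function code
  archive:
    path: files/report.py
    dest: "{{ deployment_package }}"
    format: zip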

If you have another tool that you prefer for shipping the serverless parts of your application, that works as well. The open source Serverless Framework has its own Ansible module:

- serverless:
    service_path: '{{ project_dir }}'
    stage: dev
  register: sls
- name: Serverless uses CloudFormation under the hood, so you can easily pull info back into Ansible
  cloudformation_facts:
    stack_name: "{{ sls.service_name }}"
  register: sls_facts
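
For HTTP-triggered functions, the stack that Serverless generates usually exports the API endpoint as a stack output; this sketch assumes it's named ServiceEndpoint:

- name: Show the deployed API endpoint
  debug:
    msg: "{{ sls_facts.ansible_facts.cloudformation[sls.service_name].stack_outputs.ServiceEndpoint }}"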

That's not quite everything you need, since the serverless project also must exist, and that's where you'll do the heavy lifting of defining your functions and event sources. For this example, we'll make a single function that responds to HTTP requests. The Serverless Framework uses YAML as its config language (as does Ansible), so this should look familiar.

# serverless.yml
service: fakeservice

provider:
  name: aws
  runtime: python3.6

functions:
  main:
    handler: test_function.handler
    events:
      - http:
          path: /
          method: get
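
Once that's deployed, the same endpoint output can drive a quick smoke test back in the playbook (again assuming the generated stack exports a ServiceEndpoint output):

- name: Smoke-test the deployed HTTP function
  uri:
    url: "{{ sls_facts.ansible_facts.cloudformation[sls.service_name].stack_outputs.ServiceEndpoint }}"
    method: GET
    status_code: 200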

At AnsibleFest, I'll be covering this example and other in-depth deployment strategies to take the best advantage of the Ansible playbooks and infrastructure you already have, along with new serverless practices. Whether you're able to be there or not, I hope these examples can get you started using Ansible—whether or not you have any servers to manage.

AnsibleFest is a day-long conference bringing together hundreds of Ansible users, developers, and industry partners. Join us for product updates, inspirational talks, tech deep dives, hands-on demos and a day of networking. Get your tickets to AnsibleFest in San Francisco on September 7. Save 25% on registration with the discount code OPENSOURCE.

Ryan is a Senior Software Engineer and spends most of his time on cloud-adjacent Open Source tooling, including Ansible and the Serverless Framework.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.