How to use Ansible to manage PostgreSQL

Ansible, an open source automation tool, can make complex configuration and management tasks in Postgres simple.

Working with a database in a pressure-cooker production environment using an agile approach with tight deadlines can be a contradictory experience. As this article demonstrates, you can operationalize those many steps and prepare Postgres for any range of service. The key is Ansible, an open source automation engine for software provisioning, configuration management, and application deployment.

For the purposes of this article, we assume readers have some knowledge of both Ansible and PostgreSQL, not to mention Linux. I'm covering only the most basic features here; for a deeper dive, check out the references at the end of this article.

How to administer a cluster of database servers on a developer workstation

As root, I create my template container:

  lxc-create -t download -n template_centos6 -- --dist centos --release 6 --arch amd64

Let's start the container and add the following packages:

   lxc-start -n template_centos6

    lxc-attach -n template_centos6 -- <<_eof_
        yum update -y
        yum install openssh-server screen mlocate man vim python-psycopg2 sudo -y
        /usr/sbin/makewhatis
        /usr/bin/updatedb
        useradd ansible
        echo ansible | passwd --stdin ansible
        echo -e '\n\n# ANSIBLE GLOBAL PERMISSIONS FOR DEMO PURPOSES ONLY\nansible ALL=(ALL) PASSWD:ALL' >> /etc/sudoers
_eof_

Now we're ready to make our actual containers:

    lxc-stop -n template_centos6

    for u in ansible pg1 pg2 pg3
    do
        lxc-copy -n template_centos6 -N $u
    done

Let's prepare the ansible container:

    lxc-start -n ansible

    lxc-attach -n ansible <<_eof_
        rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
        yum install ansible -y
        /usr/sbin/makewhatis
        /usr/bin/updatedb
        su -c "ssh-keygen -t rsa -N '' -f /home/ansible/.ssh/id_rsa" ansible
_eof_

Let's turn everything on:

    for u in $(seq 3)
    do
        lxc-start -n pg$u
    done

And see what our environment looks like:

    root@wolven:~# lxc-ls --fancy
    NAME             STATE   AUTOSTART GROUPS IPV4       IPV6
    ansible          RUNNING 0         -      10.0.3.7   -
    pg1              RUNNING 0         -      10.0.3.6   -
    pg2              RUNNING 0         -      10.0.3.168 -
    pg3              RUNNING 0         -      10.0.3.17  -
    template_centos6 RUNNING 0         -      10.0.3.173 -

Congratulations, you have a working network on your Linux box!
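If you'd like to feed those addresses into /etc/hosts or an inventory file, the name/IP pairs can be scraped from that output with a little awk. The sample below embeds the output shown above for illustration; in practice, pipe `lxc-ls --fancy` in directly:

```shell
#!/bin/sh
# Extract "IP name" pairs for the pg containers from `lxc-ls --fancy` output.
lxc_output='NAME             STATE   AUTOSTART GROUPS IPV4       IPV6
ansible          RUNNING 0         -      10.0.3.7   -
pg1              RUNNING 0         -      10.0.3.6   -
pg2              RUNNING 0         -      10.0.3.168 -
pg3              RUNNING 0         -      10.0.3.17  -
template_centos6 RUNNING 0         -      10.0.3.173 -'

# Skip the header row (NR > 1) and keep only containers whose name starts
# with "pg"; column 5 is the IPV4 address.
pairs=$(echo "$lxc_output" | awk 'NR > 1 && $1 ~ /^pg/ { print $5, $1 }')
echo "$pairs"
```

Appending those lines to /etc/hosts on the ansible container lets you address the database hosts by name.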

Configuring our Ansible container

Now let's get to work and create our playbooks on the ansible guest. Ansible uses an inventory file that defines all the hosts we'd like to administer:

    lxc-attach -n ansible
    su - ansible

    mkdir -p $HOME/playbooks

    echo "
    [dbservers]
    pg1 ansible_ssh_pass=ansible ansible_sudo_pass=ansible
    pg2 ansible_ssh_pass=ansible ansible_sudo_pass=ansible
    pg3 ansible_ssh_pass=ansible ansible_sudo_pass=ansible
    " > $HOME/playbooks/hosts.cfg

Ping pg servers

While logged into the Ansible host, we're now going to ping our Postgres (pg) hosts using an ad hoc command; refer to the man pages for more information regarding the various command-line switches:

    ansible -i $HOME/playbooks/hosts.cfg all -m ping

Here's the output:

    pg3 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }
    pg2 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }
    pg1 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }
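For a quick pass/fail summary of that output, count the SUCCESS lines. The sample below embeds the output shown above; in practice, pipe the ansible command straight into grep:

```shell
#!/bin/sh
# Count how many hosts answered the ping; all three should report SUCCESS.
ping_output='pg3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
pg2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
pg1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}'

# grep -c counts matching lines, one per reachable host.
ok=$(echo "$ping_output" | grep -c '| SUCCESS')
echo "$ok hosts reachable"
```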

Add repos: PostgreSQL Versions 9.4 and 9.6

    (01.install_repo.yml)

Playbook 01.install_repo.yml installs the postgresql.org repository onto each guest host. Notice we're going to install two versions of Postgres, 9.4 and 9.6, which makes it possible to perform an in-place upgrade later on:

    ansible-playbook -i hosts.cfg 01.install_repo.yml
---
- hosts: dbservers
  remote_user: ansible
  become: yes

  tasks:
    - name: install repo for PostgreSQL 9.4
      yum:
        name: https://download.postgresql.org/pub/repos/yum/9.4/redhat/rhel-6-x86_64/pgdg-centos94-9.4-3.noarch.rpm
        state: present

    - name: install repo for PostgreSQL 9.6
      yum:
        name: https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-centos96-9.6-3.noarch.rpm
        state: present
...

Install Postgres packages, 9.4

(02.install_94.yml)

With the repo configured, we can now install the RPMs. Notice the use of variables and the with_items loop in our playbook invocation:

    ansible-playbook -i hosts.cfg 02.install_94.yml --extra-vars "host=dbservers"
---
- hosts: "{{ host }}"
  remote_user: ansible
  become: yes

  tasks:
    - name: install PostgreSQL version 9.4
      yum:
        name: "{{ item }}"
        state: latest
      with_items:
        - postgresql94-server
        - postgresql94-contrib
        - pg_repack94
...

Note: Although not discussed, pg_repack94 removes database bloat, so I invite you to take time to read up on it.

Add public key on all database servers for the Unix account ansible

(03.install_key.yml)

Adding a public key is a practice I adopted years ago. It is a secure and easy way to access your machines, and really useful in those emergency situations when you panic and accidentally lock yourself out of a server by mistyping your password.
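Under the hood, installing a key safely amounts to an idempotent append to authorized_keys, which is exactly what Ansible's authorized_key module does for you. Here's a minimal shell sketch of the idea; the key string and temp file are illustrative stand-ins:

```shell
#!/bin/sh
# Append a public key to an authorized_keys file only if it is not already
# present, so repeated runs leave exactly one copy (idempotency).
pubkey='ssh-rsa AAAAB3...example ansible@ansible'   # illustrative key
keyfile=$(mktemp)                                    # stands in for ~/.ssh/authorized_keys

add_key() {
    # -x: match the whole line; -F: treat the key as a fixed string
    grep -qxF "$pubkey" "$keyfile" || echo "$pubkey" >> "$keyfile"
}

add_key
add_key   # second run is a no-op
cat "$keyfile"
```

Running add_key any number of times leaves the key in the file exactly once, which is why the playbook below can be re-run safely.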

Here's our playbook, 03.install_key.yml. A minimal version, using Ansible's authorized_key module to push the public key we generated on the ansible container earlier, looks like this:

---
- hosts: dbservers
  remote_user: ansible
  become: yes

  tasks:
    - name: install public key for Unix account "{{ user }}"
      authorized_key:
        user: "{{ user }}"
        key: "{{ lookup('file', '/home/ansible/.ssh/id_rsa.pub') }}"
        state: present
...

And here's our invocation. Notice the use of the extra variable identifying the Unix account ansible:

    ansible-playbook -i hosts.cfg 03.install_key.yml --extra-vars "user=ansible"

Configure each host as a standalone service

    (04.configure_standalone_94.yml)

All the pieces are now in place, and it's time to create our Postgres servers! I like building out my network incrementally; that way I save myself debugging grief if something goes wrong.

In addition to initializing the data cluster, we include the following steps in this playbook:

  • Start the Postgres service
  • Create role replicant with a password
  • Update pg_hba.conf allowing remote access
  • Update postgresql.conf for master/slave service

Now I'm going to get a little fancy: in order to keep changes as clean and clear as possible, I append a single line to postgresql.conf naming an include file, then put all our changes in that separate, distinct file, thus improving clarity. The Ansible module blockinfile is cool, as it wraps the text it manages in a nice, big labeled block.
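For example, after the play runs, the end of postgresql.conf will carry a clearly delimited block like this (these are blockinfile's default markers; they can be customized with the module's marker option):

```
# BEGIN ANSIBLE MANAGED BLOCK
include 'server.conf'
# END ANSIBLE MANAGED BLOCK
```

Because blockinfile tracks its own markers, re-running the play updates the block in place instead of appending a duplicate.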

The connection permissions between master and slaves are updated in pg_hba.conf. Keep in mind that truly secure environments always use SSL encryption between replicating servers, which we are not doing here.

And here's our invocation. Notice we pass the replication password as an argument, which keeps it out of the playbook itself and adds flexibility to our script (for real deployments, consider Ansible Vault rather than a plain command-line variable):

    ansible-playbook -i hosts.cfg 04.configure_standalone_94.yml --extra-vars "host=dbservers passwd=mypassword"
---
- hosts: "{{ host }}"
  remote_user: ansible
  become: yes

  tasks:
    - name: create data cluster
      command: service postgresql-9.4 initdb

    - service:
        name: postgresql-9.4
        state: started

- hosts: "{{ host }}"
  remote_user: postgres

  tasks:
    - name: create ROLE replicant
      postgresql_user:
        db: postgres
        login_unix_socket: /tmp
        name: replicant
        password: "{{ passwd }}"
        role_attr_flags: LOGIN,REPLICATION

    - name: add new configuration to "postgresql.conf"
      blockinfile:
        dest: /var/lib/pgsql/9.4/data/postgresql.conf
        block: |
          include 'server.conf'

    - name: add new configuration to "server.conf"
      blockinfile:
        create: yes
        dest: /var/lib/pgsql/9.4/data/server.conf
        block: |
          listen_addresses = '*'
          wal_level = hot_standby
          checkpoint_segments = 10
          max_wal_senders = 6
          wal_keep_segments = 10
          hot_standby = on

    - name: add new configuration to "pg_hba.conf"
      blockinfile:
        dest: /var/lib/pgsql/9.4/data/pg_hba.conf
        block: |
          host    all             all             0.0.0.0/0                md5
          host    replication     replicant       0.0.0.0/0                md5

    - name: update environment variables in UNIX account postgres
      blockinfile:
        create: yes
        dest: /var/lib/pgsql/.pgsql_profile
        block: |
          export PGHOST=/tmp PAGER=less PGDATA=/var/lib/pgsql/9.4/data

- hosts: "{{ host }}"
  remote_user: ansible
  become: yes

  tasks:
    - service:
        name: postgresql-9.4
        state: restarted

    - name: configure init for startup on bootup
      shell: chkconfig --level 2345 postgresql-9.4 on
...

Configuring Postgres slaves

    (05.configure_slave_94.yml)

The previous script, which prepared the servers for standalone mode, actually killed two birds with one stone: it also preconfigured each host to serve as a slave via the configuration files postgresql.conf and server.conf, which are copied to the slave by the pg_basebackup command. What's especially interesting in this example is how we create a cascading replicated slave, pg3, which gets its data from the slave pg2. Take note of the second part of the script: we take advantage of the public key we installed in a previous script by logging in directly as postgres instead of using sudo:

    ansible-playbook -i hosts.cfg 05.configure_slave_94.yml --extra-vars "master=pg1 slave=pg2 passwd=mypassword"
    ansible-playbook -i hosts.cfg 05.configure_slave_94.yml --extra-vars "master=pg2 slave=pg3 passwd=mypassword"
---
- hosts: "{{ slave }}"
  remote_user: ansible
  become: yes

  tasks:
    - service:
        name: postgresql-9.4
        state: stopped

    - file:
       path: /var/lib/pgsql/9.4/data/
       state: absent

    - file:
       path:  /var/lib/pgsql/9.4/data/
       owner: postgres
       group: postgres
       mode:  0700
       state: directory

- hosts: "{{ slave }}"
  remote_user: postgres

  tasks:
    - name: execute base backup
      shell: export PGPASSWORD="{{ passwd }}" && /usr/pgsql-9.4/bin/pg_basebackup -h {{ master }} -U replicant -D /var/lib/pgsql/9.4/data -P -v --xlog-method=stream 2>&1

    - name: add new configuration "recovery.conf"
      blockinfile:
        create: yes
        dest: /var/lib/pgsql/9.4/data/recovery.conf
        block: |
          standby_mode = 'on'
          primary_conninfo = 'user=replicant password={{ passwd }} host={{ master }} port=5432 sslmode=prefer'
          recovery_target_timeline = 'latest'

- hosts: "{{ slave }}"
  remote_user: ansible
  become: yes

  tasks:
    - service:
        name: postgresql-9.4
        state: started
...

Invoke failover

    (06.failover_94.yml)

DANGER Will Robinson, DANGER!!

Actually we just shut it down.

Failover and promotion are super easy: just execute a single command against pg2 and you're done. Because pg3 is configured as a cascaded slave, it automatically replicates from the newly promoted master. The secret is in the recovery.conf file, where recovery_target_timeline tells it to always follow the most recent timeline:

    ansible-playbook -i hosts.cfg 06.failover_94.yml
---
- hosts: pg1
  remote_user: ansible
  become: yes

  tasks:
    - service:
        name: postgresql-9.4
        state: stopped

- hosts: pg2
  remote_user: postgres

  tasks:
    - name: promote data cluster pg2
      command:  /usr/pgsql-9.4/bin/pg_ctl -D /var/lib/pgsql/9.4/data/ promote
...

Upgrade servers from 9.4 to 9.6

    (07.pg1_upgrade_94-96.yml)

Upgrading from one version of Postgres to another at the best of times can be tricky. But a correctly configured playbook can make even this a straightforward proposition:

    ansible-playbook -i hosts.cfg 07.pg1_upgrade_94-96.yml

In this example, we upgrade pg1, which was shut down in our failover example, by executing the following steps:

  • Install the Postgres 9.6 binaries onto our three hosts
  • Shut down the 9.4 service and disable the binaries from starting upon server restart
  • Enable the 9.6 binaries in case the machine is required to restart
  • Create an empty data cluster for the upgraded version of Postgres; optionally remove any previously existing one
  • Execute upgrade process using the pg_upgrade utility; notice the use of two different port numbers
  • Update the version 9.6 data cluster with an updated set of configuration, i.e. postgresql.conf, server.conf, and pg_hba.conf
  • Update runtime environment variables when logging into the Unix Postgres account for easier administration
  • Start up our new Postgres version 9.6 service

Note: As there's more than one way to skin a cat, so too are there many ways of writing this playbook; it's up to you.

---
- hosts: all
  remote_user: ansible
  become: yes

  tasks:
    - name: install repo for PostgreSQL 9.6
      yum:
        name: https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-centos96-9.6-3.noarch.rpm
        state: present

    - name: install PostgreSQL version 9.6
      yum:
        name: "{{ item }}"
        state: latest
      with_items:
        - postgresql96-server
        - postgresql96-contrib
        - pg_repack96

    - name: disable init for 9.4
      shell: chkconfig --level 2345 postgresql-9.4 off

    - name: enable init for 9.6
      shell: chkconfig --level 2345 postgresql-9.6 on

- hosts: pg1
  remote_user: ansible
  become: yes

  tasks:
    - service:
        name: postgresql-9.6
        state: stopped

    - file:
        path: /var/lib/pgsql/9.6/data/
        state: absent

    - name: create data cluster
      command: service postgresql-9.6 initdb

- hosts: pg1
  remote_user: postgres

  tasks:
    - name: execute the upgrade from 9.4 to 9.6
      shell: |
        /usr/pgsql-9.6/bin/pg_upgrade \
          -d /var/lib/pgsql/9.4/data \
          -D /var/lib/pgsql/9.6/data \
          -b /usr/pgsql-9.4/bin \
          -B /usr/pgsql-9.6/bin \
          -p 10094 \
          -P 5432

        exit 0

    - name: add new configuration to "postgresql.conf"
      blockinfile:
        dest: /var/lib/pgsql/9.6/data/postgresql.conf
        block: |
          include 'server.conf'

    - name: add new configuration to "server.conf"
      blockinfile:
        create: yes
        dest: /var/lib/pgsql/9.6/data/server.conf
        block: |
          listen_addresses = '*'
          wal_level = hot_standby
          max_wal_senders = 6
          wal_keep_segments = 10
          hot_standby = on

    - name: add new configuration to "pg_hba.conf"
      blockinfile:
        dest: /var/lib/pgsql/9.6/data/pg_hba.conf
        block: |
          host    all             all             0.0.0.0/0                md5
          host    replication     replicant       0.0.0.0/0                md5

    - name: update environment variables in UNIX account postgres
      blockinfile:
        create: yes
        dest: /var/lib/pgsql/.pgsql_profile
        block: |
          export PGHOST=/tmp PAGER=less PGDATA=/var/lib/pgsql/9.6/data

- hosts: pg1
  remote_user: ansible
  become: yes

  tasks:
    - service:
        name: postgresql-9.6
        state: started
...

Configure the slaves for 9.6

    (08.configure_slave_96.yml)

This script is virtually the same as script 05.configure_slave_94.yml. You might even want to parameterize the version and merge the two, eliminating one script from your collection:

    ansible-playbook -i hosts.cfg 08.configure_slave_96.yml --extra-vars "master=pg1 slave=pg2 passwd=mypassword"
    ansible-playbook -i hosts.cfg 08.configure_slave_96.yml --extra-vars "master=pg2 slave=pg3 passwd=mypassword"
---
- hosts: "{{ slave }}"
  remote_user: ansible
  become: yes

  tasks:
    - service:
        name: postgresql-9.4
        state: stopped

    - service:
        name: postgresql-9.6
        state: stopped

    - file:
       path: /var/lib/pgsql/9.6/data/
       state: absent

    - file:
       path:  /var/lib/pgsql/9.6/data/
       owner: postgres
       group: postgres
       mode:  0700
       state: directory

- hosts: "{{ slave }}"
  remote_user: postgres

  tasks:
    - name: execute base backup
      shell: |
        export PGPASSWORD="{{ passwd }}" && \
        /usr/pgsql-9.6/bin/pg_basebackup \
            -h {{ master }} \
            -U replicant \
            -D /var/lib/pgsql/9.6/data \
            -P -v --xlog-method=stream 2>&1

        exit 0

    - name: add new configuration "recovery.conf"
      blockinfile:
        create: yes
        dest: /var/lib/pgsql/9.6/data/recovery.conf
        block: |
          standby_mode = 'on'
          primary_conninfo = 'user=replicant password={{ passwd }} host={{ master }} port=5432 sslmode=prefer'
          recovery_target_timeline = 'latest'

    - name: update environment variables in UNIX account postgres
      blockinfile:
        create: yes
        dest: /var/lib/pgsql/.pgsql_profile
        block: |
          export PGHOST=/tmp PAGER=less PGDATA=/var/lib/pgsql/9.6/data

- hosts: "{{ slave }}"
  remote_user: ansible
  become: yes

  tasks:
    - service:
        name: postgresql-9.6
        state: started
...

Conclusion

Coding successfully against a relational database management system is often seen as an unsexy competency expected of any developer worth their salt. And yet, as I reflect over all the conversations, standups, and meetings I've had over the years, it has often turned out to be the most critical, time-consuming, and magical of all activities. Whole development teams will spend hours debating how data should move between the frontends and backends. Rock star developers earn their spurs by successfully designing and implementing the movement of data between the user interface and its persistent storage. Never has so much depended upon something so mundane as the RDBMS. And never before, in all its long and storied history, has PostgreSQL seen so much adoption and such deep integration into critical infrastructure as it has today, in this age of big data and analytics.

Don't stop here! By all means, follow up on the excellent online documentation and create your own playbooks. A word of advice: you'll go farther and faster by resisting the temptation to skip the fundamentals and reach straight for one of the many Postgres modules available on Ansible Galaxy.

Ansible references

Postgres references

First introduced to hangman on a mini computer playing the William Tell Overture, Robert is an old school hacker. Initially trained on punch cards, and after undergoing several years of much needed therapy overcoming the trauma, Robert discovered Linux and was part of the rush leveraging the Internet. Eventually, he focused his open source activities on PostgreSQL.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.