18. Lifecycle of an application

18.1. Pushing applications from test to production

  • define a version variable for the application
  • put the version into group_vars/test.yml and group_vars/production.yml
# file: group_vars/production.yml
app_version: v1.0

# file: group_vars/test.yml
app_version: v1.1
  • use the version variable in your deploy task
# file: deploy.yml
- hosts: web
  tasks:
    - name: install git
      package:
        name: git
        state: latest

    - name: install app version {{ app_version }}
      git:
        repo: https://github.com/pstauffer/flask-mysql-app.git
        dest: /srv/checkout
        version: '{{ app_version }}'
# deploy on test
ansible-playbook -i test deploy.yml

TASK [install app version v1.1] ************************************************
ok: [web2.pascal.lab]

# deploy on prod
ansible-playbook -i prod deploy.yml

TASK [install app version v1.0] ************************************************
ok: [web1.pascal.lab]

18.2. Rollback

  • just check out a working git commit / branch / tag of your ansible project
  • Run the deploy playbook again
# checkout an older version of the ansible project
git checkout v1.5

# run the deploy playbook
ansible-playbook -i prod deploy.yml

18.3. Scale out

  • simply extend the web group in the inventory
# prod static inventory
[flask_web]
web1.pascal.lab
web2.pascal.lab
  • Run the deploy playbook again
# run the deploy playbook
ansible-playbook -i prod deploy.yml

18.4. Serial

  • by default, Ansible manages all machines in a play in parallel
  • with serial you can control how many machines are updated at once, in batches
  • use case: rolling update of a webserver farm
  • with serial: 3 and 50 servers in the webservers group, each batch of 3 hosts completes the whole play before the next 3 hosts start
# file: play-serial.yml
---
- hosts: all
  serial: 3
#  serial: "50%"
  tasks:
    - name: install a package
      apt:
#        name: tar=1.27.1-2+deb8u1
        name: tar=1.29b-1~bpo8+1
        force: yes
        state: present

# verify the version
ssh web1.lifecycle.lab "dpkg -l | grep tar"

Hint

serial can also be specified as a percentage in Ansible 1.8 and later.

18.4.1. Maximum Failure Percentage

  • by default, Ansible continues executing actions as long as there are hosts in the batch that have not yet failed
  • abort the play when a certain threshold of failures has been reached
  • set a maximum failure percentage on a play with max_fail_percentage
# file: play-serial.yml
---
- hosts: all
  serial: 10
  max_fail_percentage: 30
  tasks:
    - name: install a package
      apt:
        name: tar
        state: present
  • in the above example, if more than 3 of the 10 servers in a batch were to fail, the rest of the play would be aborted

Important

The percentage set has to be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort when 2 of the systems failed, the percentage should be set at 49 rather than 50.
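
A minimal sketch of that corner case, reusing the apt task from above: with batches of 4 hosts, 2 failed hosts are 50%, which exceeds 49 and aborts the play (a threshold of 50 would not trigger):

# file: play-max-fail.yml
---
- hosts: all
  serial: 4
  max_fail_percentage: 49   # 2 of 4 failed hosts = 50% > 49% -> abort
  tasks:
    - name: install a package
      apt:
        name: tar
        state: present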

18.4.2. Batch Size with Ansible 2.2

  • it’s also possible to define a list of batch sizes
# file: play-serial.yml
---
- hosts: all
  serial:
     - 1
     - 1
     - 2 # if there are any hosts left, every following batch would contain 2 hosts until all available hosts are used
  tasks:
    - name: install a package
      apt:
#        name: tar=1.27.1-2+deb8u1
        name: tar=1.29b-1~bpo8+1
        force: yes
        state: present

# verify the version
ssh web1.lifecycle.lab "dpkg -l | grep tar"

Hint

You can also mix absolute values and percentages in the list.
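
A minimal sketch of such a mixed list (the batch sizes are arbitrary examples):

# file: play-serial-mixed.yml
---
- hosts: all
  serial:
    - 1        # first batch: a single canary host
    - "20%"    # second batch: 20% of all hosts
    - "100%"   # final batch: all remaining hosts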

Important

No matter how small the percentage, the number of hosts per pass will always be 1 or greater (for example, serial: "10%" with 5 hosts still yields batches of 1 host).

18.4.3. Delegation

  • perform a task on one host with reference to another host -> use delegate_to
  • the delegated host doesn’t have to be in the inventory
  • use case: loadbalancer management
  • often used in combination with serial
# file: play-delegation.yml
---
- hosts: web2.test.lab

  tasks:
    - name: run a command locally (on the control host)
      command: hostname
      delegate_to: localhost

    - name: run a command on the remote host
      command: hostname

# file: play-loadbalancer.yml
---
- hosts: webservers
  serial: 5

  tasks:
  - name: take out of load balancer pool
    command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
    delegate_to: 127.0.0.1

  - name: update webserver
    yum:
      name: apache2
      state: latest

  - name: add back to load balancer pool
    command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
    delegate_to: 127.0.0.1

18.4.4. Run Once

  • run a task one time and only on one host (by default the first host in the current batch)
  • use case: database schema update
  • can be used in combination with delegate_to
- command: /opt/application/upgrade_db.py
  run_once: true
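
Combined with delegate_to, the single run can be pushed to a specific host, for example the database host used later in this chapter:

- command: /opt/application/upgrade_db.py
  run_once: true
  delegate_to: db.pascal.lab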

18.4.5. Delegated facts

  • can be useful to get facts about another host (for example the IP address of the database host)
  • option 1: run fact gathering against that host, e.g. with a playbook that has no tasks
  • option 2: delegate the facts with delegate_to and delegate_facts
# option 1
- hosts: db.pascal.lab
  tasks: []

# option 2
- name: get database facts
  setup:
  delegate_to: db.pascal.lab
  delegate_facts: True

- name: debug db facts
  debug:
    msg: "{{ hostvars['db.pascal.lab'] }}"

18.4.6. Aborting a play

  • abort the whole play for all hosts if a task fails on any host
  • will mark all hosts as failed if any of them fails
any_errors_fatal: true
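
A minimal sketch showing where the keyword goes (at play level):

# file: play-abort.yml
---
- hosts: all
  any_errors_fatal: true
  tasks:
    - name: a failure here aborts the play for all hosts
      command: /bin/false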

18.4.7. Pre_Tasks / Post_Tasks

  • defined in the playbook with pre_tasks / post_tasks
  • are not the main tasks
  • set up the environment before the main tasks run and clean up afterwards
  • pre_tasks example -> deactivate monitoring or remove the host from a loadbalancer group
  • often used in combination with delegate_to
# file: play-pre-post.yml
---
- hosts: web
  pre_tasks:
    - shell: echo 'Pre Task for flask_app'

  roles:
    - flask_app

  post_tasks:
    - shell: echo 'Post Task for flask_app'

18.4.8. Use-Case

  • set monitoring maintenance mode
  • remove the server / app from the loadbalancer configuration
  • stop the service
  • wait for the service stop
  • checkout the new code / change something
  • deploy the new app
  • start the service
  • wait for the service start
  • run some basic tests (wait_for)
  • add the server / app to the loadbalancer
  • check the logfiles (wait_for)
  • remove the maintenance mode
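
A minimal sketch of a rolling-update playbook covering these steps, combining serial, delegate_to, pre_tasks/post_tasks and wait_for from this chapter. The maintenance helper script, the service name flask_app, port 5000 and the log path are assumptions for illustration; the pool scripts and the git checkout are taken from the examples above:

# file: play-lifecycle.yml
---
- hosts: web
  serial: 1

  pre_tasks:
    - name: set monitoring maintenance mode     # hypothetical helper script
      command: /usr/local/bin/set_maintenance {{ inventory_hostname }}
      delegate_to: localhost

    - name: take out of load balancer pool
      command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
      delegate_to: localhost

    - name: stop the service
      service:
        name: flask_app
        state: stopped

    - name: wait for the service stop
      wait_for:
        port: 5000
        state: stopped

  tasks:
    - name: deploy the new app
      git:
        repo: https://github.com/pstauffer/flask-mysql-app.git
        dest: /srv/checkout
        version: '{{ app_version }}'

    - name: start the service
      service:
        name: flask_app
        state: started

    - name: wait for the service start (basic test)
      wait_for:
        port: 5000
        state: started

  post_tasks:
    - name: check the logfile for a successful start
      wait_for:
        path: /var/log/flask_app.log            # hypothetical log path
        search_regex: 'Running on'

    - name: add back to load balancer pool
      command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
      delegate_to: localhost

    - name: remove monitoring maintenance mode  # hypothetical helper script
      command: /usr/local/bin/unset_maintenance {{ inventory_hostname }}
      delegate_to: localhost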