Ansible Reference
Commands, playbook patterns, inventory management, roles, vault, and debugging — the stuff you look up every time.
ansible-playbook — running playbooks
# Basic run
ansible-playbook site.yml -i inventory/prod
# Limit to specific hosts or groups
ansible-playbook site.yml --limit webservers
ansible-playbook site.yml --limit "db1,db2"
ansible-playbook site.yml --limit "*.example.com"
# Dry run — check what would change
ansible-playbook site.yml --check
ansible-playbook site.yml --check --diff # show exact file diffs
# Extra variables
ansible-playbook site.yml -e "env=prod version=1.4"
ansible-playbook site.yml -e @vars/prod.yml
# Tags — run only specific tasks
ansible-playbook site.yml --tags "deploy,nginx"
ansible-playbook site.yml --skip-tags "slow,tests"
# Start at a specific task (resume after failure)
ansible-playbook site.yml --start-at-task "Restart nginx"
# Verbose output
ansible-playbook site.yml -v # task results
ansible-playbook site.yml -vvv # connection debug
ansible-playbook site.yml -vvvv # SSH debug
# Step through tasks interactively
ansible-playbook site.yml --step
# Vault
ansible-playbook site.yml --ask-vault-pass
ansible-playbook site.yml --vault-password-file ~/.vault_pass
# Forks — number of hosts acted on in parallel (default 5)
ansible-playbook site.yml -f 10
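These flags compose. A cautious canary-style rollout (hypothetical playbook and host names) might look like:

```shell
# Preview against one canary host, apply there, then roll to the rest
ansible-playbook site.yml --limit web1.example.com --check --diff
ansible-playbook site.yml --limit web1.example.com -e "version=1.4"
ansible-playbook site.yml --limit 'webservers:!web1.example.com' -e "version=1.4"
```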
ansible — ad-hoc commands
# Ping all hosts
ansible all -i inventory -m ping
# Run a shell command (use -m shell for pipes/redirects)
ansible webservers -m shell -a "uptime && free -h"
ansible webservers -a "uptime" # command module (no shell features)
# Copy a file
ansible webservers -m copy -a "src=/tmp/app.conf dest=/etc/app.conf mode=0644"
# Fetch files from remote hosts
ansible webservers -m fetch -a "src=/var/log/app.log dest=/tmp/logs/ flat=yes"
# Install a package
ansible webservers -m apt -a "name=nginx state=present" --become
ansible webservers -m dnf -a "name=nginx state=present" --become
ansible webservers -m yum -a "name=nginx state=latest" --become
# Manage a service
ansible webservers -m service -a "name=nginx state=restarted" --become
ansible webservers -m service -a "name=nginx state=stopped" --become
# Gather facts from a host
ansible web1 -m setup
ansible web1 -m setup -a "filter=ansible_distribution*"
# Run as another user
ansible webservers -a "whoami" --become --become-user=postgres
# Use a specific SSH key
ansible all -m ping --private-key=~/.ssh/deploy_key
Ad-hoc: -m command (default) does not invoke a shell — no pipes, no redirects, no env vars. Use -m shell when you need those.
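The difference in practice (hypothetical host; the pipe below is passed as a literal argument by the command module):

```shell
ansible web1 -m command -a "cat /etc/passwd | wc -l"   # breaks: "|", "wc", "-l" become arguments to cat
ansible web1 -m shell   -a "cat /etc/passwd | wc -l"   # works: the whole string runs through /bin/sh
```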
Inventory — hosts, groups, and variables
# inventory/hosts.yml (YAML — preferred over INI)
all:
  vars:
    ansible_user: deploy
  children:
    webservers:
      hosts:
        web1.example.com:
          http_port: 80
        web2.example.com:
          http_port: 8080
    databases:
      hosts:
        db1.example.com:
          ansible_port: 2222        # SSH port, not the service port
          ansible_host: 10.0.1.10   # override DNS with actual IP
    production:
      children:
        webservers:
        databases:
# INI format (still widely used)
[webservers]
web1.example.com
web2.example.com ansible_user=ubuntu
[databases]
db1.example.com ansible_port=2222
[production:children]
webservers
databases
[webservers:vars]
ansible_user=deploy
# Quick inline inventory (testing)
ansible all -i "web1.example.com,web2.example.com" -m ping
ansible all -i "localhost," -m ping -c local # comma required for single host
# Inspect inventory
ansible-inventory -i inventory/ --list
ansible-inventory -i inventory/ --graph
# Key connection variables
# ansible_host — actual IP/hostname (overrides inventory name)
# ansible_user — SSH user
# ansible_port — SSH port (default 22)
# ansible_ssh_private_key_file
# ansible_become — escalate privileges
# ansible_connection — ssh | local | docker | winrm
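Variables also load automatically from group_vars/ and host_vars/ directories next to the inventory (or the playbook); file names must match group and host names. A typical layout:

```
inventory/
  hosts.yml
  group_vars/
    all.yml                 # applies to every host
    webservers.yml          # applies to the webservers group
  host_vars/
    web1.example.com.yml    # applies to this host only
```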
Playbook structure and handlers
---
- name: Deploy web application
  hosts: webservers
  become: yes
  gather_facts: yes          # set false to skip for speed when not needed
  vars:
    app_version: "1.4.2"
    app_dir: /opt/myapp
  vars_files:
    - vars/secrets.yml       # ansible-vault encrypted
  pre_tasks:
    - name: Bootstrap Python (before anything else)
      raw: apt-get install -y python3
      changed_when: false
  roles:
    - common
    - nginx
    - { role: app, tags: [app, deploy] }
  tasks:
    - name: Create app directory
      file:
        path: "{{ app_dir }}"
        state: directory
        owner: www-data
        mode: "0755"
    - name: Deploy config from template
      template:
        src: app.conf.j2
        dest: "{{ app_dir }}/app.conf"
      notify: Restart app    # triggers handler on change
    - name: Ensure service is started and enabled
      service:
        name: myapp
        state: started
        enabled: yes
  handlers:
    - name: Restart app
      service:
        name: myapp
        state: restarted
Handlers run once at the end of the play, regardless of how many tasks triggered them. Use meta: flush_handlers to trigger them mid-play.
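A minimal sketch of flushing mid-play, e.g. when later tasks depend on the restarted service (task names and paths are illustrative):

```yaml
- name: Update config
  template:
    src: app.conf.j2
    dest: /etc/app.conf
  notify: Restart app

- name: Run notified handlers now
  meta: flush_handlers       # "Restart app" fires here, not at end of play
```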
Variables, facts, and Jinja2 templates
# Variable precedence (lowest → highest)
# role defaults → inventory group_vars → inventory host_vars
# → playbook vars → include_vars → registered vars → extra vars (-e)
# Register task output as a variable
- name: Get current version
  command: /opt/app/bin/version
  register: app_version_out
  changed_when: false

- debug:
    msg: "Running {{ app_version_out.stdout }}"

- fail:
    msg: "Version check failed"
  when: app_version_out.rc != 0
# Commonly used facts (gathered by default)
ansible_os_family # Debian | RedHat | Darwin
ansible_distribution # Ubuntu | CentOS | MacOSX
ansible_distribution_version # 22.04
ansible_hostname
ansible_default_ipv4.address
ansible_memtotal_mb
ansible_processor_vcpus
# Jinja2 filters
{{ my_var | default("fallback") }}
{{ my_var | default(omit) }} # omit the param entirely if undefined
{{ my_list | join(", ") }}
{{ my_string | upper | replace("X","Y") }}
{{ my_var | bool }}
{{ my_var | int }}
{{ my_dict | dict2items }} # convert dict to [{key:, value:}] list
{{ my_list | selectattr("active") | list }}
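Filter chains can be sanity-checked outside Ansible with plain Jinja2 (note Ansible-specific filters like dict2items are not available there — only Jinja2 built-ins such as join and default):

```python
from jinja2 import Environment

env = Environment()

# join() and default() behave the same in Ansible templates
tmpl = env.from_string('{{ my_list | join(", ") }} / {{ missing | default("fallback") }}')
result = tmpl.render(my_list=["a", "b"])
print(result)  # a, b / fallback
```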
# Template file (templates/nginx.conf.j2)
# Note: upstream blocks live outside server {} in nginx config
upstream app_backend {
{% for backend in groups['webservers'] %}
    server {{ backend }}:8080;
{% endfor %}
}
server {
    listen {{ http_port | default(80) }};
    server_name {{ ansible_hostname }};
{% if ssl_enabled | default(false) %}
    ssl_certificate /etc/ssl/{{ domain }}.crt;
{% endif %}
}
Roles — structure and Galaxy
# Create role skeleton
ansible-galaxy role init my_role
# Directory layout
roles/my_role/
  tasks/main.yml      # entry point
  handlers/main.yml
  templates/          # .j2 Jinja2 templates
  files/              # static files for copy module
  vars/main.yml       # high-priority, not easily overridden
  defaults/main.yml   # low-priority defaults (users override these)
  meta/main.yml       # dependencies and Galaxy metadata
# tasks/main.yml — static vs dynamic includes
- import_tasks: install.yml # static: parsed upfront, tags work
- include_tasks: "{{ env }}.yml" # dynamic: resolved at runtime, can use vars
# meta/main.yml — role dependencies
dependencies:
  - role: common
  - role: python
    vars:
      python_version: "3.11"
# requirements.yml
roles:
  - name: geerlingguy.docker
    version: "6.1.0"
  - src: https://github.com/myorg/myrole.git
    scm: git
    version: main
# Install from Galaxy
ansible-galaxy role install geerlingguy.docker
ansible-galaxy role install -r requirements.yml
ansible-galaxy collection install community.general
import vs include: import_tasks is static (tags, --list-tasks work). include_tasks is dynamic (filename can use vars, but tags don’t propagate automatically).
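To get tags onto dynamically included tasks, `apply` pushes them down — and the include itself must carry the tag too, or it is skipped before `apply` can take effect. A sketch (file name illustrative):

```yaml
- name: Include setup tasks with tags applied
  include_tasks:
    file: setup.yml
    apply:
      tags: [setup]
  tags: [setup]
```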
Loops, conditionals, and error handling
# Loops
- name: Create users
  user:
    name: "{{ item.name }}"
    groups: "{{ item.groups }}"
  loop:
    - { name: alice, groups: sudo }
    - { name: bob, groups: www-data }

- name: Install packages
  apt:
    name: "{{ item }}"
    state: present
  loop: "{{ packages }}"
  loop_control:
    label: "{{ item }}"   # cleaner output (show name, not a full dict)
# Conditionals
- name: Debian/Ubuntu only
  apt:
    name: apache2
  when: ansible_os_family == "Debian"

- name: Only in production with explicit flag
  command: /opt/deploy.sh
  when:
    - env == "prod"
    - not skip_deploy | default(false)
# Error handling
- name: Check health endpoint
  uri:
    url: http://localhost/health
    status_code: 200
  ignore_errors: yes
  register: health

- fail:
    msg: "Service unhealthy: {{ health.msg }}"
  when: health.failed and not allow_failure | default(false)
# Block / rescue / always (try / catch / finally)
- block:
    - name: Deploy
      command: /opt/deploy.sh
    - name: Smoke test
      uri:
        url: http://localhost/health
        status_code: 200
  rescue:
    - name: Rollback
      command: /opt/rollback.sh
    - name: Alert team
      uri:
        url: "{{ slack_webhook }}"
        method: POST
        body_format: json
        body: { text: "Deploy failed on {{ inventory_hostname }}" }
  always:
    - name: Cleanup temp files
      file:
        path: /tmp/deploy_work
        state: absent
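For transient failures, retrying the task is often simpler than rescuing it — a sketch using until/retries (URL illustrative):

```yaml
- name: Wait for the app to become healthy
  uri:
    url: http://localhost/health
    status_code: 200
  register: health
  until: health.status == 200
  retries: 10    # max wait ≈ retries × delay seconds
  delay: 5
```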
ansible-vault — managing secrets
# Encrypt / decrypt a file
ansible-vault encrypt vars/secrets.yml
ansible-vault decrypt vars/secrets.yml # in-place, careful
ansible-vault edit vars/secrets.yml # safer: edit without decrypting to disk
ansible-vault view vars/secrets.yml
# Encrypt a single string (embed inline in playbook)
ansible-vault encrypt_string "s3cr3tP@ss" --name "db_password"
# Paste the !vault | $ANSIBLE_VAULT... block directly into your vars
# Multiple vault IDs — different passwords per environment
ansible-vault encrypt --vault-id prod@prompt vars/prod.yml
ansible-vault encrypt --vault-id dev@~/.vault-dev vars/dev.yml
ansible-playbook site.yml --vault-id prod@prompt --vault-id dev@~/.vault-dev
# Rekey (change vault password)
ansible-vault rekey vars/secrets.yml
# Store password in file
echo "mypassword" > ~/.vault_pass && chmod 600 ~/.vault_pass
# ansible.cfg: vault_password_file = ~/.vault_pass
Commit encrypted vault files to git. Never commit plaintext secrets. Use per-environment vault IDs so dev and prod secrets use different passwords.
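For reference, the inline block produced by encrypt_string looks like this in a vars file (ciphertext elided):

```yaml
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...
```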
ansible.cfg — key settings
[defaults]
inventory = inventory/
remote_user = deploy
private_key_file = ~/.ssh/id_ed25519
host_key_checking = False # needed for ephemeral cloud hosts
retry_files_enabled = False # suppress annoying .retry files
stdout_callback = yaml # clean output (yaml|debug|minimal)
forks = 20 # parallel (default 5)
timeout = 30
gathering = smart # cache facts: always|smart|explicit
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible-facts
fact_caching_timeout = 3600
[privilege_escalation]
become = True
become_method = sudo
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
pipelining = True # faster SSH (requires that requiretty is not enabled in the remote sudoers)
Config file search order: ANSIBLE_CONFIG env var → ./ansible.cfg → ~/.ansible.cfg → /etc/ansible/ansible.cfg
Debugging and troubleshooting
# Print a variable
- debug:
    var: my_variable

- debug:
    msg: "Deploying {{ app_version }} to {{ inventory_hostname }}"
    verbosity: 1   # only show with -v flag
# Dry run with diff
ansible-playbook site.yml --check --diff
# List tasks / hosts without running
ansible-playbook site.yml --list-tasks
ansible-playbook site.yml --list-hosts
ansible-playbook site.yml --list-tags
# Syntax check and lint
ansible-playbook site.yml --syntax-check
ansible-lint site.yml
# Check connectivity
ansible all -m ping -i inventory/
# Show all facts for a host
ansible web1 -m setup | less
ansible web1 -m setup -a "filter=ansible_network_interfaces"
# Keep temp files on remote for debugging
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook site.yml -vvv
# Drop into the Ansible task debugger on failure
ANSIBLE_ENABLE_TASK_DEBUGGER=True ansible-playbook site.yml
# or set per play/task: debugger: on_failed
# Useful environment variables
ANSIBLE_STDOUT_CALLBACK=debug # very verbose task output
ANSIBLE_FORCE_COLOR=1 # force colour in CI
ANSIBLE_HOST_KEY_CHECKING=False # skip host key check (CI)
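Once inside the task debugger, the most useful commands (variable and argument names below are illustrative):

```
p task.args                   # print the failing task's arguments
p task_vars['my_var']         # print a variable
task.args['key'] = 'fixed'    # patch an argument in place
redo                          # rerun the task with the patched args
continue / quit
```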