## Overview
Woodpecker CI plugin to execute Ansible playbooks. This plugin is a fork of drone-plugins/drone-ansible with substantial modifications to the source code.
## Features
- Install required dependencies before the start of a playbook
- Execute Ansible playbooks
## Installing required Python module dependencies
Many Ansible modules require additional Python dependencies to work. Because Ansible runs inside an Alpine-based container, these dependencies must be installed dynamically during playbook execution.

It is important to use `delegate_to: localhost`, as otherwise the pip module installs the dependency on the remote host, where it has no effect.
```yaml
- name: Install required pip dependencies
  delegate_to: localhost
  ansible.builtin.pip:
    name: <name>
    state: present
    extra_args: --break-system-packages
```
Without `--break-system-packages`, pip on Alpine refuses to install packages system-wide, because the system Python is marked as externally managed.
Alternatively, one can use the apk module if the required pip module is available as a `python3-<name>` package, as shown in the sketch below.
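A minimal sketch of the apk-based variant, assuming the dependency is packaged as `python3-<name>` in the Alpine repositories (the module/package name is a placeholder):

```yaml
- name: Install the dependency as an Alpine package instead of via pip
  delegate_to: localhost
  community.general.apk:
    name: python3-<name>
    state: present
```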
## Efficient handling of Ansible dependencies
By default, each step using the plugin installs the required dependencies via `ansible-galaxy install -r requirements.yml`.

Often, one wants to run multiple playbooks in different steps, ideally in parallel. In this case, a single step that installs the requirements for all subsequent steps is useful:
```yaml
steps:
  "Install galaxy requirements":
    image: pad92/ansible-alpine
    commands:
      - ansible-galaxy install -r requirements.yml
```
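To illustrate the parallel layout, here is a hedged sketch of a pipeline in which two playbook steps run concurrently once the shared install step has finished, using Woodpecker's step-level `depends_on`. Step names and playbook paths are illustrative, and the sketch assumes the collections are installed into a path resolved from the workspace (e.g. via `ansible.cfg`):

```yaml
steps:
  "Install galaxy requirements":
    image: pad92/ansible-alpine
    commands:
      - ansible-galaxy install -r requirements.yml
  "Apply playbook A":
    image: woodpeckerci/plugin-ansible
    depends_on: ["Install galaxy requirements"]
    settings:
      playbook: playbooks/a.yml
      inventory: inventory.ini
  "Apply playbook B":
    image: woodpeckerci/plugin-ansible
    depends_on: ["Install galaxy requirements"]
    settings:
      playbook: playbooks/b.yml
      inventory: inventory.ini
```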
In addition, Ansible dependencies can be cached. This avoids having to re-download them for each build, saving bandwidth and time. Given root access to the Woodpecker instance, one can mount a volume into the container and store the dependencies there:
```yaml
steps:
  "Install galaxy requirements":
    image: pad92/ansible-alpine
    volumes:
      - /root/woodpecker-cache/collections:/tmp/collections
    commands:
      - cp -r /tmp/collections $${CI_WORKSPACE}/
      - ansible-galaxy install -r requirements.yml
      - cp -r $${CI_WORKSPACE}/collections /tmp/
```
In the above example, the first command copies the cached dependencies into the workspace directory. After the installation, the dependencies are copied back to the cache directory. Note that this requires creating the cache directory on the host upfront (i.e. `/root/woodpecker-cache`); its location can be adjusted to the user's needs.
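Creating the cache directory is a one-time step on the Woodpecker host; for the path used in the example above:

```sh
# On the Woodpecker host: create the directory backing the volume mount
mkdir -p /root/woodpecker-cache/collections
```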
Mounting the cache directory directly to `$${CI_WORKSPACE}/collections` is not feasible for the following reasons:

- The volume mount conflicts with the volume mount that provides the workspace directory to each container.
- The mount would have to be added to every step, as otherwise the dependencies would be missing in those steps.
## Settings
| Settings Name | Default | Description |
| --- | --- | --- |
| `become-method` | none | privilege escalation method to use |
| `become-user` | none | run operations as this user |
| `become` | `false` | run operations with become |
| `check` | `false` | run in "check mode"/dry-run, do not apply changes |
| `connection` | none | connection type to use |
| `diff` | `false` | show the differences (may print secrets!) |
| `extra-vars` | none | set additional variables via key=value list or map, or load them from YAML/JSON files via `@` prefix |
| `flush-cache` | `false` | clear the fact cache for every host in inventory |
| `force-handlers` | none | run handlers even if a task fails |
| `forks` | `5` | number of parallel processes to use |
| `galaxy-force` | `true` | force overwriting an existing role or collection |
| `galaxy` | none | path to galaxy requirements file |
| `inventory` | none | specify inventory host path |
| `limit` | none | limit selected hosts to an additional pattern |
| `list-hosts` | `false` | output a list of matching hosts |
| `list-tags` | `false` | list all available tags |
| `list-tasks` | `false` | list all tasks that would be executed |
| `module-path` | none | prepend paths to module library |
| `playbook` | none | list of playbooks to apply |
| `private-key` | none | SSH private key to connect to host |
| `requirements` | none | path to Python requirements file to install |
| `scp-extra-args` | none | specify extra arguments to pass to scp only |
| `sftp-extra-args` | none | specify extra arguments to pass to sftp only |
| `skip-tags` | none | skip tasks and playbooks with a matching tag |
| `ssh-common-args` | none | specify common arguments to pass to sftp/scp/ssh |
| `ssh-extra-args` | none | specify extra arguments to pass to ssh only |
| `start-at-task` | none | start the playbook at the task matching this name |
| `syntax-check` | `false` | perform a syntax check on the playbook |
| `tags` | none | only run plays and tasks tagged with these values |
| `timeout` | none | override the connection timeout in seconds |
| `user` | none | connect as this user |
| `vault-id` | none | the vault identity to use |
| `vault-password` | none | vault password |
| `verbose` | `0` | level of verbosity, 0 up to 4 |
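As a small illustration of the `@` prefix form of `extra-vars` described above (the file path is hypothetical):

```yaml
settings:
  extra_vars: "@group_vars/prod.yml"  # load additional variables from a YAML file in the repo
```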
## Examples
```yaml
steps:
  '[CI Agent] ansible (apply)':
    image: woodpeckerci/plugin-ansible
    settings:
      playbook: playbooks/ci/agent.yml
      diff: true
      inventory: environments/prod/inventory.ini
      syntax_check: false
      limit: ci_agent_prod
      become: true
      user: root
      private_key:
        from_secret: id_ed25519_ci
      extra_vars:
        woodpecker_agent_secret:
          from_secret: woodpecker_agent_secret
        woodpecker_agent_secret_baarkerlounger:
          from_secret: woodpecker_agent_secret_baarkerlounger
```
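Combining this with the dependency handling above, a hedged sketch of a step that lets the plugin itself install galaxy and Python requirements before the run, via the `galaxy` and `requirements` settings from the table (step name and file paths are illustrative):

```yaml
steps:
  'site (apply)':
    image: woodpeckerci/plugin-ansible
    settings:
      playbook: site.yml                 # illustrative playbook path
      inventory: environments/prod/inventory.ini
      galaxy: requirements.yml           # galaxy requirements file (see settings table)
      requirements: requirements.txt     # Python requirements file (see settings table)
```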