Forked from finallycoffee/base
Compare commits: main...transcaffe (1 commit)

Commit 78e4d0a50e
README.md

@ -13,9 +13,6 @@ and configuring basic system utilities like gnupg, ssh etc
 - [`gnupg`](roles/gnupg/README.md): configures gnupg on the target system
-- [`lego`](roles/lego/README.md): runs [lego (LetsEncrypt Go)](https://github.com/go-acme/lego),
-  an ACME client written in go, using systemd (timers). Multi-instance capable.
-
 - [`mariadb`](roles/mariadb/README.md): runs [MariaDB Server](https://mariadb.org/), one of the world's most popular open source relational databases
 - [`minio`](roles/minio/README.md): Deploy [min.io](https://min.io), an
@ -27,9 +24,6 @@ and configuring basic system utilities like gnupg, ssh etc
 - [`restic`](roles/restic/README.md): Manage backups using restic
   and persist them to a configurable backend.
-
-- [`powerdns_tsig_key`](roles/powerdns_tsig_key/README.md): Simple ansible role
-  for generating TSIG keys in PowerDNS.

 ## License

 [CNPLv7+](LICENSE.md): Cooperative Nonviolent Public License
galaxy.yml (14 changes)

@ -1,22 +1,12 @@
 namespace: finallycoffee
 name: base
-version: 0.1.2
+version: 0.0.2
 readme: README.md
 authors:
   - transcaffeine <transcaffeine@finally.coffee>
 description: Roles for base services which are common dependencies of other services, like databases
-dependencies:
-  "community.docker": "^3.0.0"
 license_file: LICENSE.md
 build_ignore:
   - '*.tar.gz'
 repository: https://git.finally.coffee/finallycoffee/base
-issues: https://codeberg.org/finallycoffee/ansible-collection-base/issues
-tags:
-  - docker
-  - elastic
-  - lego
-  - mariadb
-  - minio
-  - nginx
-  - restic
+issues: https://git.finally.coffee/finallycoffee/base/issues
@ -1,3 +1,3 @@
 ---
-requires_ansible: ">=2.15"
+requires_ansible: ">=2.12"
@ -1,33 +0,0 @@
-# `finallycoffee.base.dns` ansible role
-
-Simple role wrapping the
-[`famedly.dns.update`](https://github.com/famedly/ansible-collection-dns/blob/main/plugins/modules/update.py)
-ansible module.
-
-## Usage
-
-### Example playbook
-
-```yaml
-- hosts: "{{ target_hosts }}"
-  roles:
-    - role: finallycoffee.base.dns
-      vars:
-        dns_server: "dns.example.org"
-        dns_zone: "zone.example.org"
-        dns_records: "{{ dns_records }}"
-        dns_record_state: exact
-        dns_tsig_name: "mykeyname"
-        dns_tsig_algo: "hmac-sha256"
-        dns_tsig_key: "mykeycontent"
-  vars:
-    dns_records:
-      - type: A
-        name: gitea
-        content: "127.0.0.1"
-      - type: AAAA
-        name: gitea
-        content: "fe80::1"
-      - type: CNAME
-        name: "_acme_challenge.gitea"
-        content: "delegated-cname.challenge.example.org"
-```
@ -1,2 +0,0 @@
----
-dns_record_state: present
@ -1,11 +0,0 @@
----
-
-- name: Ensure DNS records in '{{ dns_zone }}' are up to date
-  famedly.dns.update:
-    primary_master: "{{ dns_server }}"
-    zone: "{{ dns_zone }}"
-    tsig_name: "{{ dns_tsig_name }}"
-    tsig_algo: "{{ dns_tsig_algo }}"
-    tsig_key: "{{ dns_tsig_key }}"
-    rr_set: "{{ dns_records }}"
-    state: "{{ dns_record_state }}"
@ -1,46 +0,0 @@
-# `finallycoffee.base.lego` ansible role
-
-This role can be used to retrieve ACME certificates on the target host. It uses `lego` for that and, with systemd template units, provides an easy way to configure and monitor the status of each certificate.
-
-## Requirements
-
-- `systemd`
-- write access to /tmp to unpack the lego release tarball during installation
-- write access to /opt/lego (or whatever `lego_base_path` is set to) for configuration and certificate data
-- `become` privileges of the `ansible_user` on the target
-
-## Usage
-
-### Required configuration
-
-- `lego_instance` - used to allow multiple lego jobs to run with systemd template units. Recommended to be set to the CN / first SAN of the certificate.
-- `lego_cert_domains` - list of FQDNs to request a certificate for
-- `lego_acme_account_email` - when using Let's Encrypt, a contact email is mandatory
-
-### Proxies / Registries
-
-The role ensures `lego` is downloaded from the GitHub release page. If you are behind a proxy or use a registry like Nexus3, set `lego_release_archive_server`.
-
-### ACME server
-
-By default, the Let's Encrypt staging ACME server is configured. Set `lego_acme_server_url` from `lego_letsencrypt_server_urls.{qa,prod}` or configure your own ACME v2 server directly.
-
-### Certificate
-
-To choose which domains to request a certificate for, set them as a list of SANs in `lego_cert_domains`. The default key type is EC256 and can be overridden using `lego_cert_key_type`.
-
-Set the type of challenge in `lego_acme_challenge_type` (to either `http` or `dns`), and `lego_acme_challenge_provider` to, for example, `rfc2136` for DNS challenges using the DNSUPDATE mechanism. If your challenge needs additional data, set it in `lego_command_config` as a dictionary analogous to `lego_base_command_config` (see [defaults](defaults/main.yml)).
-
-## Trivia
-
-### Architecture
-
-By default, the lego distribution for `linux` on `amd64` is downloaded. If your target needs a different architecture or target OS, adjust `lego_architecture` and `lego_os`, cross-checking with the [lego GitHub release page](https://github.com/go-acme/lego/releases/tag/v4.17.4) for upstream availability.
-
-### User management
-
-The role will attempt to create a user and group for each separate lego instance for data isolation (i.e. to avoid leaking a TSIG key from one lego instance to other services). The user and group are of the form `acme-{{ lego_instance }}`. Beware that changing this in `lego_cert_{user,group}` also requires `lego_systemd_{user,group}` to be adjusted!
-
-### Binding to ports < 1024 (HTTP-01 challenge)
-
-Set `lego_binary_allow_net_bind_service: true` to allow the lego binary to bind to ports in the 'privileged' (< 1024) port range.
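A minimal sketch of invoking the role described above, assuming only the variables its README documents; the hostnames and e-mail address are placeholders:

```yaml
- hosts: "{{ target_hosts }}"
  become: true
  roles:
    - role: finallycoffee.base.lego
      vars:
        lego_instance: "www.example.org"
        lego_cert_domains:
          - "www.example.org"
        lego_acme_account_email: "acme@example.org"
        # switch from the default staging endpoint once issuance works
        lego_acme_server_url: "{{ lego_letsencrypt_server_urls.prod }}"
```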
@ -1,71 +0,0 @@
----
-lego_user: "lego"
-lego_version: "4.18.0"
-lego_instance: default
-lego_base_path: "/opt/lego"
-lego_cert_user: "acme-{{ lego_instance }}"
-lego_cert_group: "{{ lego_cert_user }}"
-lego_cert_mode: "0640" # rw-r-----
-lego_systemd_user: "acme-%i"
-lego_systemd_group: "{{ lego_systemd_user }}"
-lego_instance_base_path: "{{ lego_base_path }}/instances"
-lego_instance_path: "{{ lego_instance_base_path }}/{{ lego_instance }}"
-
-lego_cert_domains: []
-lego_cert_key_type: ec256
-lego_cert_days_to_renew: 30
-lego_acme_account_email: ~
-lego_acme_challenge_type: http
-lego_acme_challenge_provider: ~
-lego_letsencrypt_server_urls:
-  qa: "https://acme-staging-v02.api.letsencrypt.org/directory"
-  prod: "https://acme-v02.api.letsencrypt.org/directory"
-lego_acme_server_url: "{{ lego_letsencrypt_server_urls.qa }}"
-
-lego_base_environment:
-  LEGO_CERT_USER: "{{ lego_cert_user }}"
-  LEGO_CERT_GROUP: "{{ lego_cert_group }}"
-  LEGO_CERT_MODE: "{{ lego_cert_mode }}"
-  LEGO_CERT_STORE_PATH: "{{ lego_instance_path }}"
-  LEGO_CERT_DAYS_TO_RENEW: "{{ lego_cert_days_to_renew }}"
-  LEGO_KEY_TYPE: "{{ lego_cert_key_type }}"
-  LEGO_ACME_CHALLENGE_TYPE: "{{ lego_acme_challenge_type }}"
-  LEGO_ACME_SERVER: "{{ lego_acme_server_url }}"
-  LEGO_COMMAND_ARGS: "{{ lego_command_args }}"
-
-lego_base_command_config:
-  server: "{{ lego_acme_server_url }}"
-  accept_tos: true
-  email: "{{ lego_acme_account_email }}"
-  path: "{{ lego_instance_path }}"
-  key_type: "{{ lego_cert_key_type }}"
-
-lego_acme_challenge_config: >-
-  {{ {lego_acme_challenge_type: lego_acme_challenge_provider} }}
-
-lego_systemd_unit_path: "/etc/systemd/system"
-lego_systemd_template_unit_name: "lego@.service"
-lego_systemd_template_unit_file: "{{ lego_systemd_template_unit_name }}.j2"
-lego_systemd_service_name: "lego@{{ lego_instance }}.service"
-lego_systemd_environment: >-
-  {{ lego_base_environment | combine(lego_environment | default({})) }}
-lego_full_command_config: >-
-  {{ lego_base_command_config
-     | combine(lego_acme_challenge_config)
-     | combine(lego_command_config | default({})) }}
-
-lego_systemd_timer_name: "lego-{{ lego_instance }}.timer"
-lego_systemd_timer_template: lego.timer.j2
-lego_systemd_timer_calendar: "*-*-* *:00/15:00"
-
-lego_architecture: "{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}"
-lego_os: "linux"
-lego_binary_allow_net_bind_service: false
-
-lego_release_archive_server: "https://github.com"
-lego_release_archive_filename: >-
-  lego_v{{ lego_version }}_{{ lego_os }}_{{ lego_architecture }}.tar.gz
-lego_release_archive_url: >-
-  {{ lego_release_archive_server }}/go-acme/lego/releases/download/v{{ lego_version }}/{{ lego_release_archive_filename }}
-lego_release_archive_file_path: "/tmp/{{ lego_release_archive_filename }}"
-lego_release_archive_path: "/tmp/lego_v{{ lego_version }}_{{ lego_os }}_{{ lego_architecture }}"
@ -1,22 +0,0 @@
-#!/usr/bin/env bash
-
-LEGO_BINARY=$(/usr/bin/env which lego)
-
-if [[ -n "$LEGO_HTTP_FALLBACK_PORT" ]]; then
-  if nc -z 127.0.0.1 "$LEGO_HTTP_PORT"; then
-    LEGO_HTTP_PORT=$LEGO_HTTP_FALLBACK_PORT
-  fi
-fi
-
-LEGO_COMMAND_ARGS_EXPANDED=$(bash -c "echo $LEGO_COMMAND_ARGS") # This is a bit icky
-
-FILES_IN_DIR=$(find "$LEGO_CERT_STORE_PATH/certificates" | wc -l)
-if [[ $FILES_IN_DIR -gt 2 ]]; then
-  $LEGO_BINARY $LEGO_COMMAND_ARGS_EXPANDED renew --days="$LEGO_CERT_DAYS_TO_RENEW"
-else
-  $LEGO_BINARY $LEGO_COMMAND_ARGS_EXPANDED run
-fi
-
-ls "$LEGO_CERT_STORE_PATH/certificates" | xargs -I{} -n 1 chmod "$LEGO_CERT_MODE" "$LEGO_CERT_STORE_PATH/certificates/{}"
-ls "$LEGO_CERT_STORE_PATH/certificates" | xargs -I{} -n 1 chown "$LEGO_CERT_USER":"$LEGO_CERT_GROUP" "$LEGO_CERT_STORE_PATH/certificates/{}"
@ -1,5 +0,0 @@
----
-- name: Ensure systemd daemon is reloaded
-  ansible.builtin.systemd:
-    daemon_reload: true
-  listen: systemd_reload
@ -1,157 +0,0 @@
----
-- name: Ensure lego cert group is created
-  ansible.builtin.group:
-    name: "{{ lego_cert_group }}"
-    state: present
-    system: true
-
-- name: Ensure lego cert user is created
-  ansible.builtin.user:
-    name: "{{ lego_cert_user }}"
-    state: present
-    system: true
-    create_home: false
-    groups:
-      - "{{ lego_cert_group }}"
-    append: true
-
-- name: Ensure lego user is created
-  ansible.builtin.user:
-    name: "{{ lego_user }}"
-    state: present
-    system: true
-    create_home: false
-    groups:
-      - "{{ lego_cert_group }}"
-    append: true
-
-- name: Ensure lego is installed
-  block:
-    - name: Check if lego is present
-      ansible.builtin.command:
-        cmd: which lego
-      changed_when: false
-      failed_when: false
-      register: lego_binary_info
-
-    - name: Download lego from source
-      ansible.builtin.get_url:
-        url: "{{ lego_release_archive_url }}"
-        url_username: "{{ lego_release_archive_url_username | default(omit) }}"
-        url_password: "{{ lego_release_archive_url_password | default(omit) }}"
-        dest: "{{ lego_release_archive_file_path }}"
-      when: lego_binary_info.rc != 0
-
-    - name: Create folder to uncompress into
-      ansible.builtin.file:
-        dest: "{{ lego_release_archive_path }}"
-        state: directory
-      when: lego_binary_info.rc != 0
-
-    - name: Uncompress lego source archive
-      ansible.builtin.unarchive:
-        src: "{{ lego_release_archive_file_path }}"
-        dest: "{{ lego_release_archive_path }}"
-        remote_src: true
-      when: lego_binary_info.rc != 0
-
-    - name: Ensure lego binary is present in PATH
-      ansible.builtin.copy:
-        src: "{{ lego_release_archive_path }}/lego"
-        dest: "/usr/local/bin/lego"
-        mode: "u+rwx,g+rx,o+rx"
-        remote_src: true
-      when: lego_binary_info.rc != 0
-
-    - name: Ensure lego is allowed to bind to ports < 1024
-      community.general.capabilities:
-        path: "/usr/local/bin/lego"
-        capability: "cap_net_bind_service+ep"
-        state: present
-      when: lego_binary_allow_net_bind_service
-
-    - name: Ensure intermediate data is gone
-      ansible.builtin.file:
-        path: "{{ item }}"
-        state: absent
-      loop:
-        - "{{ lego_release_archive_path }}"
-        - "{{ lego_release_archive_file_path }}"
-      when: lego_binary_info.rc != 0
-
-- name: Ensure lego base path exists
-  ansible.builtin.file:
-    path: "{{ lego_base_path }}"
-    state: directory
-    mode: "0755"
-
-- name: Ensure template unit file is present
-  ansible.builtin.template:
-    src: "{{ lego_systemd_template_unit_file }}"
-    dest: "{{ lego_systemd_unit_path }}/{{ lego_systemd_template_unit_name }}"
-  notify:
-    - systemd_reload
-
-- name: Ensure env file is templated
-  ansible.builtin.copy:
-    content: |+
-      {% for entry in lego_systemd_environment | dict2items %}
-      {{ entry.key }}={{ entry.value }}
-      {% endfor %}
-    dest: "{{ lego_base_path }}/{{ lego_instance }}.conf"
-
-- name: Ensure timer unit is templated
-  ansible.builtin.template:
-    src: "{{ lego_systemd_timer_template }}"
-    dest: "{{ lego_systemd_unit_path }}/{{ lego_systemd_timer_name }}"
-  notify:
-    - systemd_reload
-
-- name: Ensure handling script is templated
-  ansible.builtin.copy:
-    src: "lego_run.sh"
-    dest: "{{ lego_base_path }}/run.sh"
-    mode: "0755"
-
-- name: Ensure per-instance base path is created
-  ansible.builtin.file:
-    path: "{{ lego_instance_path }}"
-    state: directory
-    owner: "{{ lego_cert_user }}"
-    group: "{{ lego_cert_group }}"
-    mode: "0755"
-
-- name: Ensure per-instance sub folders are created with correct permissions
-  ansible.builtin.file:
-    path: "{{ item.path }}"
-    state: directory
-    owner: "{{ item.owner | default(lego_cert_user) }}"
-    group: "{{ item.group | default(lego_cert_group) }}"
-    mode: "{{ item.mode }}"
-  loop:
-    - path: "{{ lego_instance_path }}/secrets"
-      mode: "0750"
-    - path: "{{ lego_instance_path }}/accounts"
-      mode: "0770"
-    - path: "{{ lego_instance_path }}/certificates"
-      mode: "0775"
-  loop_control:
-    label: "{{ item.path }}"
-
-- name: Ensure systemd daemon is reloaded
-  meta: flush_handlers
-
-- name: Ensure systemd timer is enabled
-  ansible.builtin.systemd_service:
-    name: "{{ lego_systemd_timer_name }}"
-    enabled: true
-
-- name: Ensure systemd timer is started
-  ansible.builtin.systemd_service:
-    name: "{{ lego_systemd_timer_name }}"
-    state: "started"
-
-- name: Ensure systemd service is started once to obtain the certificate
-  ansible.builtin.systemd_service:
-    name: "{{ lego_systemd_service_name }}"
-    state: "started"
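The inline `content` template in the env-file task above simply renders one `KEY=value` line per entry of the merged environment dict. A standalone sketch of that rendering (a hypothetical helper, not part of the role):

```python
# Render a systemd EnvironmentFile body from a dict, mirroring the Jinja2
# dict2items loop in the "Ensure env file is templated" task above:
# one KEY=value pair per line.
def render_env_file(env: dict) -> str:
    return "".join(f"{key}={value}\n" for key, value in env.items())

print(render_env_file({"LEGO_KEY_TYPE": "ec256", "LEGO_ACME_CHALLENGE_TYPE": "http"}))
```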
@ -1,9 +0,0 @@
-[Unit]
-Description=Run lego@{{ lego_instance }}.service
-
-[Timer]
-OnCalendar={{ lego_systemd_timer_calendar }}
-Unit=lego@{{ lego_instance }}.service
-
-[Install]
-WantedBy=timers.target
@ -1,14 +0,0 @@
-[Unit]
-Description=Run lego (letsencrypt client in go)
-
-[Service]
-Type=oneshot
-EnvironmentFile={{ lego_base_path }}/%i.conf
-User={{ lego_systemd_user }}
-Group={{ lego_systemd_group }}
-ExecStart={{ lego_base_path }}/run.sh
-AmbientCapabilities=CAP_NET_BIND_SERVICE
-
-[Install]
-WantedBy=basic.target
-DefaultInstance=default
@ -1,16 +0,0 @@
----
-lego_domain_command_args: >-
-  {% for domain in lego_cert_domains %}
-  --domains={{ domain }}
-  {%- endfor %}
-
-lego_config_command_args: >-
-  {% for key in lego_full_command_config %}
-  --{{ key | replace("_", "-") }}
-  {%- if lego_full_command_config[key] != None and lego_full_command_config[key] != '' -%}
-  ={{ lego_full_command_config[key] }}
-  {%- endif -%}
-  {%- endfor -%}
-
-lego_command_args: "{{ lego_domain_command_args }} {{ lego_config_command_args }}"
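The two Jinja2 loops above flatten `lego_cert_domains` and the merged command config into CLI flags: underscores become dashes, and `None` or empty values yield a bare flag without `=value`. A hypothetical Python re-implementation of the same flag-building logic, for illustration only:

```python
# Build lego CLI flags the way vars/main.yml does: one --domains flag per
# domain, then --some-key=value per config entry (bare flag if the value
# is None or empty).
def build_lego_args(domains, config):
    args = [f"--domains={domain}" for domain in domains]
    for key, value in config.items():
        flag = "--" + key.replace("_", "-")
        if value is not None and value != "":
            flag += f"={value}"
        args.append(flag)
    return " ".join(args)

print(build_lego_args(["example.org"], {"key_type": "ec256"}))
```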
@ -1,6 +1,6 @@
 ---
-mariadb_version: "10.11.9"
+mariadb_version: "10.6.11"
 mariadb_base_path: /var/lib/mariadb
 mariadb_data_path: "{{ mariadb_base_path }}/{{ mariadb_version }}"
@ -30,8 +30,7 @@ minio_container_labels: {}
 minio_container_command:
   - "server"
   - "{{ minio_container_data_path }}"
-  - "--console-address"
-  - ":{{ minio_container_listen_port_console }}"
+  - "--console-address \":{{ minio_container_listen_port_console }}\""
 minio_container_restart_policy: "unless-stopped"
 minio_container_image_force_source: "{{ (minio_container_image_tag == 'latest')|bool }}"
@ -26,8 +26,3 @@ For exposing this server to the host and/or internet, the `nginx_container_ports`
 from host to container), `nginx_container_networks` (docker networking) or `nginx_container_labels`
 (for label-based routing discovery like traefik) can be used. The options correspond to the arguments
 of the `community.docker.docker_container` module.
-
-## Deployment methods
-
-Set `nginx_deployment_method` to either `docker` or `podman` to use the respective ansible modules for
-creating and managing the container and its image. See all supported methods in `nginx_deployment_methods`.
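The "Deployment methods" section touched above documents `nginx_deployment_method`; a minimal sketch of selecting the podman backend, assuming only the variables named in that README (the host pattern and port mapping are placeholders):

```yaml
- hosts: "{{ target_hosts }}"
  become: true
  roles:
    - role: finallycoffee.base.nginx
      vars:
        nginx_deployment_method: podman
        nginx_container_ports:
          - "8080:80"
```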
@ -1,10 +1,9 @@
 ---
-nginx_version: "1.27.2"
+nginx_version: "1.25.1"
 nginx_flavour: alpine
 nginx_base_path: /opt/nginx
 nginx_config_file: "{{ nginx_base_path }}/nginx.conf"
-nginx_state: present
-nginx_deployment_method: docker
 nginx_container_name: nginx
 nginx_container_image_reference: >-
@ -27,9 +26,6 @@ nginx_container_image_repository: >-
 nginx_container_image_registry: "docker.io"
 nginx_container_image_name: "nginx"
 nginx_container_image_tag: ~
-nginx_container_image_source: pull
-nginx_container_state: >-2
-  {{ (nginx_state == 'present') | ternary('started', 'absent') }}
 nginx_container_restart_policy: "unless-stopped"
 nginx_container_volumes:
@ -1,12 +0,0 @@
----
-allow_duplicates: true
-dependencies: []
-galaxy_info:
-  role_name: nginx
-  description: Deploy nginx, a webserver
-  galaxy_tags:
-    - nginx
-    - http
-    - webserver
-    - docker
-    - podman
@ -1,28 +0,0 @@
----
-- name: Ensure docker container image '{{ nginx_container_image_reference }}' is {{ nginx_state }}
-  community.docker.docker_image:
-    name: "{{ nginx_container_image_reference }}"
-    state: "{{ nginx_state }}"
-    source: "{{ nginx_container_image_source }}"
-    force_source: >-2
-      {{ nginx_container_image_force_source
-         | default(nginx_container_image_tag | default(false, true)) }}
-  register: nginx_container_image_info
-  until: nginx_container_image_info is success
-  retries: 5
-  delay: 3
-
-- name: Ensure docker container '{{ nginx_container_name }}' is {{ nginx_container_state }}
-  community.docker.docker_container:
-    name: "{{ nginx_container_name }}"
-    image: "{{ nginx_container_image_reference }}"
-    env: "{{ nginx_container_env | default(omit, true) }}"
-    user: "{{ nginx_container_user | default(omit, true) }}"
-    ports: "{{ nginx_container_ports | default(omit, true) }}"
-    labels: "{{ nginx_container_labels | default(omit, true) }}"
-    volumes: "{{ nginx_container_volumes | default(omit, true) }}"
-    etc_hosts: "{{ nginx_container_etc_hosts | default(omit, true) }}"
-    networks: "{{ nginx_container_networks | default(omit, true) }}"
-    purge_networks: "{{ nginx_container_purge_networks | default(omit, true) }}"
-    restart_policy: "{{ nginx_container_restart_policy }}"
-    state: "{{ nginx_container_state }}"
@ -1,27 +0,0 @@
----
-- name: Ensure container image '{{ nginx_container_image_reference }}' is {{ nginx_state }}
-  containers.podman.podman_image:
-    name: "{{ nginx_container_image_reference }}"
-    state: "{{ nginx_state }}"
-    pull: "{{ nginx_container_image_source == 'pull' }}"
-    force: >-2
-      {{ nginx_container_image_force_source
-         | default(nginx_container_image_tag | default(false, true)) }}
-  register: nginx_container_image_info
-  until: nginx_container_image_info is success
-  retries: 5
-  delay: 3
-
-- name: Ensure container '{{ nginx_container_name }}' is {{ nginx_container_state }}
-  containers.podman.podman_container:
-    name: "{{ nginx_container_name }}"
-    image: "{{ nginx_container_image_reference }}"
-    env: "{{ nginx_container_env | default(omit, true) }}"
-    user: "{{ nginx_container_user | default(omit, true) }}"
-    ports: "{{ nginx_container_ports | default(omit, true) }}"
-    labels: "{{ nginx_container_labels | default(omit, true) }}"
-    volumes: "{{ nginx_container_volumes | default(omit, true) }}"
-    etc_hosts: "{{ nginx_container_etc_hosts | default(omit, true) }}"
-    network: "{{ nginx_container_networks | default(omit, true) }}"
-    restart_policy: "{{ nginx_container_restart_policy }}"
-    state: "{{ nginx_container_state }}"
@ -1,30 +1,10 @@
 ---
-- name: Check if state is supported
-  ansible.builtin.fail:
-    msg: >-2
-      Unsupported state '{{ nginx_state }}'. Supported
-      states are {{ nginx_states | join(', ') }}.
-  when: nginx_state not in nginx_states
-
-- name: Check if deployment_method is supported
-  ansible.builtin.fail:
-    msg: >-2
-      Unsupported state '{{ nginx_deployment_method }}'. Supported
-      states are {{ nginx_deployment_methods | join(', ') }}.
-  when: nginx_deployment_method not in nginx_deployment_methods
-
-- name: Ensure nginx config file is {{ nginx_state }}
-  ansible.builtin.file:
-    path: "{{ nginx_config_file }}"
-    state: "{{ nginx_state }}"
-  when: nginx_state == 'absent'
-
-- name: Ensure base path '{{ nginx_base_path }}' is {{ nginx_state }}
+- name: Ensure base path '{{ nginx_base_path }}' exists
   ansible.builtin.file:
     path: "{{ nginx_base_path }}"
-    mode: "0755"
-    state: >-2
-      {{ (nginx_state == 'present') | ternary('directory', 'absent') }}
+    state: directory
+    mode: 0755

 - name: Ensure nginx config file is templated
   ansible.builtin.copy:
@ -33,8 +13,25 @@
     mode: 0640
   notify:
     - restart-nginx
-  when: nginx_state == 'present'
-
-- name: Deploy using {{ nginx_deployment_method }}
-  ansible.builtin.include_tasks:
-    file: "deploy-{{ nginx_deployment_method }}.yml"
+
+- name: Ensure docker container image is present
+  community.docker.docker_image:
+    name: "{{ nginx_container_image_reference }}"
+    state: present
+    source: pull
+    force_source: "{{ nginx_container_image_tag is defined and nginx_container_image_tag | string != '' }}"
+
+- name: Ensure docker container '{{ nginx_container_name }}' is running
+  community.docker.docker_container:
+    name: "{{ nginx_container_name }}"
+    image: "{{ nginx_container_image_reference }}"
+    env: "{{ nginx_container_env | default(omit, true) }}"
+    user: "{{ nginx_container_user | default(omit, true) }}"
+    ports: "{{ nginx_container_ports | default(omit, true) }}"
+    labels: "{{ nginx_container_labels | default(omit, true) }}"
+    volumes: "{{ nginx_container_volumes | default(omit, true) }}"
+    etc_hosts: "{{ nginx_container_etc_hosts | default(omit, true) }}"
+    networks: "{{ nginx_container_networks | default(omit, true) }}"
+    purge_networks: "{{ nginx_container_purge_networks | default(omit, true) }}"
+    restart_policy: "{{ nginx_container_restart_policy }}"
+    state: started
@ -1,7 +0,0 @@
----
-nginx_states:
-  - present
-  - absent
-nginx_deployment_methods:
-  - docker
-  - podman
@ -1,25 +0,0 @@
-# `finallycoffee.base.powerdns_tsig_key`
-
-Simple ansible role for ensuring a TSIG key is present in a given PowerDNS instance.
-
-## Usage
-
-The usage example below assumes `powerdns` is running in a container named `powerdns` (as supplied to `powerdns_tsig_key_container_name`).
-
-```yaml
-- hosts: "{{ target_hosts }}"
-  become: true
-  roles:
-    - role: finallycoffee.base.powerdns_tsig_key
-      vars:
-        powerdns_tsig_key_name: "nameofmykey"
-        powerdns_tsig_key_path: "/var/lib/myapp/tsig.key"
-        powerdns_tsig_key_algo: "hmac-sha512"
-        powerdns_tsig_key_path_owner: "myappuser"
-        powerdns_tsig_key_path_group: "myappgroup"
-        powerdns_tsig_key_container_name: 'powerdns'
-```
-
-> [!NOTE]
-> Support for non-docker deployments is pending.
@ -1,2 +0,0 @@
----
-powerdns_tsig_key_container_name: powerdns
@ -1,104 +0,0 @@
---
- name: Ensure unix group '{{ powerdns_tsig_key_path_group }}' exists
  ansible.builtin.group:
    name: "{{ powerdns_tsig_key_path_group }}"
    state: "present"
    system: true
  register: powerdns_tsig_key_path_group_info
  when: powerdns_tsig_key_path_group is defined

- name: Ensure unix user '{{ powerdns_tsig_key_path_owner }}' exists
  ansible.builtin.user:
    name: "{{ powerdns_tsig_key_path_owner }}"
    state: "present"
    system: true
    create_home: false
    groups: "{{ (powerdns_tsig_key_path_group is defined) | ternary([powerdns_tsig_key_path_group], omit) }}"
    append: "{{ (powerdns_tsig_key_path_group is defined) | ternary(true, omit) }}"
  register: powerdns_tsig_key_path_owner_info
  when: powerdns_tsig_key_path_owner is defined

- name: Check if TSIG key is already present
  ansible.builtin.stat:
    path: "{{ powerdns_tsig_key_path }}"
  register: powerdns_tsig_key_info

- name: Ensure TSIG key directory is present
  ansible.builtin.file:
    path: "{{ powerdns_tsig_key_path | dirname }}"
    state: directory
    owner: "{{ powerdns_tsig_key_path_owner | default(omit) }}"
    group: "{{ powerdns_tsig_key_path_group | default(omit) }}"
    mode: "u+rwX,g+rX"
    recurse: true

- name: Ensure a TSIG key is configured and persisted
  when: >-
    not powerdns_tsig_key_info.stat.exists
    or powerdns_tsig_key_info.stat.size == 0
  block:
    - name: Ensure TSIG key is not already present
      community.docker.docker_container_exec:
        container: "{{ powerdns_tsig_key_container_name }}"
        command: "pdnsutil list-tsig-keys"
      delegate_to: "{{ powerdns_tsig_key_hostname }}"
      register: powerdns_tsig_key_powerdns_info
      changed_when: false
      check_mode: false
      become: true

    - name: Ensure TSIG key is generated in powerdns
      community.docker.docker_container_exec:
        container: "{{ powerdns_tsig_key_container_name }}"
        command: "pdnsutil generate-tsig-key '{{ powerdns_tsig_key_name }}' '{{ powerdns_tsig_key_algo }}'"
      when: >-
        (powerdns_tsig_key_name ~ '. ' ~ powerdns_tsig_key_algo ~ '. ')
        not in powerdns_tsig_key_powerdns_info.stdout
      delegate_to: "{{ powerdns_tsig_key_hostname }}"
      register: powerdns_tsig_key_powerdns_generated_tsig_key
      throttle: 1
      become: true

    - name: Ensure PowerDNS is restarted
      community.docker.docker_container:
        name: "{{ powerdns_tsig_key_container_name }}"
        state: started
        restart: true
      when: >-
        (powerdns_tsig_key_name ~ '. ' ~ powerdns_tsig_key_algo ~ '. ')
        not in powerdns_tsig_key_powerdns_info.stdout
      delegate_to: "{{ powerdns_tsig_key_hostname }}"
      throttle: 1
      become: true

    - name: Extract TSIG key into variable
      ansible.builtin.set_fact:
        powerdns_tsig_key_key: >-
          {{
            (powerdns_tsig_key_powerdns_generated_tsig_key.stdout | trim | split(' ') | list | last)
            if (powerdns_tsig_key_name ~ '. ' ~ powerdns_tsig_key_algo ~ '. ')
            not in powerdns_tsig_key_powerdns_info.stdout
            else (powerdns_generated_tsig_key | trim | split(' ') | list | last)
          }}
      vars:
        powerdns_generated_tsig_key: >-
          {% for line in powerdns_tsig_key_powerdns_info.stdout_lines %}
          {% if powerdns_tsig_key_name in line %}
          {{ line }}
          {% endif %}
          {% endfor %}

    - name: Ensure TSIG key is persisted into {{ powerdns_tsig_key_path }}
      ansible.builtin.copy:
        content: "{{ powerdns_tsig_key_key }}"
        dest: "{{ powerdns_tsig_key_path }}"
        owner: "{{ powerdns_tsig_key_path_owner | default(omit) }}"
        group: "{{ powerdns_tsig_key_path_group | default(omit) }}"
        mode: "0640"

    - name: Ensure TSIG key permissions on {{ powerdns_tsig_key_path }} are correct
      ansible.builtin.file:
        path: "{{ powerdns_tsig_key_path }}"
        owner: "{{ powerdns_tsig_key_path_owner | default(omit) }}"
        group: "{{ powerdns_tsig_key_path_group | default(omit) }}"
        mode: "u+rwX,g+rwX"
roles/redis/README.md
@ -0,0 +1,13 @@
---

# `finallycoffee.base.redis` ansible role

Ansible role to deploy redis. Can use systemd or docker, depending on the
value of `redis_deployment_method`. Supports running the role multiple times
by setting `redis_instance` to a unique string to avoid namespace-collisions.

## Configuration

Extra configuration keys for redis can be provided as key-value pairs
in `redis_config`. For all configuration keys, consult the upstream example
redis.conf.
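As a minimal sketch of the configuration mechanism described above (the host group, instance name and memory limits below are hypothetical, not defaults of the role):

```yaml
- hosts: cache_servers
  become: true
  roles:
    - role: finallycoffee.base.redis
      vars:
        # hypothetical instance name; a unique string avoids collisions
        # when the role is applied more than once on the same host
        redis_instance: "sessioncache"
        redis_deployment_method: docker
        # extra keys are merged over the role's base config and written
        # verbatim as "key value" lines into redis.conf
        redis_config:
          maxmemory: "100mb"
          maxmemory-policy: "allkeys-lru"
```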
roles/redis/defaults/main.yml
@ -0,0 +1,51 @@
---

redis_instance: ''
redis_version: "7.2"
redis_user: "redis{{ '-' ~ redis_instance }}"
redis_deployment_method: docker
redis_config_file: "/etc/redis/redis{{ '-' ~ redis_instance }}.conf"
redis_data_directory: "/var/lib/redis/"

redis_config_dbfilename: "redis{{ '-' ~ redis_instance }}.rdb"
redis_config_dir: "{{ redis_data_directory }}"
redis_config_bind:
  - -::1
  - "{{ (redis_deployment_method == 'docker') | ternary('0.0.0.0', '127.0.0.1') }}"
  - "{{ (redis_deployment_method == 'docker') | ternary('-::*', '::1') }}"
redis_config_port: "6379"
redis_config_protected_mode: true
#redis_config_maxmemory_bytes: 100mb
#redis_config_maxmemory_policy: noeviction
redis_config_unix_socket: "/run/redis.sock"
redis_config_unix_socket_perm: "700"

redis_container_name: "redis{{ '_' ~ redis_instance }}"
redis_container_image_flavour: alpine
redis_container_image_registry: "docker.io"
redis_container_image_namespace: ~
redis_container_image_name: "redis"
redis_container_image_reference: >-
  {{ redis_container_image_repository ~ ':'
     ~ redis_container_image_tag | default(
       redis_version ~ (redis_container_image_flavour | ternary(
         '-' ~ redis_container_image_flavour, '')), true) }}
redis_container_image_repository: >-
  {{ redis_container_image_registry ~ '/'
     ~ (redis_container_image_namespace | ternary(redis_container_image_namespace ~ '/', ''))
     ~ redis_container_image_name }}
redis_container_ports:
  - "127.0.0.1:{{ redis_config_port }}:{{ redis_config_port }}"
  - "[::1]:{{ redis_config_port }}:{{ redis_config_port }}"
redis_container_restart_policy: "unless-stopped"
redis_container_state: "started"

redis_container_base_labels:
  version: "{{ redis_version }}"
redis_container_all_labels: >-
  {{ redis_container_base_labels | combine(redis_container_labels | default({})) }}
redis_container_base_volumes:
  - "{{ redis_config_file }}:/usr/local/etc/redis/redis.conf:ro"
  - "{{ redis_data_directory }}:{{ redis_data_directory }}:rw"
redis_container_all_volumes: >-
  {{ redis_container_base_volumes + (redis_container_volumes | default([])) }}
roles/redis/handlers/main.yml
@ -0,0 +1,11 @@
---

- name: Ensure redis container '{{ redis_container_name }}' is restarted
  listen: restart-redis
  community.docker.docker_container:
    name: "{{ redis_container_name }}"
    state: "started"
    restart: true
  when:
    - redis_deployment_method == "docker"
    - not redis_container_info.changed
roles/redis/tasks/main.yml
@ -0,0 +1,50 @@
---

- name: Ensure redis user '{{ redis_user }}' is present
  ansible.builtin.user:
    name: "{{ redis_user }}"
    state: "present"
    system: true
    create_home: false
    groups: "{{ redis_user_groups | default(omit) }}"
    append: "{{ (redis_user_groups is defined) | ternary(true, omit) }}"
  register: redis_user_info

- name: Ensure redis configuration is written out
  ansible.builtin.copy:
    content: |+
      {% for key, value in redis_config_to_write.items() %}
      {{ key }} {{ value }}
      {% endfor %}
    dest: "{{ redis_config_file }}"
    owner: "{{ redis_user_info.uid | default(redis_user) }}"
    group: "{{ redis_user_info.group | default(redis_user) }}"
    mode: "0640"
  notify:
    - restart-redis

- name: Ensure container image is present on host
  community.docker.docker_image:
    name: "{{ redis_container_image_reference }}"
    state: "present"
    source: "pull"
    force_source: "{{ redis_container_image_tag | default(false) | bool }}"
  when: "redis_deployment_method == 'docker'"

- name: Ensure redis container '{{ redis_container_name }}' is '{{ redis_container_state }}'
  community.docker.docker_container:
    name: "{{ redis_container_name }}"
    image: "{{ redis_container_image_reference }}"
    env: "{{ redis_container_env | default(omit) }}"
    ports: "{{ redis_container_ports | default(omit) }}"
    labels: "{{ redis_container_all_labels }}"
    volumes: "{{ redis_container_all_volumes }}"
    networks: "{{ redis_container_networks | default(omit) }}"
    purge_networks: "{{ redis_container_purge_networks | default(omit) }}"
    etc_hosts: "{{ redis_container_etc_hosts | default(omit) }}"
    memory: "{{ redis_container_memory | default(omit) }}"
    memory_swap: "{{ redis_container_memory_swap | default(omit) }}"
    restart_policy: "{{ redis_container_restart_policy }}"
    state: "{{ redis_container_state }}"
  register: redis_container_info
  when: "redis_deployment_method == 'docker'"
roles/redis/vars/main.yml
@ -0,0 +1,13 @@
---

redis_base_config:
  dbfilename: "{{ redis_config_dbfilename }}"
  dir: "{{ redis_data_directory }}"
  bind: "{{ redis_config_bind | join(' ') }}"
  port: "{{ redis_config_port }}"
  "protected-mode": "{{ redis_config_protected_mode | ternary('yes', 'no') }}"
  unixsocket: "{{ redis_config_unix_socket }}"
  unixsocketperm: "{{ redis_config_unix_socket_perm }}"

redis_config_to_write: >-
  {{ redis_base_config | combine(redis_config | default({}), recursive=True) }}
@ -10,41 +10,18 @@ restic_backup_stdin_command: ~
 restic_backup_stdin_command_filename: ~
 
 restic_policy_keep_all_within: 1d
-restic_policy_keep_hourly: 12
-restic_policy_keep_daily: 7
-restic_policy_keep_weekly: 6
-restic_policy_keep_monthly: 6
-restic_policy_keep_yearly: 5
+restic_policy_keep_hourly: 6
+restic_policy_keep_daily: 2
+restic_policy_keep_weekly: 7
+restic_policy_keep_monthly: 4
 restic_policy_backup_frequency: hourly
 
-restic_base_environment:
-  RESTIC_JOBNAME: "{{ restic_job_name | default('unknown') }}"
-  RESTIC_FORGET_KEEP_WITHIN: "{{ restic_policy_keep_all_within }}"
-  RESTIC_FORGET_KEEP_HOURLY: "{{ restic_policy_keep_hourly }}"
-  RESTIC_FORGET_KEEP_DAILY: "{{ restic_policy_keep_daily }}"
-  RESTIC_FORGET_KEEP_WEEKLY: "{{ restic_policy_keep_weekly }}"
-  RESTIC_FORGET_KEEP_MONTHLY: "{{ restic_policy_keep_monthly }}"
-  RESTIC_FORGET_KEEP_YEARLY: "{{ restic_policy_keep_yearly }}"
-
-restic_s3_environment:
-  AWS_ACCESS_KEY_ID: "{{ restic_s3_key_id }}"
-  AWS_SECRET_ACCESS_KEY: "{{ restic_s3_access_key }}"
-
-restic_complete_environment: >-
-  {{
-    restic_base_environment
-    | combine((restic_s3_environment
-      if (restic_s3_key_id and restic_s3_access_key) else {}) | default({}))
-    | combine(restic_environment | default({}))
-  }}
-
 restic_policy:
   keep_within: "{{ restic_policy_keep_all_within }}"
   hourly: "{{ restic_policy_keep_hourly }}"
   daily: "{{ restic_policy_keep_daily }}"
   weekly: "{{ restic_policy_keep_weekly }}"
   monthly: "{{ restic_policy_keep_monthly }}"
-  yearly: "{{ restic_policy_keep_yearly }}"
   frequency: "{{ restic_policy_backup_frequency }}"
 
 restic_user: root
@ -8,7 +8,7 @@
   when: restic_create_user
 
 - name: Ensure either backup_paths or backup_stdin_command is populated
-  when: restic_backup_paths|length > 0 and restic_backup_stdin_command and false
+  when: restic_backup_paths|length > 0 and restic_backup_stdin_command
   fail:
     msg: "Setting both `restic_backup_paths` and `restic_backup_stdin_command` is not supported"
@ -2,50 +2,27 @@
 Description={{ restic_job_description }}
 
 [Service]
-Type=simple
+Type=oneshot
 User={{ restic_user }}
 WorkingDirectory={{ restic_systemd_working_directory }}
 SyslogIdentifier={{ restic_systemd_syslog_identifier }}
 
 Environment=RESTIC_REPOSITORY={{ restic_repo_url }}
 Environment=RESTIC_PASSWORD={{ restic_repo_password }}
-{% for kv in restic_complete_environment | dict2items %}
-Environment={{ kv.key }}={{ kv.value }}
-{% endfor %}
+{% if restic_s3_key_id and restic_s3_access_key %}
+Environment=AWS_ACCESS_KEY_ID={{ restic_s3_key_id }}
+Environment=AWS_SECRET_ACCESS_KEY={{ restic_s3_access_key }}
+{% endif %}
 
-{% if restic_init | default(true) %}
 ExecStartPre=-/bin/sh -c '/usr/bin/restic snapshots || /usr/bin/restic init'
-{% endif %}
-{% if restic_unlock_before_backup | default(false) %}
-ExecStartPre=-/bin/sh -c 'sleep 3 && /usr/bin/restic unlock'
-{% endif %}
-{% if restic_backup_pre_hook | default(false) %}
-ExecStartPre=-{{ restic_backup_pre_hook }}
-{% endif %}
 {% if restic_backup_stdin_command %}
-ExecStart=/bin/sh -c '{{ restic_backup_stdin_command }} | /usr/bin/restic backup \
-  --retry-lock {{ restic_retry_lock | default('5m') }} \
-  --verbose --stdin \
-  --stdin-filename {{ restic_backup_stdin_command_filename }}'
+ExecStart=/bin/sh -c '{{ restic_backup_stdin_command }} | /usr/bin/restic backup --verbose --stdin --stdin-filename {{ restic_backup_stdin_command_filename }}'
 {% else %}
-ExecStart=/opt/restic-backup-directories.sh {{ restic_backup_paths | join(' ') }}
+ExecStart=/usr/bin/restic --verbose backup {{ restic_backup_paths | join(' ') }}
 {% endif %}
-{% if restic_forget_prune | default(true) %}
-ExecStartPost=/usr/bin/restic forget --prune \
-  --retry-lock {{ restic_retry_lock | default('5m') }} \
-  --keep-within={{ restic_policy.keep_within }} \
-  --keep-hourly={{ restic_policy.hourly }} \
-  --keep-daily={{ restic_policy.daily }} \
-  --keep-weekly={{ restic_policy.weekly }} \
-  --keep-monthly={{ restic_policy.monthly }} \
-  --keep-yearly={{ restic_policy.yearly }}
-{% endif %}
-{% if restic_list_snapshots | default(true) %}
-ExecStartPost=-/usr/bin/restic snapshots --retry-lock {{ restic_retry_lock | default('5m') }}
-{% endif %}
-{% if restic_backup_post_hook | default(false) %}
-ExecStartPost=-{{ restic_backup_post_hook }}
-{% endif %}
-{% if restic_check | default(true) %}
-ExecStartPost=/usr/bin/restic check --retry-lock {{ restic_retry_lock | default('5m') }}
-{% endif %}
+ExecStartPost=/usr/bin/restic forget --prune --keep-within={{ restic_policy.keep_within }} --keep-hourly={{ restic_policy.hourly }} --keep-daily={{ restic_policy.daily }} --keep-weekly={{ restic_policy.weekly }} --keep-monthly={{ restic_policy.monthly }}
+ExecStartPost=-/usr/bin/restic snapshots
+ExecStartPost=/usr/bin/restic check
 
 [Install]
 WantedBy=multi-user.target
@ -1,8 +1,9 @@
 [Unit]
-Description=Run {{ restic_timer_description | default(restic_job_name) }}
+Description=Run {{ restic_job_name }}
 
 [Timer]
 OnCalendar={{ restic_policy.frequency }}
+Persistent=True
 Unit={{ restic_systemd_unit_naming_scheme }}.service
 
 [Install]