Compare commits

..

1 Commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
|  | 36ceb40fac | chore: update metadata | 2023-07-17 20:55:47 +02:00 |
139 changed files with 716 additions and 3228 deletions

View File

@@ -1,4 +1,4 @@
# `finallycoffee.services` ansible collection
# `finallycoffee.service` ansible collection
## Overview
@@ -8,48 +8,24 @@ concise area of concern.
## Roles
- [`authelia`](roles/authelia/README.md): Deploys an [authelia.com](https://www.authelia.com)
- [`roles/authelia`](roles/authelia/README.md): Deploys an [authelia.com](https://www.authelia.com)
instance, an authentication provider with beta OIDC provider support.
- [`ghost`](roles/ghost/README.md): Deploys [ghost.org](https://ghost.org/), a simple-to-use
blogging and publishing platform.
- [`roles/elasticsearch`](roles/elasticsearch/README.md): Deploy [elasticsearch](https://www.docker.elastic.co/r/elasticsearch/elasticsearch-oss),
a popular (distributed) search and analytics engine, mostly known as the
letter "E" in the ELK stack.
- [`gitea`](roles/gitea/README.md): Deploy [gitea.io](https://gitea.io), a
- [`roles/gitea`](roles/gitea/README.md): Deploy [gitea.io](https://gitea.io), a
lightweight, self-hosted git service.
- [`hedgedoc`](roles/hedgedoc/README.md): Deploy [hedgedoc](https://hedgedoc.org/),
a collaborative real-time markdown editor using websockets.
- [`jellyfin`](roles/jellyfin/README.md): Deploy [jellyfin.org](https://jellyfin.org),
- [`roles/jellyfin`](roles/jellyfin/README.md): Deploy [jellyfin.org](https://jellyfin.org),
the free software media system for streaming stored media to any device.
- [`keycloak`](roles/keycloak/README.md): Deploy [keycloak](https://www.keycloak.org/),
the open source identity and access management solution.
- [`roles/restic`](roles/restic/README.md): Manage backups using restic
and persist them to a configurable backend.
- [`openproject`](roles/openproject/README.md): Deploys an [openproject.org](https://www.openproject.org)
installation using the upstream provided docker-compose setup.
- [`pretix`](roles/pretix/README.md): Deploy [pretix](https://pretix.eu), the open source online ticketing solution.
- [`snipe_it`](roles/snipe_it/README.md): Deploys [Snipe-IT](https://snipeitapp.com/),
the free and open-source IT asset (and license) management system with a powerful REST API.
- [`vaultwarden`](roles/vaultwarden/README.md): Deploy [vaultwarden](https://github.com/dani-garcia/vaultwarden/),
an open-source implementation of the Bitwarden Server (formerly Bitwarden\_RS).
- [`vouch_proxy`](roles/vouch_proxy/README.md): Deploys [vouch-proxy](https://github.com/vouch/vouch-proxy),
an authorization proxy for arbitrary webapps working with `nginx`'s `auth_request` module.
## Playbooks
- [`authelia`](playbooks/authelia.md)
- [`hedgedoc`](playbooks/hedgedoc.md)
- [`jellyfin`](playbooks/jellyfin.md)
- [`keycloak`](playbooks/keycloak.md)
- [`gitea`](playbooks/gitea.md)
- [`phpldapadmin`](playbooks/phpldapadmin.md)
- [`snipe_it`](playbooks/snipe_it.md)
- [`vaultwarden`](playbooks/vaultwarden.md)
- [`roles/minio`](roles/minio/README.md): Deploy [min.io](https://min.io), an
s3-compatible object storage server, using docker containers.
## License

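The roles listed in the README above are consumed like any other collection role. A minimal playbook sketch, assuming a matching `gitea` host group exists in the inventory:

```yaml
# site.yml — illustrative; the 'gitea' host group name is an assumption
- hosts: gitea
  become: true
  roles:
    - role: finallycoffee.services.gitea
```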
View File

@@ -1,28 +1,15 @@
namespace: finallycoffee
name: services
version: "0.2.2"
version: 0.0.1
readme: README.md
authors:
- transcaffeine <transcaffeine@finally.coffee>
description: Various ansible roles useful for automating infrastructure
dependencies:
"community.general": "^11.0.0"
"community.crypto": "^3.0.3"
"community.docker": "^4.7.0"
"containers.podman": "^1.16.0"
license_file: LICENSE.md
"community.docker": "^1.10.0"
license:
- CNPLv7+
build_ignore:
- '*.tar.gz'
repository: https://git.finally.coffee/finallycoffee/services
issues: https://codeberg.org/finallycoffee/ansible-collection-services/issues
tags:
- authelia
- gitea
- hedgedoc
- jellyfin
- vaultwarden
- snipeit
- docker
- phpldapadmin
- pretix
- keycloak
issues: https://git.finally.coffee/finallycoffee/services/issues

View File

@@ -1,3 +0,0 @@
---
requires_ansible: ">=2.15"

View File

@@ -1,7 +0,0 @@
# `finallycoffee.services.authelia` ansible playbook
## Feature toggles
- `authelia_configure_postgresql_client` (default `false`)
- `authelia_configure_lego_rfc2136` (default `false`)
- `authelia_configure_caddy_reverse_proxy` (default `false`)

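The feature toggles above are plain inventory variables consumed by the playbook's `when:` conditions; a hedged sketch of where they might live (file placement is an assumption):

```yaml
# group_vars/authelia.yml — illustrative variable placement
authelia_configure_postgresql_client: true
authelia_configure_lego_rfc2136: true
authelia_configure_caddy_reverse_proxy: true
```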
View File

@@ -1,114 +0,0 @@
---
- import_playbook: finallycoffee.databases.postgresql_client
when: authelia_configure_postgresql_client | default(false)
vars:
postgresql_hosts: >-2
{{ authelia_postgresql_hosts | default(authelia_hosts | default('authelia')) }}
postgresql_become: >-2
{{ authelia_postgresql_become | default(authelia_become | default(false)) }}
postgresql_client_username: "{{ authelia_database_user }}"
postgresql_client_password: "{{ authelia_database_pass }}"
postgresql_client_database: "{{ authelia_database_name }}"
postgresql_client_database_lc_ctype: 'C'
postgresql_client_database_lc_collate: 'C'
tags:
- authelia
- authelia-postgresql-client
- import_playbook: finallycoffee.base.lego_certificate
when: authelia_configure_lego_rfc2136 | default(false)
vars:
target_domains:
- "{{ authelia_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ authelia_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
target_dns_additional_records: "{{ authelia_dns_records }}"
target_hosts: >-2
{{ authelia_lego_hosts | default(authelia_hosts | default('authelia')) }}
target_become: >-2
{{ authelia_lego_become | default(authelia_become | default(false)) }}
target_gather_facts: >-2
{{ authelia_lego_gather_facts | default(false) }}
tags:
- authelia
- authelia-lego
- name: Install and configure authelia
hosts: "{{ authelia_hosts | default('authelia') }}"
become: "{{ authelia_become | default(false) }}"
gather_facts: "{{ authelia_gather_facts | default(false) }}"
pre_tasks:
- name: Ensure valkey user exists
ansible.builtin.user:
name: "{{ valkey_user }}"
state: present
system: true
create_home: false
register: valkey_user_info
when: valkey_state == 'present'
tags:
- authelia
- authelia-valkey
- name: Create host folder for valkey unix socket
ansible.builtin.file:
path: "{{ authelia_redis_unix_socket }}"
state: directory
mode: "0755"
owner: "{{ valkey_user_info.uid | default(valkey_user) }}"
group: "{{ valkey_user_info.group | default(valkey_user) }}"
when: valkey_state == 'present'
tags:
- authelia
- authelia-valkey
roles:
- name: finallycoffee.databases.valkey
vars:
valkey_secret: "{{ authelia_redis_pass }}"
valkey_config_user:
- "default on +@all -DEBUG ~* >{{ valkey_secret }}"
valkey_config_unixsocketperm: 666
valkey_container_networks: []
valkey_container_purge_networks: true
valkey_container_volumes:
- "{{ authelia_redis_unix_socket }}:{{ authelia_redis_unix_socket }}"
valkey_container_image_registry: "{{ nexus_docker_hub_domain }}"
tags:
- authelia
- authelia-valkey
- name: finallycoffee.services.authelia
vars:
authelia_redis_host: "{{ valkey_config_unixsocket }}"
authelia_redis_port: ~
authelia_container_extra_volumes:
- "{{ authelia_redis_unix_socket }}:{{ authelia_redis_unix_socket }}"
- "{{ authelia_postgres_unix_socket }}:{{ authelia_postgres_unix_socket }}"
authelia_container_ports:
- "{{ authelia_host_bind_ip }}:{{ authelia_container_listen_port }}"
tags:
- authelia
vars:
valkey_instance: >-2
{{ authelia_instance_name | default('authelia') }}
authelia_redis_unix_socket: >-2
{{ authelia_redis_unix_socket_path
| default('/var/run/redis-' + valkey_instance + '-socket', true) }}
valkey_config_unixsocket: >-2
{{ authelia_valkey_config_unixsocket
| default(authelia_redis_unix_socket + '/redis.sock') }}
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: authelia_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ authelia_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ authelia_host_bind_ip }}"
target_hosts: >-2
{{ authelia_caddy_hosts | default(authelia_hosts | default('authelia')) }}
target_become: >-2
{{ authelia_caddy_become | default(authelia_become | default(false)) }}
target_gather_facts: >-2
{{ authelia_caddy_gather_facts | default(false) }}
tags:
- authelia
- authelia-caddy

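The play's trailing `vars:` section chains Jinja `default()` filters to derive the valkey socket path. A small Python sketch tracing the same chain (variable names from the play, values illustrative):

```python
# Mirrors the Jinja default chain from the authelia play above:
# valkey_instance -> authelia_redis_unix_socket -> valkey_config_unixsocket
valkey_instance = "authelia"  # authelia_instance_name | default('authelia')
authelia_redis_unix_socket = f"/var/run/redis-{valkey_instance}-socket"
valkey_config_unixsocket = f"{authelia_redis_unix_socket}/redis.sock"
print(valkey_config_unixsocket)
```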
View File

@@ -1,7 +0,0 @@
# `finallycoffee.services.gitea` ansible playbook
## Feature toggles
- `gitea_configure_postgresql_client` (default `true`)
- `gitea_configure_lego_rfc2136` (default `true`)
- `gitea_configure_caddy_reverse_proxy` (default `false`)

View File

@@ -1,63 +0,0 @@
---
- import_playbook: finallycoffee.databases.postgresql_client
when: gitea_configure_postgresql_client | default(true) | bool
vars:
postgresql_become: "{{ gitea_postgresql_client_become | default(true) }}"
postgresql_hosts: >-2
{{ gitea_postgresql_hosts | default(gitea_hosts | default('gitea')) }}
postgresql_client_username: "{{ gitea_database_user }}"
postgresql_client_password: "{{ gitea_database_pass }}"
postgresql_client_database: "{{ gitea_database_name }}"
postgresql_client_database_lc_collate: >-2
{{ gitea_postgresql_database_lc_collate | default('en_US.UTF-8') }}
postgresql_client_database_lc_ctype: >-2
{{ gitea_postgresql_database_lc_ctype | default('en_US.UTF-8') }}
tags:
- gitea-postgresql
- import_playbook: finallycoffee.base.lego_certificate
when: gitea_configure_lego_rfc2136 | default(true) | bool
vars:
target_domains:
- "{{ gitea_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ gitea_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_additional_records: "{{ gitea_dns_records }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
target_hosts: >-2
{{ gitea_lego_hosts | default(gitea_hosts | default('gitea')) }}
target_gather_facts: >-2
{{ gitea_gather_facts | default(false) | bool }}
tags:
- gitea-lego
- name: Install and configure gitea
hosts: "{{ gitea_hosts | default('gitea') }}"
become: "{{ gitea_become | default(true, true) }}"
gather_facts: "{{ gitea_gather_facts | default(false) | bool }}"
pre_tasks:
- name: Ensure referenced docker container networks are present
community.docker.docker_network:
name: "{{ network.name }}"
state: "present"
loop: "{{ gitea_container_networks | default([]) }}"
loop_control:
loop_var: "network"
label: "{{ network.name }}"
roles:
- name: finallycoffee.services.gitea
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: gitea_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ gitea_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ gitea_host_bind_ip }}"
target_hosts: >-2
{{ gitea_caddy_hosts | default(gitea_hosts | default('gitea')) }}
target_become: >-2
{{ gitea_caddy_become | default(gitea_become | default(true, true)) }}
target_gather_facts: >-2
{{ gitea_caddy_gather_facts | default(false) }}
tags:
- gitea-caddy

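The gitea pre-task above iterates over `gitea_container_networks` and reads each item's `name` key; a hedged sketch of the expected variable shape (network name is an assumption):

```yaml
# Illustrative shape consumed by the docker_network pre-task
gitea_container_networks:
  - name: "gitea"  # hypothetical network name
```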
View File

@@ -1,7 +0,0 @@
# `finallycoffee.services.hedgedoc` ansible playbook
## Feature toggles
- `hedgedoc_configure_postgresql_client` (default `true`)
- `hedgedoc_configure_lego_rfc2136` (default `true`)
- `hedgedoc_configure_caddy_reverse_proxy` (default `false`)

View File

@@ -1,60 +0,0 @@
---
- import_playbook: finallycoffee.databases.postgresql_client
when: hedgedoc_configure_postgresql_client | default(true)
vars:
postgresql_hosts: >-2
{{ hedgedoc_postgresql_hosts | default(hedgedoc_hosts | default('hedgedoc')) }}
postgresql_become: >-2
{{ hedgedoc_postgresql_become | default(hedgedoc_become | default(true)) }}
postgresql_client_username: "{{ hedgedoc_database_user }}"
postgresql_client_password: "{{ hedgedoc_database_pass }}"
postgresql_client_database: "{{ hedgedoc_database_name }}"
postgresql_client_database_lc_collate: "en_US.UTF-8"
postgresql_client_database_lc_ctype: "en_US.UTF-8"
tags:
- hedgedoc
- hedgedoc-postgresql
- import_playbook: finallycoffee.base.lego_certificate
when: hedgedoc_configure_lego_rfc2136 | default(true)
vars:
target_hosts: >-2
{{ hedgedoc_lego_hosts | default(hedgedoc_hosts | default('hedgedoc')) }}
target_gather_facts: >-2
{{ hedgedoc_lego_gather_facts | default(hedgedoc_gather_facts | default(false)) }}
target_become: >-2
{{ hedgedoc_lego_become | default(hedgedoc_become | default(true, false)) }}
target_domains:
- "{{ hedgedoc_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ hedgedoc_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_additional_records: "{{ hedgedoc_dns_records }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
tags:
- hedgedoc
- hedgedoc-lego
- name: Deploy Hedgedoc
hosts: "{{ hedgedoc_hosts | default('hedgedoc') }}"
become: "{{ hedgedoc_become | default(true, false) }}"
gather_facts: "{{ hedgedoc_gather_facts | default(false) }}"
roles:
- role: finallycoffee.services.hedgedoc
tags:
- hedgedoc
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: hedgedoc_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ hedgedoc_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ hedgedoc_host_bind_ip }}"
target_hosts: >-2
{{ hedgedoc_caddy_hosts | default(hedgedoc_hosts | default('hedgedoc')) }}
target_become: >-2
{{ hedgedoc_caddy_become | default(hedgedoc_become | default(true, false)) }}
target_gather_facts: >-2
{{ hedgedoc_caddy_gather_facts | default(false) }}
tags:
- hedgedoc
- hedgedoc-caddy

View File

@@ -1,6 +0,0 @@
# `finallycoffee.services.jellyfin` ansible playbook
## Feature toggles
- `jellyfin_configure_lego_rfc2136` (default `false`)
- `jellyfin_configure_caddy_reverse_proxy` (default `false`)

View File

@@ -1,44 +0,0 @@
---
- import_playbook: finallycoffee.base.lego_certificate
when: jellyfin_configure_lego_rfc2136 | default(false)
vars:
target_domains:
- "{{ jellyfin_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ jellyfin_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
target_dns_additional_records: "{{ jellyfin_dns_records }}"
target_hosts: >-2
{{ jellyfin_lego_hosts | default(jellyfin_hosts | default('jellyfin')) }}
target_become: >-2
{{ jellyfin_lego_become | default(jellyfin_become | default(false)) }}
target_gather_facts: >-2
{{ jellyfin_lego_gather_facts | default(false) }}
tags:
- jellyfin
- jellyfin-lego
- name: Install jellyfin, a selfhosted media streaming platform
hosts: "{{ jellyfin_hosts | default('jellyfin') }}"
become: "{{ jellyfin_become | default(false) }}"
gather_facts: "{{ jellyfin_gather_facts | default(false) }}"
roles:
- role: finallycoffee.services.jellyfin
tags:
- jellyfin
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: jellyfin_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ jellyfin_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ jellyfin_host_bind_ip }}"
target_hosts: >-2
{{ jellyfin_caddy_hosts | default(jellyfin_hosts | default('jellyfin')) }}
target_become: >-2
{{ jellyfin_caddy_become | default(jellyfin_become | default(false)) }}
target_gather_facts: >-2
{{ jellyfin_caddy_gather_facts | default(false) }}
tags:
- jellyfin
- jellyfin-caddy

View File

@@ -1,7 +0,0 @@
# `finallycoffee.services.keycloak` ansible playbook
## Feature toggles
- `keycloak_configure_postgresql_client` (default `false`)
- `keycloak_configure_lego_rfc2136` (default `true`)
- `keycloak_configure_caddy_reverse_proxy` (default `false`)

View File

@@ -1,66 +0,0 @@
---
- import_playbook: finallycoffee.databases.postgresql_client
when: keycloak_configure_postgresql_client | default(false)
vars:
postgresql_hosts: >-2
{{ keycloak_postgresql_client_host | default(keycloak_hosts | default('keycloak')) }}
postgresql_become: >-2
{{ keycloak_postgresql_client_become | default(keycloak_become | default(false)) }}
postgresql_client_username: "{{ keycloak_database_username }}"
postgresql_client_password: "{{ keycloak_database_password }}"
postgresql_client_database: "{{ keycloak_database_database }}"
postgresql_client_database_lc_ctype: 'C'
postgresql_client_database_lc_collate: 'C'
postgresql_client_database_contype: host
postgresql_client_address: "172.17.0.0/24"
tags:
- keycloak
- keycloak-postgresql
- import_playbook: finallycoffee.base.lego_certificate
when: keycloak_configure_lego_rfc2136 | default(true) | bool
vars:
target_domains:
- "{{ keycloak_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ keycloak_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_additional_records: "{{ keycloak_dns_records }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
target_hosts: >-2
{{ keycloak_lego_hosts | default(keycloak_hosts | default('keycloak')) }}
target_become: >-2
{{ keycloak_lego_become | default(keycloak_become | default(false)) }}
target_gather_facts: >-2
{{ keycloak_lego_gather_facts | default(false) | bool }}
tags:
- keycloak
- keycloak-lego
- name: Set up and configure keycloak
hosts: "{{ keycloak_hosts | default('keycloak') }}"
become: "{{ keycloak_become | default(false) }}"
gather_facts: "{{ keycloak_gather_facts | default(false) }}"
roles:
- role: finallycoffee.services.keycloak
tags:
- keycloak
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: keycloak_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ keycloak_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ keycloak_host_bind_ip }}"
caddy_reverse_proxy_template_block: >-2
{{ keycloak_caddy_reverse_proxy_template_block | default(true, false) }}
caddy_reverse_proxy_block: >-2
{{ keycloak_caddy_reverse_proxy_block | default('') }}
target_hosts: >-2
{{ keycloak_caddy_hosts | default(keycloak_hosts | default('keycloak')) }}
target_become: >-2
{{ keycloak_caddy_become | default(keycloak_become | default(false)) }}
target_gather_facts: >-2
{{ keycloak_caddy_gather_facts | default(false) }}
tags:
- keycloak
- keycloak-caddy

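The keycloak play sets `postgresql_client_database_contype: host` with `postgresql_client_address: "172.17.0.0/24"`, i.e. the pg_hba rule admits TCP clients from that subnet (which falls inside docker's default bridge range). A quick sanity check with an assumed container address:

```python
import ipaddress

# The pg_hba 'host' rule above admits clients from 172.17.0.0/24.
allowed = ipaddress.ip_network("172.17.0.0/24")
container_addr = ipaddress.ip_address("172.17.0.5")  # illustrative container IP
print(container_addr in allowed)
```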
View File

@@ -1,6 +0,0 @@
---
- name: Install openproject
hosts: "{{ openproject_hosts | default('openproject') }}"
become: "{{ openproject_become | default(true, false) }}"
roles:
- role: finallycoffee.services.openproject

View File

@@ -1,6 +0,0 @@
# `finallycoffee.services.phpldapadmin` ansible playbook
## Feature toggles
- `phpldapadmin_configure_lego_rfc2136` (default `false`)
- `phpldapadmin_configure_caddy_reverse_proxy` (default `false`)

View File

@@ -1,44 +0,0 @@
---
- import_playbook: finallycoffee.base.lego_certificate
when: phpldapadmin_configure_lego_rfc2136 | default(false)
vars:
target_domains:
- "{{ phpldapadmin_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ phpldapadmin_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
target_dns_additional_records: "{{ phpldapadmin_dns_records }}"
target_hosts: >-2
{{ phpldapadmin_lego_hosts | default(phpldapadmin_hosts | default('phpldapadmin')) }}
target_become: >-2
{{ phpldapadmin_lego_become | default(phpldapadmin_become | default(false)) }}
target_gather_facts: >-2
{{ phpldapadmin_lego_gather_facts | default(false) }}
tags:
- phpldapadmin
- phpldapadmin-lego
- name: Configure and run phpldapadmin
hosts: "{{ phpldapadmin_hosts | default('phpldapadmin', true) }}"
become: "{{ phpldapadmin_become | default(false) }}"
gather_facts: "{{ phpldapadmin_gather_facts | default(false) }}"
roles:
- role: finallycoffee.services.phpldapadmin
tags:
- phpldapadmin
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: phpldapadmin_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ phpldapadmin_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ phpldapadmin_host_bind_ip }}"
target_hosts: >-2
{{ phpldapadmin_caddy_hosts | default(phpldapadmin_hosts | default('phpldapadmin')) }}
target_become: >-2
{{ phpldapadmin_caddy_become | default(phpldapadmin_become | default(false)) }}
target_gather_facts: >-2
{{ phpldapadmin_caddy_gather_facts | default(false) }}
tags:
- phpldapadmin
- phpldapadmin-caddy

View File

@@ -1,129 +0,0 @@
---
- import_playbook: finallycoffee.databases.postgresql_client
when: pretix_configure_postgresql | default(true)
vars:
postgresql_hosts: "{{ pretix_hosts | default('pretix') }}"
postgresql_become: >-2
{{ pretix_postgresql_client_become | default(pretix_become | default(true)) }}
postgresql_client_database: "{{ pretix_postgresql_database | default('pretix') }}"
postgresql_client_username: "{{ pretix_postgresql_user | default('pretix') }}"
postgresql_client_password: >-2
{{ pretix_postgresql_password | mandatory(msg='pretix postgresql password is required') }}
- import_playbook: finallycoffee.base.lego_certificate
when: pretix_acquire_lego_certificate | default(false)
vars:
target_hosts: "pretix"
target_domains:
- "{{ pretix_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ pretix_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_additional_records: "{{ pretix_dns_records }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
target_gather_facts: "{{ pretix_gather_facts | default(false) }}"
- import_playbook: finallycoffee.databases.valkey
when: pretix_configure_valkey | default(true)
vars:
valkey_hosts: "{{ pretix_hosts | default('pretix') }}"
valkey_instance: "pretix"
valkey_secret: "{{ pretix_redis_secret | mandatory(msg='pretix valkey secret is required') }}"
valkey_config_user:
- "default on +@all -DEBUG ~* &* >{{ pretix_redis_secret }}"
valkey_container_ports:
- "{{ pretix_redis_bind_addr | default('127.0.10.1:6739') }}:{{ valkey_config_port }}"
valkey_config_bind:
- "0.0.0.0"
- "-::"
tags:
- pretix-valkey
- name: Install and configure pretix
hosts: "{{ pretix_hosts | default('pretix') }}"
become: "{{ pretix_become | default(true) }}"
gather_facts: "{{ pretix_gather_facts | default(false) }}"
roles:
- role: finallycoffee.services.pretix
vars:
pretix_config_url: "https://{{ pretix_domain }}"
pretix_config_database_name: "{{ pretix_postgresql_database | default('pretix') }}"
pretix_config_database_user: "{{ pretix_postgresql_user | default('pretix') }}"
pretix_config_database_password: "{{ pretix_postgresql_password }}"
pretix_config_redis_location: >-2
redis://:{{ pretix_redis_secret }}@{{ pretix_redis_bind_addr }}/0
pretix_config_celery_backend: >-2
redis://:{{ pretix_redis_secret }}@{{ pretix_redis_bind_addr }}/1
pretix_config_celery_broker: >-2
redis://:{{ pretix_redis_secret }}@{{ pretix_redis_bind_addr }}/2
- role: finallycoffee.base.nginx
when: pretix_configure_nginx | default(true)
vars:
nginx_container_name: "nginx-pretix"
nginx_container_labels: "{{ pretix_nginx_container_labels | default({}, true) }}"
nginx_container_ports: "{{ pretix_nginx_container_ports | default([], true) }}"
nginx_config_file: "{{ nginx_base_path }}/nginx-pretix.conf"
nginx_config: |+
server {
listen 80 default_server;
server_name {{ pretix_domain }};
add_header Referrer-Policy same-origin;
add_header X-Content-Type-Options nosniff;
location / {
proxy_pass http://{{ pretix_config_wsgi_bind_addr }};
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
}
location /media/ {
alias {{ pretix_media_dir }}/;
expires 7d;
access_log off;
}
location ^~ /media/cachedfiles {
deny all;
return 404;
}
location ^~ /media/invoices {
deny all;
return 404;
}
location /static/staticfiles.json {
deny all;
return 404;
}
location /static/CACHE/manifest.json {
deny all;
return 404;
}
location /static/ {
alias {{ pretix_static_asset_dir }};
access_log off;
expires 365d;
add_header Cache-Control "public";
}
}
pretix_detected_python_version: >-2
python{{ ansible_python.version.major }}.{{ ansible_python.version.minor }}
pretix_static_asset_dir: >-2
{{ pretix_virtualenv_dir }}/lib/{{ pretix_python_version | default(pretix_detected_python_version) }}/site-packages/pretix/static.dist/
nginx_container_volumes:
- "{{ nginx_config_file }}:/etc/nginx/conf.d/nginx.conf:ro"
- "{{ pretix_media_dir }}:{{ pretix_media_dir }}:ro"
- "{{ pretix_static_asset_dir }}:{{ pretix_static_asset_dir }}:ro"
vars:
pretix_redis_bind_addr: "127.0.10.1:6739"
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: pretix_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ pretix_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ pretix_host_bind_addr }}"
target_hosts: >-2
{{ pretix_caddy_hosts | default(pretix_hosts | default('pretix')) }}
target_become: >-2
{{ pretix_caddy_become | default(pretix_become | default(true, true)) }}
target_gather_facts: >-2
{{ pretix_caddy_gather_facts | default(false) }}
tags:
- pretix-caddy

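The pretix play composes three redis DSNs from the same secret and bind address, differing only in the database index. A sketch of the resulting URLs (secret is illustrative; the bind address is the play's default):

```python
# Rebuilds the redis:// DSNs composed in the pretix play above
# (the secret is an illustrative value).
pretix_redis_secret = "s3cret"
pretix_redis_bind_addr = "127.0.10.1:6739"  # default from the play's vars
dsns = [
    f"redis://:{pretix_redis_secret}@{pretix_redis_bind_addr}/{db}"
    for db in (0, 1, 2)  # cache, celery backend, celery broker
]
print(dsns[0])
```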
View File

@@ -1,7 +0,0 @@
# `finallycoffee.services.snipe_it` ansible playbook
## Feature toggles
- `snipe_it_configure_mariadb` (default `true`)
- `snipe_it_configure_lego_rfc2136` (default `false`)
- `snipe_it_configure_caddy_reverse_proxy` (default `false`)

View File

@@ -1,58 +0,0 @@
---
- import_playbook: finallycoffee.base.lego_certificate
when: snipe_it_configure_lego_rfc2136 | default(false)
vars:
target_domains:
- "{{ snipe_it_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ snipe_it_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
target_dns_additional_records: "{{ snipe_it_dns_records }}"
target_hosts: >-2
{{ snipe_it_lego_hosts | default(snipe_it_hosts | default('snipe_it')) }}
target_become: >-2
{{ snipe_it_lego_become | default(snipe_it_become | default(false)) }}
target_gather_facts: >-2
{{ snipe_it_lego_gather_facts | default(false) }}
tags:
- snipe-it
- snipe-it-lego
- name: Set up snipe-it, an asset management system
hosts: "{{ snipe_it_hosts | default('snipe_it') }}"
become: "{{ snipe_it_become | default(false) }}"
roles:
- role: finallycoffee.databases.mariadb
when: snipe_it_configure_mariadb | default(true, true)
vars:
mariadb_root_password: "{{ snipe_it_builtin_database_root_pass }}"
mariadb_database: "{{ snipe_it_config_db_database }}"
mariadb_username: "{{ snipe_it_config_db_username }}"
mariadb_password: "{{ snipe_it_config_db_password }}"
mariadb_container_name: "snipe-it-mysql"
mariadb_container_ports: "{{ snipe_it_builtin_database_container_ports }}"
mariadb_base_path: "/databases/snipe-it/mariadb"
tags:
- snipe-it
- snipe-it-mariadb
- role: finallycoffee.services.snipe_it
tags:
- snipe-it
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: snipe_it_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ snipe_it_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ snipe_it_host_bind_addr }}"
caddy_reverse_proxy_extra_config: >-2
{{ snipe_it_caddy_reverse_proxy_extra_config | default('') }}
target_hosts: >-2
{{ snipe_it_caddy_hosts | default(snipe_it_hosts | default('snipe_it')) }}
target_become: >-2
{{ snipe_it_caddy_become | default(snipe_it_become | default(false)) }}
target_gather_facts: >-2
{{ snipe_it_caddy_gather_facts | default(false) }}
tags:
- snipe-it
- snipe-it-caddy

View File

@@ -1,43 +0,0 @@
---
- import_playbook: finallycoffee.base.lego_certificate
when: unifi_controller_enable_lego_certificate | default(false) | bool
vars:
target_become: "{{ unifi_controller_become | default(false) }}"
target_hosts: "{{ unifi_controller_hosts | default('unifi_controller') }}"
target_gather_facts: "{{ unifi_controller_gather_facts | default(false) }}"
target_domains:
- "{{ unifi_controller_domain }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ unifi_controller_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_additional_records: "{{ unifi_controller_dns_records | default([]) }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
tags:
- unifi-controller
- unifi-controller-lego
- name: Deploy unifi controller
hosts: "{{ unifi_controller_hosts | default('unifi_controller') }}"
become: "{{ unifi_controller_become | default(false) }}"
gather_facts: "{{ unifi_controller_gather_facts | default(false) }}"
roles:
- role: finallycoffee.services.unifi_controller
tags:
- unifi-controller
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: unifi_controller_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ unifi_controller_domain }}"
caddy_reverse_proxy_backend_addr: "https://{{ unifi_controller_bind_addr }}"
caddy_reverse_proxy_extra_config: >-2
{{ unifi_controller_caddy_reverse_proxy_extra_config | default('', true) }}
target_hosts: >-2
{{ unifi_controller_caddy_hosts | default(unifi_controller_hosts | default('unifi_controller')) }}
target_become: >-2
{{ unifi_controller_caddy_become | default(unifi_controller_become | default(false)) }}
target_gather_facts: >-2
{{ unifi_controller_caddy_gather_facts | default(false) }}
tags:
- unifi-controller
- unifi-controller-caddy

View File

@@ -1,6 +0,0 @@
# `finallycoffee.services.vaultwarden` ansible playbook
## Feature toggles
- `vaultwarden_configure_lego_rfc2136` (default `false`)
- `vaultwarden_configure_caddy_reverse_proxy` (default `false`)

View File

@@ -1,53 +0,0 @@
---
- import_playbook: finallycoffee.base.lego_certificate
when: vaultwarden_configure_lego_rfc2136 | default(false)
vars:
target_domains: "{{ vaultwarden_lego_cert_domains }}"
target_acme_zone: "{{ acme_domain }}"
target_acme_account_email: "{{ vaultwarden_lego_acme_account_email }}"
target_dns_server: "{{ dns_server }}"
target_dns_tsig_key: "{{ dns_tsig_keydata }}"
target_dns_additional_records: "{{ vaultwarden_dns_records }}"
target_hosts: >-2
{{ vaultwarden_lego_hosts | default(vaultwarden_hosts | default('vaultwarden')) }}
target_become: >-2
{{ vaultwarden_lego_become | default(vaultwarden_become | default(false)) }}
target_gather_facts: >-2
{{ vaultwarden_lego_gather_facts | default(false) }}
tags:
- vaultwarden
- vaultwarden-lego
- name: Install and configure vaultwarden
hosts: "{{ vaultwarden_hosts | default('vaultwarden') }}"
become: "{{ vaultwarden_become | default(false) }}"
gather_facts: "{{ vaultwarden_gather_facts | default(false) }}"
pre_tasks:
- name: Ensure host directories are created
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "0750"
loop:
- "{{ vaultwarden_base_dir }}"
- "{{ vaultwarden_config_dir }}"
when: vaultwarden_state == 'present'
roles:
- role: finallycoffee.services.vaultwarden
tags:
- vaultwarden
- import_playbook: finallycoffee.base.caddy_reverse_proxy
when: vaultwarden_configure_caddy_reverse_proxy | default(false)
vars:
caddy_site_name: "{{ vaultwarden_domain }}"
caddy_reverse_proxy_backend_addr: "http://{{ vaultwarden_host_bind_ip }}"
target_hosts: >-2
{{ vaultwarden_caddy_hosts | default(vaultwarden_hosts | default('vaultwarden')) }}
target_become: >-2
{{ vaultwarden_caddy_become | default(vaultwarden_become | default(false)) }}
target_gather_facts: >-2
{{ vaultwarden_caddy_gather_facts | default(false) }}
tags:
- vaultwarden
- vaultwarden-caddy

View File

@@ -1,10 +1,9 @@
---
authelia_version: "4.39.15"
authelia_version: 4.37.5
authelia_user: authelia
authelia_base_dir: /opt/authelia
authelia_domain: authelia.example.org
authelia_state: present
authelia_deployment_method: docker
authelia_config_dir: "{{ authelia_base_dir }}/config"
authelia_config_file: "{{ authelia_config_dir }}/config.yaml"
@@ -15,24 +14,13 @@ authelia_notification_storage_file: "{{ authelia_data_dir }}/notifications.txt"
authelia_user_storage_file: "{{ authelia_data_dir }}/user_database.yml"
authelia_container_name: authelia
authelia_container_image_server: ghcr.io
authelia_container_image_namespace: authelia
authelia_container_image_name: authelia
authelia_container_image: >-2
{{
[
authelia_container_image_server,
authelia_container_image_namespace,
authelia_container_image_name
] | join('/')
}}
authelia_container_image_name: docker.io/authelia/authelia
authelia_container_image_tag: ~
authelia_container_image_ref: >-2
{{ authelia_container_image }}:{{ authelia_container_image_tag | default(authelia_version, true) }}
authelia_container_image_ref: "{{ authelia_container_image_name }}:{{ authelia_container_image_tag | default(authelia_version, true) }}"
authelia_container_image_force_pull: "{{ authelia_container_image_tag | default(false, True) }}"
authelia_container_env:
PUID: "{{ authelia_run_user | string }}"
PGID: "{{ authelia_run_group | string }}"
PUID: "{{ authelia_run_user }}"
PGID: "{{ authelia_run_group }}"
authelia_container_labels: >-2
{{ authelia_container_base_labels | combine(authelia_container_extra_labels) }}
authelia_container_extra_labels: {}
@@ -54,22 +42,12 @@ authelia_config_jwt_secret: ~
authelia_config_default_redirection_url: ~
authelia_config_server_host: 0.0.0.0
authelia_config_server_port: "{{ authelia_container_listen_port }}"
authelia_config_server_address: >-2
{{ authelia_config_server_host }}:{{ authelia_config_server_port }}
authelia_config_server_path: ""
authelia_config_server_asset_path: "/config/assets/"
authelia_config_server_buffers_read: 4096
authelia_config_server_read_buffer_size: >-2
{{ authelia_config_server_buffers_read }}
authelia_config_server_buffers_write: 4096
authelia_config_server_write_buffer_size: >-2
{{ authelia_config_server_buffers_write }}
authelia_config_server_endpoints_enable_pprof: true
authelia_config_server_enable_pprof: >-2
{{ authelia_config_server_endpoints_enable_pprof }}
authelia_config_server_endpoints_enable_expvars: true
authelia_config_server_enable_expvars: >-2
{{ authelia_config_server_endpoints_enable_expvars }}
authelia_config_server_read_buffer_size: 4096
authelia_config_server_write_buffer_size: 4096
authelia_config_server_enable_pprof: true
authelia_config_server_enable_expvars: true
authelia_config_server_disable_healthcheck:
authelia_config_server_tls_key: ~
authelia_config_server_tls_certificate: ~
@@ -92,11 +70,7 @@ authelia_config_webauthn_disable: true
authelia_config_webauthn_timeout: 60s
authelia_config_webauthn_display_name: "Authelia ({{ authelia_domain }})"
authelia_config_webauthn_attestation_conveyance_preference: indirect
authelia_config_webauthn_user_verification: "preferred"
authelia_config_webauthn_selection_criteria_user_verification: >-2
{{ authelia_config_webauthn_user_verification }}
authelia_config_webauthn_selection_criteria_discoverability: "preferred"
authelia_config_webauthn_selection_criteria_attachment: ""
authelia_config_webauthn_user_verification: preferred
authelia_config_duo_api_hostname: ~
authelia_config_duo_api_integration_key: ~
authelia_config_duo_api_secret_key: ~
@@ -111,8 +85,6 @@ authelia_config_authentication_backend_password_reset_disable: false
authelia_config_authentication_backend_password_reset_custom_url: ~
authelia_config_authentication_backend_ldap_implementation: custom
authelia_config_authentication_backend_ldap_url: ldap://127.0.0.1:389
authelia_config_authentication_backend_ldap_address: >-2
{{ authelia_config_authentication_backend_ldap_url }}
authelia_config_authentication_backend_ldap_timeout: 5s
authelia_config_authentication_backend_ldap_start_tls: false
authelia_config_authentication_backend_ldap_tls_skip_verify: false
@@ -122,18 +94,10 @@ authelia_config_authentication_backend_ldap_additional_users_dn: "ou=users"
authelia_config_authentication_backend_ldap_users_filter: "(&(|({username_attribute}={input})({mail_attribute}={input}))(objectClass=inetOrgPerson))"
authelia_config_authentication_backend_ldap_additional_groups_dn: "ou=groups"
authelia_config_authentication_backend_ldap_groups_filter: "(member={dn})"
authelia_config_authentication_backend_ldap_attributes_username: uid
authelia_config_authentication_backend_ldap_username_attribute: >-2
{{ authelia_config_authentication_backend_ldap_attributes_username }}
authelia_config_authentication_backend_ldap_attributes_mail: mail
authelia_config_authentication_backend_ldap_mail_attribute: >-2
{{ authelia_config_authentication_backend_ldap_attributes_mail }}
authelia_config_authentication_backend_ldap_attributes_display_name: displayName
authelia_config_authentication_backend_ldap_display_name_attribute: >-2
{{ authelia_config_authentication_backend_ldap_attributes_display_name }}
authelia_config_authentication_backend_ldap_group_name_attribute: cn
authelia_config_authentication_backend_ldap_attributes_group_name: >-2
{{ authelia_config_authentication_backend_ldap_group_name_attribute }}
authelia_config_authentication_backend_ldap_username_attribute: uid
authelia_config_authentication_backend_ldap_mail_attribute: mail
authelia_config_authentication_backend_ldap_display_name_attribute: displayName
authelia_config_authentication_backend_ldap_user: ~
authelia_config_authentication_backend_ldap_password: ~
authelia_config_authentication_backend_file_path: ~
@@ -161,21 +125,6 @@ authelia_config_session_secret: ~
authelia_config_session_expiration: 1h
authelia_config_session_inactivity: 5m
authelia_config_session_remember_me_duration: 1M
authelia_config_session_remember_me: >-2
{{ authelia_config_session_remember_me_duration }}
authelia_config_session_cookies:
- "{{ authelia_config_session_cookies_default }}"
authelia_config_session_cookies_default_domain: >-2
{{ authelia_config_session_domain }}
authelia_config_session_cookies_default_authelia_url: >-2
https://{{ authelia_config_session_cookies_default_domain }}
authelia_config_session_cookies_default_default_redirection_url: >-2
{{ authelia_config_default_redirection_url }}
authelia_config_session_cookies_default:
domain: "{{ authelia_config_session_cookies_default_domain }}"
authelia_url: "{{ authelia_config_session_cookies_default_authelia_url }}"
default_redirection_url: >-2
{{ authelia_config_session_cookies_default_default_redirection_url }}
authelia_config_session_redis_host: "{{ authelia_redis_host }}"
authelia_config_session_redis_port: "{{ authelia_redis_port }}"
authelia_config_session_redis_username: "{{ authelia_redis_user }}"
@@ -200,7 +149,8 @@ authelia_config_storage_postgres_ssl_certificate: disable
authelia_config_storage_postgres_ssl_key: disable
authelia_config_notifier_disable_startup_check: false
authelia_config_notifier_filesystem_filename: ~
authelia_config_notifier_smtp_address: "{{ authelia_smtp_host }}:{{ authelia_stmp_port }}"
authelia_config_notifier_smtp_host: "{{ authelia_smtp_host }}"
authelia_config_notifier_smtp_port: "{{ authelia_stmp_port }}"
authelia_config_notifier_smtp_username: "{{ authelia_smtp_user }}"
authelia_config_notifier_smtp_password: "{{ authelia_smtp_pass }}"
authelia_config_notifier_smtp_timeout: 5s
@@ -212,19 +162,10 @@ authelia_config_notifier_smtp_disable_require_tls: false
authelia_config_notifier_smtp_disable_html_emails: false
authelia_config_notifier_smtp_tls_skip_verify: false
authelia_config_notifier_smtp_tls_minimum_version: "{{ authelia_tls_minimum_version }}"
authelia_config_identity_validation_reset_password_jwt_secret: >-2
{{ authelia_config_jwt_secret }}
authelia_config_identity_validation_reset_password_jwt_lifespan: "5 minutes"
authelia_config_identity_validation_reset_password_jwt_algorithm: "HS256"
#authelia_config_identity_provider_
authelia_database_type: ~
authelia_database_host: ~
authelia_database_port: ~
authelia_database_address: >-2
{{ authelia_database_host }}{{
(authelia_database_port | default(false, true) | bool)
| ternary(':' + authelia_database_port, '')
}}
authelia_database_user: authelia
authelia_database_pass: ~
authelia_database_name: authelia

View File

@@ -4,7 +4,5 @@
docker_container:
name: "{{ authelia_container_name }}"
state: started
restart: true
comparisons:
'*': ignore
restart: yes
listen: restart-authelia

View File

@@ -1,9 +0,0 @@
---
allow_duplicates: true
dependencies: []
galaxy_info:
role_name: authelia
description: Ansible role to deploy authelia using docker
galaxy_tags:
- authelia
- docker

View File

@@ -1,61 +0,0 @@
---
- name: Ensure container mounts are present
when: authelia_state == 'present'
block:
- name: Ensure sqlite database file exists before mounting it
ansible.builtin.file:
path: "{{ authelia_sqlite_storage_file }}"
state: touch
owner: "{{ authelia_run_user }}"
group: "{{ authelia_run_group }}"
mode: "0640"
access_time: preserve
modification_time: preserve
when: authelia_config_storage_local_path | default(false, true)
- name: Ensure user database exists before mounting it
ansible.builtin.file:
path: "{{ authelia_user_storage_file }}"
state: touch
owner: "{{ authelia_run_user }}"
group: "{{ authelia_run_group }}"
mode: "0640"
access_time: preserve
modification_time: preserve
when: authelia_config_authentication_backend_file_path | default(false, true)
- name: Ensure notification reports file exists before mounting it
ansible.builtin.file:
path: "{{ authelia_notification_storage_file }}"
state: touch
owner: "{{ authelia_run_user }}"
group: "{{ authelia_run_group }}"
mode: "0640"
access_time: preserve
modification_time: preserve
when: authelia_config_notifier_filesystem_filename | default(false, true)
- name: Ensure authelia container image is {{ authelia_state }}
community.docker.docker_image:
name: "{{ authelia_container_image_ref }}"
state: "{{ authelia_state }}"
source: pull
force_source: "{{ authelia_container_image_force_pull }}"
register: authelia_container_image_info
- name: Ensure authelia container is {{ authelia_container_state }}
community.docker.docker_container:
name: "{{ authelia_container_name }}"
image: "{{ authelia_container_image_ref }}"
env: "{{ authelia_container_env }}"
user: "{{ authelia_run_user }}:{{ authelia_run_group }}"
ports: "{{ authelia_container_ports | default(omit, true) }}"
labels: "{{ authelia_container_labels }}"
volumes: "{{ authelia_container_volumes }}"
networks: "{{ authelia_container_networks | default(omit, true) }}"
etc_hosts: "{{ authelia_container_etc_hosts | default(omit, true) }}"
purge_networks: "{{ authelia_container_purge_networks | default(omit, true)}}"
restart_policy: "{{ authelia_container_restart_policy }}"
recreate: "{{ authelia_container_recreate | default(omit, true) }}"
state: "{{ authelia_container_state }}"
register: authelia_container_info

View File

@@ -1,33 +1,20 @@
---
- name: Check for valid state
ansible.builtin.fail:
msg: >-2
Unsupported state '{{ authelia_state }}'.
Supported states are {{ authelia_states | join(', ') }}.
when: authelia_state not in authelia_states
- name: Check for valid authelia deployment method
ansible.builtin.fail:
msg: >-2
Unsupported deployment method '{{ authelia_deployment_method }}'.
Supported states are {{ authelia_deployment_methods | join(', ') }}.
when: authelia_deployment_method not in authelia_deployment_methods
- name: Ensure user {{ authelia_user }} is {{ authelia_state }}
ansible.builtin.user:
- name: Ensure user {{ authelia_user }} exists
user:
name: "{{ authelia_user }}"
state: "{{ authelia_state }}"
state: present
system: true
create_home: false
register: authelia_user_info
- name: Ensure host directories are {{ authelia_state }}
ansible.builtin.file:
- name: Ensure host directories are created with correct permissions
file:
path: "{{ item.path }}"
state: "{{ (authelia_state == 'present') | ternary('directory', 'absent') }}"
state: directory
owner: "{{ item.owner | default(authelia_user) }}"
group: "{{ item.group | default(authelia_user) }}"
mode: "{{ item.mode | default('0750') }}"
when: item.path | default(false, true) | bool
loop:
- path: "{{ authelia_base_dir }}"
mode: "0755"
@@ -38,16 +25,67 @@
- path: "{{ authelia_asset_dir }}"
mode: "0750"
- name: Ensure config file is {{ authelia_state }}
ansible.builtin.copy:
- name: Ensure config file is generated
copy:
content: "{{ authelia_config | to_nice_yaml(indent=2, width=10000) }}"
dest: "{{ authelia_config_file }}"
owner: "{{ authelia_run_user }}"
group: "{{ authelia_run_group }}"
mode: "0640"
notify: restart-authelia
when: authelia_state == 'present'
- name: Deploy authelia using {{ authelia_deployment_method }}
ansible.builtin.include_tasks:
file: "deploy-{{ authelia_deployment_method }}.yml"
- name: Ensure sqlite database file exists before mounting it
file:
path: "{{ authelia_sqlite_storage_file }}"
state: touch
owner: "{{ authelia_run_user }}"
group: "{{ authelia_run_group }}"
mode: "0640"
access_time: preserve
modification_time: preserve
when: authelia_config_storage_local_path | default(false, true)
- name: Ensure user database exists before mounting it
file:
path: "{{ authelia_user_storage_file }}"
state: touch
owner: "{{ authelia_run_user }}"
group: "{{ authelia_run_group }}"
mode: "0640"
access_time: preserve
modification_time: preserve
when: authelia_config_authentication_backend_file_path | default(false, true)
- name: Ensure notification reports file exists before mounting it
file:
path: "{{ authelia_notification_storage_file }}"
state: touch
owner: "{{ authelia_run_user }}"
group: "{{ authelia_run_group }}"
mode: "0640"
access_time: preserve
modification_time: preserve
when: authelia_config_notifier_filesystem_filename | default(false, true)
- name: Ensure authelia container image is present
community.docker.docker_image:
name: "{{ authelia_container_image_ref }}"
state: present
source: pull
force_source: "{{ authelia_container_image_force_pull }}"
register: authelia_container_image_info
- name: Ensure authelia container is running
docker_container:
name: "{{ authelia_container_name }}"
image: "{{ authelia_container_image_ref }}"
env: "{{ authelia_container_env }}"
user: "{{ authelia_run_user }}:{{ authelia_run_group }}"
ports: "{{ authelia_container_ports | default(omit, true) }}"
labels: "{{ authelia_container_labels }}"
volumes: "{{ authelia_container_volumes }}"
networks: "{{ authelia_container_networks | default(omit, true) }}"
purge_networks: "{{ authelia_container_purge_networks | default(omit, true)}}"
restart_policy: "{{ authelia_container_restart_policy }}"
state: "{{ authelia_container_state }}"
register: authelia_container_info

View File

@@ -1,9 +1,4 @@
---
authelia_states:
- "present"
- "absent"
authelia_deployment_methods:
- "docker"
authelia_run_user: "{{ (authelia_user_info.uid) if authelia_user_info is defined else authelia_user }}"
authelia_run_group: "{{ (authelia_user_info.group) if authelia_user_info is defined else authelia_user }}"
@@ -25,6 +20,7 @@ authelia_container_base_labels:
authelia_config: "{{ authelia_base_config | combine(authelia_extra_config, recursive=True) }}"
authelia_top_level_config:
theme: "{{ authelia_config_theme }}"
jwt_secret: "{{ authelia_config_jwt_secret }}"
log: "{{ authelia_config_log }}"
telemetry: "{{ authelia_config_telemetry }}"
totp: "{{ authelia_config_totp }}"
@@ -38,11 +34,12 @@ authelia_top_level_config:
regulation: "{{ authelia_config_regulation }}"
storage: "{{ authelia_config_storage }}"
notifier: "{{ authelia_config_notifier }}"
identity_validation: "{{ authelia_config_identity_validation }}"
authelia_base_config: >-2
{{
authelia_top_level_config
| combine({"default_redirection_url": authelia_config_default_redirection_url}
if authelia_config_default_redirection_url | default(false, true) else {})
| combine(({"server": authelia_config_server })
| combine({"tls": authelia_config_server_tls}
if authelia_config_server_tls_key | default(false, true) else {}))
@@ -51,20 +48,18 @@ authelia_base_config: >-2
authelia_config_server: >-2
{{
{
"address": authelia_config_server_address,
"host": authelia_config_server_host,
"port": authelia_config_server_port,
"path": authelia_config_server_path,
"asset_path": authelia_config_server_asset_path,
"read_buffer_size": authelia_config_server_read_buffer_size,
"write_buffer_size": authelia_config_server_write_buffer_size,
"enable_pprof": authelia_config_server_enable_pprof,
"enable_expvars": authelia_config_server_enable_expvars,
"disable_healthcheck": authelia_config_server_disable_healthcheck,
"endpoints": authelia_config_server_endpoints,
"buffers": authelia_config_server_buffers,
} | combine({"headers": {"csp_template": authelia_config_server_headers_csp_template}}
if authelia_config_server_headers_csp_template | default(false, true) else {})
}}
authelia_config_server_endpoints:
enable_expvars: "{{ authelia_config_server_endpoints_enable_expvars }}"
enable_pprof: "{{ authelia_config_server_endpoints_enable_pprof }}"
authelia_config_server_buffers:
read: "{{ authelia_config_server_buffers_read }}"
write: "{{ authelia_config_server_buffers_write }}"
authelia_config_server_tls:
key: "{{ authelia_config_server_tls_key }}"
certificate: "{{ authelia_config_server_tls_certificate }}"
@@ -97,10 +92,7 @@ authelia_config_webauthn:
timeout: "{{ authelia_config_webauthn_timeout }}"
display_name: "{{ authelia_config_webauthn_display_name }}"
attestation_conveyance_preference: "{{ authelia_config_webauthn_attestation_conveyance_preference }}"
selection_criteria:
attachment: "{{ authelia_config_webauthn_selection_criteria_attachment }}"
discoverability: "{{ authelia_config_webauthn_selection_criteria_discoverability }}"
user_verification: "{{ authelia_config_webauthn_selection_criteria_user_verification }}"
user_verification: "{{ authelia_config_webauthn_user_verification }}"
authelia_config_duo_api:
hostname: "{{ authelia_config_duo_api_hostname }}"
integration_key: "{{ authelia_config_duo_api_integration_key }}"
@@ -129,7 +121,7 @@ authelia_config_authentication_backend_password_reset:
disable: "{{ authelia_config_authentication_backend_password_reset_disable }}"
authelia_config_authentication_backend_ldap:
implementation: "{{ authelia_config_authentication_backend_ldap_implementation }}"
address: "{{ authelia_config_authentication_backend_ldap_address }}"
url: "{{ authelia_config_authentication_backend_ldap_url }}"
timeout: "{{ authelia_config_authentication_backend_ldap_timeout }}"
start_tls: "{{ authelia_config_authentication_backend_ldap_start_tls }}"
tls:
@@ -140,11 +132,10 @@ authelia_config_authentication_backend_ldap:
additional_groups_dn: "{{ authelia_config_authentication_backend_ldap_additional_groups_dn }}"
users_filter: "{{ authelia_config_authentication_backend_ldap_users_filter }}"
groups_filter: "{{ authelia_config_authentication_backend_ldap_groups_filter }}"
attributes:
username: "{{ authelia_config_authentication_backend_ldap_attributes_username }}"
mail: "{{ authelia_config_authentication_backend_ldap_attributes_mail }}"
display_name: "{{ authelia_config_authentication_backend_ldap_attributes_display_name }}"
group_name: "{{ authelia_config_authentication_backend_ldap_attributes_group_name }}"
group_name_attribute: "{{ authelia_config_authentication_backend_ldap_group_name_attribute }}"
username_attribute: "{{ authelia_config_authentication_backend_ldap_username_attribute }}"
mail_attribute: "{{ authelia_config_authentication_backend_ldap_mail_attribute }}"
display_name_attribute: "{{ authelia_config_authentication_backend_ldap_display_name_attribute }}"
user: "{{ authelia_config_authentication_backend_ldap_user }}"
password: "{{ authelia_config_authentication_backend_ldap_password }}"
authelia_config_authentication_backend_file:
@@ -176,19 +167,14 @@ authelia_config_access_control:
default_policy: "{{ authelia_config_access_control_default_policy }}"
networks: "{{ authelia_config_access_control_networks }}"
rules: "{{ authelia_config_access_control_rules }}"
authelia_config_session: >-2
{{ authelia_config_session_base
| combine(({'redis': authelia_config_session_redis}
if authelia_config_session_redis_host else {}), recursive=true)
}}
authelia_config_session_base:
authelia_config_session:
name: "{{ authelia_config_session_name }}"
domain: "{{ authelia_config_session_domain }}"
same_site: "{{ authelia_config_session_same_site }}"
secret: "{{ authelia_config_session_secret }}"
expiration: "{{ authelia_config_session_expiration }}"
inactivity: "{{ authelia_config_session_inactivity }}"
remember_me: "{{ authelia_config_session_remember_me }}"
cookies: "{{ authelia_config_session_cookies }}"
remember_me_duration: "{{ authelia_config_session_remember_me_duration }}"
authelia_config_session_redis: >-2
{{
{
@@ -232,13 +218,15 @@ authelia_config_storage: >-2
authelia_config_storage_local:
path: "{{ authelia_config_storage_local_path }}"
authelia_config_storage_mysql:
host: "{{ authelia_database_address }}"
host: "{{ authelia_database_host }}"
port: "{{ authelia_config_storage_mysql_port }}"
database: "{{ authelia_database_name }}"
username: "{{ authelia_database_user }}"
password: "{{ authelia_database_pass }}"
timeout: "{{ authelia_database_timeout }}"
authelia_config_storage_postgres:
address: "{{ authelia_database_address }}"
host: "{{ authelia_database_host }}"
port: "{{ authelia_config_storage_postgres_port }}"
database: "{{ authelia_database_name }}"
schema: public
username: "{{ authelia_database_user }}"
@@ -262,7 +250,8 @@ authelia_config_notifier: >-2
authelia_config_notifier_filesystem:
filename: "{{ authelia_config_notifier_filesystem_filename }}"
authelia_config_notifier_smtp:
address: "{{ authelia_config_notifier_smtp_address }}"
host: "{{ authelia_config_notifier_smtp_host }}"
port: "{{ authelia_config_notifier_smtp_port }}"
timeout: "{{ authelia_config_notifier_smtp_timeout }}"
username: "{{ authelia_config_notifier_smtp_username }}"
password: "{{ authelia_config_notifier_smtp_password }}"
@@ -275,9 +264,3 @@ authelia_config_notifier_smtp:
tls:
skip_verify: "{{ authelia_config_notifier_smtp_tls_skip_verify }}"
minimum_version: "{{ authelia_config_notifier_smtp_tls_minimum_version }}"
authelia_config_identity_validation:
reset_password: "{{ authelia_config_identity_validation_reset_password }}"
authelia_config_identity_validation_reset_password:
jwt_secret: "{{ authelia_config_identity_validation_reset_password_jwt_secret }}"
jwt_lifespan: "{{ authelia_config_identity_validation_reset_password_jwt_lifespan }}"
jwt_algorithm: "{{ authelia_config_identity_validation_reset_password_jwt_algorithm }}"

View File

@@ -0,0 +1,22 @@
# `finallycoffee.services.elasticsearch`
A simple ansible role which deploys a single-node elasticsearch container to provide
an easy way to do some indexing.
## Usage
Per default, `/opt/elasticsearch/data` is used to persist data; this can be
customized using either `elasticsearch_base_path` or `elasticsearch_data_path`.
As elasticsearch can be quite memory-heavy, the maximum amount of allowed RAM
can be configured using `elasticsearch_allocated_ram_mb`, defaulting to 512 (mb).
The cluster name and discovery type can be overridden using
`elasticsearch_config_cluster_name` (default: elastic) and
`elasticsearch_config_discovery_type` (default: single-node), should one
need a multi-node elasticsearch deployment.
Per default, no ports or networks are mapped, and explicit mapping using
either ports (`elasticsearch_container_ports`) or networks
(`elasticsearch_container_networks`) is required in order for other services
to be able to reach elasticsearch.
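
Taken together, a playbook using this role might look like the following minimal sketch. The host group, RAM size and port mapping are illustrative assumptions, not role defaults:

```yaml
# Example playbook for the elasticsearch role; the host group,
# RAM allocation and port mapping below are illustrative only.
- hosts: search
  become: true
  roles:
    - role: finallycoffee.services.elasticsearch
      vars:
        elasticsearch_allocated_ram_mb: 1024
        # expose elasticsearch to the host, since no ports are mapped per default
        elasticsearch_container_ports:
          - "127.0.0.1:9200:9200"
```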

View File

@@ -0,0 +1,35 @@
---
elasticsearch_version: 7.17.7
elasticsearch_base_path: /opt/elasticsearch
elasticsearch_data_path: "{{ elasticsearch_base_path }}/data"
elasticsearch_config_cluster_name: elastic
elasticsearch_config_discovery_type: single-node
elasticsearch_config_boostrap_memory_lock: true
elasticsearch_allocated_ram_mb: 512
elasticsearch_container_image_name: docker.elastic.co/elasticsearch/elasticsearch-oss
elasticsearch_container_image_tag: ~
elasticsearch_container_image: >-
{{ elasticsearch_container_image_name }}:{{ elasticsearch_container_image_tag | default(elasticsearch_version, true) }}
elasticsearch_container_name: elasticsearch
elasticsearch_container_env:
"ES_JAVA_OPTS": "-Xms{{ elasticsearch_allocated_ram_mb }}m -Xmx{{ elasticsearch_allocated_ram_mb }}m"
"cluster.name": "{{ elasticsearch_config_cluster_name }}"
"discovery.type": "{{ elasticsearch_config_discovery_type }}"
"bootstrap.memory_lock": "{{ 'true' if elasticsearch_config_boostrap_memory_lock else 'false' }}"
elasticsearch_container_user: ~
elasticsearch_container_ports: ~
elasticsearch_container_labels:
version: "{{ elasticsearch_version }}"
elasticsearch_container_ulimits:
# - "memlock:{{ (1.5 * 1024 * elasticsearch_allocated_ram_mb) | int }}:{{ (1.5 * 1024 * elasticsearch_allocated_ram_mb) | int }}"
- "memlock:-1:-1"
elasticsearch_container_volumes:
- "{{ elasticsearch_data_path }}:/usr/share/elasticsearch/data:z"
elasticsearch_container_networks: ~
elasticsearch_container_purge_networks: ~
elasticsearch_container_restart_policy: unless-stopped

View File

@@ -0,0 +1,32 @@
---
- name: Ensure host directories are present
file:
path: "{{ item }}"
state: directory
mode: "0777"
loop:
- "{{ elasticsearch_base_path }}"
- "{{ elasticsearch_data_path }}"
- name: Ensure elastic container image is present
docker_image:
name: "{{ elasticsearch_container_image }}"
state: present
source: pull
force_source: "{{ elasticsearch_container_image_tag|default(false, true)|bool }}"
- name: Ensure elastic container is running
docker_container:
name: "{{ elasticsearch_container_name }}"
image: "{{ elasticsearch_container_image }}"
env: "{{ elasticsearch_container_env | default(omit, True) }}"
user: "{{ elasticsearch_container_user | default(omit, True) }}"
ports: "{{ elasticsearch_container_ports | default(omit, True) }}"
labels: "{{ elasticsearch_container_labels | default(omit, True) }}"
volumes: "{{ elasticsearch_container_volumes }}"
ulimits: "{{ elasticsearch_container_ulimits }}"
networks: "{{ elasticsearch_container_networks | default(omit, True) }}"
purge_networks: "{{ elasticsearch_container_purge_networks | default(omit, True) }}"
restart_policy: "{{ elasticsearch_container_restart_policy }}"
state: started

View File

@@ -1,18 +0,0 @@
# `finallycoffee.services.ghost` ansible role
[Ghost](https://ghost.org/) is a self-hosted blog with rich media capabilities,
which this role deploys in a docker container.
## Requirements
Ghost requires a MySQL database (such as MariaDB) for storing its data, which
can be configured using the `ghost_database_(host|username|password|database)` variables.
Setting `ghost_domain` to the fully-qualified domain on which ghost should be reachable
is also required.
Ghost's configuration can be changed using the `ghost_config` variable.
Container arguments equivalent to those of `community.docker.docker_container` can be
provided in the `ghost_container_[...]` syntax (e.g. `ghost_container_ports` to expose
ghost's port to the host).
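
As a minimal sketch, wiring these variables together in a playbook could look as follows. The domain, credentials and port mapping are placeholder assumptions (2368 is ghost's default listen port):

```yaml
# Example playbook for the ghost role; all concrete values are placeholders.
- hosts: blog
  become: true
  roles:
    - role: finallycoffee.services.ghost
      vars:
        ghost_domain: "blog.example.org"
        ghost_database_host: "localhost"
        ghost_database_username: "ghost"
        ghost_database_password: "{{ vault_ghost_database_password }}"
        ghost_database_database: "ghost"
        # expose ghost's port to the host
        ghost_container_ports:
          - "127.0.0.1:2368:2368"
```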

View File

@@ -1,6 +1,7 @@
---
ghost_domain: ~
ghost_version: "6.10.3"
ghost_version: "5.33.6"
ghost_user: ghost
ghost_user_group: ghost
ghost_base_path: /opt/ghost
@@ -35,4 +36,3 @@ ghost_container_restart_policy: "unless-stopped"
ghost_container_networks: ~
ghost_container_purge_networks: ~
ghost_container_etc_hosts: ~
ghost_container_state: started

View File

@@ -1,10 +0,0 @@
---
allow_duplicates: true
dependencies: []
galaxy_info:
role_name: ghost
description: Ansible role to deploy ghost (https://ghost.org) using docker
galaxy_tags:
- ghost
- blog
- docker

View File

@@ -16,16 +16,15 @@
- name: Ensure host paths for docker volumes exist for ghost
ansible.builtin.file:
path: "{{ item.path }}"
path: "{{ item }}"
state: directory
mode: "0750"
owner: "{{ item.owner | default(ghost_user) }}"
group: "{{ item.group | default(ghost_user_group) }}"
owner: "{{ ghost_user }}"
group: "{{ ghost_user_group }}"
loop:
- path: "{{ ghost_base_path }}"
- path: "{{ ghost_data_path }}"
owner: "1000"
- path: "{{ ghost_config_path }}"
- "{{ ghost_base_path }}"
- "{{ ghost_data_path }}"
- "{{ ghost_config_path }}"
- name: Ensure ghost configuration file is templated
ansible.builtin.template:
@@ -42,7 +41,7 @@
source: pull
force_source: "{{ ghost_container_image_tag is defined }}"
- name: Ensure ghost container '{{ ghost_container_name }}' is {{ ghost_container_state }}
- name: Ensure ghost container is running
community.docker.docker_container:
name: "{{ ghost_container_name }}"
image: "{{ ghost_container_image }}"
@@ -54,4 +53,4 @@
networks: "{{ ghost_container_networks | default(omit, true) }}"
purge_networks: "{{ ghost_container_purge_networks | default(omit, true) }}"
restart_policy: "{{ ghost_container_restart_policy }}"
state: "{{ ghost_container_state }}"
state: started

View File

@@ -1,7 +1,7 @@
---
gitea_version: "1.25.3"
gitea_version: "1.19.4"
gitea_user: git
gitea_run_user: "{{ gitea_user }}"
gitea_base_path: "/opt/gitea"
gitea_data_path: "{{ gitea_base_path }}/data"
@@ -9,30 +9,17 @@ gitea_data_path: "{{ gitea_base_path }}/data"
gitea_domain: ~
# container config
gitea_container_name: "{{ gitea_user }}"
gitea_container_image_server: "docker.io"
gitea_container_image_name: "gitea"
gitea_container_image_namespace: gitea
gitea_container_image_fq_name: >-
{{
[
gitea_container_image_server,
gitea_container_image_namespace,
gitea_container_image_name
] | join('/')
}}
gitea_container_name: "git"
gitea_container_image_name: "docker.io/gitea/gitea"
gitea_container_image_tag: "{{ gitea_version }}"
gitea_container_image: >-2
{{ gitea_container_image_fq_name }}:{{ gitea_container_image_tag }}
gitea_container_image: "{{ gitea_container_image_name }}:{{ gitea_container_image_tag }}"
gitea_container_networks: []
gitea_container_purge_networks: ~
gitea_container_restart_policy: "unless-stopped"
gitea_container_extra_env: {}
gitea_container_extra_labels: {}
gitea_contianer_extra_labels: {}
gitea_container_extra_ports: []
gitea_container_extra_volumes: []
gitea_container_state: started
gitea_container_user: ~
# container defaults
gitea_container_base_volumes:
@@ -44,8 +31,8 @@ gitea_container_base_ports:
- "127.0.0.1:{{ git_container_port_ssh }}:{{ git_container_port_ssh }}"
gitea_container_base_env:
USER_UID: "{{ gitea_user_res.uid | default(gitea_user) | string }}"
USER_GID: "{{ gitea_user_res.group | default(gitea_user) | string }}"
USER_UID: "{{ gitea_user_res.uid | default(gitea_user) }}"
USER_GID: "{{ gitea_user_res.group | default(gitea_user) }}"
gitea_container_base_labels:
version: "{{ gitea_version }}"
@@ -53,10 +40,10 @@ gitea_container_base_labels:
gitea_config_mailer_enabled: false
gitea_config_mailer_type: ~
gitea_config_mailer_from_addr: ~
gitea_config_mailer_smtp_addr: ~
gitea_config_mailer_host: ~
gitea_config_mailer_user: ~
gitea_config_mailer_passwd: ~
gitea_config_mailer_protocol: ~
gitea_config_mailer_tls: ~
gitea_config_mailer_sendmail_path: ~
gitea_config_metrics_enabled: false

View File

@@ -1,10 +0,0 @@
---
allow_duplicates: true
dependencies: []
galaxy_info:
role_name: gitea
description: Ansible role to deploy gitea using docker
galaxy_tags:
- gitea
- git
- docker

View File

@@ -1,14 +1,14 @@
---
- name: Ensure gitea user '{{ gitea_user }}' is present
ansible.builtin.user:
- name: Create gitea user
user:
name: "{{ gitea_user }}"
state: "present"
system: false
create_home: true
state: present
system: no
register: gitea_user_res
- name: Ensure host directories exist
ansible.builtin.file:
file:
path: "{{ item }}"
owner: "{{ gitea_user_res.uid }}"
group: "{{ gitea_user_res.group }}"
@@ -18,7 +18,7 @@
- "{{ gitea_data_path }}"
- name: Ensure .ssh folder for gitea user exists
ansible.builtin.file:
file:
path: "/home/{{ gitea_user }}/.ssh"
state: directory
owner: "{{ gitea_user_res.uid }}"
@@ -37,16 +37,16 @@
register: gitea_user_ssh_key
- name: Create forwarding script
ansible.builtin.copy:
copy:
dest: "/usr/local/bin/gitea"
owner: "{{ gitea_user_res.uid }}"
group: "{{ gitea_user_res.group }}"
mode: 0700
content: |
ssh -p {{ gitea_public_ssh_server_port }} -o StrictHostKeyChecking=no {{ gitea_run_user }}@127.0.0.1 -i /home/{{ gitea_user }}/.ssh/id_ssh_ed25519 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
ssh -p {{ gitea_public_ssh_server_port }} -o StrictHostKeyChecking=no {{ gitea_user }}@127.0.0.1 -i /home/{{ gitea_user }}/.ssh/id_ssh_ed25519 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
- name: Add host pubkey to git users authorized_keys file
ansible.builtin.lineinfile:
lineinfile:
path: "/home/{{ gitea_user }}/.ssh/authorized_keys"
line: "{{ gitea_user_ssh_key.public_key }} Gitea:Host2Container"
state: present
@@ -56,28 +56,26 @@
mode: 0600
- name: Ensure gitea container image is present
community.docker.docker_image:
docker_image:
name: "{{ gitea_container_image }}"
state: present
source: pull
force_source: "{{ gitea_container_image.endswith(':latest') }}"
- name: Ensure container '{{ gitea_container_name }}' with gitea is {{ gitea_container_state }}
community.docker.docker_container:
- name: Ensure container '{{ gitea_container_name }}' with gitea is running
docker_container:
name: "{{ gitea_container_name }}"
image: "{{ gitea_container_image }}"
env: "{{ gitea_container_env }}"
labels: "{{ gitea_container_labels }}"
volumes: "{{ gitea_container_volumes }}"
networks: "{{ gitea_container_networks | default(omit, True) }}"
purge_networks: "{{ gitea_container_purge_networks | default(omit, True) }}"
published_ports: "{{ gitea_container_ports }}"
restart_policy: "{{ gitea_container_restart_policy }}"
state: "{{ gitea_container_state }}"
user: "{{ gitea_container_user | default(omit, true) }}"
state: started
- name: Ensure given configuration is set in the config file
ansible.builtin.ini_file:
ini_file:
path: "{{ gitea_data_path }}/gitea/conf/app.ini"
section: "{{ section }}"
option: "{{ option }}"

View File

@@ -14,7 +14,7 @@ gitea_container_port_ssh: 22
gitea_config_base:
RUN_MODE: prod
RUN_USER: "{{ gitea_run_user }}"
RUN_USER: "{{ gitea_user }}"
server:
SSH_DOMAIN: "{{ gitea_domain }}"
DOMAIN: "{{ gitea_domain }}"
@@ -24,11 +24,11 @@ gitea_config_base:
mailer:
ENABLED: "{{ gitea_config_mailer_enabled }}"
MAILER_TYP: "{{ gitea_config_mailer_type }}"
SMTP_ADDR: "{{ gitea_config_mailer_smtp_addr }}"
HOST: "{{ gitea_config_mailer_host }}"
USER: "{{ gitea_config_mailer_user }}"
PASSWD: "{{ gitea_config_mailer_passwd }}"
PROTOCOL: "{{ gitea_config_mailer_protocol }}"
FROM: "{{ gitea_config_mailer_from }}"
IS_TLS_ENABLED: "{{ gitea_config_mailer_tls }}"
FROM: "{{ gitea_config_mailer_from_addr }}"
SENDMAIL_PATH: "{{ gitea_config_mailer_sendmail_path }}"
metrics:
ENABLED: "{{ gitea_config_metrics_enabled }}"

View File

@@ -1,21 +0,0 @@
# `finallycoffee.services.hedgedoc` ansible role
Role to deploy and configure hedgedoc using `docker` or `podman`.
To configure hedgedoc, either set the configuration as complex data
directly in `hedgedoc_config`, or use the flattened variables
with the `hedgedoc_config_*` prefix (see
[defaults/main/config.yml](defaults/main/config.yml)).
To remove hedgedoc, set `hedgedoc_state: absent`. Note that this
will delete all data directories as well, removing any traces this
role created on the target (except database contents).
# Required configuration
- `hedgedoc_config_domain` - Domain of the hedgedoc instance
- `hedgedoc_config_session_secret` - session secret for hedgedoc
## Deployment methods
To choose the desired deployment method, set `hedgedoc_deployment_method` to one
of the supported deployment methods (see [vars/main.yml](vars/main.yml#5)).
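A minimal playbook invoking this role might look as follows (the host group and vault lookups are illustrative; supply the secrets however your setup manages them):

```yaml
- hosts: hedgedoc_servers
  become: true
  roles:
    - role: finallycoffee.services.hedgedoc
      vars:
        hedgedoc_config_domain: "pad.example.org"
        hedgedoc_config_session_secret: "{{ vault_hedgedoc_session_secret }}"
        hedgedoc_config_db_password: "{{ vault_hedgedoc_db_password }}"
        hedgedoc_deployment_method: "podman"
```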

View File

@@ -1,52 +0,0 @@
---
hedgedoc_config_domain: ~
hedgedoc_config_log_level: "info"
hedgedoc_config_session_secret: ~
hedgedoc_config_protocol_use_ssl: true
hedgedoc_config_hsts_enable: true
hedgedoc_config_csp_enable: true
hedgedoc_config_cookie_policy: 'lax'
hedgedoc_config_allow_free_url: true
hedgedoc_config_allow_email_register: false
hedgedoc_config_allow_anonymous: true
hedgedoc_config_allow_gravatar: true
hedgedoc_config_require_free_url_authentication: true
hedgedoc_config_default_permission: 'full'
hedgedoc_config_db_username: hedgedoc
hedgedoc_config_db_password: ~
hedgedoc_config_db_database: hedgedoc
hedgedoc_config_db_host: localhost
hedgedoc_config_db_port: 5432
hedgedoc_config_db_dialect: postgres
hedgedoc_config_database:
username: "{{ hedgedoc_config_db_username }}"
password: "{{ hedgedoc_config_db_password }}"
database: "{{ hedgedoc_config_db_database }}"
host: "{{ hedgedoc_config_db_host }}"
port: "{{ hedgedoc_config_db_port | int }}"
dialect: "{{ hedgedoc_config_db_dialect }}"
hedgedoc_config_base:
production:
domain: "{{ hedgedoc_config_domain }}"
loglevel: "{{ hedgedoc_config_log_level }}"
sessionSecret: "{{ hedgedoc_config_session_secret }}"
protocolUseSSL: "{{ hedgedoc_config_protocol_use_ssl }}"
cookiePolicy: "{{ hedgedoc_config_cookie_policy }}"
allowFreeURL: "{{ hedgedoc_config_allow_free_url }}"
allowAnonymous: "{{ hedgedoc_config_allow_anonymous }}"
allowEmailRegister: "{{ hedgedoc_config_allow_email_register }}"
allowGravatar: "{{ hedgedoc_config_allow_gravatar }}"
requireFreeURLAuthentication: >-2
{{ hedgedoc_config_require_free_url_authentication }}
defaultPermission: "{{ hedgedoc_config_default_permission }}"
hsts:
enable: "{{ hedgedoc_config_hsts_enable }}"
csp:
enable: "{{ hedgedoc_config_csp_enable }}"
db: "{{ hedgedoc_config_database }}"
hedgedoc_config: ~
hedgedoc_full_config: >-2
{{ hedgedoc_config_base | default({}, true)
| combine(hedgedoc_config | default({}, true), recursive=True) }}

View File

@@ -1,57 +0,0 @@
---
hedgedoc_container_image_registry: quay.io
hedgedoc_container_image_namespace: hedgedoc
hedgedoc_container_image_name: hedgedoc
hedgedoc_container_image_flavour: alpine
hedgedoc_container_image_tag: ~
hedgedoc_container_image: >-2
{{
([
hedgedoc_container_image_registry,
hedgedoc_container_image_namespace | default([], true),
hedgedoc_container_image_name,
] | flatten | join('/'))
+ ':'
+ hedgedoc_container_image_tag | default(
hedgedoc_version + (
((hedgedoc_container_image_flavour is string)
and (hedgedoc_container_image_flavour | length > 0))
| ternary('-' +
hedgedoc_container_image_flavour | default('', true),
''
)
),
true
)
}}
hedgedoc_container_image_source: pull
hedgedoc_container_name: hedgedoc
hedgedoc_container_state: >-2
{{ (hedgedoc_state == 'present') | ternary('started', 'absent') }}
hedgedoc_container_config_file: "/hedgedoc/config.json"
hedgedoc_container_upload_path: "/hedgedoc/public/uploads"
hedgedoc_container_env: ~
hedgedoc_container_user: >-2
{{ hedgedoc_run_user_id }}:{{ hedgedoc_run_group_id }}
hedgedoc_container_ports: ~
hedgedoc_container_networks: ~
hedgedoc_container_etc_hosts: ~
hedgedoc_container_base_volumes:
- "{{ hedgedoc_config_file }}:{{ hedgedoc_container_config_file }}:ro"
- "{{ hedgedoc_uploads_path }}:{{ hedgedoc_container_upload_path }}:rw"
hedgedoc_container_volumes: ~
hedgedoc_container_all_volumes: >-2
{{ hedgedoc_container_base_volumes | default([], true)
+ hedgedoc_container_volumes | default([], true) }}
hedgedoc_container_base_labels:
version: "{{ hedgedoc_container_tag | default(hedgedoc_version, true) }}"
hedgedoc_container_labels: ~
hedgedoc_container_network_mode: ~
hedgedoc_container_all_labels: >-2
{{ hedgedoc_container_base_labels | default({}, true)
| combine(hedgedoc_container_labels | default({}, true)) }}
hedgedoc_container_restart_policy: >-2
{{ (hedgedoc_deployment_method == 'docker')
| ternary('unless-stopped', 'on-failure') }}

View File

@@ -1,9 +0,0 @@
---
hedgedoc_user: hedgedoc
hedgedoc_version: "1.10.5"
hedgedoc_state: present
hedgedoc_deployment_method: docker
hedgedoc_config_file: "/etc/hedgedoc/config.json"
hedgedoc_uploads_path: "/var/lib/hedgedoc-uploads"

View File

@@ -1,5 +0,0 @@
---
hedgedoc_run_user_id: >-2
{{ hedgedoc_user_info.uid | default(hedgedoc_user) }}
hedgedoc_run_group_id: >-2
{{ hedgedoc_user_info.group | default(hedgedoc_user) }}

View File

@@ -1,12 +0,0 @@
---
allow_duplicates: true
dependencies: []
galaxy_info:
role_name: hedgedoc
description: >-2
Deploy hedgedoc, a collaborative markdown editor, using docker
galaxy_tags:
- hedgedoc
- markdown
- collaboration
- docker

View File

@@ -1,23 +0,0 @@
---
- name: Check for valid state
ansible.builtin.fail:
msg: >-2
Unsupported state '{{ hedgedoc_state }}'. Supported
states are {{ hedgedoc_states | join(', ') }}.
when: hedgedoc_state not in hedgedoc_states
- name: Check for valid deployment method
ansible.builtin.fail:
msg: >-2
Deployment method '{{ hedgedoc_deployment_method }}'
is not supported. Supported are:
{{ hedgedoc_deployment_methods | join(', ') }}
when: hedgedoc_deployment_method not in hedgedoc_deployment_methods
- name: Ensure required variables are given
ansible.builtin.fail:
msg: "Required variable '{{ item }}' is undefined!"
loop: "{{ hedgedoc_required_arguments }}"
when: >-2
item not in hostvars[inventory_hostname]
or hostvars[inventory_hostname][item] | length == 0

View File

@@ -1,31 +0,0 @@
---
- name: Ensure container image '{{ hedgedoc_container_image }}' is {{ hedgedoc_state }}
community.docker.docker_image:
name: "{{ hedgedoc_container_image }}"
state: "{{ hedgedoc_state }}"
source: "{{ hedgedoc_container_image_source }}"
force_source: >-2
{{ hedgedoc_container_force_source | default(
hedgedoc_container_image_tag | default(false, true), true) }}
register: hedgedoc_container_image_info
until: hedgedoc_container_image_info is success
retries: 5
delay: 3
- name: Ensure container '{{ hedgedoc_container_name }}' is {{ hedgedoc_container_state }}
community.docker.docker_container:
name: "{{ hedgedoc_container_name }}"
image: "{{ hedgedoc_container_image }}"
env: "{{ hedgedoc_container_env | default(omit, true) }}"
user: "{{ hedgedoc_container_user | default(omit, true) }}"
ports: "{{ hedgedoc_container_ports | default(omit, true) }}"
labels: "{{ hedgedoc_container_all_labels }}"
volumes: "{{ hedgedoc_container_all_volumes }}"
etc_hosts: "{{ hedgedoc_container_etc_hosts | default(omit, true) }}"
dns_servers: >-2
{{ hedgedoc_container_dns_servers | default(omit, true) }}
network_mode: >-2
{{ hedgedoc_container_network_mode | default(omit, true) }}
restart_policy: >-2
{{ hedgedoc_container_restart_policy | default(omit, true) }}
state: "{{ hedgedoc_container_state }}"

View File

@@ -1,31 +0,0 @@
---
- name: Ensure container image '{{ hedgedoc_container_image }}' is {{ hedgedoc_state }}
containers.podman.podman_image:
name: "{{ hedgedoc_container_image }}"
state: "{{ hedgedoc_state }}"
pull: "{{ (hedgedoc_container_image_source == 'pull') | bool }}"
force: >-2
{{ hedgedoc_container_force_source | default(
hedgedoc_container_image_tag | default(false, true), true) }}
register: hedgedoc_container_image_info
until: hedgedoc_container_image_info is success
retries: 5
delay: 3
- name: Ensure container '{{ hedgedoc_container_name }}' is {{ hedgedoc_container_state }}
containers.podman.podman_container:
name: "{{ hedgedoc_container_name }}"
image: "{{ hedgedoc_container_image }}"
env: "{{ hedgedoc_container_env | default(omit, true) }}"
user: "{{ hedgedoc_container_user | default(omit, true) }}"
ports: "{{ hedgedoc_container_ports | default(omit, true) }}"
labels: "{{ hedgedoc_container_all_labels }}"
volumes: "{{ hedgedoc_container_all_volumes }}"
etc_hosts: "{{ hedgedoc_container_etc_hosts | default(omit, true) }}"
dns_servers: >-2
{{ hedgedoc_container_dns_servers | default(omit, true) }}
network_mode: >-2
{{ hedgedoc_container_network_mode | default(omit, true) }}
restart_policy: >-2
{{ hedgedoc_container_restart_policy | default(omit, true) }}
state: "{{ hedgedoc_container_state }}"

View File

@@ -1,21 +0,0 @@
---
- name: Check preconditions
ansible.builtin.include_tasks:
file: "check.yml"
- name: Ensure user '{{ hedgedoc_user }}' is {{ hedgedoc_state }}
ansible.builtin.user:
name: "{{ hedgedoc_user }}"
state: "{{ hedgedoc_state }}"
system: "{{ hedgedoc_user_system | default(true, false) }}"
register: hedgedoc_user_info
- name: Ensure configuration file '{{ hedgedoc_config_file }}' is {{ hedgedoc_state }}
ansible.builtin.copy:
dest: "{{ hedgedoc_config_file }}"
content: "{{ hedgedoc_full_config | to_nice_json }}"
when: hedgedoc_state == 'present'
- name: Ensure hedgedoc is deployed using {{ hedgedoc_deployment_method }}
ansible.builtin.include_tasks:
file: "deploy-{{ hedgedoc_deployment_method }}.yml"

View File

@@ -1,11 +0,0 @@
---
hedgedoc_states:
- present
- absent
hedgedoc_deployment_methods:
- docker
- podman
hedgedoc_required_arguments:
- hedgedoc_config_domain
- hedgedoc_config_session_secret

View File

@@ -1,15 +0,0 @@
# `finallycoffee.services.jellyfin` ansible role
This role runs [Jellyfin](https://jellyfin.org/), a free software media system,
in a docker container.
## Usage
`jellyfin_domain` contains the FQDN on which jellyfin should listen. Most configuration
is done in the software itself.
Jellyfin runs in host networking mode by default, as that is required for features such as
network discovery with Chromecasts and similar devices.
Media can be mounted into jellyfin using `jellyfin_media_volumes`, taking a list of strings
akin to `community.docker.docker_container`'s `volumes` key.
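For example, a play mounting two media libraries read-only could look like this sketch (the host group and paths are illustrative):

```yaml
- hosts: media_servers
  become: true
  roles:
    - role: finallycoffee.services.jellyfin
      vars:
        jellyfin_domain: "media.example.org"
        jellyfin_media_volumes:
          - "/mnt/storage/movies:/data/movies:ro"
          - "/mnt/storage/music:/data/music:ro"
```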

View File

@@ -1,8 +1,7 @@
---
jellyfin_user: jellyfin
jellyfin_version: "10.11.5"
jellyfin_state: present
jellyfin_deployment_method: docker
jellyfin_version: 10.8.6
jellyfin_base_path: /opt/jellyfin
jellyfin_config_path: "{{ jellyfin_base_path }}/config"
@@ -13,11 +12,7 @@ jellyfin_media_volumes: []
jellyfin_container_name: jellyfin
jellyfin_container_image_name: "docker.io/jellyfin/jellyfin"
jellyfin_container_image_tag: ~
jellyfin_container_image_ref: >-2
{{ jellyfin_container_image_name }}:{{ jellyfin_container_image_tag | default(jellyfin_version, true) }}
jellyfin_container_image_source: pull
jellyfin_container_state: >-2
{{ (jellyfin_state == 'present') | ternary('started', 'absent') }}
jellyfin_container_image_ref: "{{ jellyfin_container_image_name }}:{{ jellyfin_container_image_tag | default(jellyfin_version, true) }}"
jellyfin_container_network_mode: host
jellyfin_container_networks: ~
jellyfin_container_volumes: "{{ jellyfin_container_base_volumes + jellyfin_media_volumes }}"

View File

@@ -1,10 +0,0 @@
---
allow_duplicates: true
dependencies: []
galaxy_info:
role_name: jellyfin
description: Ansible role to deploy jellyfin using docker
galaxy_tags:
- jellyfin
- streaming
- docker

View File

@@ -1,26 +0,0 @@
---
- name: Ensure container image '{{ jellyfin_container_image_ref }}' is {{ jellyfin_state }}
community.docker.docker_image:
name: "{{ jellyfin_container_image_ref }}"
state: "{{ jellyfin_state }}"
source: "{{ jellyfin_container_image_source }}"
force_source: "{{ jellyfin_container_image_tag | default(false, true) }}"
register: jellyfin_container_image_pull_result
until: jellyfin_container_image_pull_result is succeeded
retries: 5
delay: 3
- name: Ensure container '{{ jellyfin_container_name }}' is {{ jellyfin_container_state }}
community.docker.docker_container:
name: "{{ jellyfin_container_name }}"
image: "{{ jellyfin_container_image_ref }}"
env: "{{ jellyfin_container_env | default(omit, true) }}"
user: "{{ jellyfin_uid }}:{{ jellyfin_gid }}"
labels: "{{ jellyfin_container_labels }}"
volumes: "{{ jellyfin_container_volumes }}"
ports: "{{ jellyfin_container_ports | default(omit, true) }}"
networks: "{{ jellyfin_container_networks | default(omit, true) }}"
network_mode: "{{ jellyfin_container_network_mode }}"
etc_hosts: "{{ jellyfin_container_etc_hosts | default(omit, true) }}"
restart_policy: "{{ jellyfin_container_restart_policy }}"
state: "{{ jellyfin_container_state }}"

View File

@@ -1,22 +0,0 @@
---
- name: Ensure container image '{{ jellyfin_container_image_ref }}' is {{ jellyfin_state }}
containers.podman.podman_image:
name: "{{ jellyfin_container_image_ref }}"
state: "{{ jellyfin_state }}"
pull: "{{ (jellyfin_container_image_source == 'pull') | bool }}"
force: "{{ jellyfin_container_image_tag | default(false, true) }}"
register: jellyfin_container_image_pull_result
until: jellyfin_container_image_pull_result is succeeded
retries: 5
delay: 3
- name: Ensure container '{{ jellyfin_container_name }}' is {{ jellyfin_container_state }}
containers.podman.podman_container:
name: "{{ jellyfin_container_name }}"
image: "{{ jellyfin_container_image_ref }}"
user: "{{ jellyfin_uid }}:{{ jellyfin_gid }}"
labels: "{{ jellyfin_container_labels }}"
volumes: "{{ jellyfin_container_volumes }}"
network: "{{ jellyfin_container_networks | default(omit, True) }}"
restart_policy: "{{ jellyfin_container_restart_policy }}"
state: "{{ jellyfin_container_state }}"

View File

@@ -1,35 +1,40 @@
---
- name: Check if state is valid
ansible.builtin.fail:
msg: >-2
Unsupported state '{{ jellyfin_state }}'. Supported
states are {{ jellyfin_states | join(', ') }}.
when: jellyfin_state not in jellyfin_states
- name: Check if deployment method is valid
ansible.builtin.fail:
msg: >-2
Unsupported deployment method '{{ jellyfin_deployment_method }}'. Supported
methods are {{ jellyfin_deployment_methods | join(', ') }}.
when: jellyfin_deployment_method not in jellyfin_deployment_methods
- name: Ensure jellyfin user '{{ jellyfin_user }}' is {{ jellyfin_state }}
ansible.builtin.user:
- name: Ensure user '{{ jellyfin_user }}' for jellyfin is created
user:
name: "{{ jellyfin_user }}"
state: "{{ jellyfin_state }}"
system: "{{ jellyfin_user_system | default(true, true) }}"
state: present
system: yes
register: jellyfin_user_info
- name: Ensure host directories for jellyfin are {{ jellyfin_state }}
ansible.builtin.file:
- name: Ensure host directories for jellyfin exist
file:
path: "{{ item.path }}"
state: >-2
{{ (jellyfin_state == 'present') | ternary('directory', 'absent') }}
state: directory
owner: "{{ item.owner | default(jellyfin_uid) }}"
group: "{{ item.group | default(jellyfin_gid) }}"
mode: "{{ item.mode }}"
loop: "{{ jellyfin_host_directories }}"
- name: Ensure jellyfin is deployed using {{ jellyfin_deployment_method }}
ansible.builtin.include_tasks:
file: "deploy-{{ jellyfin_deployment_method }}.yml"
- name: Ensure container image for jellyfin is available
docker_image:
name: "{{ jellyfin_container_image_ref }}"
state: present
source: pull
force_source: "{{ jellyfin_container_image_tag | default(false, true) }}"
register: jellyfin_container_image_pull_result
until: jellyfin_container_image_pull_result is succeeded
retries: 5
delay: 3
- name: Ensure container '{{ jellyfin_container_name }}' is running
docker_container:
name: "{{ jellyfin_container_name }}"
image: "{{ jellyfin_container_image_ref }}"
user: "{{ jellyfin_uid }}:{{ jellyfin_gid }}"
labels: "{{ jellyfin_container_labels }}"
volumes: "{{ jellyfin_container_volumes }}"
networks: "{{ jellyfin_container_networks | default(omit, True) }}"
network_mode: "{{ jellyfin_container_network_mode }}"
restart_policy: "{{ jellyfin_container_restart_policy }}"
state: started

View File

@@ -1,10 +1,4 @@
---
jellyfin_states:
- present
- absent
jellyfin_deployment_methods:
- docker
- podman
jellyfin_container_base_volumes:
- "{{ jellyfin_config_path }}:/config:z"

View File

@@ -1,16 +0,0 @@
# `finallycoffee.services.keycloak` ansible role
Ansible role for deploying keycloak; currently, only docker is supported.
Migrated from `entropia.sso.keycloak`.
## Required variables
- `keycloak_database_password` - password for the database user
- `keycloak_config_hostname` - public domain of the keycloak server
## Database configuration
- `keycloak_database_hostname` - hostname of the database server, defaults to `localhost`
- `keycloak_database_username` - username to use when connecting to the database server, defaults to `keycloak`
- `keycloak_database_database` - name of the database to use, defaults to `keycloak`
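Putting the required and database variables together, a minimal invocation might look like this sketch (the host group, hostnames, and vault lookup are illustrative):

```yaml
- hosts: sso_servers
  become: true
  roles:
    - role: finallycoffee.services.keycloak
      vars:
        keycloak_config_hostname: "sso.example.org"
        keycloak_database_hostname: "db.internal.example.org"
        keycloak_database_password: "{{ vault_keycloak_db_password }}"
```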

View File

@@ -1,51 +0,0 @@
---
keycloak_version: "26.4.7"
keycloak_container_name: keycloak
keycloak_container_image_upstream_registry: quay.io
keycloak_container_image_upstream_namespace: keycloak
keycloak_container_image_upstream_name: keycloak
keycloak_container_image_upstream: >-2
{{
([
keycloak_container_image_upstream_registry | default([]),
keycloak_container_image_upstream_namespace | default([]),
keycloak_container_image_upstream_name,
] | flatten | join('/'))
}}
keycloak_container_image_name: "keycloak:{{ keycloak_version }}-custom"
keycloak_container_database_vendor: postgres
keycloak_base_path: /opt/keycloak
keycloak_container_build_directory: "{{ keycloak_base_path }}/build"
keycloak_container_build_jar_directory: providers
keycloak_container_build_flags: {}
keycloak_provider_jars_directory: "{{ keycloak_base_path }}/providers"
keycloak_build_provider_jars_directory: "{{ keycloak_container_build_directory }}/{{ keycloak_container_build_jar_directory }}"
keycloak_database_hostname: localhost
keycloak_database_port: 5432
keycloak_database_username: keycloak
keycloak_database_password: ~
keycloak_database_database: keycloak
keycloak_container_env: {}
keycloak_container_labels: ~
keycloak_container_volumes: ~
keycloak_container_restart_policy: unless-stopped
keycloak_container_command: >-2
start
--db-username {{ keycloak_database_username }}
--db-password {{ keycloak_database_password }}
--db-url jdbc:postgresql://{{ keycloak_database_hostname }}{{ keycloak_database_port | ternary(':' ~ keycloak_database_port, '') }}/{{ keycloak_database_database }}
{{ keycloak_container_extra_start_flags | default([]) | join(' ') }}
--proxy-headers=xforwarded
--hostname {{ keycloak_config_hostname }}
--optimized
keycloak_config_health_enabled: true
keycloak_config_metrics_enabled: true
keycloak_config_hostname: localhost
keycloak_config_admin_username: admin
keycloak_config_admin_password: ~

View File

@@ -1,13 +0,0 @@
---
allow_duplicates: true
dependencies: []
galaxy_info:
role_name: keycloak
description: Deploy keycloak, the open-source identity and access management solution
galaxy_tags:
- keycloak
- sso
- oidc
- oauth2
- iam
- docker

View File

@@ -1,72 +0,0 @@
---
- name: Ensure build directory exists
ansible.builtin.file:
name: "{{ keycloak_container_build_directory }}"
state: directory
recurse: yes
mode: 0700
tags:
- keycloak-build-container
- name: Ensure provider jars directory exists
ansible.builtin.file:
name: "{{ keycloak_provider_jars_directory }}"
state: directory
mode: 0775
tags:
- keycloak-build-container
- name: Ensure Dockerfile is templated
ansible.builtin.template:
src: Dockerfile.j2
dest: "{{ keycloak_container_build_directory }}/Dockerfile"
mode: 0700
register: keycloak_buildfile_info
tags:
- keycloak-container
- keycloak-build-container
- name: Ensure upstream Keycloak container image '{{ keycloak_container_image_upstream }}:{{ keycloak_version }}' is present
community.docker.docker_image:
name: "{{ keycloak_container_image_upstream }}:{{ keycloak_version }}"
source: pull
state: present
register: keycloak_container_image_upstream_status
tags:
- keycloak-container
- keycloak-build-container
- name: Ensure custom keycloak container image '{{ keycloak_container_image_name }}' is built
community.docker.docker_image:
name: "{{ keycloak_container_image_name }}"
build:
args:
DB_VENDOR: "{{ keycloak_container_database_vendor }}"
KC_ADMIN_PASSWORD: "{{ keycloak_config_admin_password }}"
dockerfile: "{{ keycloak_container_build_directory }}/Dockerfile"
path: "{{ keycloak_container_build_directory }}"
source: build
state: present
force_source: "{{ keycloak_buildfile_info.changed or keycloak_container_image_upstream_status.changed or (keycloak_force_rebuild_container | default(false))}}"
register: keycloak_container_image_status
tags:
- keycloak-container
- keycloak-build-container
- name: Ensure keycloak container is running
community.docker.docker_container:
name: "{{ keycloak_container_name }}"
image: "{{ keycloak_container_image_name }}"
env: "{{ keycloak_container_env | default(omit, true) }}"
ports: "{{ keycloak_container_ports | default(omit, true) }}"
hostname: "{{ keycloak_container_hostname | default(omit) }}"
labels: "{{ keycloak_container_labels | default(omit, true) }}"
volumes: "{{ keycloak_container_volumes | default(omit, true) }}"
restart_policy: "{{ keycloak_container_restart_policy }}"
recreate: "{{ keycloak_container_force_recreate | default(false) or (keycloak_container_image_status.changed if keycloak_container_image_status is defined else false) }}"
etc_hosts: "{{ keycloak_container_etc_hosts | default(omit) }}"
state: started
command: "{{ keycloak_container_command }}"
tags:
- keycloak-container

View File

@@ -1,43 +0,0 @@
FROM {{ keycloak_container_image_upstream }}:{{ keycloak_version }} as builder
# Enable health and metrics support
ENV KC_HEALTH_ENABLED={{ keycloak_config_health_enabled | ternary('true', 'false') }}
ENV KC_METRICS_ENABLED={{ keycloak_config_metrics_enabled | ternary('true', 'false') }}
# Configure a database vendor
ARG DB_VENDOR
ENV KC_DB=$DB_VENDOR
WORKDIR {{ keycloak_container_working_directory }}
{% if keycloak_container_image_add_local_providers | default(true) %}
ADD ./providers/* providers/
{% endif %}
# Workaround to set correct mode on jar files
USER root
RUN chmod -R 0770 providers/*
USER keycloak
RUN {{ keycloak_container_working_directory }}/bin/kc.sh --verbose \
{% for argument in keycloak_container_build_flags | dict2items(key_name='flag', value_name='value') %}
--{{- argument['flag'] -}}{{- argument['value'] | default(false, true) | ternary('=' + argument['value'], '') }} \
{% endfor%}
build{% if keycloak_container_build_features | default([]) | length > 0 %} \
{% endif %}
{% if keycloak_container_build_features | default([]) | length > 0 %}
--features="{{ keycloak_container_build_features | join(',') }}"
{% endif %}
FROM {{ keycloak_container_image_upstream }}:{{ keycloak_version }}
COPY --from=builder {{ keycloak_container_working_directory }}/ {{ keycloak_container_working_directory }}/
ENV KC_HOSTNAME={{ keycloak_config_hostname }}
ENV KEYCLOAK_ADMIN={{ keycloak_config_admin_username }}
ARG KC_ADMIN_PASSWORD
{% if keycloak_version | split('.') | first | int > 21 %}
ENV KEYCLOAK_ADMIN_PASSWORD=$KC_ADMIN_PASSWORD
{% else %}
ENV KEYCLOAK_PASSWORD=$KC_ADMIN_PASSWORD
{% endif %}
ENTRYPOINT ["{{ keycloak_container_working_directory }}/bin/kc.sh"]

View File

@@ -1,3 +0,0 @@
---
keycloak_container_working_directory: /opt/keycloak

roles/minio/README.md Normal file
View File

@@ -0,0 +1,29 @@
# `finallycoffee.services.minio` ansible role
## Overview
This role deploys a [min.io](https://min.io) server (s3-compatible object storage server)
using the official docker container image.
## Configuration
The role requires the password for the `root` user (whose name can be changed via
`minio_root_username`) to be set in `minio_root_password`. That user has full control
over the minio-server instance.
### Useful config hints
Most configuration is done by setting environment variables in
`minio_container_extra_env`, for example:
```yaml
minio_container_extra_env:
# disable the "console" web browser UI
  MINIO_BROWSER: "off"
# enable public prometheus metrics on `/minio/v2/metrics/cluster`
MINIO_PROMETHEUS_AUTH_TYPE: public
```
When serving minio (or any s3-compatible server) under a path prefix ("subfolder"),
see https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTRedirect.html
and https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html
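A minimal playbook wiring the required setting and some of the hints above together might look like this sketch (the host group, port bindings, and vault lookup are illustrative):

```yaml
- hosts: storage_servers
  become: true
  roles:
    - role: finallycoffee.services.minio
      vars:
        minio_root_password: "{{ vault_minio_root_password }}"
        minio_container_ports:
          - "127.0.0.1:9000:9000"   # S3 API
          - "127.0.0.1:8900:8900"   # web console
        minio_container_extra_env:
          # enable public prometheus metrics
          MINIO_PROMETHEUS_AUTH_TYPE: public
```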

View File

@@ -0,0 +1,40 @@
---
minio_user: ~
minio_data_path: /opt/minio
minio_create_user: false
minio_manage_host_filesystem: false
minio_root_username: root
minio_root_password: ~
minio_container_name: minio
minio_container_image_name: docker.io/minio/minio
minio_container_image_tag: latest
minio_container_image: "{{ minio_container_image_name }}:{{ minio_container_image_tag }}"
minio_container_networks: []
minio_container_ports: []
minio_container_base_volumes:
- "{{ minio_data_path }}:{{ minio_container_data_path }}:z"
minio_container_extra_volumes: []
minio_container_base_env:
MINIO_ROOT_USER: "{{ minio_root_username }}"
MINIO_ROOT_PASSWORD: "{{ minio_root_password }}"
minio_container_extra_env: {}
minio_container_labels: {}
minio_container_command:
- "server"
- "{{ minio_container_data_path }}"
- "--console-address \":{{ minio_container_listen_port_console }}\""
minio_container_restart_policy: "unless-stopped"
minio_container_image_force_source: "{{ (minio_container_image_tag == 'latest')|bool }}"
minio_container_listen_port_api: 9000
minio_container_listen_port_console: 8900
minio_container_data_path: /storage

View File

@@ -0,0 +1,37 @@
---
- name: Ensure minio run user is present
user:
name: "{{ minio_user }}"
state: present
system: yes
when: minio_create_user
- name: Ensure filesystem mounts ({{ minio_data_path }}) for container volumes are present
file:
path: "{{ minio_data_path }}"
state: directory
user: "{{ minio_user|default(omit, True) }}"
group: "{{ minio_user|default(omit, True) }}"
when: minio_manage_host_filesystem
- name: Ensure container image for minio is present
community.docker.docker_image:
name: "{{ minio_container_image }}"
state: present
source: pull
force_source: "{{ minio_container_image_force_source }}"
- name: Ensure container {{ minio_container_name }} is running
docker_container:
name: "{{ minio_container_name }}"
image: "{{ minio_container_image }}"
volumes: "{{ minio_container_volumes }}"
env: "{{ minio_container_env }}"
labels: "{{ minio_container_labels }}"
networks: "{{ minio_container_networks }}"
ports: "{{ minio_container_ports }}"
user: "{{ minio_user|default(omit, True) }}"
command: "{{ minio_container_command }}"
restart_policy: "{{ minio_container_restart_policy }}"
state: started

View File

@@ -0,0 +1,5 @@
---
minio_container_volumes: "{{ minio_container_base_volumes + minio_container_extra_volumes }}"
minio_container_env: "{{ minio_container_base_env | combine(minio_container_extra_env) }}"

View File

@@ -0,0 +1,33 @@
---
nginx_version: "1.25.1"
nginx_flavour: alpine
nginx_base_path: /opt/nginx
nginx_config_file: "{{ nginx_base_path }}/nginx.conf"
nginx_container_name: nginx
nginx_container_image_reference: >-
{{
nginx_container_image_repository
+ ':' + (nginx_container_image_tag
| default(nginx_version
+ (('-' + nginx_flavour) if nginx_flavour is defined else ''), true))
}}
nginx_container_image_repository: >-
{{
(
container_registries[nginx_container_image_registry]
| default(nginx_container_image_registry)
)
+ '/'
+ nginx_container_image_namespace | default('')
+ nginx_container_image_name
}}
nginx_container_image_registry: "docker.io"
nginx_container_image_name: "nginx"
nginx_container_image_tag: ~
nginx_container_restart_policy: "unless-stopped"
nginx_container_volumes:
- "{{ nginx_config_file }}:/etc/nginx/conf.d/nginx.conf:ro"

View File

@@ -0,0 +1,8 @@
---
- name: Ensure nginx container '{{ nginx_container_name }}' is restarted
community.docker.docker_container:
name: "{{ nginx_container_name }}"
state: started
restart: true
listen: restart-nginx

View File

@@ -0,0 +1,37 @@
---
- name: Ensure base path '{{ nginx_base_path }}' exists
ansible.builtin.file:
path: "{{ nginx_base_path }}"
state: directory
mode: 0755
- name: Ensure nginx config file is templated
ansible.builtin.copy:
dest: "{{ nginx_config_file }}"
content: "{{ nginx_config }}"
mode: 0640
notify:
- restart-nginx
- name: Ensure docker container image is present
community.docker.docker_image:
name: "{{ nginx_container_image_reference }}"
state: present
source: pull
force_source: "{{ nginx_container_image_tag is defined and nginx_container_image_tag | string != '' }}"
- name: Ensure docker container '{{ nginx_container_name }}' is running
community.docker.docker_container:
name: "{{ nginx_container_name }}"
image: "{{ nginx_container_image_reference }}"
env: "{{ nginx_container_env | default(omit, true) }}"
user: "{{ nginx_container_user | default(omit, true) }}"
ports: "{{ nginx_container_ports | default(omit, true) }}"
labels: "{{ nginx_container_labels | default(omit, true) }}"
volumes: "{{ nginx_container_volumes | default(omit, true) }}"
etc_hosts: "{{ nginx_container_etc_hosts | default(omit, true) }}"
networks: "{{ nginx_container_networks | default(omit, true) }}"
purge_networks: "{{ nginx_container_purge_networks | default(omit, true) }}"
restart_policy: "{{ nginx_container_restart_policy }}"
state: started

View File

@@ -1,21 +0,0 @@
# `finallycoffee.services.openproject` ansible role
Deploys [openproject](https://www.openproject.org/) using docker-compose.
## Configuration
To set configuration variables for OpenProject, set them in `openproject_compose_overrides`:
```yaml
openproject_compose_overrides:
version: "3.7"
services:
proxy:
[...]
volumes:
pgdata:
driver: local
driver_opts:
o: bind
type: none
device: /var/lib/postgresql
```
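Entries for the `.env` file consumed by the compose project can be supplied via `openproject_compose_project_env`; for example (values illustrative):

```yaml
openproject_compose_project_env:
  OPENPROJECT_HOST__NAME: "projects.example.org"
  OPENPROJECT_HTTPS: "true"
```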

View File

@@ -1,11 +0,0 @@
---
openproject_base_path: "/opt/openproject"
openproject_upstream_git_url: "https://github.com/opf/openproject-deploy.git"
openproject_upstream_git_branch: "stable/14"
openproject_compose_project_path: "{{ openproject_base_path }}"
openproject_compose_project_name: "openproject"
openproject_compose_project_env_file: "{{ openproject_compose_project_path }}/.env"
openproject_compose_project_override_file: "{{ openproject_compose_project_path }}/docker-compose.override.yml"
openproject_compose_project_env: {}


@@ -1,38 +0,0 @@
---
- name: Ensure base directory '{{ openproject_base_path }}' is present
ansible.builtin.file:
path: "{{ openproject_base_path }}"
state: directory
- name: Ensure upstream repository is cloned
ansible.builtin.git:
dest: "{{ openproject_base_path }}"
repo: "{{ openproject_upstream_git_url }}"
version: "{{ openproject_upstream_git_branch }}"
clone: true
depth: 1
- name: Ensure environment is configured
ansible.builtin.lineinfile:
line: "{{ item.key }}={{ item.value }}"
path: "{{ openproject_compose_project_env_file }}"
state: present
create: true
loop: "{{ openproject_compose_project_env | dict2items(key_name='key', value_name='value') }}"
- name: Ensure docker compose overrides are set
ansible.builtin.copy:
dest: "{{ openproject_compose_project_override_file }}"
content: "{{ openproject_compose_overrides | default({}) | to_nice_yaml }}"
- name: Ensure containers are pulled
community.docker.docker_compose_v2:
project_src: "{{ openproject_compose_project_path }}"
project_name: "{{ openproject_compose_project_name }}"
pull: "missing"
- name: Ensure services are running
community.docker.docker_compose_v2:
project_src: "{{ openproject_compose_project_path }}"
project_name: "{{ openproject_compose_project_name }}"
state: "present"


@@ -1,3 +0,0 @@
# `finallycoffee.services.phpldapadmin`
Role to deploy and configure [phpldapadmin](https://github.com/leenooks/phpLDAPadmin).
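A minimal sketch of invoking the role from a playbook (host group and port mapping are placeholder assumptions):

```yaml
- hosts: phpldapadmin_hosts
  become: true
  roles:
    - role: finallycoffee.services.phpldapadmin
      vars:
        phpldapadmin_container_ports:
          - "127.0.0.1:8080:8080"
```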


@@ -1,39 +0,0 @@
---
phpldapadmin_container_name: phpldapadmin
phpldapadmin_container_image_registry: docker.io
phpldapadmin_container_image_namespace: phpldapadmin
phpldapadmin_container_image_name: phpldapadmin
phpldapadmin_container_image_repository: >-2
{{
[
phpldapadmin_container_image_registry | default([], true),
phpldapadmin_container_image_namespace | default([], true),
phpldapadmin_container_image_name
] | flatten | join('/')
}}
phpldapadmin_container_image: >-2
{{
[
phpldapadmin_container_image_repository,
phpldapadmin_container_image_tag | default(phpldapadmin_version, true)
] | join(':')
}}
phpldapadmin_container_image_tag: ~
phpldapadmin_container_image_source: pull
phpldapadmin_container_image_force_source: >-2
{{ phpldapadmin_container_image_tag | default(false, true) }}
phpldapadmin_container_env: ~
phpldapadmin_container_user: ~
phpldapadmin_container_ports: ~
phpldapadmin_container_labels: ~
phpldapadmin_container_volumes: ~
phpldapadmin_container_networks: ~
phpldapadmin_container_network_mode: ~
phpldapadmin_container_dns_servers: ~
phpldapadmin_container_etc_hosts: ~
phpldapadmin_container_memory: ~
phpldapadmin_container_memory_swap: ~
phpldapadmin_container_memory_reservation: ~
phpldapadmin_container_restart_policy: "on-failure"
phpldapadmin_container_state: >-2
{{ (phpldapadmin_state == 'present') | ternary('started', 'absent') }}


@@ -1,5 +0,0 @@
---
phpldapadmin_version: "2.3.7"
phpldapadmin_state: present
phpldapadmin_deployment_method: docker


@@ -1,27 +0,0 @@
---
- name: Ensure phpldapadmin container image '{{ phpldapadmin_container_image }}' is {{ phpldapadmin_state }}
community.docker.docker_image:
name: "{{ phpldapadmin_container_image }}"
state: "{{ phpldapadmin_state }}"
source: "{{ phpldapadmin_container_image_source }}"
force_source: "{{ phpldapadmin_container_image_force_source }}"
- name: Ensure phpldapadmin container '{{ phpldapadmin_container_name }}' is {{ phpldapadmin_container_state }}
community.docker.docker_container:
name: "{{ phpldapadmin_container_name }}"
image: "{{ phpldapadmin_container_image }}"
env: "{{ phpldapadmin_container_env | default(omit, true) }}"
user: "{{ phpldapadmin_container_user | default(omit, true) }}"
ports: "{{ phpldapadmin_container_ports | default(omit, true) }}"
labels: "{{ phpldapadmin_container_labels | default(omit, true) }}"
volumes: "{{ phpldapadmin_container_volumes | default(omit, true) }}"
networks: "{{ phpldapadmin_container_networks | default(omit, true) }}"
network_mode: "{{ phpldapadmin_container_network_mode | default(omit, true) }}"
dns_servers: "{{ phpldapadmin_container_dns_servers | default(omit, true) }}"
etc_hosts: "{{ phpldapadmin_container_etc_hosts | default(omit, true) }}"
memory: "{{ phpldapadmin_container_memory | default(omit, true) }}"
memory_swap: "{{ phpldapadmin_container_memory_swap | default(omit, true) }}"
memory_reservation: >-2
{{ phpldapadmin_container_memory_reservation | default(omit, true) }}
restart_policy: "{{ phpldapadmin_container_restart_policy | default(omit, true) }}"
state: "{{ phpldapadmin_container_state }}"


@@ -1,18 +0,0 @@
---
- name: Ensure 'phpldapadmin_state' is valid
ansible.builtin.fail:
msg: >-2
Unsupported state '{{ phpldapadmin_state }}'!
Supported states are {{ phpldapadmin_states | join(', ') }}
when: phpldapadmin_state not in phpldapadmin_states
- name: Ensure 'phpldapadmin_deployment_method' is valid
ansible.builtin.fail:
msg: >-2
Unsupported deployment method '{{ phpldapadmin_deployment_method }}'!
Supported deployment methods are {{ phpldapadmin_deployment_methods | join(', ') }}
when: phpldapadmin_deployment_method not in phpldapadmin_deployment_methods
- name: Deploy using {{ phpldapadmin_deployment_method }}
ansible.builtin.import_tasks:
file: "deploy-{{ phpldapadmin_deployment_method }}.yml"


@@ -1,6 +0,0 @@
---
phpldapadmin_states:
- "present"
- "absent"
phpldapadmin_deployment_methods:
- "docker"


@@ -1,54 +0,0 @@
# `finallycoffee.services.pretix` ansible role
Deploy [pretix](https://pretix.eu) using ansible. Note that this
role does not configure pretix beyond its own configuration file,
and requires changing a default admin password after a successful
installation.
## Configuration
For all available configuration options, see [`defaults/main/config.yml`](defaults/main/config.yml)
and other supporting files in the [`defaults/main/`](defaults/main/) folder.
To add custom configuration to pretix, populate them in `pretix_config`,
where they will be (recursively) merged into the default configuration.
### Required
- `pretix_domain`: domain of the pretix instance
- `pretix_postgresql_password`: password for the (default: postgresql) database
- `pretix_config_redis_location`: connection string for the main pretix redis database
- `pretix_config_celery_backend`: connection string for the celery backend, can be a (different!) redis database
- `pretix_config_celery_broker`: connection string for the celery broker, can be a (yet another different) redis database
For an example of how a redis-compatible server (like valkey) can be configured
for pretix, see [`playbooks/pretix.yml`](../../playbooks/pretix.yml).
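Taken together, a minimal host configuration might look like the following sketch (all hostnames, database numbers and vault references are placeholders):

```yaml
pretix_domain: "tickets.example.org"
pretix_postgresql_password: "{{ vault_pretix_postgresql_password }}"
pretix_config_redis_location: "redis://127.0.0.1:6379/0"
pretix_config_celery_backend: "redis://127.0.0.1:6379/1"
pretix_config_celery_broker: "redis://127.0.0.1:6379/2"
```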
### Mailing
Set up mails in pretix by populating the following variables:
- `pretix_config_mail_host`: domain/IP and optional port of the SMTP server
- `pretix_config_mail_user`: SMTP user to authenticate
- `pretix_config_mail_password`: password for the SMTP user
### Plugins
To install more plugins, list the wanted `pypi` packages as a list in
`pretix_plugins`. They will be installed in the created virtualenv, and migrations and an asset rebuild will be automatically started.
If your plugin requires custom configuration (f.ex.: `pretix-oidc`),
add the configuration into `pretix_config`.
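For example, installing the `pretix-oidc` plugin mentioned above could be sketched like this (the plugin's own configuration keys are not shown here; consult the plugin's documentation for those):

```yaml
pretix_plugins:
  - "pretix-oidc"
# plugin-specific settings go into pretix_config,
# under whichever section the plugin documents
```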
## Troubleshooting
### virtualenv
By default, the virtualenv is located in `/var/lib/pretix/virtualenv`.
This can be controlled by setting `pretix_virtualenv_dir`.
NOTE: To fix a broken virtualenv, try setting `pretix_virtualenv_state` to `forcereinstall` (see
[`ansible.builtin.pip` on docs.ansible.com](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/pip_module.html)).
NOTE: To install pip packages or execute migrations in the virtualenv, ansible
needs to become the unprivileged `pretix_user` (default: `pretix`). This might
require having the `acl` system package installed.


@@ -1,86 +0,0 @@
---
pretix_config_instance_name: "My pretix installation"
pretix_config_url: "https://pretix.example.org"
pretix_config_currency: "EUR"
pretix_config_data_dir: "{{ pretix_data_dir }}"
pretix_config_trust_x_forwarded_for: "on"
pretix_config_trust_x_forwarded_proto: "on"
pretix_config_wsgi_name: "pretix"
pretix_config_wsgi_workers: 4
pretix_config_wsgi_max_requests: 100
pretix_config_wsgi_log_level: "info"
pretix_config_wsgi_bind_addr: "127.0.0.1:8345"
pretix_config_worker_log_level: "{{ pretix_config_wsgi_log_level }}"
pretix_config_database_backend: postgresql
pretix_config_database_name: pretix
pretix_config_database_user: pretix
pretix_config_database_password: ~
pretix_config_database_host: ""
pretix_config_mail_host: ~
pretix_config_mail_from: "tickets@example.org"
pretix_config_mail_user: ~
pretix_config_mail_password: ~
pretix_config_mail_tls: true
pretix_config_mail_ssl: false
pretix_config_redis_location: ~
pretix_config_redis_sessions: true
pretix_config_celery_backend: ~
pretix_config_celery_broker: ~
pretix_app_config:
url: "{{ pretix_config_url }}"
instance_name: "{{ pretix_config_instance_name }}"
datadir: "{{ pretix_config_data_dir }}"
trust_x_forwarded_for: "{{ pretix_config_trust_x_forwarded_for }}"
trust_x_forwarded_proto: "{{ pretix_config_trust_x_forwarded_proto }}"
currency: "{{ pretix_config_currency }}"
pretix_database_config:
backend: "{{ pretix_config_database_backend }}"
name: "{{ pretix_config_database_name }}"
user: "{{ pretix_config_database_user }}"
password: "{{ pretix_config_database_password }}"
host: "{{ pretix_config_database_host }}"
pretix_mail_minimal_config:
host: "{{ pretix_config_mail_host }}"
from: "{{ pretix_config_mail_from }}"
pretix_mail_config: >-2
{{ pretix_mail_minimal_config
| combine({'user': pretix_config_mail_user} if pretix_config_mail_user else {})
| combine({'password': pretix_config_mail_password} if pretix_config_mail_password else {})
| combine({'ssl': pretix_config_mail_ssl | bool | ternary('on', 'off')} if pretix_config_mail_ssl else {})
| combine({'tls': pretix_config_mail_tls | bool | ternary('on', 'off')} if pretix_config_mail_tls else {})
}}
pretix_redis_config:
location: "{{ pretix_config_redis_location }}"
sessions: "{{ pretix_config_redis_sessions | bool | ternary('true', 'false') }}"
pretix_celery_config:
backend: "{{ pretix_config_celery_backend }}"
broker: "{{ pretix_config_celery_broker }}"
pretix_config: {}
pretix_default_config:
pretix: "{{ pretix_app_config }}"
database: "{{ pretix_database_config }}"
mail: "{{ pretix_mail_config }}"
redis: "{{ pretix_redis_config }}"
celery: "{{ pretix_celery_config }}"
pretix_config_merged: >-2
{{ pretix_default_config | combine(pretix_config | default({}), recursive=True) }}
pretix_config_file_content: |+2
{% for kv in (pretix_config_merged | dict2items) %}
[{{ kv.key }}]
{% for entry in ((kv.value | default({}, true)) | dict2items) %}
{{ entry.key }}={{ entry.value }}
{% endfor %}
{% endfor %}


@@ -1,16 +0,0 @@
---
pretix_version: "2025.10.1"
pretix_state: "present"
pretix_deployment_method: "systemd"
pretix_config_file: "/etc/pretix/pretix.cfg"
pretix_config_file_owner: "{{ pretix_user_id }}"
pretix_config_file_group: "{{ pretix_group_id }}"
pretix_config_file_mode: "0640"
pretix_config_dir: "{{ pretix_config_file | dirname }}"
pretix_install_dir: "/var/lib/pretix"
pretix_virtualenv_dir: "{{ pretix_install_dir }}/virtualenv"
pretix_data_dir: "{{ pretix_install_dir }}/data"
pretix_media_dir: "{{ pretix_data_dir }}/media"
pretix_plugins: []


@@ -1,22 +0,0 @@
---
pretix_debian_packages:
- "git"
- "build-essential"
- "python3-dev"
- "python3-venv"
- "python3"
- "python3-pip"
- "libxml2-dev"
- "libxslt1-dev"
- "libffi-dev"
- "zlib1g-dev"
- "libssl-dev"
- "gettext"
- "libpq-dev"
- "libjpeg-dev"
- "libopenjp2-7-dev"
- "nodejs"
pretix_packages:
"debian":
"12": "{{ pretix_debian_packages }}"


@@ -1,50 +0,0 @@
---
pretix_systemd_unit_description: "pretix web service"
pretix_systemd_unit_after: "network.target"
pretix_systemd_unit_file_path: >-2
/etc/systemd/system/{{ pretix_systemd_service_name }}
pretix_systemd_service_name: "pretix.service"
pretix_systemd_service_user: "{{ pretix_user }}"
pretix_systemd_service_group: "{{ pretix_user }}"
pretix_systemd_service_environment:
VIRTUAL_ENV: "{{ pretix_virtualenv_dir }}"
PATH: "{{ pretix_virtualenv_dir }}/bin:/usr/local/bin:/usr/bin:/bin"
pretix_systemd_service_working_directory: "{{ pretix_install_dir }}"
pretix_systemd_service_exec_start: >-2
{{ pretix_virtualenv_dir }}/bin/gunicorn pretix.wsgi
--name {{ pretix_config_wsgi_name }}
--workers {{ pretix_config_wsgi_workers }}
--max-requests {{ pretix_config_wsgi_max_requests }}
--log-level={{ pretix_config_wsgi_log_level }}
--bind={{ pretix_config_wsgi_bind_addr }}
pretix_systemd_service_restart: "on-failure"
pretix_systemd_install_wanted_by: "multi-user.target"
# pretix worker
pretix_worker_systemd_service_name: "pretix-worker.service"
pretix_worker_systemd_service_description: "pretix worker service"
pretix_worker_systemd_unit_file_path: >-2
/etc/systemd/system/{{ pretix_worker_systemd_service_name }}
pretix_worker_systemd_service_exec_start: >-2
{{ pretix_virtualenv_dir }}/bin/celery
-A pretix.celery_app worker
-l {{ pretix_config_worker_log_level }}
# pretix cron
pretix_cron_systemd_service_name: "pretix-cron.service"
pretix_cron_systemd_service_description: "pretix cron service"
pretix_cron_systemd_unit_file_path: >-2
/etc/systemd/system/{{ pretix_cron_systemd_service_name }}
pretix_cron_systemd_service_exec_start: >-2
python3 -m pretix runperiodic
pretix_cron_systemd_timer_name: "pretix-cron.timer"
pretix_cron_systemd_timer_description: "pretix cron timer"
pretix_cron_systemd_timer_file_path: >-2
/etc/systemd/system/{{ pretix_cron_systemd_timer_name }}
pretix_cron_systemd_timer_on_active_sec: 1800
pretix_cron_systemd_timer_on_startup_sec: >-2
{{ pretix_cron_systemd_timer_on_active_sec }}
pretix_cron_systemd_timer_accuracy_sec: 60


@@ -1,7 +0,0 @@
---
pretix_user: "pretix"
pretix_user_system: true
pretix_user_create_home: false
pretix_user_id: "{{ pretix_user_info.uid | default(pretix_user) }}"
pretix_group_id: "{{ pretix_user_info.group | default(pretix_user) }}"


@@ -1,11 +0,0 @@
---
pretix_virtualenv_state: "{{ pretix_state }}"
pretix_virtualenv_packages:
- "pip"
- "setuptools"
- "wheel"
- "gunicorn"
- "pretix=={{ pretix_version }}"
pretix_virtualenv_site_packages: false
pretix_virtualenv_command: "python3 -m venv"


@@ -1,6 +0,0 @@
---
- name: Ensure pretix systemd service is restarted
listen: pretix_restart
ansible.builtin.systemd_service:
name: "{{ pretix_systemd_service_name }}"
state: "restarted"


@@ -1,9 +0,0 @@
---
allow_duplicates: true
dependencies: []
galaxy_info:
role_name: pretix
description: Ansible role to deploy pretix (https://pretix.eu)
galaxy_tags:
- pretix
- ticketing


@@ -1,14 +0,0 @@
---
- name: Ensure 'pretix_state' is valid
ansible.builtin.fail:
msg: >-2
Unsupported pretix_state '{{ pretix_state }}'.
Supported states are {{ pretix_states | join(', ') }}
when: pretix_state not in pretix_states
- name: Ensure 'pretix_deployment_method' is valid
ansible.builtin.fail:
msg: >-2
Unsupported pretix_deployment_method '{{ pretix_deployment_method }}'.
Supported deployment methods are {{ pretix_deployment_methods | join(', ') }}
when: pretix_deployment_method not in pretix_deployment_methods


@@ -1,10 +0,0 @@
---
- name: Ensure configuration file is written
ansible.builtin.copy:
dest: "{{ pretix_config_file }}"
content: "{{ pretix_config_file_content }}"
owner: "{{ pretix_config_file_owner }}"
group: "{{ pretix_config_file_group }}"
mode: "{{ pretix_config_file_mode }}"
when: pretix_state == 'present'
register: pretix_config_file_info


@@ -1,64 +0,0 @@
---
- name: Ensure virtualenv in {{ pretix_virtualenv_dir }} is present
ansible.builtin.pip:
name: "{{ pretix_virtualenv_packages + pretix_plugins }}"
state: "{{ pretix_virtualenv_state }}"
chdir: "{{ pretix_install_dir }}"
virtualenv: "{{ pretix_virtualenv_dir }}"
virtualenv_command: "{{ pretix_virtualenv_command | default(omit, true) }}"
virtualenv_site_packages: "{{ pretix_virtualenv_site_packages }}"
become: true
become_user: "{{ pretix_user }}"
register: pretix_virtualenv_info
# TODO: determine how to only run this on a) upgrades or b) initial deploys
- name: Ensure pretix database migrations are run
ansible.builtin.command:
cmd: "{{ pretix_virtualenv_dir }}/bin/python -m pretix migrate"
chdir: "{{ pretix_install_dir }}"
environment:
VIRTUAL_ENV: "{{ pretix_virtualenv_dir }}"
become: true
become_user: "{{ pretix_user }}"
notify: pretix_restart
when:
- pretix_state == 'present'
- pretix_virtualenv_info.changed or pretix_config_file_info.changed
# TODO: determine how to only run this on a) upgrades or b) initial deploys
- name: Ensure pretix static assets are built
ansible.builtin.command:
cmd: "{{ pretix_virtualenv_dir }}/bin/python -m pretix rebuild"
chdir: "{{ pretix_install_dir }}"
environment:
VIRTUAL_ENV: "{{ pretix_virtualenv_dir }}"
become: true
become_user: "{{ pretix_user }}"
notify: pretix_restart
when:
- pretix_state == 'present'
- pretix_virtualenv_info.changed or pretix_config_file_info.changed
- name: Ensure pretix systemd service is enabled
ansible.builtin.systemd_service:
name: "{{ _service }}"
enabled: true
when: pretix_state == 'present'
loop:
- "{{ pretix_systemd_service_name }}"
- "{{ pretix_worker_systemd_service_name }}"
- "{{ pretix_cron_systemd_service_name }}"
- "{{ pretix_cron_systemd_timer_name }}"
loop_control:
loop_var: _service
- name: Ensure pretix systemd service is {{ pretix_state }}
ansible.builtin.systemd_service:
name: "{{ _service }}"
state: "{{ (pretix_state == 'present') | ternary('started', 'stopped') }}"
loop:
- "{{ pretix_systemd_service_name }}"
- "{{ pretix_worker_systemd_service_name }}"
- "{{ pretix_cron_systemd_timer_name }}"
loop_control:
loop_var: _service


@@ -1,5 +0,0 @@
---
- name: Ensure pretix is deployed using {{ pretix_deployment_method }}
ansible.builtin.include_tasks:
file: "deploy-{{ pretix_deployment_method }}.yml"
when: pretix_state == 'present'


@@ -1,16 +0,0 @@
---
- name: Ensure preconditions are met
ansible.builtin.include_tasks:
file: "check.yml"
- name: Ensure deployment preparations are done
ansible.builtin.include_tasks:
file: "prepare.yml"
- name: Ensure pretix is configured
ansible.builtin.include_tasks:
file: "configure.yml"
- name: Ensure pretix is deployed
ansible.builtin.include_tasks:
file: "deploy.yml"


@@ -1,61 +0,0 @@
---
- name: Ensure ansible facts are collected
ansible.builtin.setup:
gather_subset:
- "!all"
- "pkg_mgr"
- "distribution"
- "distribution_release"
- "distribution_version"
- "distribution_major_version"
- name: Ensure system packages are present (apt)
ansible.builtin.apt:
name: "{{ package }}"
state: "{{ pretix_state }}"
loop: "{{ pretix_packages[ansible_distribution | lower][ansible_distribution_major_version] }}"
loop_control:
loop_var: "package"
when: ansible_facts['pkg_mgr'] == 'apt'
# TODO: add pretix worker and cron
- name: Ensure systemd unit {{ pretix_systemd_service_name }} is {{ pretix_state }}
ansible.builtin.template:
src: "pretix.service.j2"
dest: "{{ pretix_systemd_unit_file_path }}"
register: pretix_systemd_unit_info
notify:
- pretix_restart
- name: Ensure systemd unit {{ pretix_worker_systemd_service_name }} is {{ pretix_state }}
ansible.builtin.template:
src: "pretix.service.j2"
dest: "{{ pretix_worker_systemd_unit_file_path }}"
register: pretix_worker_systemd_unit_info
vars:
pretix_systemd_service_exec_start: "{{ pretix_worker_systemd_service_exec_start }}"
pretix_systemd_service_description: "{{ pretix_worker_systemd_service_description }}"
- name: Ensure systemd unit {{ pretix_cron_systemd_service_name }} is {{ pretix_state }}
ansible.builtin.template:
src: "pretix.service.j2"
dest: "{{ pretix_cron_systemd_unit_file_path }}"
register: pretix_cron_systemd_unit_info
vars:
pretix_systemd_service_exec_start: "{{ pretix_cron_systemd_service_exec_start }}"
pretix_systemd_service_description: "{{ pretix_cron_systemd_service_description }}"
- name: Ensure systemd timer unit {{ pretix_cron_systemd_timer_name }} is {{ pretix_state }}
ansible.builtin.template:
src: "pretix-cron.timer.j2"
dest: "{{ pretix_cron_systemd_timer_file_path }}"
register: pretix_cron_systemd_timer_info
- name: Ensure systemd is reloaded
ansible.builtin.systemd_service:
daemon_reload: true
when: >-2
pretix_systemd_unit_info.changed
or pretix_worker_systemd_unit_info.changed
or pretix_cron_systemd_unit_info.changed
or pretix_cron_systemd_timer_info.changed


@@ -1,29 +0,0 @@
---
- name: Ensure pretix user '{{ pretix_user }}' is {{ pretix_state }}
ansible.builtin.user:
name: "{{ pretix_user }}"
state: "{{ pretix_state }}"
system: "{{ pretix_user_system }}"
create_home: "{{ pretix_user_create_home }}"
register: pretix_user_info
- name: Ensure host directories are {{ pretix_state }}
ansible.builtin.file:
path: "{{ item.path }}"
owner: "{{ item.owner | default(pretix_user_id) }}"
group: "{{ item.group | default(pretix_group_id) }}"
mode: "{{ item.mode | default('0750') }}"
state: "directory"
loop:
- path: "{{ pretix_config_dir }}"
- path: "{{ pretix_virtualenv_dir }}"
- path: "{{ pretix_data_dir }}"
- path: "{{ pretix_media_dir }}"
when: pretix_state == 'present'
- name: Ensure deployment-type specific preparations for '{{ pretix_deployment_method }}' are run
ansible.builtin.include_tasks:
file: "prepare-{{ pretix_deployment_method }}.yml"
when:
- pretix_state == 'present'
- pretix_deployment_method in ['systemd']


@@ -1,10 +0,0 @@
[Unit]
Description={{ pretix_cron_systemd_timer_description }}
[Timer]
OnActiveSec={{ pretix_cron_systemd_timer_on_active_sec }}
OnStartupSec={{ pretix_cron_systemd_timer_on_startup_sec }}
AccuracySec={{ pretix_cron_systemd_timer_accuracy_sec }}
[Install]
WantedBy=timers.target


@@ -1,16 +0,0 @@
[Unit]
Description={{ pretix_systemd_unit_description }}
After={{ pretix_systemd_unit_after }}
[Service]
User={{ pretix_systemd_service_user }}
Group={{ pretix_systemd_service_group }}
{% for kv in pretix_systemd_service_environment | dict2items %}
Environment="{{ kv.key }}={{ kv.value }}"
{% endfor %}
WorkingDirectory={{ pretix_systemd_service_working_directory }}
ExecStart={{ pretix_systemd_service_exec_start }}
Restart={{ pretix_systemd_service_restart }}
[Install]
WantedBy={{ pretix_systemd_install_wanted_by }}


@@ -1,7 +0,0 @@
---
pretix_states:
- "present"
- "absent"
pretix_deployment_methods:
- "systemd"

roles/restic/README.md

@@ -0,0 +1,77 @@
# `finallycoffee.services.restic`
Ansible role for backing up data using `restic`, utilizing `systemd` timers for scheduling.
## Overview
As restic encrypts the data before storing it, `restic_repo_password` needs
to be populated with a strong key and stored safely, as only this key can
be used to decrypt the data for a restore!
### Backends
#### S3 Backend
To use a `s3`-compatible backend like AWS buckets or minio, both `restic_s3_key_id`
and `restic_s3_access_key` need to be populated, and the `restic_repo_url` has the
format `s3:https://my.s3.endpoint:port/bucket-name`.
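A sketch for an s3-compatible backend (endpoint, bucket name and vault references are placeholders):

```yaml
restic_repo_url: "s3:https://s3.example.org:9000/my-backups"
restic_repo_password: "{{ vault_restic_repo_password }}"
restic_s3_key_id: "{{ vault_restic_s3_key_id }}"
restic_s3_access_key: "{{ vault_restic_s3_access_key }}"
```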
#### SFTP Backend
Using the `sftp` backend requires the configured `restic_user` to be able to
authenticate to the configured SFTP-Server using password-less methods like
publickey-authentication. The `restic_repo_url` then follows the format
`sftp:{user}@{server}:/my-restic-repository` (or without leading `/` for relative
paths to the `{user}`s home directory.
### Backing up data
A job name like `$service-postgres` or similar needs to be set in `restic_job_name`,
which is used for naming the `systemd` units, their syslog identifiers etc.
If backing up filesystem locations, the paths need to be specified in
`restic_backup_paths` as lists of strings representing absolute filesystem
locations.
If backing up data which is generated by a command like `pg_dump` (for example,
database dumps), use `restic_backup_stdin_command` (which needs to write its
output to `stdout`) together with `restic_backup_stdin_command_filename`
(required) to name the resulting output file.
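For example, a database dump job could be sketched like this (service and database names are placeholders):

```yaml
restic_job_name: "myservice-postgres"
restic_backup_stdin_command: "pg_dump --dbname=myservice"
restic_backup_stdin_command_filename: "myservice.sql"
```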
### Policy
The backup policy can be adjusted by overriding the `restic_policy_keep_*`
variables, with the defaults being:
```yaml
restic_policy_keep_all_within: 1d
restic_policy_keep_hourly: 6
restic_policy_keep_daily: 2
restic_policy_keep_weekly: 7
restic_policy_keep_monthly: 4
restic_policy_backup_frequency: hourly
```
**Note:** `restic_policy_backup_frequency` must conform to `systemd`'s
`OnCalendar` syntax, which can be checked using `systemd-analyze calendar <expression>`.
## Role behaviour
Per default, when the systemd unit for a job changes, the job is not immediately
started. This can be overridden using `restic_start_job_on_unit_change: true`,
which will immediately start the backup job if its configuration changed.
The systemd unit runs with `restic_user`, which is root by default, guaranteeing
that filesystem paths are always readable. The `restic_user` can be overridden,
but care needs to be taken to ensure the user has permission to read all
provided filesystem paths and to execute the configured backup command.
If ansible should create the user, set `restic_create_user` to `true`, which
will attempt to create the `restic_user` as a system user.
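A sketch of overriding these defaults (the user name is a placeholder):

```yaml
restic_user: "restic-backup"
restic_create_user: true
restic_start_job_on_unit_change: true
```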
### Installing
For Debian and RedHat, the role attempts to install restic using the default
package manager's ansible module (apt/dnf). For other distributions, the generic
`package` module tries to install `restic_package_name` (default: `restic`),
which can be overridden if needed.


@@ -0,0 +1,37 @@
---
restic_repo_url: ~
restic_repo_password: ~
restic_s3_key_id: ~
restic_s3_access_key: ~
restic_backup_paths: []
restic_backup_stdin_command: ~
restic_backup_stdin_command_filename: ~
restic_policy_keep_all_within: 1d
restic_policy_keep_hourly: 6
restic_policy_keep_daily: 2
restic_policy_keep_weekly: 7
restic_policy_keep_monthly: 4
restic_policy_backup_frequency: hourly
restic_policy:
keep_within: "{{ restic_policy_keep_all_within }}"
hourly: "{{ restic_policy_keep_hourly }}"
daily: "{{ restic_policy_keep_daily }}"
weekly: "{{ restic_policy_keep_weekly }}"
monthly: "{{ restic_policy_keep_monthly }}"
frequency: "{{ restic_policy_backup_frequency }}"
restic_user: root
restic_create_user: false
restic_start_job_on_unit_change: false
restic_job_name: ~
restic_job_description: "Restic backup job for {{ restic_job_name }}"
restic_systemd_unit_naming_scheme: "restic.{{ restic_job_name }}"
restic_systemd_working_directory: /tmp
restic_systemd_syslog_identifier: "restic-{{ restic_job_name }}"
restic_package_name: restic

Some files were not shown because too many files have changed in this diff.