Compare commits: 0.1.1...bce10967a6 (1 commit)

Commit: bce10967a6
README.md (20 lines changed):

```diff
@ -8,23 +8,11 @@ concise area of concern.
 
 ## Roles
 
-- [`authelia`](roles/authelia/README.md): Deploys an [authelia.com](https://www.authelia.com)
-  instance, an authentication provider with beta OIDC provider support.
-
-- [`ghost`](roles/ghost/README.md): Deploys [ghost.org](https://ghost.org/), a simple to use
-  blogging and publishing platform.
-
-- [`gitea`](roles/gitea/README.md): Deploy [gitea.io](https://gitea.io), a
-  lightweight, self-hosted git service.
-
-- [`jellyfin`](roles/jellyfin/README.md): Deploy [jellyfin.org](https://jellyfin.org),
-  the free software media system for streaming stored media to any device.
-
-- [`openproject`](roles/openproject/README.md): Deploys an [openproject.org](https://www.openproject.org)
-  installation using the upstream provided docker-compose setup.
-
-- [`vouch_proxy`](roles/vouch_proxy/README.md): Deploys [vouch-proxy](https://github.com/vouch/vouch-proxy),
-  an authorization proxy for arbitrary webapps working with `nginx`s' `auth_request` module.
+- [`roles/restic-s3`](roles/restic-s3/README.md): Manage backups using restic
+  and persist them to an s3-compatible backend.
+
+- [`roles/minio`](roles/minio/README.md): Deploy [min.io](https://min.io), an
+  s3-compatible object storage server, using docker containers.
 
 ## License
```
galaxy.yml (11 lines changed):

```diff
@ -1,14 +1,15 @@
 namespace: finallycoffee
 name: services
-version: 0.1.1
+version: 0.0.1
 readme: README.md
 authors:
-  - transcaffeine <transcaffeine@finally.coffee>
+  - Johanna Dorothea Reichmann <transcaffeine@finallycoffee.eu>
 description: Various ansible roles useful for automating infrastructure
 dependencies:
   "community.docker": "^1.10.0"
-license_file: LICENSE.md
+license:
+  - CNPLv7+
 build_ignore:
   - '*.tar.gz'
-repository: https://git.finally.coffee/finallycoffee/services
-issues: https://git.finally.coffee/finallycoffee/services/issues
+repository: https://git.finallycoffee.eu/finallycoffee.eu/services
+issues: https://git.finallycoffee.eu/finallycoffee.eu/services/issues
```
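The repository URL in the metadata above is what a consumer would point `ansible-galaxy` at; a minimal `requirements.yml` sketch (the `type: git` source and the version tag are assumptions, not part of this diff):

```yaml
# requirements.yml — sketch only; install the collection straight from its git repository.
collections:
  - name: https://git.finallycoffee.eu/finallycoffee.eu/services
    type: git
    version: 0.1.1
```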
@ -1,3 +0,0 @@

```yaml
---

requires_ansible: ">=2.15"
```
@ -1,6 +0,0 @@

```yaml
---
- name: Install openproject
  hosts: "{{ openproject_hosts | default('openproject') }}"
  become: "{{ openproject_become | default(true, false) }}"
  roles:
    - role: finallycoffee.services.openproject
```
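The deleted playbook above targets a host pattern resolved from `openproject_hosts`, defaulting to the group `openproject`; a hypothetical inventory that would satisfy it (the hostname is illustrative):

```yaml
# inventory.yml — sketch; any hosts placed in the "openproject" group are targeted.
all:
  children:
    openproject:
      hosts:
        openproject.example.org:
```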
@ -1,74 +0,0 @@

# `finallycoffee.services.authelia` ansible role

Deploys [authelia](https://www.authelia.com), an open-source, full-featured
authentication server with OIDC beta support.

## Configuration

Most configuration options are exposed and can be overridden by setting
`authelia_config_{flat_config_key}`, which means `totp.digits: 8`
becomes `authelia_config_totp_digits: 8`.
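As a further sketch of the flattening, a nested fragment of Authelia's own `configuration.yml` and its role-variable equivalent (values illustrative, variable names taken from the role defaults):

```yaml
# Authelia configuration.yml fragment:
#   log:
#     level: debug
#     format: text
# Flattened role-variable form:
authelia_config_log_level: debug
authelia_config_log_format: text
```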

If a configuration option is not exposed in [`defaults/main.yml`](defaults/main.yml),
it can still be overridden via `authelia_extra_config`, which is merged recursively
into the default config. Entire blocks currently cannot easily be overridden
wholesale; it is best to rely on `authelia_extra_config` for those as well.
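As a sketch of that merge, exposing TOTP's `secret_size` (which the role's generated config only carries commented out; the value is illustrative):

```yaml
authelia_extra_config:
  totp:
    secret_size: 64
```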

Below are some configuration hints for enabling second-factor
providers such as TOTP and WebAuthN.

### TOTP

See [the authelia docs on TOTP](https://www.authelia.com/docs/configuration/one-time-password.html#algorithm)
before adjusting the fine-grained configuration, as many
TOTP clients do not properly support all values the spec allows.

```yaml
authelia_config_totp_disable: false
authelia_config_totp_issuer: "your.authelia.domain"
# Best to stick to Authelia's guide here
authelia_config_totp_algorithm: [...]
authelia_config_totp_digits: [...]
authelia_config_totp_period: [...]
```
### WebAuthN

```yaml
authelia_config_webauthn_disable: false
authelia_config_webauthn_timeout: 30s
# Force the user to touch the security key's confirmation button
authelia_config_webauthn_user_verification: required
```

For more information about possible WebAuthN configuration, see
[the authelia docs on WebAuthN](https://www.authelia.com/docs/configuration/webauthn.html).
### Database & Redis

While Authelia can use an SQLite DB with an in-memory session store by
setting `authelia_sqlite_storage_file`, using a proper database and a
redis instance is recommended:

```yaml
authelia_database_type: postgres
authelia_database_host: /var/run/postgres/
authelia_database_user: authelia
authelia_database_pass: authelia

# Redis
authelia_redis_host: /var/run/redis/
authelia_redis_pass: very_long_static_secret
```
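The paths above connect via unix sockets; plain hostnames work as well, in which case the port variables apply (a sketch, hostnames are placeholders):

```yaml
authelia_database_type: postgres
authelia_database_host: db.example.org
authelia_redis_host: redis.example.org
authelia_redis_port: 6379
```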

### Notifications

For a test setup, notifications can be written to a file; this behaviour
is enabled by setting `authelia_config_notifier_filesystem_filename`. For real-world
use, an SMTP server is strongly recommended; it is configured as follows:

```yaml
authelia_smtp_host: mail.domain.com
authelia_smtp_port: 587 # for StartTLS
authelia_smtp_user: authelia@domain.com
authelia_smtp_pass: authelia_user_pass
```
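Conversely, the file-based notifier for test setups needs only a single variable; a sketch (the path is illustrative; the role bind-mounts `authelia_notification_storage_file` to whatever in-container path is configured here):

```yaml
authelia_config_notifier_filesystem_filename: /data/notifications.txt
```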
@ -1,184 +0,0 @@

```yaml
---

authelia_version: 4.37.5
authelia_user: authelia
authelia_base_dir: /opt/authelia
authelia_domain: authelia.example.org

authelia_config_dir: "{{ authelia_base_dir }}/config"
authelia_config_file: "{{ authelia_config_dir }}/config.yaml"
authelia_data_dir: "{{ authelia_base_dir }}/data"
authelia_asset_dir: "{{ authelia_base_dir }}/assets"
authelia_sqlite_storage_file: "{{ authelia_data_dir }}/authelia.sqlite3"
authelia_notification_storage_file: "{{ authelia_data_dir }}/notifications.txt"
authelia_user_storage_file: "{{ authelia_data_dir }}/user_database.yml"

authelia_container_name: authelia
authelia_container_image_name: docker.io/authelia/authelia
authelia_container_image_tag: ~
authelia_container_image_ref: "{{ authelia_container_image_name }}:{{ authelia_container_image_tag | default(authelia_version, true) }}"
authelia_container_image_force_pull: "{{ authelia_container_image_tag | default(false, True) }}"
authelia_container_env:
  PUID: "{{ authelia_run_user }}"
  PGID: "{{ authelia_run_group }}"
authelia_container_labels: >-2
  {{ authelia_container_base_labels | combine(authelia_container_extra_labels) }}
authelia_container_extra_labels: {}
authelia_container_extra_volumes: []
authelia_container_volumes: >-2
  {{ authelia_container_base_volumes
     + authelia_container_extra_volumes }}
authelia_container_ports: ~
authelia_container_networks: ~
authelia_container_purge_networks: ~
authelia_container_restart_policy: unless-stopped
authelia_container_state: started

authelia_container_listen_port: 9091
authelia_tls_minimum_version: TLS1.2

authelia_config_theme: auto
authelia_config_jwt_secret: ~
authelia_config_default_redirection_url: ~
authelia_config_server_host: 0.0.0.0
authelia_config_server_port: "{{ authelia_container_listen_port }}"
authelia_config_server_path: ""
authelia_config_server_asset_path: "/config/assets/"
authelia_config_server_read_buffer_size: 4096
authelia_config_server_write_buffer_size: 4096
authelia_config_server_enable_pprof: true
authelia_config_server_enable_expvars: true
authelia_config_server_disable_healthcheck: false
authelia_config_server_tls_key: ~
authelia_config_server_tls_certificate: ~
authelia_config_server_tls_client_certificates: []
authelia_config_server_headers_csp_template: ~
authelia_config_log_level: info
authelia_config_log_format: json
authelia_config_log_file_path: ~
authelia_config_log_keep_stdout: false
authelia_config_telemetry_metrics_enabled: false
authelia_config_telemetry_metrics_address: '0.0.0.0:9959'
authelia_config_totp_disable: true
authelia_config_totp_issuer: "{{ authelia_domain }}"
authelia_config_totp_algorithm: sha1
authelia_config_totp_digits: 6
authelia_config_totp_period: 30
authelia_config_totp_skew: 1
authelia_config_totp_secret_size: 32
authelia_config_webauthn_disable: true
authelia_config_webauthn_timeout: 60s
authelia_config_webauthn_display_name: "Authelia ({{ authelia_domain }})"
authelia_config_webauthn_attestation_conveyance_preference: indirect
authelia_config_webauthn_user_verification: preferred
authelia_config_duo_api_hostname: ~
authelia_config_duo_api_integration_key: ~
authelia_config_duo_api_secret_key: ~
authelia_config_duo_api_enable_self_enrollment: false
authelia_config_ntp_address: "time.cloudflare.com:123"
authelia_config_ntp_version: 4
authelia_config_ntp_max_desync: 3s
authelia_config_ntp_disable_startup_check: false
authelia_config_ntp_disable_failure: false
authelia_config_authentication_backend_refresh_interval: 5m
authelia_config_authentication_backend_password_reset_disable: false
authelia_config_authentication_backend_password_reset_custom_url: ~
authelia_config_authentication_backend_ldap_implementation: custom
authelia_config_authentication_backend_ldap_url: ldap://127.0.0.1:389
authelia_config_authentication_backend_ldap_timeout: 5s
authelia_config_authentication_backend_ldap_start_tls: false
authelia_config_authentication_backend_ldap_tls_skip_verify: false
authelia_config_authentication_backend_ldap_minimum_version: "{{ authelia_tls_minimum_version }}"
authelia_config_authentication_backend_ldap_base_dn: ~
authelia_config_authentication_backend_ldap_additional_users_dn: "ou=users"
authelia_config_authentication_backend_ldap_users_filter: "(&(|({username_attribute}={input})({mail_attribute}={input}))(objectClass=inetOrgPerson))"
authelia_config_authentication_backend_ldap_additional_groups_dn: "ou=groups"
authelia_config_authentication_backend_ldap_groups_filter: "(member={dn})"
authelia_config_authentication_backend_ldap_group_name_attribute: cn
authelia_config_authentication_backend_ldap_username_attribute: uid
authelia_config_authentication_backend_ldap_mail_attribute: mail
authelia_config_authentication_backend_ldap_display_name_attribute: displayName
authelia_config_authentication_backend_ldap_user: ~
authelia_config_authentication_backend_ldap_password: ~
authelia_config_authentication_backend_file_path: ~
authelia_config_authentication_backend_file_password_algorithm: argon2id
authelia_config_authentication_backend_file_password_iterations: 5
authelia_config_authentication_backend_file_password_key_length: 32
authelia_config_authentication_backend_file_password_salt_length: 16
authelia_config_authentication_backend_file_password_memory: 1024
authelia_config_authentication_backend_file_password_parallelism: 8
authelia_config_password_policy_standard_enabled: false
authelia_config_password_policy_standard_min_length: 12
authelia_config_password_policy_standard_max_length: 0
authelia_config_password_policy_standard_require_uppercase: true
authelia_config_password_policy_standard_require_lowercase: true
authelia_config_password_policy_standard_require_number: true
authelia_config_password_policy_standard_require_special: false
authelia_config_password_policy_zxcvbn_enabled: true
authelia_config_access_control_default_policy: deny
authelia_config_access_control_networks: []
authelia_config_access_control_rules: []
authelia_config_session_name: authelia_session
authelia_config_session_domain: example.org
authelia_config_session_same_site: lax
authelia_config_session_secret: ~
authelia_config_session_expiration: 1h
authelia_config_session_inactivity: 5m
authelia_config_session_remember_me_duration: 1M
authelia_config_session_redis_host: "{{ authelia_redis_host }}"
authelia_config_session_redis_port: "{{ authelia_redis_port }}"
authelia_config_session_redis_username: "{{ authelia_redis_user }}"
authelia_config_session_redis_password: "{{ authelia_redis_pass }}"
authelia_config_session_redis_database_index: 0
authelia_config_session_redis_maximum_active_connections: 8
authelia_config_session_redis_minimum_idle_connections: 0
authelia_config_session_redis_enable_tls: false
authelia_config_session_redis_tls_server_name: ~
authelia_config_session_redis_tls_skip_verify: false
authelia_config_session_redis_tls_minimum_version: "{{ authelia_tls_minimum_version }}"
authelia_config_regulation_max_retries: 3
authelia_config_regulation_find_time: 2m
authelia_config_regulation_ban_time: 5m
authelia_config_storage_encryption_key: ~
authelia_config_storage_local_path: ~
authelia_config_storage_mysql_port: 3306
authelia_config_storage_postgres_port: 5432
authelia_config_storage_postgres_ssl_mode: disable
authelia_config_storage_postgres_ssl_root_certificate: disable
authelia_config_storage_postgres_ssl_certificate: disable
authelia_config_storage_postgres_ssl_key: disable
authelia_config_notifier_disable_startup_check: false
authelia_config_notifier_filesystem_filename: ~
authelia_config_notifier_smtp_host: "{{ authelia_smtp_host }}"
authelia_config_notifier_smtp_port: "{{ authelia_smtp_port }}"
authelia_config_notifier_smtp_username: "{{ authelia_smtp_user }}"
authelia_config_notifier_smtp_password: "{{ authelia_smtp_pass }}"
authelia_config_notifier_smtp_timeout: 5s
authelia_config_notifier_smtp_sender: "Authelia on {{ authelia_domain }} <admin@{{ authelia_domain }}>"
authelia_config_notifier_smtp_identifier: "{{ authelia_domain }}"
authelia_config_notifier_smtp_subject: "[Authelia @ {{ authelia_domain }}] {title}"
authelia_config_notifier_smtp_startup_check_address: "authelia-test@{{ authelia_domain }}"
authelia_config_notifier_smtp_disable_require_tls: false
authelia_config_notifier_smtp_disable_html_emails: false
authelia_config_notifier_smtp_tls_skip_verify: false
authelia_config_notifier_smtp_tls_minimum_version: "{{ authelia_tls_minimum_version }}"
#authelia_config_identity_provider_

authelia_database_type: ~
authelia_database_host: ~
authelia_database_user: authelia
authelia_database_pass: ~
authelia_database_name: authelia
authelia_database_timeout: 5s

authelia_smtp_host: ~
authelia_smtp_port: 465
authelia_smtp_user: authelia
authelia_smtp_pass: ~

authelia_redis_host: ~
authelia_redis_port: 6379
authelia_redis_user: ~
authelia_redis_pass: ~

authelia_extra_config: {}
```
@ -1,8 +0,0 @@

```yaml
---

- name: Restart authelia container
  docker_container:
    name: "{{ authelia_container_name }}"
    state: started
    restart: yes
  listen: restart-authelia
```
@ -1,91 +0,0 @@

```yaml
---

- name: Ensure user {{ authelia_user }} exists
  user:
    name: "{{ authelia_user }}"
    state: present
    system: true
  register: authelia_user_info

- name: Ensure host directories are created with correct permissions
  file:
    path: "{{ item.path }}"
    state: directory
    owner: "{{ item.owner | default(authelia_user) }}"
    group: "{{ item.group | default(authelia_user) }}"
    mode: "{{ item.mode | default('0750') }}"
  when: item.path | default(false, true) | bool
  loop:
    - path: "{{ authelia_base_dir }}"
      mode: "0755"
    - path: "{{ authelia_config_dir }}"
      mode: "0750"
    - path: "{{ authelia_data_dir }}"
      mode: "0750"
    - path: "{{ authelia_asset_dir }}"
      mode: "0750"

- name: Ensure config file is generated
  copy:
    content: "{{ authelia_config | to_nice_yaml(indent=2, width=10000) }}"
    dest: "{{ authelia_config_file }}"
    owner: "{{ authelia_run_user }}"
    group: "{{ authelia_run_group }}"
    mode: "0640"
  notify: restart-authelia

- name: Ensure sqlite database file exists before mounting it
  file:
    path: "{{ authelia_sqlite_storage_file }}"
    state: touch
    owner: "{{ authelia_run_user }}"
    group: "{{ authelia_run_group }}"
    mode: "0640"
    access_time: preserve
    modification_time: preserve
  when: authelia_config_storage_local_path | default(false, true)

- name: Ensure user database exists before mounting it
  file:
    path: "{{ authelia_user_storage_file }}"
    state: touch
    owner: "{{ authelia_run_user }}"
    group: "{{ authelia_run_group }}"
    mode: "0640"
    access_time: preserve
    modification_time: preserve
  when: authelia_config_authentication_backend_file_path | default(false, true)

- name: Ensure notification reports file exists before mounting it
  file:
    path: "{{ authelia_notification_storage_file }}"
    state: touch
    owner: "{{ authelia_run_user }}"
    group: "{{ authelia_run_group }}"
    mode: "0640"
    access_time: preserve
    modification_time: preserve
  when: authelia_config_notifier_filesystem_filename | default(false, true)

- name: Ensure authelia container image is present
  community.docker.docker_image:
    name: "{{ authelia_container_image_ref }}"
    state: present
    source: pull
    force_source: "{{ authelia_container_image_force_pull }}"
  register: authelia_container_image_info

- name: Ensure authelia container is running
  docker_container:
    name: "{{ authelia_container_name }}"
    image: "{{ authelia_container_image_ref }}"
    env: "{{ authelia_container_env }}"
    user: "{{ authelia_run_user }}:{{ authelia_run_group }}"
    ports: "{{ authelia_container_ports | default(omit, true) }}"
    labels: "{{ authelia_container_labels }}"
    volumes: "{{ authelia_container_volumes }}"
    networks: "{{ authelia_container_networks | default(omit, true) }}"
    purge_networks: "{{ authelia_container_purge_networks | default(omit, true) }}"
    restart_policy: "{{ authelia_container_restart_policy }}"
    state: "{{ authelia_container_state }}"
  register: authelia_container_info
```
@ -1,266 +0,0 @@

```yaml
---

authelia_run_user: "{{ (authelia_user_info.uid) if authelia_user_info is defined else authelia_user }}"
authelia_run_group: "{{ (authelia_user_info.group) if authelia_user_info is defined else authelia_user }}"

authelia_container_base_volumes: >-2
  {{ [ authelia_config_file + ":/config/configuration.yml:ro" ]
     + ([ authelia_asset_dir + '/:' + authelia_config_server_asset_path + ':ro' ] if authelia_asset_dir | default(false, true) else [])
     + ([ authelia_sqlite_storage_file + ":" + authelia_config_storage_local_path + ":z" ]
        if authelia_config_storage_local_path | default(false, true) else [])
     + ([ authelia_notification_storage_file + ":" + authelia_config_notifier_filesystem_filename + ":z" ]
        if authelia_config_notifier_filesystem_filename | default(false, true) else [])
     + ([ authelia_user_storage_file + ":" + authelia_config_authentication_backend_file_path + ":z" ]
        if authelia_config_authentication_backend_file_path | default(false, true) else [])
  }}

authelia_container_base_labels:
  version: "{{ authelia_version }}"

authelia_config: "{{ authelia_base_config | combine(authelia_extra_config, recursive=True) }}"
authelia_top_level_config:
  theme: "{{ authelia_config_theme }}"
  jwt_secret: "{{ authelia_config_jwt_secret }}"
  log: "{{ authelia_config_log }}"
  telemetry: "{{ authelia_config_telemetry }}"
  totp: "{{ authelia_config_totp }}"
  webauthn: "{{ authelia_config_webauthn }}"
  duo_api: "{{ authelia_config_duo_api }}"
  ntp: "{{ authelia_config_ntp }}"
  authentication_backend: "{{ authelia_config_authentication_backend }}"
  # password_policy: "{{ authelia_config_password_policy }}"
  access_control: "{{ authelia_config_access_control }}"
  session: "{{ authelia_config_session }}"
  regulation: "{{ authelia_config_regulation }}"
  storage: "{{ authelia_config_storage }}"
  notifier: "{{ authelia_config_notifier }}"

authelia_base_config: >-2
  {{
    authelia_top_level_config
    | combine({"default_redirection_url": authelia_config_default_redirection_url}
        if authelia_config_default_redirection_url | default(false, true) else {})
    | combine(({"server": authelia_config_server })
        | combine({"tls": authelia_config_server_tls}
            if authelia_config_server_tls_key | default(false, true) else {}))
  }}

authelia_config_server: >-2
  {{
    {
      "host": authelia_config_server_host,
      "port": authelia_config_server_port,
      "path": authelia_config_server_path,
      "asset_path": authelia_config_server_asset_path,
      "read_buffer_size": authelia_config_server_read_buffer_size,
      "write_buffer_size": authelia_config_server_write_buffer_size,
      "enable_pprof": authelia_config_server_enable_pprof,
      "enable_expvars": authelia_config_server_enable_expvars,
      "disable_healthcheck": authelia_config_server_disable_healthcheck,
    } | combine({"headers": {"csp_template": authelia_config_server_headers_csp_template}}
        if authelia_config_server_headers_csp_template | default(false, true) else {})
  }}
authelia_config_server_tls:
  key: "{{ authelia_config_server_tls_key }}"
  certificate: "{{ authelia_config_server_tls_certificate }}"
  client_certificates: "{{ authelia_config_server_tls_client_certificates }}"
authelia_config_log: >-2
  {{
    {
      "level": authelia_config_log_level,
      "format": authelia_config_log_format
    }
    | combine({"file_path": authelia_config_log_file_path}
        if authelia_config_log_file_path | default(false, true) else {})
    | combine({"keep_stdout": authelia_config_log_keep_stdout}
        if authelia_config_log_file_path | default(false, true) else {})
  }}
authelia_config_telemetry:
  metrics:
    enabled: "{{ authelia_config_telemetry_metrics_enabled }}"
    address: "{{ authelia_config_telemetry_metrics_address }}"
authelia_config_totp:
  disable: "{{ authelia_config_totp_disable }}"
  issuer: "{{ authelia_config_totp_issuer }}"
  algorithm: "{{ authelia_config_totp_algorithm }}"
  digits: "{{ authelia_config_totp_digits }}"
  period: "{{ authelia_config_totp_period }}"
  skew: "{{ authelia_config_totp_skew }}"
  # secret_size: "{{ authelia_config_totp_secret_size }}"
authelia_config_webauthn:
  disable: "{{ authelia_config_webauthn_disable }}"
  timeout: "{{ authelia_config_webauthn_timeout }}"
  display_name: "{{ authelia_config_webauthn_display_name }}"
  attestation_conveyance_preference: "{{ authelia_config_webauthn_attestation_conveyance_preference }}"
  user_verification: "{{ authelia_config_webauthn_user_verification }}"
authelia_config_duo_api:
  hostname: "{{ authelia_config_duo_api_hostname }}"
  integration_key: "{{ authelia_config_duo_api_integration_key }}"
  secret_key: "{{ authelia_config_duo_api_secret_key }}"
  enable_self_enrollment: "{{ authelia_config_duo_api_enable_self_enrollment }}"
authelia_config_ntp:
  address: "{{ authelia_config_ntp_address }}"
  version: "{{ authelia_config_ntp_version }}"
  max_desync: "{{ authelia_config_ntp_max_desync }}"
  disable_startup_check: "{{ authelia_config_ntp_disable_startup_check }}"
  disable_failure: "{{ authelia_config_ntp_disable_failure }}"

authelia_config_authentication_backend: >-2
  {{
    {
      "refresh_interval": authelia_config_authentication_backend_refresh_interval,
    }
    | combine({"password_reset": authelia_config_authentication_backend_password_reset}
        if authelia_config_authentication_backend_password_reset_custom_url | default(false, true) else {})
    | combine({"file": authelia_config_authentication_backend_file}
        if authelia_config_authentication_backend_file_path | default(false, true)
        else {"ldap": authelia_config_authentication_backend_ldap})
  }}
authelia_config_authentication_backend_password_reset:
  custom_url: "{{ authelia_config_authentication_backend_password_reset_custom_url }}"
  disable: "{{ authelia_config_authentication_backend_password_reset_disable }}"
authelia_config_authentication_backend_ldap:
  implementation: "{{ authelia_config_authentication_backend_ldap_implementation }}"
  url: "{{ authelia_config_authentication_backend_ldap_url }}"
  timeout: "{{ authelia_config_authentication_backend_ldap_timeout }}"
  start_tls: "{{ authelia_config_authentication_backend_ldap_start_tls }}"
  tls:
    skip_verify: "{{ authelia_config_authentication_backend_ldap_tls_skip_verify }}"
    minimum_version: "{{ authelia_config_authentication_backend_ldap_minimum_version }}"
  base_dn: "{{ authelia_config_authentication_backend_ldap_base_dn }}"
  additional_users_dn: "{{ authelia_config_authentication_backend_ldap_additional_users_dn }}"
  additional_groups_dn: "{{ authelia_config_authentication_backend_ldap_additional_groups_dn }}"
  users_filter: "{{ authelia_config_authentication_backend_ldap_users_filter }}"
  groups_filter: "{{ authelia_config_authentication_backend_ldap_groups_filter }}"
  group_name_attribute: "{{ authelia_config_authentication_backend_ldap_group_name_attribute }}"
  username_attribute: "{{ authelia_config_authentication_backend_ldap_username_attribute }}"
  mail_attribute: "{{ authelia_config_authentication_backend_ldap_mail_attribute }}"
  display_name_attribute: "{{ authelia_config_authentication_backend_ldap_display_name_attribute }}"
  user: "{{ authelia_config_authentication_backend_ldap_user }}"
  password: "{{ authelia_config_authentication_backend_ldap_password }}"
authelia_config_authentication_backend_file:
  path: "{{ authelia_config_authentication_backend_file_path }}"
  password:
    algorithm: "{{ authelia_config_authentication_backend_file_password_algorithm }}"
    iterations: "{{ authelia_config_authentication_backend_file_password_iterations }}"
    key_length: "{{ authelia_config_authentication_backend_file_password_key_length }}"
    salt_length: "{{ authelia_config_authentication_backend_file_password_salt_length }}"
    memory: "{{ authelia_config_authentication_backend_file_password_memory }}"
    parallelism: "{{ authelia_config_authentication_backend_file_password_parallelism }}"
authelia_config_password_policy: >-2
  {{
    {"standard": authelia_config_password_policy_standard}
    if authelia_config_password_policy_standard_enabled
    else {"zxcvbn": authelia_config_password_policy_zxcvbn}
  }}
authelia_config_password_policy_standard:
  enabled: "{{ authelia_config_password_policy_standard_enabled }}"
  min_length: "{{ authelia_config_password_policy_standard_min_length }}"
  max_length: "{{ authelia_config_password_policy_standard_max_length }}"
  require_uppercase: "{{ authelia_config_password_policy_standard_require_uppercase }}"
  require_lowercase: "{{ authelia_config_password_policy_standard_require_lowercase }}"
  require_number: "{{ authelia_config_password_policy_standard_require_number }}"
  require_special: "{{ authelia_config_password_policy_standard_require_special }}"
authelia_config_password_policy_zxcvbn:
  enabled: "{{ authelia_config_password_policy_zxcvbn_enabled }}"
authelia_config_access_control:
  default_policy: "{{ authelia_config_access_control_default_policy }}"
  networks: "{{ authelia_config_access_control_networks }}"
  rules: "{{ authelia_config_access_control_rules }}"
authelia_config_session:
  name: "{{ authelia_config_session_name }}"
  domain: "{{ authelia_config_session_domain }}"
  same_site: "{{ authelia_config_session_same_site }}"
  secret: "{{ authelia_config_session_secret }}"
  expiration: "{{ authelia_config_session_expiration }}"
  inactivity: "{{ authelia_config_session_inactivity }}"
  remember_me_duration: "{{ authelia_config_session_remember_me_duration }}"
authelia_config_session_redis: >-2
  {{
    {
      "host": authelia_config_session_redis_host,
      "database_index": authelia_config_session_redis_database_index,
      "maximum_active_connections": authelia_config_session_redis_maximum_active_connections,
      "minimum_idle_connections": authelia_config_session_redis_minimum_idle_connections
    }
    | combine({"password": authelia_config_session_redis_password}
        if authelia_config_session_redis_password | default(false, true) else {})
    | combine({"username": authelia_config_session_redis_username}
        if authelia_config_session_redis_username | default(false, true) else {})
    | combine({"port": authelia_config_session_redis_port}
        if '/' not in authelia_config_session_redis_host else {})
    | combine({"tls": authelia_config_session_redis_tls}
        if authelia_config_session_redis_enable_tls | default(false, true) else {})
  }}
authelia_config_session_redis_tls: >-2
  {{
    {
      "skip_verify": authelia_config_session_redis_tls_skip_verify,
      "minimum_version": authelia_config_session_redis_tls_minimum_version,
    }
    | combine({"server_name": authelia_config_session_redis_tls_server_name}
        if authelia_config_session_redis_tls_server_name | default(false, true) else {})
  }}
authelia_config_regulation:
  max_retries: "{{ authelia_config_regulation_max_retries }}"
```
|
|
||||||
find_time: "{{ authelia_config_regulation_find_time }}"
|
|
||||||
ban_time: "{{ authelia_config_regulation_ban_time }}"
|
|
||||||
authelia_config_storage: >-2
|
|
||||||
{{
|
|
||||||
{ "encryption_key": authelia_config_storage_encryption_key }
|
|
||||||
| combine({"local": authelia_config_storage_local}
|
|
||||||
if authelia_database_type in ['local', 'sqlite'] else {})
|
|
||||||
| combine({"mysql": authelia_config_storage_mysql}
|
|
||||||
if authelia_database_type == 'mysql' else {})
|
|
||||||
| combine({"postgres": authelia_config_storage_postgres}
|
|
||||||
if authelia_database_type in ['postgres', 'postgresql'] else {})
|
|
||||||
}}
|
|
||||||
authelia_config_storage_local:
|
|
||||||
path: "{{ authelia_config_storage_local_path }}"
|
|
||||||
authelia_config_storage_mysql:
|
|
||||||
host: "{{ authelia_database_host }}"
|
|
||||||
port: "{{ authelia_config_storage_mysql_port }}"
|
|
||||||
database: "{{ authelia_database_name }}"
|
|
||||||
username: "{{ authelia_database_user }}"
|
|
||||||
password: "{{ authelia_database_pass }}"
|
|
||||||
timeout: "{{ authelia_database_timeout }}"
|
|
||||||
authelia_config_storage_postgres:
|
|
||||||
host: "{{ authelia_database_host }}"
|
|
||||||
port: "{{ authelia_config_storage_postgres_port }}"
|
|
||||||
database: "{{ authelia_database_name }}"
|
|
||||||
schema: public
|
|
||||||
username: "{{ authelia_database_user }}"
|
|
||||||
password: "{{ authelia_database_pass }}"
|
|
||||||
timeout: "{{ authelia_database_timeout }}"
|
|
||||||
authelia_config_storage_postgres_ssl:
|
|
||||||
mode: "{{ authelia_config_storage_postgres_ssl_mode }}"
|
|
||||||
root_certificate: "{{ authelia_config_storage_postgres_ssl_root_certificate }}"
|
|
||||||
certificate: "{{ authelia_config_storage_postgres_ssl_certificate }}"
|
|
||||||
key: "{{ authelia_config_storage_postgres_ssl_key }}"
|
|
||||||
authelia_config_notifier: >-2
|
|
||||||
{{
|
|
||||||
{
|
|
||||||
"disable_startup_check": authelia_config_notifier_disable_startup_check
|
|
||||||
}
|
|
||||||
| combine({"filesystem": authelia_config_notifier_filesystem}
|
|
||||||
if authelia_config_notifier_filesystem_filename else {})
|
|
||||||
| combine({"smtp": authelia_config_notifier_smtp}
|
|
||||||
if not authelia_config_notifier_filesystem_filename else {})
|
|
||||||
}}
|
|
||||||
authelia_config_notifier_filesystem:
|
|
||||||
filename: "{{ authelia_config_notifier_filesystem_filename }}"
|
|
||||||
authelia_config_notifier_smtp:
|
|
||||||
host: "{{ authelia_config_notifier_smtp_host }}"
|
|
||||||
port: "{{ authelia_config_notifier_smtp_port }}"
|
|
||||||
timeout: "{{ authelia_config_notifier_smtp_timeout }}"
|
|
||||||
username: "{{ authelia_config_notifier_smtp_username }}"
|
|
||||||
password: "{{ authelia_config_notifier_smtp_password }}"
|
|
||||||
sender: "{{ authelia_config_notifier_smtp_sender }}"
|
|
||||||
identifier: "{{ authelia_config_notifier_smtp_identifier }}"
|
|
||||||
subject: "{{ authelia_config_notifier_smtp_subject }}"
|
|
||||||
startup_check_address: "{{ authelia_config_notifier_smtp_startup_check_address }}"
|
|
||||||
disable_require_tls: "{{ authelia_config_notifier_smtp_disable_require_tls }}"
|
|
||||||
disable_html_emails: "{{ authelia_config_notifier_smtp_disable_html_emails }}"
|
|
||||||
tls:
|
|
||||||
skip_verify: "{{ authelia_config_notifier_smtp_tls_skip_verify }}"
|
|
||||||
minimum_version: "{{ authelia_config_notifier_smtp_tls_minimum_version }}"
|
|
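The `authelia_config_storage` template above selects exactly one storage backend based on `authelia_database_type`. As an illustration (the inventory values below are hypothetical, not role defaults), a postgres setup resolves roughly to:

```yaml
# hypothetical inventory values
authelia_database_type: postgres
authelia_database_host: db.internal.example
authelia_database_name: authelia

# authelia_config_storage then renders to a dict like:
#   encryption_key: "<value of authelia_config_storage_encryption_key>"
#   postgres:
#     host: db.internal.example
#     database: authelia
#     ...
```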
@@ -1,18 +0,0 @@
# `finallycoffee.services.ghost` ansible role

[Ghost](https://ghost.org/) is a self-hosted blog with rich media capabilities,
which this role deploys in a docker container.

## Requirements

Ghost requires a MySQL database (like MariaDB) for storing its data, which
can be configured using the `ghost_database_(host|username|password|database)` variables.

Setting `ghost_domain` to the fully-qualified domain on which ghost should be reachable
is also required.

Ghost's configuration can be changed using the `ghost_config` variable.

Container arguments equivalent to those of `community.docker.docker_container` can be
provided in the `ghost_container_[...]` syntax (e.g. `ghost_container_ports` to expose
Ghost's port to the host).
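A minimal playbook invoking this role could look like the following sketch; the hosts group, domain, database host, and vault variable are placeholders, not values shipped with the role:

```yaml
- hosts: blog_servers              # placeholder inventory group
  become: true
  roles:
    - role: finallycoffee.services.ghost
      vars:
        ghost_domain: "blog.example.org"        # placeholder FQDN
        ghost_database_host: "db.example.org"   # placeholder MySQL host
        ghost_database_password: "{{ vault_ghost_db_password }}"  # hypothetical vault var
```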
@@ -1,39 +0,0 @@
---

ghost_domain: ~
ghost_version: "5.94.1"
ghost_user: ghost
ghost_user_group: ghost
ghost_base_path: /opt/ghost
ghost_data_path: "{{ ghost_base_path }}/data"
ghost_config_path: "{{ ghost_base_path }}/config"
ghost_config_file: "{{ ghost_config_path }}/ghost.env"
ghost_database_username: ghost
ghost_database_password: ~
ghost_database_database: ghost
ghost_database_host: ~
ghost_base_config:
  url: "https://{{ ghost_domain }}"
  database__client: mysql
  database__connection__host: "{{ ghost_database_host }}"
  database__connection__user: "{{ ghost_database_username }}"
  database__connection__password: "{{ ghost_database_password }}"
  database__connection__database: "{{ ghost_database_database }}"
ghost_config: {}

ghost_container_name: ghost
ghost_container_image_name: docker.io/ghost
ghost_container_image_tag: ~
ghost_container_base_volumes:
  - "{{ ghost_data_path }}:{{ ghost_container_data_directory }}:rw"
ghost_container_extra_volumes: []
ghost_container_volumes:
  "{{ ghost_container_base_volumes + ghost_container_extra_volumes }}"
ghost_container_base_labels:
  version: "{{ ghost_version }}"
ghost_container_extra_labels: {}
ghost_container_restart_policy: "unless-stopped"
ghost_container_networks: ~
ghost_container_purge_networks: ~
ghost_container_etc_hosts: ~
ghost_container_state: started
@@ -1,57 +0,0 @@
---

- name: Ensure ghost group is created
  ansible.builtin.group:
    name: "{{ ghost_user_group }}"
    state: present
    system: true

- name: Ensure ghost user is created
  ansible.builtin.user:
    name: "{{ ghost_user }}"
    groups:
      - "{{ ghost_user_group }}"
    append: true
    state: present
    system: true

- name: Ensure host paths for docker volumes exist for ghost
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: directory
    mode: "0750"
    owner: "{{ item.owner | default(ghost_user) }}"
    group: "{{ item.group | default(ghost_user_group) }}"
  loop:
    - path: "{{ ghost_base_path }}"
    - path: "{{ ghost_data_path }}"
      owner: "1000"
    - path: "{{ ghost_config_path }}"

- name: Ensure ghost configuration file is templated
  ansible.builtin.template:
    src: "ghost.env.j2"
    dest: "{{ ghost_config_file }}"
    owner: "{{ ghost_user }}"
    group: "{{ ghost_user_group }}"
    mode: "0644"

- name: Ensure ghost container image is present on host
  community.docker.docker_image:
    name: "{{ ghost_container_image }}"
    state: present
    source: pull
    force_source: "{{ ghost_container_image_tag | default(false, true) }}"

- name: Ensure ghost container '{{ ghost_container_name }}' is {{ ghost_container_state }}
  community.docker.docker_container:
    name: "{{ ghost_container_name }}"
    image: "{{ ghost_container_image }}"
    ports: "{{ ghost_container_ports | default(omit, true) }}"
    labels: "{{ ghost_container_labels }}"
    volumes: "{{ ghost_container_volumes }}"
    env_file: "{{ ghost_config_file }}"
    etc_hosts: "{{ ghost_container_etc_hosts | default(omit, true) }}"
    networks: "{{ ghost_container_networks | default(omit, true) }}"
    purge_networks: "{{ ghost_container_purge_networks | default(omit, true) }}"
    restart_policy: "{{ ghost_container_restart_policy }}"
    state: "{{ ghost_container_state }}"
@@ -1,3 +0,0 @@
{% for key, value in ghost_config_complete.items() %}
{{ key }}={{ value }}
{% endfor %}
@@ -1,10 +0,0 @@
---

ghost_container_image: "{{ ghost_container_image_name }}:{{ ghost_container_image_tag | default(ghost_version, true) }}"
ghost_container_labels: >-2
  {{ ghost_container_base_labels
    | combine(ghost_container_extra_labels) }}

ghost_container_data_directory: "/var/lib/ghost/content"
ghost_config_complete: >-2
  {{ ghost_base_config | combine(ghost_config, recursive=True) }}
@@ -7,28 +7,3 @@
using its official docker image, and is able to set up SSH forwarding
from the host to the container (enabling git-over-SSH without the need
for a non-standard SSH port while also running an SSH server on the host).

### Configuration

#### Email notifications

To enable sending emails, set the following variables, demonstrated here
with an SMTP server. A TLS connection is strongly advised, as otherwise it
can be trivial to intercept a login to the mail server and record the
authentication details, enabling anyone to send mail as if they were your
gitea instance.

```yaml
gitea_config_mailer_enabled: true
# Can be `sendmail` or `smtp`
gitea_config_mailer_type: smtp
# Including the port can be used to force secure smtp (SMTPS)
gitea_config_mailer_smtp_addr: mail.my-domain.tld:465
gitea_config_mailer_user: gitea
gitea_config_mailer_passwd: very_long_password
gitea_config_mailer_protocol: smtps
gitea_config_mailer_from_addr: "gitea@{{ gitea_domain }}"
# Set `gitea_config_mailer_sendmail_path` when using a sendmail binary
gitea_config_mailer_sendmail_path: /usr/sbin/sendmail
```

For more information, see [the gitea docs on email setup](https://docs.gitea.io/en-us/email-setup/).
@@ -1,16 +1,12 @@
---

gitea_version: "1.22.2"
gitea_user: git
gitea_run_user: "{{ gitea_user }}"
gitea_base_path: "/opt/gitea"
gitea_data_path: "{{ gitea_base_path }}/data"

# Set this to the (sub)domain gitea will run at
gitea_domain: ~

# container config
gitea_container_name: "{{ gitea_user }}"
gitea_container_image_name: "docker.io/gitea/gitea"
gitea_container_image_tag: "{{ gitea_version }}"
gitea_container_image: "{{ gitea_container_image_name }}:{{ gitea_container_image_tag }}"
@@ -18,14 +14,13 @@ gitea_container_networks: []
gitea_container_purge_networks: ~
gitea_container_restart_policy: "unless-stopped"
gitea_container_extra_env: {}
gitea_container_extra_labels: {}
gitea_container_extra_ports: []
gitea_container_extra_volumes: []
gitea_container_state: started

# container defaults
gitea_container_base_volumes:
  - "{{ gitea_data_path }}:/data:z"
  - "/home/{{ gitea_user }}/.ssh/:/data/git/.ssh:z"

gitea_container_base_ports:
@@ -38,16 +33,3 @@ gitea_container_base_env:

gitea_container_base_labels:
  version: "{{ gitea_version }}"

gitea_config_mailer_enabled: false
gitea_config_mailer_type: ~
gitea_config_mailer_from_addr: ~
gitea_config_mailer_smtp_addr: ~
gitea_config_mailer_user: ~
gitea_config_mailer_passwd: ~
gitea_config_mailer_protocol: ~
gitea_config_mailer_sendmail_path: ~
gitea_config_metrics_enabled: false

gitea_config: "{{ gitea_config_base | combine(gitea_extra_config, recursive=True, list_merge='append') }}"
gitea_extra_config: {}
@@ -1,11 +1,10 @@
---

- name: Ensure gitea user '{{ gitea_user }}' is present
  user:
    name: "{{ gitea_user }}"
    state: "present"
    system: false
    create_home: true
  register: gitea_user_res

- name: Ensure host directories exist
@@ -37,14 +36,22 @@
    mode: 0600
  register: gitea_user_ssh_key

- name: Create forwarding script
  copy:
    dest: "/usr/local/bin/gitea"
    owner: "{{ gitea_user_res.uid }}"
    group: "{{ gitea_user_res.group }}"
    mode: 0700
    content: |
      ssh -p {{ gitea_public_ssh_server_port }} -o StrictHostKeyChecking=no {{ gitea_run_user }}@127.0.0.1 -i /home/{{ gitea_user }}/.ssh/id_ssh_ed25519 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"

- name: Add host pubkey to git user's authorized_keys file
  lineinfile:
@@ -57,38 +64,20 @@
    mode: 0600

- name: Ensure gitea container image is present
  community.docker.docker_image:
    name: "{{ gitea_container_image }}"
    state: present
    source: pull
    force_source: "{{ gitea_container_image.endswith(':latest') }}"

- name: Ensure container '{{ gitea_container_name }}' with gitea is {{ gitea_container_state }}
  community.docker.docker_container:
    name: "{{ gitea_container_name }}"
    image: "{{ gitea_container_image }}"
    env: "{{ gitea_container_env }}"
    labels: "{{ gitea_container_labels }}"
    volumes: "{{ gitea_container_volumes }}"
    networks: "{{ gitea_container_networks | default(omit, True) }}"
    purge_networks: "{{ gitea_container_purge_networks | default(omit, True) }}"
    published_ports: "{{ gitea_container_ports }}"
    restart_policy: "{{ gitea_container_restart_policy }}"
    state: "{{ gitea_container_state }}"

- name: Ensure given configuration is set in the config file
  ini_file:
    path: "{{ gitea_data_path }}/gitea/conf/app.ini"
    section: "{{ section }}"
    option: "{{ option }}"
    value: "{{ entry.value }}"
    state: "{{ 'present' if (entry.value is not none) else 'absent' }}"
  loop: "{{ lookup('ansible.utils.to_paths', gitea_config) | dict2items }}"
  loop_control:
    loop_var: entry
    label: "{{ section | default('/', True) }}->{{ option }}"
  vars:
    key_split: "{{ entry.key | split('.') }}"
    # sections can be named `section_name`.`sub_section`, f.ex.: `repository.upload`
    section: "{{ '' if key_split|length == 1 else (key_split[:-1] | join('.')) }}"
    option: "{{ key_split | first if key_split|length == 1 else key_split | last }}"
@@ -6,29 +6,8 @@ gitea_container_labels: "{{ gitea_container_base_labels | combine(gitea_container_extra_labels) }}"

gitea_container_env: "{{ gitea_container_base_env | combine(gitea_container_extra_env) }}"

gitea_container_ports: "{{ gitea_container_base_ports + gitea_container_extra_ports }}"

gitea_container_port_webui: 3000
gitea_container_port_ssh: 22

gitea_config_base:
  RUN_MODE: prod
  RUN_USER: "{{ gitea_run_user }}"
  server:
    SSH_DOMAIN: "{{ gitea_domain }}"
    DOMAIN: "{{ gitea_domain }}"
    HTTP_PORT: "{{ gitea_container_port_webui }}"
    DISABLE_SSH: false
    START_SSH_SERVER: false
  mailer:
    ENABLED: "{{ gitea_config_mailer_enabled }}"
    MAILER_TYPE: "{{ gitea_config_mailer_type }}"
    SMTP_ADDR: "{{ gitea_config_mailer_smtp_addr }}"
    USER: "{{ gitea_config_mailer_user }}"
    PASSWD: "{{ gitea_config_mailer_passwd }}"
    PROTOCOL: "{{ gitea_config_mailer_protocol }}"
    FROM: "{{ gitea_config_mailer_from_addr }}"
    SENDMAIL_PATH: "{{ gitea_config_mailer_sendmail_path }}"
  metrics:
    ENABLED: "{{ gitea_config_metrics_enabled }}"
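Because `gitea_config` merges `gitea_extra_config` recursively over `gitea_config_base` and each flattened key is written into `app.ini` by the `ini_file` task, additional sections can be set from the inventory. A sketch (keys follow gitea's `app.ini` layout; values are placeholders):

```yaml
gitea_extra_config:
  server:
    LANDING_PAGE: explore
  repository:
    upload:
      # flattened to section `repository.upload`, option `FILE_MAX_SIZE`
      FILE_MAX_SIZE: 50
```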
@@ -1,15 +0,0 @@
# `finallycoffee.services.jellyfin` ansible role

This role runs [Jellyfin](https://jellyfin.org/), a free software media system,
in a docker container.

## Usage

`jellyfin_domain` contains the FQDN which jellyfin should listen on. Most configuration
is done in the software itself.

Jellyfin runs in host networking mode by default, as that is needed for some features
like network discovery with Chromecasts and similar devices.

Media can be mounted into jellyfin using `jellyfin_media_volumes`, which takes a list
of strings akin to the `volumes` key of `community.docker.docker_container`.
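A minimal invocation of the role could look like this sketch; the hosts group, domain, and media paths are placeholders:

```yaml
- hosts: media_servers             # placeholder inventory group
  become: true
  roles:
    - role: finallycoffee.services.jellyfin
      vars:
        jellyfin_domain: "media.example.org"   # placeholder FQDN
        jellyfin_media_volumes:
          # host paths are placeholders; mount media read-only
          - "/mnt/media/movies:/data/movies:ro"
          - "/mnt/media/music:/data/music:ro"
```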
@@ -1,32 +0,0 @@
---

jellyfin_user: jellyfin
jellyfin_version: 10.9.11

jellyfin_base_path: /opt/jellyfin
jellyfin_config_path: "{{ jellyfin_base_path }}/config"
jellyfin_cache_path: "{{ jellyfin_base_path }}/cache"

jellyfin_media_volumes: []

jellyfin_container_name: jellyfin
jellyfin_container_image_name: "docker.io/jellyfin/jellyfin"
jellyfin_container_image_tag: ~
jellyfin_container_image_ref: "{{ jellyfin_container_image_name }}:{{ jellyfin_container_image_tag | default(jellyfin_version, true) }}"
jellyfin_container_network_mode: host
jellyfin_container_networks: ~
jellyfin_container_volumes: "{{ jellyfin_container_base_volumes + jellyfin_media_volumes }}"
jellyfin_container_labels: "{{ jellyfin_container_base_labels | combine(jellyfin_container_extra_labels) }}"
jellyfin_container_extra_labels: {}
jellyfin_container_restart_policy: "unless-stopped"

jellyfin_host_directories:
  - path: "{{ jellyfin_base_path }}"
    mode: "0750"
  - path: "{{ jellyfin_config_path }}"
    mode: "0750"
  - path: "{{ jellyfin_cache_path }}"
    mode: "0750"

jellyfin_uid: "{{ jellyfin_user_info.uid | default(jellyfin_user) }}"
jellyfin_gid: "{{ jellyfin_user_info.group | default(jellyfin_user) }}"
@@ -1,40 +0,0 @@
---

- name: Ensure user '{{ jellyfin_user }}' for jellyfin is created
  user:
    name: "{{ jellyfin_user }}"
    state: present
    system: true
  register: jellyfin_user_info

- name: Ensure host directories for jellyfin exist
  file:
    path: "{{ item.path }}"
    state: directory
    owner: "{{ item.owner | default(jellyfin_uid) }}"
    group: "{{ item.group | default(jellyfin_gid) }}"
    mode: "{{ item.mode }}"
  loop: "{{ jellyfin_host_directories }}"

- name: Ensure container image for jellyfin is available
  community.docker.docker_image:
    name: "{{ jellyfin_container_image_ref }}"
    state: present
    source: pull
    force_source: "{{ jellyfin_container_image_tag | default(false, true) }}"
  register: jellyfin_container_image_pull_result
  until: jellyfin_container_image_pull_result is succeeded
  retries: 5
  delay: 3

- name: Ensure container '{{ jellyfin_container_name }}' is running
  community.docker.docker_container:
    name: "{{ jellyfin_container_name }}"
    image: "{{ jellyfin_container_image_ref }}"
    user: "{{ jellyfin_uid }}:{{ jellyfin_gid }}"
    labels: "{{ jellyfin_container_labels }}"
    volumes: "{{ jellyfin_container_volumes }}"
    networks: "{{ jellyfin_container_networks | default(omit, True) }}"
    network_mode: "{{ jellyfin_container_network_mode }}"
    restart_policy: "{{ jellyfin_container_restart_policy }}"
    state: started
@@ -1,8 +0,0 @@
---

jellyfin_container_base_volumes:
  - "{{ jellyfin_config_path }}:/config:z"
  - "{{ jellyfin_cache_path }}:/cache:z"

jellyfin_container_base_labels:
  version: "{{ jellyfin_version }}"
roles/minio/README.md (new file)
@@ -0,0 +1,29 @@
|
# `finallycoffee.services.minio` ansible role
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
This role deploys a [min.io](https://min.io) server (s3-compatible object storage server)
|
||||||
|
using the official docker container image.
|
||||||
|
|
||||||
|
## Configuration
|
||||||
|
|
||||||
|
The role requires setting the password for the `root` user (name can be changed by
|
||||||
|
setting `minio_root_username`) in `minio_root_password`. That user has full control
|
||||||
|
over the minio-server instance.
|
||||||
|
|
||||||
|
### Useful config hints
|
||||||
|
|
||||||
|
Most configuration is done by setting environment variables in
|
||||||
|
`minio_container_extra_env`, for example:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
minio_container_extra_env:
|
||||||
|
# disable the "console" web browser UI
|
||||||
|
MINIO_BROWSER: off
|
||||||
|
# enable public prometheus metrics on `/minio/v2/metrics/cluster`
|
||||||
|
MINIO_PROMETHEUS_AUTH_TYPE: public
|
||||||
|
```
|
||||||
|
|
||||||
|
When serving minio (or any s3-compatible server) on a "subfolder",
|
||||||
|
see https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTRedirect.html
|
||||||
|
and https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html
|
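When minio is not reachable through a shared docker network, its API and console ports can be published to the host via `minio_container_ports`; the host-side addresses below are arbitrary examples, not defaults:

```yaml
minio_container_ports:
  # bind to loopback only; put a reverse proxy in front for TLS
  - "127.0.0.1:9000:{{ minio_container_listen_port_api }}"      # s3 API
  - "127.0.0.1:8900:{{ minio_container_listen_port_console }}"  # web console
```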
roles/minio/defaults/main.yml (new file)
@@ -0,0 +1,40 @@
|
---
|
||||||
|
|
||||||
|
minio_user: ~
|
||||||
|
minio_data_path: /opt/minio
|
||||||
|
|
||||||
|
minio_create_user: false
|
||||||
|
minio_manage_host_filesystem: false
|
||||||
|
|
||||||
|
minio_root_username: root
|
||||||
|
minio_root_password: ~
|
||||||
|
|
||||||
|
minio_container_name: minio
|
||||||
|
minio_container_image_name: docker.io/minio/minio
|
||||||
|
minio_container_image_tag: latest
|
||||||
|
minio_container_image: "{{ minio_container_image_name }}:{{ minio_container_image_tag }}"
|
||||||
|
minio_container_networks: []
|
||||||
|
minio_container_ports: []
|
||||||
|
|
||||||
|
minio_container_base_volumes:
|
||||||
|
- "{{ minio_data_path }}:{{ minio_container_data_path }}:z"
|
||||||
|
minio_container_extra_volumes: []
|
||||||
|
|
||||||
|
minio_container_base_env:
|
||||||
|
MINIO_ROOT_USER: "{{ minio_root_username }}"
|
||||||
|
MINIO_ROOT_PASSWORD: "{{ minio_root_password }}"
|
||||||
|
minio_container_extra_env: {}
|
||||||
|
|
||||||
|
minio_container_labels: {}
|
||||||
|
|
||||||
|
minio_container_command:
|
||||||
|
- "server"
|
||||||
|
- "{{ minio_container_data_path }}"
|
||||||
|
- "--console-address \":{{ minio_container_listen_port_console }}\""
|
||||||
|
minio_container_restart_policy: "unless-stopped"
|
||||||
|
minio_container_image_force_source: "{{ (minio_container_image_tag == 'latest')|bool }}"
|
||||||
|
|
||||||
|
minio_container_listen_port_api: 9000
|
||||||
|
minio_container_listen_port_console: 8900
|
||||||
|
|
||||||
|
minio_container_data_path: /storage
|
roles/minio/tasks/main.yml (new file)
@@ -0,0 +1,37 @@
|
---
|
||||||
|
|
||||||
|
- name: Ensure minio run user is present
|
||||||
|
user:
|
||||||
|
name: "{{ minio_user }}"
|
||||||
|
state: present
|
||||||
|
system: yes
|
||||||
|
when: minio_create_user
|
||||||
|
|
||||||
|
- name: Ensure filesystem mounts ({{ minio_data_path }}) for container volumes are present
|
||||||
|
file:
|
||||||
|
path: "{{ minio_data_path }}"
|
||||||
|
state: directory
|
||||||
|
user: "{{ minio_user|default(omit, True) }}"
|
||||||
|
group: "{{ minio_user|default(omit, True) }}"
|
||||||
|
when: minio_manage_host_filesystem
|
||||||
|
|
||||||
|
- name: Ensure container image for minio is present
|
||||||
|
community.docker.docker_image:
|
||||||
|
name: "{{ minio_container_image }}"
|
||||||
|
state: present
|
||||||
|
source: pull
|
||||||
|
force_source: "{{ minio_container_image_force_source }}"
|
||||||
|
|
||||||
|
- name: Ensure container {{ minio_container_name }} is running
|
||||||
|
docker_container:
|
||||||
|
name: "{{ minio_container_name }}"
|
||||||
|
image: "{{ minio_container_image }}"
|
||||||
|
volumes: "{{ minio_container_volumes }}"
|
||||||
|
env: "{{ minio_container_env }}"
|
||||||
|
labels: "{{ minio_container_labels }}"
|
||||||
|
networks: "{{ minio_container_networks }}"
|
||||||
|
ports: "{{ minio_container_ports }}"
|
||||||
|
user: "{{ minio_user|default(omit, True) }}"
|
||||||
|
command: "{{ minio_container_command }}"
|
||||||
|
restart_policy: "{{ minio_container_restart_policy }}"
|
||||||
|
state: started
|
roles/minio/vars/main.yml (new file)
@@ -0,0 +1,5 @@
---

minio_container_volumes: "{{ minio_container_base_volumes + minio_container_extra_volumes }}"

minio_container_env: "{{ minio_container_base_env | combine(minio_container_extra_env) }}"
@@ -1,21 +0,0 @@
# `finallycoffee.services.openproject` ansible role

Deploys [openproject](https://www.openproject.org/) using docker-compose.

## Configuration

To adjust the docker-compose deployment of OpenProject, set overrides in `openproject_compose_overrides`:
```yaml
openproject_compose_overrides:
  version: "3.7"
  services:
    proxy:
      [...]
  volumes:
    pgdata:
      driver: local
      driver_opts:
        o: bind
        type: none
        device: /var/lib/postgresql
```
@@ -1,11 +0,0 @@
---
openproject_base_path: "/opt/openproject"

openproject_upstream_git_url: "https://github.com/opf/openproject-deploy.git"
openproject_upstream_git_branch: "stable/13"

openproject_compose_project_path: "{{ openproject_base_path }}/compose"
openproject_compose_project_name: "openproject"
openproject_compose_project_env_file: "{{ openproject_compose_project_path }}/.env"
openproject_compose_project_override_file: "{{ openproject_compose_project_path }}/docker-compose.override.yml"
openproject_compose_project_env: {}
roles/openproject/tasks/main.yml (deleted, -39)
@@ -1,39 +0,0 @@
---
- name: Ensure base directory '{{ openproject_base_path }}' is present
  ansible.builtin.file:
    path: "{{ openproject_base_path }}"
    state: directory

- name: Ensure upstream repository is cloned
  ansible.builtin.git:
    dest: "{{ openproject_base_path }}"
    repo: "{{ openproject_upstream_git_url }}"
    version: "{{ openproject_upstream_git_branch }}"
    clone: true
    depth: 1

- name: Ensure environment is configured
  ansible.builtin.lineinfile:
    line: "{{ item.key }}={{ item.value }}"
    path: "{{ openproject_compose_project_env_file }}"
    state: present
    create: true
  loop: "{{ openproject_compose_project_env | dict2items(key_name='key', value_name='value') }}"

- name: Ensure docker compose overrides are set
  ansible.builtin.copy:
    dest: "{{ openproject_compose_project_override_file }}"
    content: "{{ openproject_compose_overrides | default({}) | to_nice_yaml }}"

- name: Ensure containers are pulled
  community.docker.docker_compose:
    project_src: "{{ openproject_compose_project_path }}"
    project_name: "{{ openproject_compose_project_name }}"
    pull: true

- name: Ensure services are running
  community.docker.docker_compose:
    project_src: "{{ openproject_compose_project_path }}"
    project_name: "{{ openproject_compose_project_name }}"
    state: "present"
    build: false
roles/restic-s3/README.md (new file, +63)
@@ -0,0 +1,63 @@
# `finallycoffee.services.restic-s3`

Ansible role for backing up data using `restic` to an `s3`-compatible backend,
utilizing `systemd` timers for scheduling.

## Overview

The s3 repository and the credentials for it are specified in `restic_repo_url`,
`restic_s3_key_id` and `restic_s3_access_key`. As restic encrypts the data before
storing it, `restic_repo_password` needs to be populated with a strong key and
saved securely, as only this key can be used to decrypt the data for a restore!
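A minimal host-vars sketch for the variables above (all values here are placeholders; secrets would typically come from a vault):

```yaml
# Hypothetical example values - adjust bucket, endpoint and vault lookups
restic_repo_url: "s3:https://s3.example.com/my-backups"
restic_repo_password: "{{ vault_restic_repo_password }}"
restic_s3_key_id: "{{ vault_restic_s3_key_id }}"
restic_s3_access_key: "{{ vault_restic_s3_access_key }}"
```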
### Backing up data

A job name like `$service-postgres` or similar needs to be set in `restic_job_name`,
which is used for naming the `systemd` units, their syslog identifiers, etc.

If backing up filesystem locations, the paths need to be specified in
`restic_backup_paths` as a list of strings representing absolute filesystem
locations.

If backing up a database or other data which is generated by a command such as
`pg_dump`, use `restic_backup_stdin_command` (which needs to write its output
to `stdout`) in conjunction with `restic_backup_stdin_command_filename` to name
the resulting output (required).
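To illustrate the stdin variant, a hypothetical postgres dump job could be configured like this (job name, command and filename are made-up examples):

```yaml
# Hypothetical stdin-based backup job; the dump command must write to stdout
restic_job_name: myapp-postgres
restic_backup_stdin_command: "pg_dump --username=myapp myapp"
restic_backup_stdin_command_filename: myapp.sql
```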
### Policy

The backup policy can be adjusted by overriding the `restic_policy_keep_*`
variables, with the defaults being:

```yaml
restic_policy_keep_all_within: 1d
restic_policy_keep_hourly: 6
restic_policy_keep_daily: 2
restic_policy_keep_weekly: 7
restic_policy_keep_monthly: 4
restic_policy_backup_frequency: hourly
```

**Note:** `restic_policy_backup_frequency` must conform to `systemd`'s
`OnCalendar` syntax, which can be checked using `systemd-analyze calendar $x`.
## Role behaviour

By default, when the systemd unit for a job changes, the job is not immediately
started. This can be overridden using `restic_start_job_on_unit_change: true`,
which will immediately start the backup job if its configuration changed.

The systemd unit runs as `restic_user`, which is root by default, guaranteeing
that filesystem paths are always readable. `restic_user` can be overridden, but
care needs to be taken to ensure the user has permission to read all the
provided filesystem paths, or to execute the backup command.

If ansible should create the user, set `restic_create_user` to `true`, which
will attempt to create the `restic_user` as a system user.

### Installing

On Debian and RedHat, the role attempts to install restic using the default
package manager's ansible module (apt/dnf). On other distributions, the generic
`package` module tries to install `restic_package_name` (default: `restic`),
which can be overridden if needed.
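Putting the behaviour switches above together, one possible sketch for running backups as a dedicated, role-created user (the username is a placeholder):

```yaml
# Hypothetical override: dedicated system user, job starts right after changes
restic_user: restic-backup
restic_create_user: true
restic_start_job_on_unit_change: true
```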
roles/restic-s3/defaults/main.yml (new file, +37)
@@ -0,0 +1,37 @@
---

restic_repo_url: ~
restic_repo_password: ~
restic_s3_key_id: ~
restic_s3_access_key: ~

restic_backup_paths: []
restic_backup_stdin_command: ~
restic_backup_stdin_command_filename: ~

restic_policy_keep_all_within: 1d
restic_policy_keep_hourly: 6
restic_policy_keep_daily: 2
restic_policy_keep_weekly: 7
restic_policy_keep_monthly: 4
restic_policy_backup_frequency: hourly

restic_policy:
  keep_within: "{{ restic_policy_keep_all_within }}"
  hourly: "{{ restic_policy_keep_hourly }}"
  daily: "{{ restic_policy_keep_daily }}"
  weekly: "{{ restic_policy_keep_weekly }}"
  monthly: "{{ restic_policy_keep_monthly }}"
  frequency: "{{ restic_policy_backup_frequency }}"

restic_user: root
restic_create_user: false
restic_start_job_on_unit_change: false

restic_job_name: ~
restic_job_description: "Restic backup job for {{ restic_job_name }}"
restic_systemd_unit_naming_scheme: "restic.{{ restic_job_name }}"
restic_systemd_working_directory: /tmp
restic_systemd_syslog_identifier: "restic-{{ restic_job_name }}"

restic_package_name: restic
roles/restic-s3/handlers/main.yml (new file, +13)
@@ -0,0 +1,13 @@
---

- name: Ensure systemd daemon is reloaded
  listen: reload-systemd
  systemd:
    daemon_reload: true

- name: Ensure systemd service for '{{ restic_job_name }}' is started immediately
  listen: trigger-restic
  systemd:
    name: "{{ restic_systemd_unit_naming_scheme }}.service"
    state: started
  when: restic_start_job_on_unit_change
roles/restic-s3/tasks/main.yml (new file, +77)
@@ -0,0 +1,77 @@
---

- name: Ensure {{ restic_user }} system user exists
  user:
    name: "{{ restic_user }}"
    state: present
    system: true
  when: restic_create_user

- name: Ensure either backup_paths or backup_stdin_command is populated
  when: restic_backup_paths|length > 0 and restic_backup_stdin_command
  fail:
    msg: "Setting both `restic_backup_paths` and `restic_backup_stdin_command` is not supported"

- name: Ensure a filename for stdin_command backup is given
  when: restic_backup_stdin_command and not restic_backup_stdin_command_filename
  fail:
    msg: "`restic_backup_stdin_command` was set but no filename for the resulting output was supplied in `restic_backup_stdin_command_filename`"

- name: Ensure backup frequency adheres to systemd's OnCalendar syntax
  command:
    cmd: "systemd-analyze calendar {{ restic_policy.frequency }}"
  register: systemd_calendar_parse_res
  failed_when: systemd_calendar_parse_res.rc != 0
  changed_when: false

- name: Ensure restic is installed
  block:
    - name: Ensure restic is installed via apt
      apt:
        package: restic
        state: latest
      when: ansible_os_family == 'Debian'
    - name: Ensure restic is installed via dnf
      dnf:
        name: restic
        state: latest
      when: ansible_os_family == 'RedHat'
    - name: Ensure restic is installed using the auto-detected package manager
      package:
        name: "{{ restic_package_name }}"
        state: present
      when: ansible_os_family not in ['RedHat', 'Debian']

- name: Ensure systemd service file for '{{ restic_job_name }}' is templated
  template:
    dest: "/etc/systemd/system/{{ restic_systemd_unit_naming_scheme }}.service"
    src: restic.service.j2
    owner: root
    group: root
    mode: 0640
  notify:
    - reload-systemd
    - trigger-restic

- name: Ensure systemd timer file for '{{ restic_job_name }}' is templated
  template:
    dest: "/etc/systemd/system/{{ restic_systemd_unit_naming_scheme }}.timer"
    src: restic.timer.j2
    owner: root
    group: root
    mode: 0640
  notify:
    - reload-systemd

- name: Flush handlers to ensure systemd knows about '{{ restic_job_name }}'
  meta: flush_handlers

- name: Ensure systemd timer for '{{ restic_job_name }}' is activated
  systemd:
    name: "{{ restic_systemd_unit_naming_scheme }}.timer"
    enabled: true

- name: Ensure systemd timer for '{{ restic_job_name }}' is started
  systemd:
    name: "{{ restic_systemd_unit_naming_scheme }}.timer"
    state: started
roles/restic-s3/templates/restic.service.j2 (new file, +26)
@@ -0,0 +1,26 @@
[Unit]
Description={{ restic_job_description }}

[Service]
Type=oneshot
User={{ restic_user }}
WorkingDirectory={{ restic_systemd_working_directory }}
SyslogIdentifier={{ restic_systemd_syslog_identifier }}

Environment=RESTIC_REPOSITORY={{ restic_repo_url }}
Environment=RESTIC_PASSWORD={{ restic_repo_password }}
Environment=AWS_ACCESS_KEY_ID={{ restic_s3_key_id }}
Environment=AWS_SECRET_ACCESS_KEY={{ restic_s3_access_key }}

ExecStartPre=-/bin/sh -c '/usr/bin/restic snapshots || /usr/bin/restic init'
{% if restic_backup_stdin_command %}
ExecStart=/bin/sh -c '{{ restic_backup_stdin_command }} | /usr/bin/restic backup --verbose --stdin --stdin-filename {{ restic_backup_stdin_command_filename }}'
{% else %}
ExecStart=/usr/bin/restic --verbose backup {{ restic_backup_paths | join(' ') }}
{% endif %}
ExecStartPost=/usr/bin/restic forget --prune --keep-within={{ restic_policy.keep_within }} --keep-hourly={{ restic_policy.hourly }} --keep-daily={{ restic_policy.daily }} --keep-weekly={{ restic_policy.weekly }} --keep-monthly={{ restic_policy.monthly }}
ExecStartPost=-/usr/bin/restic snapshots
ExecStartPost=/usr/bin/restic check

[Install]
WantedBy=multi-user.target
roles/restic-s3/templates/restic.timer.j2 (new file, +10)
@@ -0,0 +1,10 @@
[Unit]
Description=Run {{ restic_job_name }}

[Timer]
OnCalendar={{ restic_policy.frequency }}
Persistent=True
Unit={{ restic_systemd_unit_naming_scheme }}.service

[Install]
WantedBy=timers.target
roles/vouch_proxy/README.md (deleted, -16)
@@ -1,16 +0,0 @@
# `finallycoffee.services.vouch-proxy`

[Vouch-Proxy](https://github.com/vouch/vouch-proxy) can be used in combination with
nginx' `auth_request` module to secure web services with OIDC/OAuth. This role runs
vouch-proxy's official docker container.

## Usage

The `oauth` config section must be supplied in `vouch_proxy_oauth_config`, and the
`vouch` config section can be overridden in `vouch_proxy_vouch_config`. For possible
configuration values, see https://github.com/vouch/vouch-proxy/blob/master/config/config.yml_example.

For an example nginx config, see https://github.com/vouch/vouch-proxy#installation-and-configuration.

Passing container arguments in the same way as `community.docker.docker_container` is supported
using the `vouch_proxy_container_[...]` prefix (e.g. `vouch_proxy_container_ports`).
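As an illustration, a generic OIDC provider could be wired up roughly like this (endpoints and secrets are placeholders; consult the upstream example config linked in the vouch-proxy README for the authoritative key names):

```yaml
# Hypothetical OIDC provider configuration for vouch_proxy_oauth_config
vouch_proxy_oauth_config:
  provider: oidc
  client_id: vouch
  client_secret: "{{ vault_vouch_client_secret }}"
  auth_url: https://auth.example.com/auth
  token_url: https://auth.example.com/token
  user_info_url: https://auth.example.com/userinfo
  scopes:
    - openid
    - email
  callback_url: https://vouch.example.com/auth
```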
roles/vouch_proxy/defaults/main.yml (deleted, -51)
@@ -1,51 +0,0 @@
---

vouch_proxy_user: vouch-proxy
vouch_proxy_version: 0.40.0
vouch_proxy_base_path: /opt/vouch-proxy
vouch_proxy_config_path: "{{ vouch_proxy_base_path }}/config"
vouch_proxy_config_file: "{{ vouch_proxy_config_path }}/config.yaml"

vouch_proxy_container_name: vouch-proxy
vouch_proxy_container_image_name: vouch-proxy
vouch_proxy_container_image_namespace: vouch/
vouch_proxy_container_image_registry: quay.io

vouch_proxy_container_image_repository: >-
  {{
    (container_registries[vouch_proxy_container_image_registry] | default(vouch_proxy_container_image_registry))
    + '/' + (vouch_proxy_container_image_namespace | default(''))
    + vouch_proxy_container_image_name
  }}
vouch_proxy_container_image_reference: >-
  {{
    vouch_proxy_container_image_repository + ':'
    + (vouch_proxy_container_image_tag | default(vouch_proxy_version))
  }}

vouch_proxy_container_image_force_pull: "{{ vouch_proxy_container_image_tag is defined }}"

vouch_proxy_container_default_volumes:
  - "{{ vouch_proxy_config_file }}:/config/config.yaml:ro"
vouch_proxy_container_volumes: >-
  {{ vouch_proxy_container_default_volumes
     + vouch_proxy_container_extra_volumes | default([]) }}
vouch_proxy_container_restart_policy: "unless-stopped"

vouch_proxy_config_vouch_log_level: info
vouch_proxy_config_vouch_listen: 0.0.0.0
vouch_proxy_config_vouch_port: 9090
vouch_proxy_config_vouch_domains: []
vouch_proxy_config_vouch_document_root: ~

vouch_proxy_oauth_config: {}
vouch_proxy_vouch_config:
  logLevel: "{{ vouch_proxy_config_vouch_log_level }}"
  listen: "{{ vouch_proxy_config_vouch_listen }}"
  port: "{{ vouch_proxy_config_vouch_port }}"
  domains: "{{ vouch_proxy_config_vouch_domains }}"
  document_root: "{{ vouch_proxy_config_vouch_document_root }}"

vouch_proxy_config:
  vouch: "{{ vouch_proxy_vouch_config }}"
  oauth: "{{ vouch_proxy_oauth_config }}"
roles/vouch_proxy/handlers/main.yml (deleted, -8)
@@ -1,8 +0,0 @@
---

- name: Ensure vouch-proxy was restarted
  community.docker.docker_container:
    name: "{{ vouch_proxy_container_name }}"
    state: started
    restart: yes
  listen: restart-vouch-proxy
roles/vouch_proxy/tasks/main.yml (deleted, -50)
@@ -1,50 +0,0 @@
---

- name: Ensure vouch-proxy user '{{ vouch_proxy_user }}' exists
  ansible.builtin.user:
    name: "{{ vouch_proxy_user }}"
    state: present
    system: true
  register: vouch_proxy_user_info

- name: Ensure mounts are created
  ansible.builtin.file:
    dest: "{{ item.path }}"
    state: directory
    owner: "{{ item.owner | default(vouch_proxy_user_info.uid | default(vouch_proxy_user)) }}"
    group: "{{ item.group | default(vouch_proxy_user_info.group | default(vouch_proxy_user)) }}"
    mode: "{{ item.mode | default('0755') }}"
  loop:
    - path: "{{ vouch_proxy_base_path }}"
    - path: "{{ vouch_proxy_config_path }}"

- name: Ensure config file is templated
  ansible.builtin.copy:
    dest: "{{ vouch_proxy_config_file }}"
    content: "{{ vouch_proxy_config | to_nice_yaml }}"
    owner: "{{ vouch_proxy_user_info.uid | default(vouch_proxy_user) }}"
    group: "{{ vouch_proxy_user_info.group | default(vouch_proxy_user) }}"
    mode: "0640"
  notify:
    - restart-vouch-proxy

- name: Ensure container image is present on host
  community.docker.docker_image:
    name: "{{ vouch_proxy_container_image_reference }}"
    state: present
    source: pull
    force_source: "{{ vouch_proxy_container_image_force_pull | bool }}"

- name: Ensure container '{{ vouch_proxy_container_name }}' is running
  community.docker.docker_container:
    name: "{{ vouch_proxy_container_name }}"
    image: "{{ vouch_proxy_container_image_reference }}"
    env: "{{ vouch_proxy_container_env | default(omit) }}"
    user: "{{ vouch_proxy_user_info.uid | default(vouch_proxy_user) }}"
    ports: "{{ vouch_proxy_container_ports | default(omit) }}"
    volumes: "{{ vouch_proxy_container_volumes | default(omit) }}"
    networks: "{{ vouch_proxy_container_networks | default(omit) }}"
    purge_networks: "{{ vouch_proxy_container_purge_networks | default(omit) }}"
    etc_hosts: "{{ vouch_proxy_container_etc_hosts | default(omit) }}"
    restart_policy: "{{ vouch_proxy_container_restart_policy }}"
    state: started