24 Commits

SHA1 Message Date
80077af008 meta: bump collection version to 0.1.1 2024-09-21 11:29:19 +02:00
eeb66de8a4 update(ghost): bump version to 5.94.1 2024-09-21 11:28:56 +02:00
396f1b3a57 update(vouch_proxy): bump version to 0.40.0 2024-09-21 11:27:20 +02:00
be66a3fe7a update(jellyfin): bump version to 10.9.11 2024-09-21 11:26:19 +02:00
5bfca1a55c meta: require atleast ansible >= 2.15.0 2024-09-21 11:25:19 +02:00
70238d3bd4 meta: update galaxy collection version to 0.1.0 2024-09-21 10:25:17 +02:00
f6a97805de docs: update README links 2024-09-21 10:24:17 +02:00
b350a19bcc update(gitea): bump version to 1.22.2 2024-09-19 14:57:47 +02:00
74a3216a41 update(jellyfin): bump version to 10.9.8 2024-07-21 19:23:05 +02:00
ef6da18172 feat(openproject): add deployment using docker-compose 2024-03-31 14:27:53 +02:00
65a256e8b5 update(ghost): bump version to 5.78.0 2024-02-04 10:53:50 +01:00
6547f15bb4 update(gitea): bump version to 1.21.5 2024-02-04 10:44:19 +01:00
5f19b5d9a9 refactor(gitea): support using forgejo in the role 2023-10-07 22:18:02 +02:00
4a2d1dec92 update(gitea): bump version to 1.20.5 2023-10-07 17:46:02 +02:00
4632a1263a update(ghost): bump version to 5.67.0 2023-10-07 16:21:20 +02:00
e5924d5ecb chore(ghost): improve role task names and fix mount permissions 2023-10-07 16:16:53 +02:00
f2fe2fb034 meta: update collection to 0.0.3 2023-08-05 18:34:31 +02:00
e8e97a5d89 update(gitea): bump version to 1.20.2 2023-08-05 18:20:53 +02:00
5ea358e65d update(ghost): bump version to 5.58.0 2023-08-05 18:20:31 +02:00
82ff46ee99 meta: bump galaxy version to 0.0.2 2023-07-28 15:47:05 +02:00
8a2993e9c3 chore(restic)!: migrate restic role to finallycoffee.base collection 2023-07-28 15:33:53 +02:00
87094b81ec chore(minio)!: migrate minio role to finallycoffee.base collection 2023-07-28 15:24:08 +02:00
bbe369d525 chore(elasticsearch)!: migrate elasticsearch role to finallycoffee.base collection 2023-07-28 15:23:55 +02:00
ed56665ed8 chore(nginx)!: migrate nginx role to finallycoffee.base collection 2023-07-28 15:11:09 +02:00
31 changed files with 122 additions and 588 deletions

README.md

@@ -8,24 +8,23 @@ concise area of concern.
 ## Roles
-- [`roles/authelia`](roles/authelia/README.md): Deploys an [authelia.com](https://www.authelia.com)
+- [`authelia`](roles/authelia/README.md): Deploys an [authelia.com](https://www.authelia.com)
   instance, an authentication provider with beta OIDC provider support.
-- [`roles/elasticsearch`](roles/elasticsearch/README.md): Deploy [elasticsearch](https://www.docker.elastic.co/r/elasticsearch/elasticsearch-oss),
-  a popular (distributed) search and analytics engine, mostly known by it's
-  letter "E" in the ELK-stack.
+- [`ghost`](roles/ghost/README.md): Deploys [ghost.org](https://ghost.org/), a simple to use
+  blogging and publishing platform.
-- [`roles/gitea`](roles/gitea/README.md): Deploy [gitea.io](https://gitea.io), a
+- [`gitea`](roles/gitea/README.md): Deploy [gitea.io](https://gitea.io), a
   lightweight, self-hosted git service.
-- [`roles/jellyfin`](roles/jellyfin/README.md): Deploy [jellyfin.org](https://jellyfin.org),
+- [`jellyfin`](roles/jellyfin/README.md): Deploy [jellyfin.org](https://jellyfin.org),
   the free software media system for streaming stored media to any device.
-- [`roles/restic`](roles/restic/README.md): Manage backups using restic
-  and persist them to a configurable backend.
-- [`roles/minio`](roles/minio/README.md): Deploy [min.io](https://min.io), an
-  s3-compatible object storage server, using docker containers.
+- [`openproject`](roles/openproject/README.md): Deploys an [openproject.org](https://www.openproject.org)
+  installation using the upstream provided docker-compose setup.
+- [`vouch_proxy`](roles/vouch_proxy/README.md): Deploys [vouch-proxy](https://github.com/vouch/vouch-proxy),
+  an authorization proxy for arbitrary webapps working with `nginx`s' `auth_request` module.
 ## License

galaxy.yml

@@ -1,6 +1,6 @@
 namespace: finallycoffee
 name: services
-version: 0.0.1
+version: 0.1.1
 readme: README.md
 authors:
   - transcaffeine <transcaffeine@finally.coffee>

meta/runtime.yml

@@ -1,3 +1,3 @@
 ---
-requires_ansible: ">=2.12"
+requires_ansible: ">=2.15"

playbooks/openproject.yml

@@ -0,0 +1,6 @@
---
- name: Install openproject
  hosts: "{{ openproject_hosts | default('openproject') }}"
  become: "{{ openproject_become | default(true, false) }}"
  roles:
    - role: finallycoffee.services.openproject
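The play's host pattern and privilege escalation can be overridden at runtime; a hedged extra-vars sketch (the variable names come from the play above, the group name is hypothetical):

```yaml
# e.g. passed via `-e @openproject-vars.yml` (hypothetical file name)
openproject_hosts: "project_servers"   # inventory group to target
openproject_become: true               # escalate privileges on the targets
```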

roles/elasticsearch/README.md

@@ -1,22 +0,0 @@
# `finallycoffee.services.elasticsearch`
A simple ansible role which deploys a single-node elastic container to provide
an easy way to do some indexing.
## Usage
By default, `/opt/elasticsearch/data` is used to persist data; this can be
customized using either `elasticsearch_base_path` or `elasticsearch_data_path`.
As elasticsearch can be quite memory-heavy, the maximum amount of RAM it may
allocate can be configured using `elasticsearch_allocated_ram_mb`, defaulting to 512 (MB).
The cluster name and discovery type can be overridden using
`elasticsearch_config_cluster_name` (default: elastic) and
`elasticsearch_config_discovery_type` (default: single-node), should one
need a multi-node elasticsearch deployment.
By default, no ports or networks are mapped; explicit mapping using either
ports (`elasticsearch_container_ports`) or networks
(`elasticsearch_container_networks`) is required in order for other services
to reach elasticsearch.
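As a hedged illustration, a host-vars sketch overriding the defaults described above (values are illustrative only):

```yaml
elasticsearch_allocated_ram_mb: 1024          # give the JVM a 1 GiB heap
elasticsearch_data_path: /var/lib/elasticsearch
elasticsearch_container_ports:                # expose the API on localhost only
  - "127.0.0.1:9200:9200"
```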

roles/elasticsearch/defaults/main.yml

@@ -1,35 +0,0 @@
---
elasticsearch_version: 7.17.7
elasticsearch_base_path: /opt/elasticsearch
elasticsearch_data_path: "{{ elasticsearch_base_path }}/data"
elasticsearch_config_cluster_name: elastic
elasticsearch_config_discovery_type: single-node
elasticsearch_config_boostrap_memory_lock: true
elasticsearch_allocated_ram_mb: 512
elasticsearch_container_image_name: docker.elastic.co/elasticsearch/elasticsearch-oss
elasticsearch_container_image_tag: ~
elasticsearch_container_image: >-
  {{ elasticsearch_container_image_name }}:{{ elasticsearch_container_image_tag | default(elasticsearch_version, true) }}
elasticsearch_container_name: elasticsearch
elasticsearch_container_env:
  "ES_JAVA_OPTS": "-Xms{{ elasticsearch_allocated_ram_mb }}m -Xmx{{ elasticsearch_allocated_ram_mb }}m"
  "cluster.name": "{{ elasticsearch_config_cluster_name }}"
  "discovery.type": "{{ elasticsearch_config_discovery_type }}"
  "bootstrap.memory_lock": "{{ 'true' if elasticsearch_config_boostrap_memory_lock else 'false' }}"
elasticsearch_container_user: ~
elasticsearch_container_ports: ~
elasticsearch_container_labels:
  version: "{{ elasticsearch_version }}"
elasticsearch_container_ulimits:
  # - "memlock:{{ (1.5 * 1024 * elasticsearch_allocated_ram_mb) | int }}:{{ (1.5 * 1024 * elasticsearch_allocated_ram_mb) | int }}"
  - "memlock:-1:-1"
elasticsearch_container_volumes:
  - "{{ elasticsearch_data_path }}:/usr/share/elasticsearch/data:z"
elasticsearch_container_networks: ~
elasticsearch_container_purge_networks: ~
elasticsearch_container_restart_policy: unless-stopped

roles/elasticsearch/tasks/main.yml

@@ -1,32 +0,0 @@
---
- name: Ensure host directories are present
  file:
    path: "{{ item }}"
    state: directory
    mode: "0777"
  loop:
    - "{{ elasticsearch_base_path }}"
    - "{{ elasticsearch_data_path }}"

- name: Ensure elastic container image is present
  docker_image:
    name: "{{ elasticsearch_container_image }}"
    state: present
    source: pull
    force_source: "{{ elasticsearch_container_image_tag|default(false, true)|bool }}"

- name: Ensure elastic container is running
  docker_container:
    name: "{{ elasticsearch_container_name }}"
    image: "{{ elasticsearch_container_image }}"
    env: "{{ elasticsearch_container_env | default(omit, True) }}"
    user: "{{ elasticsearch_container_user | default(omit, True) }}"
    ports: "{{ elasticsearch_container_ports | default(omit, True) }}"
    labels: "{{ elasticsearch_container_labels | default(omit, True) }}"
    volumes: "{{ elasticsearch_container_volumes }}"
    ulimits: "{{ elasticsearch_container_ulimits }}"
    networks: "{{ elasticsearch_container_networks | default(omit, True) }}"
    purge_networks: "{{ elasticsearch_container_purge_networks | default(omit, True) }}"
    restart_policy: "{{ elasticsearch_container_restart_policy }}"
    state: started

roles/ghost/defaults/main.yml

@@ -1,7 +1,7 @@
 ---
 ghost_domain: ~
-ghost_version: "5.33.6"
+ghost_version: "5.94.1"
 ghost_user: ghost
 ghost_user_group: ghost
 ghost_base_path: /opt/ghost
@@ -36,3 +36,4 @@ ghost_container_restart_policy: "unless-stopped"
 ghost_container_networks: ~
 ghost_container_purge_networks: ~
 ghost_container_etc_hosts: ~
+ghost_container_state: started
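The new `ghost_container_state` default is handed straight to `community.docker.docker_container` (see the tasks below); a brief hedged sketch for stopping the container without removing it, using one of that module's supported `state` values:

```yaml
ghost_container_state: stopped   # one of: started, stopped, present, absent
```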

roles/ghost/tasks/main.yml

@@ -16,15 +16,16 @@
 - name: Ensure host paths for docker volumes exist for ghost
   ansible.builtin.file:
-    path: "{{ item }}"
+    path: "{{ item.path }}"
     state: directory
     mode: "0750"
-    owner: "{{ ghost_user }}"
-    group: "{{ ghost_user_group }}"
+    owner: "{{ item.owner | default(ghost_user) }}"
+    group: "{{ item.group | default(ghost_user_group) }}"
   loop:
-    - "{{ ghost_base_path }}"
-    - "{{ ghost_data_path }}"
-    - "{{ ghost_config_path }}"
+    - path: "{{ ghost_base_path }}"
+    - path: "{{ ghost_data_path }}"
+      owner: "1000"
+    - path: "{{ ghost_config_path }}"

 - name: Ensure ghost configuration file is templated
   ansible.builtin.template:
@@ -41,7 +42,7 @@
     source: pull
     force_source: "{{ ghost_container_image_tag is defined }}"

-- name: Ensure ghost container is running
+- name: Ensure ghost container '{{ ghost_container_name }}' is {{ ghost_container_state }}
   community.docker.docker_container:
     name: "{{ ghost_container_name }}"
     image: "{{ ghost_container_image }}"
@@ -53,4 +54,4 @@
     networks: "{{ ghost_container_networks | default(omit, true) }}"
     purge_networks: "{{ ghost_container_purge_networks | default(omit, true) }}"
     restart_policy: "{{ ghost_container_restart_policy }}"
-    state: started
+    state: "{{ ghost_container_state }}"

roles/gitea/defaults/main.yml

@@ -1,7 +1,8 @@
 ---
-gitea_version: "1.19.4"
+gitea_version: "1.22.2"
 gitea_user: git
+gitea_run_user: "{{ gitea_user }}"

 gitea_base_path: "/opt/gitea"
 gitea_data_path: "{{ gitea_base_path }}/data"
@@ -9,7 +10,7 @@ gitea_data_path: "{{ gitea_base_path }}/data"
 gitea_domain: ~

 # container config
-gitea_container_name: "git"
+gitea_container_name: "{{ gitea_user }}"
 gitea_container_image_name: "docker.io/gitea/gitea"
 gitea_container_image_tag: "{{ gitea_version }}"
 gitea_container_image: "{{ gitea_container_image_name }}:{{ gitea_container_image_tag }}"
@@ -17,9 +18,10 @@ gitea_container_networks: []
 gitea_container_purge_networks: ~
 gitea_container_restart_policy: "unless-stopped"
 gitea_container_extra_env: {}
-gitea_contianer_extra_labels: {}
+gitea_container_extra_labels: {}
 gitea_container_extra_ports: []
 gitea_container_extra_volumes: []
+gitea_container_state: started

 # container defaults
 gitea_container_base_volumes:
@@ -40,10 +42,10 @@ gitea_container_base_labels:
 gitea_config_mailer_enabled: false
 gitea_config_mailer_type: ~
 gitea_config_mailer_from_addr: ~
-gitea_config_mailer_host: ~
+gitea_config_mailer_smtp_addr: ~
 gitea_config_mailer_user: ~
 gitea_config_mailer_passwd: ~
-gitea_config_mailer_tls: ~
+gitea_config_mailer_protocol: ~
 gitea_config_mailer_sendmail_path: ~

 gitea_config_metrics_enabled: false
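With the renamed mailer variables above, a hedged configuration sketch (host and addresses are illustrative; valid `PROTOCOL` values follow gitea's mailer documentation, e.g. `smtp`, `smtps`, `smtp+starttls`, `sendmail`):

```yaml
gitea_config_mailer_enabled: true
gitea_config_mailer_type: smtp
gitea_config_mailer_smtp_addr: "mail.example.org"   # replaces gitea_config_mailer_host
gitea_config_mailer_protocol: "smtps"               # replaces gitea_config_mailer_tls
gitea_config_mailer_from_addr: "git@example.org"
```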

roles/gitea/tasks/main.yml

@@ -1,10 +1,11 @@
 ---
-- name: Create gitea user
+- name: Ensure gitea user '{{ gitea_user }}' is present
   user:
     name: "{{ gitea_user }}"
-    state: present
-    system: no
+    state: "present"
+    system: false
+    create_home: true
   register: gitea_user_res

 - name: Ensure host directories exist
@@ -43,7 +44,7 @@
     group: "{{ gitea_user_res.group }}"
     mode: 0700
     content: |
-      ssh -p {{ gitea_public_ssh_server_port }} -o StrictHostKeyChecking=no {{ gitea_user }}@127.0.0.1 -i /home/{{ gitea_user }}/.ssh/id_ssh_ed25519 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
+      ssh -p {{ gitea_public_ssh_server_port }} -o StrictHostKeyChecking=no {{ gitea_run_user }}@127.0.0.1 -i /home/{{ gitea_user }}/.ssh/id_ssh_ed25519 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"

 - name: Add host pubkey to git users authorized_keys file
   lineinfile:
@@ -56,23 +57,24 @@
     mode: 0600

 - name: Ensure gitea container image is present
-  docker_image:
+  community.docker.docker_image:
     name: "{{ gitea_container_image }}"
     state: present
     source: pull
     force_source: "{{ gitea_container_image.endswith(':latest') }}"

-- name: Ensure container '{{ gitea_container_name }}' with gitea is running
-  docker_container:
+- name: Ensure container '{{ gitea_container_name }}' with gitea is {{ gitea_container_state }}
+  community.docker.docker_container:
     name: "{{ gitea_container_name }}"
     image: "{{ gitea_container_image }}"
     env: "{{ gitea_container_env }}"
+    labels: "{{ gitea_container_labels }}"
     volumes: "{{ gitea_container_volumes }}"
     networks: "{{ gitea_container_networks | default(omit, True) }}"
     purge_networks: "{{ gitea_container_purge_networks | default(omit, True) }}"
     published_ports: "{{ gitea_container_ports }}"
     restart_policy: "{{ gitea_container_restart_policy }}"
-    state: started
+    state: "{{ gitea_container_state }}"

 - name: Ensure given configuration is set in the config file
   ini_file:

roles/gitea/vars/main.yml

@@ -14,7 +14,7 @@ gitea_container_port_ssh: 22
 gitea_config_base:
   RUN_MODE: prod
-  RUN_USER: "{{ gitea_user }}"
+  RUN_USER: "{{ gitea_run_user }}"
   server:
     SSH_DOMAIN: "{{ gitea_domain }}"
     DOMAIN: "{{ gitea_domain }}"
@@ -24,11 +24,11 @@ gitea_config_base:
   mailer:
     ENABLED: "{{ gitea_config_mailer_enabled }}"
     MAILER_TYP: "{{ gitea_config_mailer_type }}"
-    HOST: "{{ gitea_config_mailer_host }}"
+    SMTP_ADDR: "{{ gitea_config_mailer_smtp_addr }}"
     USER: "{{ gitea_config_mailer_user }}"
     PASSWD: "{{ gitea_config_mailer_passwd }}"
-    IS_TLS_ENABLED: "{{ gitea_config_mailer_tls }}"
-    FROM: "{{ gitea_config_mailer_from_addr }}"
+    PROTOCOL: "{{ gitea_config_mailer_protocol }}"
+    FROM: "{{ gitea_config_mailer_from }}"
     SENDMAIL_PATH: "{{ gitea_config_mailer_sendmail_path }}"
   metrics:
     ENABLED: "{{ gitea_config_metrics_enabled }}"

roles/jellyfin/defaults/main.yml

@@ -1,7 +1,7 @@
 ---
 jellyfin_user: jellyfin
-jellyfin_version: 10.8.10
+jellyfin_version: 10.9.11
 jellyfin_base_path: /opt/jellyfin
 jellyfin_config_path: "{{ jellyfin_base_path }}/config"

roles/minio/README.md

@@ -1,29 +0,0 @@
# `finallycoffee.services.minio` ansible role
## Overview
This role deploys a [min.io](https://min.io) server, an s3-compatible object
storage server, using the official docker container image.
## Configuration
The role requires setting the password for the `root` user (name can be changed by
setting `minio_root_username`) in `minio_root_password`. That user has full control
over the minio-server instance.
### Useful config hints
Most configuration is done by setting environment variables in
`minio_container_extra_env`, for example:
```yaml
minio_container_extra_env:
# disable the "console" web browser UI
MINIO_BROWSER: off
# enable public prometheus metrics on `/minio/v2/metrics/cluster`
MINIO_PROMETHEUS_AUTH_TYPE: public
```
When serving minio (or any s3-compatible server) on a "subfolder",
see https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTRedirect.html
and https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html
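Pulling the above together, a minimal hedged configuration sketch (the password and port mappings are placeholders; `minio_container_ports` and the console port come from the role defaults):

```yaml
minio_root_password: "change-me-to-a-long-secret"
minio_container_ports:
  - "127.0.0.1:9000:9000"   # S3 API
  - "127.0.0.1:8900:8900"   # web console
```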

roles/minio/defaults/main.yml

@@ -1,40 +0,0 @@
---
minio_user: ~
minio_data_path: /opt/minio
minio_create_user: false
minio_manage_host_filesystem: false
minio_root_username: root
minio_root_password: ~
minio_container_name: minio
minio_container_image_name: docker.io/minio/minio
minio_container_image_tag: latest
minio_container_image: "{{ minio_container_image_name }}:{{ minio_container_image_tag }}"
minio_container_networks: []
minio_container_ports: []
minio_container_base_volumes:
  - "{{ minio_data_path }}:{{ minio_container_data_path }}:z"
minio_container_extra_volumes: []
minio_container_base_env:
  MINIO_ROOT_USER: "{{ minio_root_username }}"
  MINIO_ROOT_PASSWORD: "{{ minio_root_password }}"
minio_container_extra_env: {}
minio_container_labels: {}
minio_container_command:
  - "server"
  - "{{ minio_container_data_path }}"
  - "--console-address \":{{ minio_container_listen_port_console }}\""
minio_container_restart_policy: "unless-stopped"
minio_container_image_force_source: "{{ (minio_container_image_tag == 'latest')|bool }}"
minio_container_listen_port_api: 9000
minio_container_listen_port_console: 8900
minio_container_data_path: /storage

roles/minio/tasks/main.yml

@@ -1,37 +0,0 @@
---
- name: Ensure minio run user is present
  user:
    name: "{{ minio_user }}"
    state: present
    system: yes
  when: minio_create_user

- name: Ensure filesystem mounts ({{ minio_data_path }}) for container volumes are present
  file:
    path: "{{ minio_data_path }}"
    state: directory
    owner: "{{ minio_user|default(omit, True) }}"
    group: "{{ minio_user|default(omit, True) }}"
  when: minio_manage_host_filesystem

- name: Ensure container image for minio is present
  community.docker.docker_image:
    name: "{{ minio_container_image }}"
    state: present
    source: pull
    force_source: "{{ minio_container_image_force_source }}"

- name: Ensure container {{ minio_container_name }} is running
  docker_container:
    name: "{{ minio_container_name }}"
    image: "{{ minio_container_image }}"
    volumes: "{{ minio_container_volumes }}"
    env: "{{ minio_container_env }}"
    labels: "{{ minio_container_labels }}"
    networks: "{{ minio_container_networks }}"
    ports: "{{ minio_container_ports }}"
    user: "{{ minio_user|default(omit, True) }}"
    command: "{{ minio_container_command }}"
    restart_policy: "{{ minio_container_restart_policy }}"
    state: started

roles/minio/vars/main.yml

@@ -1,5 +0,0 @@
---
minio_container_volumes: "{{ minio_container_base_volumes + minio_container_extra_volumes }}"
minio_container_env: "{{ minio_container_base_env | combine(minio_container_extra_env) }}"

roles/nginx/README.md

@@ -1,28 +0,0 @@
# `finallycoffee.services.nginx` ansible role
## Description
Runs `nginx`, an HTTP server and reverse proxy, in a docker container.
## Usage
For the role to do anything, `nginx_config` needs to be populated with the configuration for nginx.
An example would be:
```yaml
nginx_config: |+
server {
listen 80 default_server;
server_name my.server.fqdn;
location / { return 200; }
}
```
The container is named `nginx` by default; this can be overridden in `nginx_container_name`.
When running this role multiple times, `nginx_base_path` should also be changed for each run,
otherwise the configuration files collide in the filesystem.
For exposing this server to the host and/or internet, `nginx_container_ports` (forwarding
ports from the host to the container), `nginx_container_networks` (docker networking) or
`nginx_container_labels` (for label-based routing discovery like traefik) can be used.
The options correspond to the arguments of the `community.docker.docker_container` module.
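As a hedged illustration of those options (values are placeholders; the label key assumes a traefik-style setup):

```yaml
nginx_container_name: "nginx-internal"   # a second instance needs its own name...
nginx_base_path: "/opt/nginx-internal"   # ...and its own config path
nginx_container_ports:
  - "8080:80"                            # host port 8080 -> container port 80
nginx_container_labels:
  traefik.enable: "true"
```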

roles/nginx/defaults/main.yml

@@ -1,33 +0,0 @@
---
nginx_version: "1.25.1"
nginx_flavour: alpine
nginx_base_path: /opt/nginx
nginx_config_file: "{{ nginx_base_path }}/nginx.conf"
nginx_container_name: nginx
nginx_container_image_reference: >-
  {{
    nginx_container_image_repository
    + ':' + (nginx_container_image_tag
             | default(nginx_version
                       + (('-' + nginx_flavour) if nginx_flavour is defined else ''), true))
  }}
nginx_container_image_repository: >-
  {{
    (
      container_registries[nginx_container_image_registry]
      | default(nginx_container_image_registry)
    )
    + '/'
    + nginx_container_image_namespace | default('')
    + nginx_container_image_name
  }}
nginx_container_image_registry: "docker.io"
nginx_container_image_name: "nginx"
nginx_container_image_tag: ~
nginx_container_restart_policy: "unless-stopped"
nginx_container_volumes:
  - "{{ nginx_config_file }}:/etc/nginx/conf.d/nginx.conf:ro"

roles/nginx/handlers/main.yml

@@ -1,8 +0,0 @@
---
- name: Ensure nginx container '{{ nginx_container_name }}' is restarted
  community.docker.docker_container:
    name: "{{ nginx_container_name }}"
    state: started
    restart: true
  listen: restart-nginx

roles/nginx/tasks/main.yml

@@ -1,37 +0,0 @@
---
- name: Ensure base path '{{ nginx_base_path }}' exists
  ansible.builtin.file:
    path: "{{ nginx_base_path }}"
    state: directory
    mode: 0755

- name: Ensure nginx config file is templated
  ansible.builtin.copy:
    dest: "{{ nginx_config_file }}"
    content: "{{ nginx_config }}"
    mode: 0640
  notify:
    - restart-nginx

- name: Ensure docker container image is present
  community.docker.docker_image:
    name: "{{ nginx_container_image_reference }}"
    state: present
    source: pull
    force_source: "{{ nginx_container_image_tag is defined and nginx_container_image_tag | string != '' }}"

- name: Ensure docker container '{{ nginx_container_name }}' is running
  community.docker.docker_container:
    name: "{{ nginx_container_name }}"
    image: "{{ nginx_container_image_reference }}"
    env: "{{ nginx_container_env | default(omit, true) }}"
    user: "{{ nginx_container_user | default(omit, true) }}"
    ports: "{{ nginx_container_ports | default(omit, true) }}"
    labels: "{{ nginx_container_labels | default(omit, true) }}"
    volumes: "{{ nginx_container_volumes | default(omit, true) }}"
    etc_hosts: "{{ nginx_container_etc_hosts | default(omit, true) }}"
    networks: "{{ nginx_container_networks | default(omit, true) }}"
    purge_networks: "{{ nginx_container_purge_networks | default(omit, true) }}"
    restart_policy: "{{ nginx_container_restart_policy }}"
    state: started

roles/openproject/README.md

@@ -0,0 +1,21 @@
# `finallycoffee.services.openproject` ansible role
Deploys [openproject](https://www.openproject.org/) using docker-compose.
## Configuration
To customize the upstream docker-compose setup for OpenProject, set the overrides in `openproject_compose_overrides`:
```yaml
openproject_compose_overrides:
version: "3.7"
services:
proxy:
[...]
volumes:
pgdata:
driver: local
driver_opts:
o: bind
type: none
device: /var/lib/postgresql
```
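Entries in `openproject_compose_project_env` (see the role defaults) end up in the compose project's `.env` file; a hedged sketch using two of OpenProject's documented environment variables (values are illustrative):

```yaml
openproject_compose_project_env:
  OPENPROJECT_HOST__NAME: "openproject.example.org"
  OPENPROJECT_HTTPS: "true"
```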

roles/openproject/defaults/main.yml

@@ -0,0 +1,11 @@
---
openproject_base_path: "/opt/openproject"
openproject_upstream_git_url: "https://github.com/opf/openproject-deploy.git"
openproject_upstream_git_branch: "stable/13"
openproject_compose_project_path: "{{ openproject_base_path }}/compose"
openproject_compose_project_name: "openproject"
openproject_compose_project_env_file: "{{ openproject_compose_project_path }}/.env"
openproject_compose_project_override_file: "{{ openproject_compose_project_path }}/docker-compose.override.yml"
openproject_compose_project_env: {}

roles/openproject/tasks/main.yml

@@ -0,0 +1,39 @@
---
- name: Ensure base directory '{{ openproject_base_path }}' is present
  ansible.builtin.file:
    path: "{{ openproject_base_path }}"
    state: directory

- name: Ensure upstream repository is cloned
  ansible.builtin.git:
    dest: "{{ openproject_base_path }}"
    repo: "{{ openproject_upstream_git_url }}"
    version: "{{ openproject_upstream_git_branch }}"
    clone: true
    depth: 1

- name: Ensure environment is configured
  ansible.builtin.lineinfile:
    line: "{{ item.key }}={{ item.value }}"
    path: "{{ openproject_compose_project_env_file }}"
    state: present
    create: true
  loop: "{{ openproject_compose_project_env | dict2items(key_name='key', value_name='value') }}"

- name: Ensure docker compose overrides are set
  ansible.builtin.copy:
    dest: "{{ openproject_compose_project_override_file }}"
    content: "{{ openproject_compose_overrides | default({}) | to_nice_yaml }}"

- name: Ensure containers are pulled
  community.docker.docker_compose:
    project_src: "{{ openproject_compose_project_path }}"
    project_name: "{{ openproject_compose_project_name }}"
    pull: true

- name: Ensure services are running
  community.docker.docker_compose:
    project_src: "{{ openproject_compose_project_path }}"
    project_name: "{{ openproject_compose_project_name }}"
    state: "present"
    build: false

roles/restic/README.md

@@ -1,77 +0,0 @@
# `finallycoffee.services.restic`
Ansible role for backup up data using `restic`, utilizing `systemd` timers for scheduling.
## Overview
As restic encrypts the data before storing it, `restic_repo_password` needs
to be populated with a strong key and stored safely, as only this key can
be used to decrypt the data for a restore!
### Backends
#### S3 Backend
To use an `s3`-compatible backend like AWS buckets or minio, both `restic_s3_key_id`
and `restic_s3_access_key` need to be populated, and the `restic_repo_url` has the
format `s3:https://my.s3.endpoint:port/bucket-name`.
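A hedged variables sketch for the S3 case (endpoint, bucket and credentials are placeholders):

```yaml
restic_repo_url: "s3:https://s3.example.org:9000/my-backups"
restic_repo_password: "{{ vault_restic_repo_password }}"   # e.g. from ansible-vault
restic_s3_key_id: "AKIAEXAMPLE"
restic_s3_access_key: "{{ vault_restic_s3_access_key }}"
```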
#### SFTP Backend
Using the `sftp` backend requires the configured `restic_user` to be able to
authenticate to the configured SFTP server using password-less methods like
publickey authentication. The `restic_repo_url` then follows the format
`sftp:{user}@{server}:/my-restic-repository` (or without the leading `/` for
paths relative to the `{user}`'s home directory).
### Backing up data
A job name like `$service-postgres` or similar needs to be set in `restic_job_name`,
which is used for naming the `systemd` units, their syslog identifiers etc.
If backing up filesystem locations, the paths need to be specified in
`restic_backup_paths` as a list of strings representing absolute filesystem
locations.
If backing up data which is generated by a command like `pg_dump` (for example
database dumps), use `restic_backup_stdin_command` (which needs to write the
backup to `stdout`) in conjunction with `restic_backup_stdin_command_filename`
to name the resulting output (required).
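For example, a hedged sketch for dumping a PostgreSQL database (job name, database and command are illustrative):

```yaml
restic_job_name: "myapp-postgres"
restic_backup_stdin_command: "pg_dump --username=myapp myapp"
restic_backup_stdin_command_filename: "myapp.sql"
```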
### Policy
The backup policy can be adjusted by overriding the `restic_policy_keep_*`
variables, with the defaults being:
```yaml
restic_policy_keep_all_within: 1d
restic_policy_keep_hourly: 6
restic_policy_keep_daily: 2
restic_policy_keep_weekly: 7
restic_policy_keep_monthly: 4
restic_policy_backup_frequency: hourly
```
**Note:** `restic_policy_backup_frequency` must conform to `systemd`'s
`OnCalendar` syntax, which can be checked using `systemd-analyze calendar $x`.
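For instance, a hedged override for a nightly run (any expression accepted by `systemd-analyze calendar` works):

```yaml
restic_policy_backup_frequency: "*-*-* 03:00:00"   # every day at 03:00
```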
## Role behaviour
By default, when the systemd unit for a job changes, the job is not immediately
started. This can be overridden using `restic_start_job_on_unit_change: true`,
which will immediately start the backup job if its configuration changed.
The systemd unit runs as `restic_user`, which is root by default, guaranteeing
that filesystem paths are always readable. The `restic_user` can be overridden,
but care needs to be taken to ensure the user has permission to read all the
provided filesystem paths and to execute the backup command.
If ansible should create the user, set `restic_create_user` to `true`, which
will attempt to create the `restic_user` as a system user.
### Installing
For Debian and RedHat, the role attempts to install restic using the default
package manager's ansible module (apt/dnf). For other distributions, the generic
`package` module tries to install `restic_package_name` (default: `restic`),
which can be overridden if needed.

roles/restic/defaults/main.yml

@@ -1,37 +0,0 @@
---
restic_repo_url: ~
restic_repo_password: ~
restic_s3_key_id: ~
restic_s3_access_key: ~
restic_backup_paths: []
restic_backup_stdin_command: ~
restic_backup_stdin_command_filename: ~
restic_policy_keep_all_within: 1d
restic_policy_keep_hourly: 6
restic_policy_keep_daily: 2
restic_policy_keep_weekly: 7
restic_policy_keep_monthly: 4
restic_policy_backup_frequency: hourly
restic_policy:
  keep_within: "{{ restic_policy_keep_all_within }}"
  hourly: "{{ restic_policy_keep_hourly }}"
  daily: "{{ restic_policy_keep_daily }}"
  weekly: "{{ restic_policy_keep_weekly }}"
  monthly: "{{ restic_policy_keep_monthly }}"
  frequency: "{{ restic_policy_backup_frequency }}"
restic_user: root
restic_create_user: false
restic_start_job_on_unit_change: false
restic_job_name: ~
restic_job_description: "Restic backup job for {{ restic_job_name }}"
restic_systemd_unit_naming_scheme: "restic.{{ restic_job_name }}"
restic_systemd_working_directory: /tmp
restic_systemd_syslog_identifier: "restic-{{ restic_job_name }}"
restic_package_name: restic

roles/restic/handlers/main.yml

@@ -1,13 +0,0 @@
---
- name: Ensure systemd daemon is reloaded
  listen: reload-systemd
  systemd:
    daemon_reload: true

- name: Ensure systemd service for '{{ restic_job_name }}' is started immediately
  listen: trigger-restic
  systemd:
    name: "{{ restic_systemd_unit_naming_scheme }}.service"
    state: started
  when: restic_start_job_on_unit_change

roles/restic/tasks/main.yml

@@ -1,77 +0,0 @@
---
- name: Ensure {{ restic_user }} system user exists
  user:
    name: "{{ restic_user }}"
    state: present
    system: true
  when: restic_create_user

- name: Ensure only one of backup_paths and backup_stdin_command is populated
  when: restic_backup_paths|length > 0 and restic_backup_stdin_command
  fail:
    msg: "Setting both `restic_backup_paths` and `restic_backup_stdin_command` is not supported"

- name: Ensure a filename for stdin_command backup is given
  when: restic_backup_stdin_command and not restic_backup_stdin_command_filename
  fail:
    msg: "`restic_backup_stdin_command` was set but no filename for the resulting output was supplied in `restic_backup_stdin_command_filename`"

- name: Ensure backup frequency adheres to systemd's OnCalendar syntax
  command:
    cmd: "systemd-analyze calendar {{ restic_policy.frequency }}"
  register: systemd_calendar_parse_res
  failed_when: systemd_calendar_parse_res.rc != 0
  changed_when: false

- name: Ensure restic is installed
  block:
    - name: Ensure restic is installed via apt
      apt:
        package: restic
        state: latest
      when: ansible_os_family == 'Debian'
    - name: Ensure restic is installed via dnf
      dnf:
        name: restic
        state: latest
      when: ansible_os_family == 'RedHat'
    - name: Ensure restic is installed using the auto-detected package-manager
      package:
        name: "{{ restic_package_name }}"
        state: present
      when: ansible_os_family not in ['RedHat', 'Debian']

- name: Ensure systemd service file for '{{ restic_job_name }}' is templated
  template:
    dest: "/etc/systemd/system/{{ restic_systemd_unit_naming_scheme }}.service"
    src: restic.service.j2
    owner: root
    group: root
    mode: 0640
  notify:
    - reload-systemd
    - trigger-restic

- name: Ensure systemd timer file for '{{ restic_job_name }}' is templated
  template:
    dest: "/etc/systemd/system/{{ restic_systemd_unit_naming_scheme }}.timer"
    src: restic.timer.j2
    owner: root
    group: root
    mode: 0640
  notify:
    - reload-systemd

- name: Flush handlers to ensure systemd knows about '{{ restic_job_name }}'
  meta: flush_handlers

- name: Ensure systemd timer for '{{ restic_job_name }}' is activated
  systemd:
    name: "{{ restic_systemd_unit_naming_scheme }}.timer"
    enabled: true

- name: Ensure systemd timer for '{{ restic_job_name }}' is started
  systemd:
    name: "{{ restic_systemd_unit_naming_scheme }}.timer"
    state: started

roles/restic/templates/restic.service.j2

@@ -1,28 +0,0 @@
[Unit]
Description={{ restic_job_description }}
[Service]
Type=oneshot
User={{ restic_user }}
WorkingDirectory={{ restic_systemd_working_directory }}
SyslogIdentifier={{ restic_systemd_syslog_identifier }}
Environment=RESTIC_REPOSITORY={{ restic_repo_url }}
Environment=RESTIC_PASSWORD={{ restic_repo_password }}
{% if restic_s3_key_id and restic_s3_access_key %}
Environment=AWS_ACCESS_KEY_ID={{ restic_s3_key_id }}
Environment=AWS_SECRET_ACCESS_KEY={{ restic_s3_access_key }}
{% endif %}
ExecStartPre=-/bin/sh -c '/usr/bin/restic snapshots || /usr/bin/restic init'
{% if restic_backup_stdin_command %}
ExecStart=/bin/sh -c '{{ restic_backup_stdin_command }} | /usr/bin/restic backup --verbose --stdin --stdin-filename {{ restic_backup_stdin_command_filename }}'
{% else %}
ExecStart=/usr/bin/restic --verbose backup {{ restic_backup_paths | join(' ') }}
{% endif %}
ExecStartPost=/usr/bin/restic forget --prune --keep-within={{ restic_policy.keep_within }} --keep-hourly={{ restic_policy.hourly }} --keep-daily={{ restic_policy.daily }} --keep-weekly={{ restic_policy.weekly }} --keep-monthly={{ restic_policy.monthly }}
ExecStartPost=-/usr/bin/restic snapshots
ExecStartPost=/usr/bin/restic check
[Install]
WantedBy=multi-user.target

roles/restic/templates/restic.timer.j2

@@ -1,10 -0,0 @@
[Unit]
Description=Run {{ restic_job_name }}
[Timer]
OnCalendar={{ restic_policy.frequency }}
Persistent=True
Unit={{ restic_systemd_unit_naming_scheme }}.service
[Install]
WantedBy=timers.target

roles/vouch_proxy/defaults/main.yml

@@ -1,7 +1,7 @@
 ---
 vouch_proxy_user: vouch-proxy
-vouch_proxy_version: 0.39.0
+vouch_proxy_version: 0.40.0
 vouch_proxy_base_path: /opt/vouch-proxy
 vouch_proxy_config_path: "{{ vouch_proxy_base_path }}/config"
 vouch_proxy_config_file: "{{ vouch_proxy_config_path }}/config.yaml"