This allows people to try out the new Element X clients, which need to
run against the sliding-sync proxy (https://github.com/matrix-org/sliding-sync).
Supersedes https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/2515
The code is based on the existing PR (#2515), but heavily reworked. Major changes:
- lots of internal refactoring and variable renaming
- fixed self-building to support non-amd64 architectures
- changed to talk to the homeserver locally, over the container network (not
publicly)
- no more matrix-nginx-proxy support due to complexity (see below)
- no more `matrix_server_fqn_sliding_sync_proxy` in favor of
`matrix_sliding_sync_hostname` and `matrix_sliding_sync_path_prefix`
- runs on `matrix.DOMAIN/sliding-sync` by default, so it can be tried
easily without having to create new DNS records
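For reference, enabling it in `vars.yml` is expected to look roughly like this (the enable flag name is an assumption; the hostname/path variables are the ones mentioned above):
```yaml
# A minimal sketch. The `matrix_sliding_sync_enabled` flag name is an
# assumption; the hostname/path variables are the ones introduced by this
# change and default to serving at matrix.DOMAIN/sliding-sync.
matrix_sliding_sync_enabled: true

# Override only if the defaults don't suit you:
# matrix_sliding_sync_hostname: "matrix.{{ matrix_domain }}"
# matrix_sliding_sync_path_prefix: /sliding-sync
```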
The variable was necessary when multiple playbooks could potentially have
tried to manage a shared `devture-traefik.service` systemd service
and a shared `/devture-traefik` directory.
Since adcc6d9723, we use our own `/matrix/traefik`
(`matrix-traefik.service`) installation and no conflicts can arise.
It's safe to always enable the role, just like we do with all the other roles.
The migration is automatic. Existing users should experience a bit of
downtime until the playbook runs to completion, but don't need to do
anything manually.
This change is provoked by https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/2535
While my statements there ("Traefik is a shared component among
sibling/related playbooks and should retain its global
non-matrix-prefixed name and path") do make sense, there's another point
of view as well.
With the addition of docker-socket-proxy support in bf2b540807,
we potentially introduced another non-`matrix-`-prefixed systemd service
and global path (`/devture-container-socket-proxy`). It would have
started to become messy.
Traefik always being called `devture-traefik.service` and using the `/devture-traefik` path
has the following downsides:
- different playbooks may unintentionally write to the same place
until you disable the Traefik role in all but one of them.
If each playbook manages its own installation, no such silent conflicts
arise; instead, you'll learn about a port conflict when one of them starts its
Traefik service and fails because the ports are already in use
- the data is scattered - backing up `/matrix` is no longer enough when
some stuff lives in `/devture-traefik` or `/devture-container-socket-proxy` as well;
similarly, deleting `/matrix` is no longer enough to clean up
For this reason, the Traefik instance managed by this playbook
will now be called `matrix-traefik` and live under `/matrix/traefik`.
This also makes it obvious to users running multiple playbooks which
Traefik instance (powered by which playbook) is the active one.
Previously, you'd look at `devture-traefik.service` and wonder which
role was managing it.
We don't need these 2 roughly-the-same settings related to the
traefik-certs-dumper role.
For Traefik, having both makes sense, because it's a component used by the
various related playbooks and they could step on each other's toes
if the role is enabled, but Traefik itself is disabled (in that case, uninstall
tasks will run).
As for Traefik certs dumper, the other related playbooks don't have it,
so there's no conflict. Even if they used it, each one would use its own
instance (different `devture_traefik_certs_dumper_identifier`), so there
wouldn't be a conflict and uninstall tasks can run without any danger.
This gives people wishing to change or unset the resolver
a single variable which they can toggle.
Unsetting the resolver is useful for using your own certificates
(not coming from a certificate resolver).
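For illustration only (this commit doesn't name the variable, so the name below is hypothetical), unsetting the resolver would look something like:
```yaml
# Hypothetical variable name - an empty value would mean "no certificate
# resolver", making Traefik serve certificates you provide yourself instead
# of obtaining them via ACME.
devture_traefik_certResolver_primary: ''
```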
The newly extracted role also has native Traefik support,
so we no longer need to rely on `matrix-nginx-proxy` for
reverse-proxying to Ntfy.
The new role uses port `80` inside the container (not `8080`, like
before), because that's the default assumption of the officially
published container image. Using a custom port (like `8080`) means the
default healthcheck command (which hardcodes port `80`) doesn't work.
Rather than fiddling to override the healthcheck command, we've decided
to stick to the default port. This only affects the
inside-the-container port, not any external ports.
The new role also supports adding the network ranges of the container's
multiple additional networks as "exempt hosts". Previously, only one
network's address range was added to "exempt hosts".
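For context, the ntfy server setting that ends up being populated is `visitor-request-limit-exempt-hosts`; a rough sketch of the rendered ntfy `server.yml` (the address ranges below are made up):
```yaml
# Illustrative only - the actual ranges are derived from the container's networks.
visitor-request-limit-exempt-hosts: "172.18.0.0/16,172.19.0.0/16"
```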
Previously, it had to go through matrix-nginx-proxy.
It's now exposed to Traefik directly via container labels.
Serving at a path other than `/` doesn't work well yet.
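Conceptually, the container labels the role attaches look roughly like this (router/service names, entrypoint and hostname are placeholders):
```yaml
# Illustrative Traefik (Docker provider) labels - names are placeholders.
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.example-service.rule=Host(`example-service.example.com`)"
  - "traefik.http.routers.example-service.entrypoints=web-secure"
  - "traefik.http.routers.example-service.tls.certResolver=default"
  - "traefik.http.services.example-service.loadbalancer.server.port=8080"
```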
We'd like to auto-enable traefik-certs-dumper for these setups.
`devture_traefik_certs_dumper_ssl_dir_path` will be empty though,
so the role's validation will point people in the right direction.
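In practice, the validation nudge amounts to setting something like this (the path value is just an example):
```yaml
# Example value only - point this wherever you want the dumped certificates to go.
devture_traefik_certs_dumper_ssl_dir_path: /matrix/traefik-certs-dumper/out
```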
This gets us started on adding a Traefik role and hooking Traefik:
- directly to services which support Traefik (we only have a few of
these right now, but the list will grow)
- to matrix-nginx-proxy for most services that integrate with
matrix-nginx-proxy right now
Traefik usage should be disabled by default for now and nothing should
change for people just yet.
Enabling these experiments requires additional configuration like this:
```yaml
devture_traefik_ssl_email_address: '.....'
matrix_playbook_traefik_role_enabled: true
matrix_playbook_traefik_labels_enabled: true
matrix_ssl_retrieval_method: none
matrix_nginx_proxy_https_enabled: false
matrix_nginx_proxy_container_http_host_bind_port: ''
matrix_nginx_proxy_container_federation_host_bind_port: ''
matrix_nginx_proxy_trust_forwarded_proto: true
matrix_nginx_proxy_x_forwarded_for: '$proxy_add_x_forwarded_for'
matrix_coturn_enabled: false
```
What currently works is:
reverse-proxying for all nginx-proxy based services **except** for the Matrix homeserver
(neither Client-Server nor Federation traffic for the homeserver works yet)
Related to https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/2427
This just enables the endpoint, which is somewhat helpful, but not
really enough to scrape them. Ideally, we'd be injecting these targets
into the Prometheus scrape config too.
For now, registering targets with Prometheus is very manual
(`matrix_prometheus_scraper_postgres_enabled`, `matrix_prometheus_scraper_hookshot_enabled`, ..).
This should be redone - e.g. a new `matrix_prometheus_scrape_config_jobs_auto` variable,
which is dynamically built in `group_vars/matrix_servers`.
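A rough idea of what such an auto-built variable could look like in `group_vars/matrix_servers` (job names, container hostnames, ports and the `*_enabled` variables below are all hypothetical):
```yaml
# Hypothetical sketch of matrix_prometheus_scrape_config_jobs_auto - each
# enabled exporter would contribute its own scrape job dynamically.
matrix_prometheus_scrape_config_jobs_auto: >-
  {{
    ([{'job_name': 'postgres', 'static_configs': [{'targets': ['matrix-prometheus-postgres-exporter:9187']}]}]
     if matrix_prometheus_postgres_exporter_enabled | default(false) else [])
    +
    ([{'job_name': 'hookshot', 'static_configs': [{'targets': ['matrix-hookshot:9001']}]}]
     if matrix_hookshot_metrics_enabled | default(false) else [])
  }}
```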
selectattr() returns a generator object, an iterator. This leads to an exception later, because lists can't be concatenated with iterators, only with other lists. So '| list' converts the iterator to a list and the script runs happily.
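A generic illustration of the pattern (not the exact line from the playbook):
```yaml
# Without '| list', selectattr() yields a generator; concatenating a list
# to a generator raises an exception. With '| list', both operands are lists.
combined: "{{ ['a', 'b'] + (some_items | selectattr('enabled') | list) }}"
```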
This extends the collection with support for seamless authentication at the Jitsi server using Matrix OpenID.
1. New role for installing the [Matrix User Verification Service](https://github.com/matrix-org/matrix-user-verification-service)
2. Changes to Jitsi role: Installing Jitsi Prosody Mods and configuring Jitsi Auth
3. Changes to Jitsi and nginx-proxy roles: Serving .well-known/element/jitsi from jitsi.DOMAIN
4. Updated the Jitsi documentation on authentication and added documentation for the user verification service.
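Enabling the whole chain is expected to look roughly like this in `vars.yml` (treat the variable names as illustrative and consult the updated docs for the exact ones):
```yaml
# Illustrative - see the Jitsi and user verification service documentation
# for the authoritative variable names.
matrix_user_verification_service_enabled: true
matrix_jitsi_enable_auth: true
matrix_jitsi_auth_type: matrix
```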
* add prometheus-nginxlog-exporter role
* Rename matrix_prometheus_nginxlog_exporter_container_url to matrix_prometheus_nginxlog_exporter_container_hostname
* avoid referencing variables from other roles; hand over info using group_vars/matrix_servers (see the sketch below)
* fix: stop service when uninstalling
fix: typo
move available arches into a var
fix: text
* fix: prometheus enabled condition
Co-authored-by: ikkemaniac <ikkemaniac@localhost>
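The group_vars handover mentioned above boils down to something like this (variable values are illustrative):
```yaml
# group_vars/matrix_servers (illustrative): the role only consumes its own
# matrix_prometheus_nginxlog_exporter_* variables; anything it needs from
# other roles is handed to it here instead of being referenced inside the role.
matrix_prometheus_nginxlog_exporter_enabled: "{{ matrix_prometheus_enabled | default(false) }}"
matrix_prometheus_nginxlog_exporter_container_hostname: matrix-prometheus-nginxlog-exporter
```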