Merge remote-tracking branch 'upstream/master'

Marcus Proest 2021-02-19 19:02:48 +01:00
commit 2ca8211184
62 changed files with 2325 additions and 237 deletions


@@ -1,3 +1,27 @@
# 2021-02-19
## GroupMe bridging support via mx-puppet-groupme
Thanks to [Cody Neiman](https://github.com/xangelix), the playbook can now install the [mx-puppet-groupme](https://gitlab.com/robintown/mx-puppet-groupme) bridge for bridging to [GroupMe](https://groupme.com).
This brings the total number of bridges supported by the playbook up to 18. See all supported bridges [here](docs/configuring-playbook.md#bridging-other-networks).
To get started, follow our [Setting up MX Puppet GroupMe](docs/configuring-playbook-bridge-mx-puppet-groupme.md) docs.
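In short, enabling the bridge comes down to a few lines in your `inventory/host_vars/matrix.DOMAIN/vars.yml` file (the same variables covered in the linked guide):
```yaml
matrix_mx_puppet_groupme_enabled: true
matrix_mx_puppet_groupme_client_id: ""
matrix_mx_puppet_groupme_client_secret: ""
```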
## Synapse workers support
After [lots and lots of work](https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/456) (done over many months by [Marcel Partap](https://github.com/eMPee584), [Max Klenk](https://github.com/maxklenk), a few others from the [Technical University of Dresden, Germany](https://tu-dresden.de/) and various other contributors), support for Synapse workers has finally landed.
Having support for workers makes the playbook suitable for larger homeserver deployments.
Our setup is not yet perfect: we don't support all types of workers yet, and scaling some of them (like `pusher` and `federation_sender`) beyond a single instance is not supported. Still, it's a great start and can already power homeservers with thousands of users, like the [Matrix deployment at TU Dresden](https://doc.matrix.tu-dresden.de/en/) discussed in [Matrix Live S06E09 - TU Dresden on their Matrix deployment](https://www.youtube.com/watch?v=UHJX2pmT2gk).
By default, workers are disabled and Synapse runs as a single process (homeservers don't necessarily need the complexity and increased memory requirements of running a worker-based setup).
To enable Synapse workers, follow our [Load balancing with workers](docs/configuring-playbook-synapse.md#load-balancing-with-workers) documentation.
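For reference, the minimal change is a single variable in your `inventory/host_vars/matrix.DOMAIN/vars.yml` file (the preset line is optional; `one-of-each` is the default, as described in the documentation linked above):
```yaml
matrix_synapse_workers_enabled: true

# Optional: pick a preset (`one-of-each` is the default)
matrix_synapse_workers_preset: one-of-each
```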
# 2021-02-12
## (Potential Breaking Change) Monitoring/metrics support using Prometheus and Grafana


@@ -75,6 +75,8 @@ Using this playbook, you can get the following services configured on your server
- (optional) the [mx-puppet-discord](https://github.com/matrix-discord/mx-puppet-discord) bridge for [Discord](https://discordapp.com/) - see [docs/configuring-playbook-bridge-mx-puppet-discord.md](docs/configuring-playbook-bridge-mx-puppet-discord.md) for setup documentation
- (optional) the [mx-puppet-groupme](https://gitlab.com/robintown/mx-puppet-groupme) bridge for [GroupMe](https://groupme.com/) - see [docs/configuring-playbook-bridge-mx-puppet-groupme.md](docs/configuring-playbook-bridge-mx-puppet-groupme.md) for setup documentation
- (optional) the [mx-puppet-steam](https://github.com/icewind1991/mx-puppet-steam) bridge for [Steam](https://steamapp.com/) - see [docs/configuring-playbook-bridge-mx-puppet-steam.md](docs/configuring-playbook-bridge-mx-puppet-steam.md) for setup documentation
- (optional) [Email2Matrix](https://github.com/devture/email2matrix) for relaying email messages to Matrix rooms - see [docs/configuring-playbook-email2matrix.md](docs/configuring-playbook-email2matrix.md) for setup documentation


@@ -0,0 +1,38 @@
# Setting up MX Puppet GroupMe (optional)
The playbook can install and configure
[mx-puppet-groupme](https://gitlab.com/robintown/mx-puppet-groupme) for you.
See the project page to learn what it does and why it might be useful to you.
To enable the [GroupMe](https://groupme.com/) bridge, use the following playbook configuration:
```yaml
matrix_mx_puppet_groupme_enabled: true
matrix_mx_puppet_groupme_client_id: ""
matrix_mx_puppet_groupme_client_secret: ""
```
## Usage
Once the bot is enabled, you need to start a chat with `GroupMe Puppet Bridge` via the handle `@_groupmepuppet_bot:YOUR_DOMAIN` (where `YOUR_DOMAIN` is your base domain, not the `matrix.` domain).
One authentication method is available.
To link your GroupMe account, go to [dev.groupme.com](https://dev.groupme.com/), sign in, and select "Access Token" from the top menu. Copy the token and message the bridge with:
```
link <access token>
```
Once logged in, send `listrooms` to the bot user to list the available rooms.
Clicking rooms in the list will result in you receiving an invitation to the
bridged room.
Also send `help` to the bot to see the commands available.


@@ -18,6 +18,35 @@ Alternatively, **if there is no pre-defined variable** for a Synapse setting you
- or, if extending the configuration is still not powerful enough for your needs, you can **override the configuration completely** using `matrix_synapse_configuration` (or `matrix_synapse_configuration_yaml`). You can find information about this in [`roles/matrix-synapse/defaults/main.yml`](../roles/matrix-synapse/defaults/main.yml).
## Load balancing with workers
To have Synapse gracefully handle thousands of users, worker support should be enabled. It factors out some homeserver tasks and spreads the load of incoming client and server-to-server traffic between multiple processes. More information can be found in the [official Synapse workers documentation](https://github.com/matrix-org/synapse/blob/master/docs/workers.md).
To enable Synapse worker support, update your `inventory/host_vars/matrix.DOMAIN/vars.yml` file:
```yaml
matrix_synapse_workers_enabled: true
```
We support a few configuration presets (`matrix_synapse_workers_preset: one-of-each` being the default configuration):
- `little-federation-helper` - a very minimal worker configuration to improve federation performance
- `one-of-each` - one worker of each supported type
If you'd like more customization power, you can start with one of the presets and tweak various `matrix_synapse_workers_*_count` variables manually.
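For example, a sketch of such a customization might look like this (the specific `*_count` variable names below are illustrative assumptions; consult `roles/matrix-synapse/defaults/main.yml` for the authoritative names):
```yaml
matrix_synapse_workers_enabled: true

# Start from a preset...
matrix_synapse_workers_preset: one-of-each

# ...and override individual worker counts.
# These variable names are illustrative; check the role defaults for the exact ones.
matrix_synapse_workers_generic_workers_count: 2
matrix_synapse_workers_media_repository_workers_count: 1
```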
If you increase worker counts too much, you may need to increase the maximum number of Postgres connections too (example):
```yaml
matrix_postgres_process_extra_arguments: [
"-c 'max_connections=200'"
]
```
If you're using the default setup (the `matrix-nginx-proxy` webserver being enabled) or you're using your own `nginx` server (which imports the configuration files generated by the playbook), you're good to go. If you use some other webserver, you may need to tweak your reverse-proxy setup manually to forward traffic to the various workers.
In case any problems occur, make sure to have a look at the [list of synapse issues about workers](https://github.com/matrix-org/synapse/issues?q=workers+in%3Atitle) and your `journalctl --unit 'matrix-*'`.
## Synapse Admin
Certain Synapse administration tasks (managing users and rooms, etc.) can be performed via a web user interface, if you install [Synapse Admin](configuring-playbook-synapse-admin.md).


@@ -116,6 +116,8 @@ When you're done with all the configuration you'd like to do, continue with [Ins
- [Setting up MX Puppet Discord bridging](configuring-playbook-bridge-mx-puppet-discord.md) (optional)
- [Setting up MX Puppet GroupMe bridging](configuring-playbook-bridge-mx-puppet-groupme.md) (optional)
- [Setting up MX Puppet Steam bridging](configuring-playbook-bridge-mx-puppet-steam.md) (optional)
- [Setting up Email2Matrix](configuring-playbook-email2matrix.md) (optional)


@@ -70,6 +70,8 @@ These services are not part of our default installation, but can be enabled by [
- [sorunome/mx-puppet-discord](https://hub.docker.com/r/sorunome/mx-puppet-discord) - the [mx-puppet-discord](https://github.com/matrix-discord/mx-puppet-discord) bridge to [Discord](https://discordapp.com) (optional)
- [xangelix/mx-puppet-groupme](https://hub.docker.com/r/xangelix/mx-puppet-groupme) - the [mx-puppet-groupme](https://gitlab.com/robintown/mx-puppet-groupme) bridge to [GroupMe](https://groupme.com/) (optional)
- [icewind1991/mx-puppet-steam](https://hub.docker.com/r/icewind1991/mx-puppet-steam) - the [mx-puppet-steam](https://github.com/icewind1991/mx-puppet-steam) bridge to [Steam](https://steampowered.com) (optional)
- [turt2live/matrix-dimension](https://hub.docker.com/r/turt2live/matrix-dimension) - the [Dimension](https://dimension.t2bot.io/) integrations manager (optional)


@@ -18,6 +18,10 @@
matrix_identity_server_url: "{{ ('https://' + matrix_server_fqn_matrix) if matrix_ma1sd_enabled else None }}"
# If Synapse workers are enabled and matrix-nginx-proxy is disabled, certain APIs may not work over 'http://matrix-synapse:8008'.
# This is because we explicitly disable them for the main Synapse process.
matrix_homeserver_container_url: "{{ 'http://matrix-nginx-proxy:12080' if matrix_nginx_proxy_enabled else 'http://matrix-synapse:8008' }}"
######################################################################
#
# /matrix-base
@@ -323,7 +327,7 @@ matrix_mautrix_signal_systemd_required_services_list: |
matrix_mautrix_signal_homeserver_domain: '{{ matrix_domain }}'
matrix_mautrix_signal_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mautrix_signal_homeserver_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'si.hs.token') | to_uuid }}"
@@ -662,6 +666,41 @@ matrix_mx_puppet_steam_database_password: "{{ matrix_synapse_macaroon_secret_key
#
######################################################################
######################################################################
#
# matrix-bridge-mx-puppet-groupme
#
######################################################################
# We don't enable bridges by default.
matrix_mx_puppet_groupme_enabled: false
matrix_mx_puppet_groupme_container_image_self_build: "{{ matrix_architecture != 'amd64'}}"
matrix_mx_puppet_groupme_systemd_required_services_list: |
{{
['docker.service']
+
(['matrix-synapse.service'] if matrix_synapse_enabled else [])
+
(['matrix-postgres.service'] if matrix_postgres_enabled else [])
}}
matrix_mx_puppet_groupme_appservice_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'mxgro.as.tok') | to_uuid }}"
matrix_mx_puppet_groupme_homeserver_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'mxgro.hs.tok') | to_uuid }}"
matrix_mx_puppet_groupme_login_shared_secret: "{{ matrix_synapse_ext_password_provider_shared_secret_auth_shared_secret if matrix_synapse_ext_password_provider_shared_secret_auth_enabled else '' }}"
# Postgres is the default, except if not using `matrix_postgres` (internal postgres)
matrix_mx_puppet_groupme_database_engine: "{{ 'postgres' if matrix_postgres_enabled else 'sqlite' }}"
matrix_mx_puppet_groupme_database_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'mxpup.groupme.db') | to_uuid }}"
######################################################################
#
# /matrix-bridge-mx-puppet-groupme
#
######################################################################
######################################################################
#
@@ -713,7 +752,8 @@ matrix_corporal_systemd_required_services_list: |
(['matrix-synapse.service'])
}}
# This goes to Synapse's vhost
matrix_corporal_matrix_homeserver_api_endpoint: "{{ matrix_homeserver_container_url }}"
matrix_corporal_matrix_auth_shared_secret: "{{ matrix_synapse_ext_password_provider_shared_secret_auth_shared_secret }}"
@@ -955,7 +995,7 @@ matrix_ma1sd_synapsesql_connection: //{{ matrix_synapse_database_host }}/{{ matr
matrix_ma1sd_dns_overwrite_enabled: true
matrix_ma1sd_dns_overwrite_homeserver_client_name: "{{ matrix_server_fqn_matrix }}"
matrix_ma1sd_dns_overwrite_homeserver_client_value: "http://{{ matrix_nginx_proxy_proxy_matrix_client_api_addr_with_container }}"
# By default, we send mail through the `matrix-mailer` service.
matrix_ma1sd_threepid_medium_email_identity_from: "{{ matrix_mailer_sender_address }}"
@@ -1002,8 +1042,8 @@ matrix_ma1sd_database_password: "{{ matrix_synapse_macaroon_secret_key | passwor
# If that's not the case, you may wish to disable this and take care of proxying yourself.
matrix_nginx_proxy_enabled: true
matrix_nginx_proxy_proxy_matrix_client_api_addr_with_container: "{{ 'matrix-corporal:41080' if matrix_corporal_enabled else 'matrix-nginx-proxy:12080' }}"
matrix_nginx_proxy_proxy_matrix_client_api_addr_sans_container: "{{ '127.0.0.1:41080' if matrix_corporal_enabled else '127.0.0.1:12080' }}"
matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb: "{{ matrix_synapse_max_upload_size_mb }}"
matrix_nginx_proxy_proxy_matrix_client_api_forwarded_location_synapse_admin_api_enabled: "{{ matrix_synapse_admin_enabled }}"
@@ -1027,8 +1067,12 @@ matrix_nginx_proxy_proxy_matrix_identity_api_addr_sans_container: "127.0.0.1:809
# By default, we do TLS termination for the Matrix Federation API (port 8448) at matrix-nginx-proxy.
# Unless this is handled there OR Synapse's federation listener port is disabled, we'll reverse-proxy.
matrix_nginx_proxy_proxy_matrix_federation_api_enabled: "{{ matrix_synapse_federation_port_enabled and not matrix_synapse_tls_federation_listener_enabled }}"
matrix_nginx_proxy_proxy_matrix_federation_api_addr_with_container: "matrix-nginx-proxy:12088"
matrix_nginx_proxy_proxy_matrix_federation_api_addr_sans_container: "127.0.0.1:12088"
# Settings controlling matrix-synapse-proxy.conf
matrix_nginx_proxy_proxy_synapse_enabled: "{{ matrix_synapse_enabled }}"
matrix_nginx_proxy_proxy_synapse_federation_api_enabled: "{{ matrix_nginx_proxy_proxy_matrix_federation_api_enabled }}"
matrix_nginx_proxy_container_federation_host_bind_port: "{{ matrix_federation_public_port }}"
@@ -1045,6 +1089,16 @@ matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_sans_container: "{{ m
matrix_nginx_proxy_self_check_validate_certificates: "{{ false if matrix_ssl_retrieval_method == 'self-signed' else true }}"
matrix_nginx_proxy_synapse_presence_disabled: "{{ not matrix_synapse_use_presence }}"
matrix_nginx_proxy_synapse_workers_enabled: "{{ matrix_synapse_workers_enabled }}"
matrix_nginx_proxy_synapse_workers_list: "{{ matrix_synapse_workers_enabled_list }}"
matrix_nginx_proxy_synapse_generic_worker_client_server_locations: "{{ matrix_synapse_workers_generic_worker_client_server_endpoints }}"
matrix_nginx_proxy_synapse_generic_worker_federation_locations: "{{ matrix_synapse_workers_generic_worker_federation_endpoints }}"
matrix_nginx_proxy_synapse_media_repository_locations: "{{matrix_synapse_workers_media_repository_endpoints|default([]) }}"
matrix_nginx_proxy_synapse_user_dir_locations: "{{ matrix_synapse_workers_user_dir_endpoints|default([]) }}"
matrix_nginx_proxy_synapse_frontend_proxy_locations: "{{ matrix_synapse_workers_frontend_proxy_endpoints|default([]) }}"
matrix_nginx_proxy_systemd_wanted_services_list: |
{{
(['matrix-synapse.service'])
@@ -1225,6 +1279,12 @@ matrix_postgres_additional_databases: |
'password': matrix_mx_puppet_steam_database_password,
}] if (matrix_mx_puppet_steam_enabled and matrix_mx_puppet_steam_database_engine == 'postgres' and matrix_mx_puppet_steam_database_hostname == 'matrix-postgres') else [])
+
([{
'name': matrix_mx_puppet_groupme_database_name,
'username': matrix_mx_puppet_groupme_database_username,
'password': matrix_mx_puppet_groupme_database_password,
}] if (matrix_mx_puppet_groupme_enabled and matrix_mx_puppet_groupme_database_engine == 'postgres' and matrix_mx_puppet_groupme_database_hostname == 'matrix-postgres') else [])
+
([{
'name': matrix_dimension_database_name,
'username': matrix_dimension_database_username,
@@ -1260,6 +1320,22 @@ matrix_postgres_import_databases_to_ignore: |
######################################################################
#
# matrix-redis
#
######################################################################
matrix_redis_enabled: "{{ matrix_synapse_workers_enabled }}"
######################################################################
#
# /matrix-redis
#
######################################################################
######################################################################
#
# matrix-client-element
@@ -1340,6 +1416,9 @@ matrix_synapse_container_metrics_api_host_bind_port: "{{ '127.0.0.1:9100' if (ma
#
# For exposing the Synapse Manhole port (plain HTTP) to the local host.
matrix_synapse_container_manhole_api_host_bind_port: "{{ '127.0.0.1:9000' if matrix_synapse_manhole_enabled else '' }}"
#
# For exposing the Synapse worker (and metrics) ports to the local host.
matrix_synapse_workers_container_host_bind_address: "{{ '127.0.0.1' if (matrix_synapse_workers_enabled and not matrix_nginx_proxy_enabled) else '' }}"
matrix_synapse_database_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'synapse.db') | to_uuid }}"
@@ -1394,6 +1473,11 @@ matrix_synapse_systemd_wanted_services_list: |
(['matrix-mailer.service'] if matrix_mailer_enabled else [])
}}
# Synapse workers (used for parallel load-scaling) need Redis for IPC.
matrix_synapse_redis_enabled: "{{ matrix_redis_enabled }}"
matrix_synapse_redis_host: "{{ 'matrix-redis' if matrix_redis_enabled else '' }}"
matrix_synapse_redis_password: "{{ matrix_redis_connection_password if matrix_redis_enabled else '' }}"
######################################################################
#
# /matrix-synapse
@@ -1511,7 +1595,7 @@ matrix_registration_riot_instance: "{{ ('https://' + matrix_server_fqn_element)
matrix_registration_shared_secret: "{{ matrix_synapse_registration_shared_secret if matrix_synapse_enabled else '' }}"
matrix_registration_server_location: "{{ matrix_homeserver_container_url }}"
matrix_registration_api_validate_certs: "{{ false if matrix_ssl_retrieval_method == 'self-signed' else true }}"


@@ -76,6 +76,11 @@ matrix_ntpd_service: "{{ 'ntpd' if ansible_os_family == 'RedHat' or ansible_dist
matrix_homeserver_url: "https://{{ matrix_server_fqn_matrix }}"
# Specifies where the homeserver is on the container network.
# Where this is depends on whether there's a reverse-proxy in front of it, etc.
# This likely gets overridden elsewhere.
matrix_homeserver_container_url: "http://matrix-synapse:8008"
matrix_identity_server_url: ~
matrix_integration_manager_rest_url: ~


@@ -15,11 +15,14 @@ if [ "$sure" != "Yes, I really want to remove everything!" ]; then
exit 0
else
echo "Stop and remove matrix services"
for s in $(find {{ matrix_systemd_path }}/ -type f -name "matrix-*" -printf "%f\n"); do
systemctl disable --now $s
rm -f {{ matrix_systemd_path }}/$s
done
systemctl daemon-reload
echo "Remove matrix scripts"
find {{ matrix_local_bin_path }}/ -name "matrix-*" -delete
echo "Remove unused Docker images and resources"


@@ -58,7 +58,7 @@ matrix_bot_matrix_reminder_bot_matrix_user_id: '@{{ matrix_bot_matrix_reminder_b
# The password that the bot uses to authenticate.
matrix_bot_matrix_reminder_bot_matrix_user_password: ''
matrix_bot_matrix_reminder_bot_matrix_homeserver_url: "{{ matrix_homeserver_container_url }}"
# The timezone to use when creating reminders.
# Examples: 'Europe/London', 'Etc/UTC'


@@ -7,7 +7,7 @@
- name: Ensure matrix-matrix-reminder-bot is stopped
service:
name: matrix-bot-matrix-reminder-bot
state: stopped
daemon_reload: yes
register: stopping_result


@@ -14,7 +14,7 @@ matrix_appservice_irc_base_path: "{{ matrix_base_data_path }}/appservice-irc"
matrix_appservice_irc_config_path: "{{ matrix_appservice_irc_base_path }}/config"
matrix_appservice_irc_data_path: "{{ matrix_appservice_irc_base_path }}/data"
matrix_appservice_irc_homeserver_url: "{{ matrix_homeserver_container_url }}"
matrix_appservice_irc_homeserver_media_url: 'https://{{ matrix_server_fqn_matrix }}'
matrix_appservice_irc_homeserver_domain: '{{ matrix_domain }}'
matrix_appservice_irc_homeserver_enablePresence: true


@@ -16,7 +16,7 @@ matrix_mautrix_facebook_config_path: "{{ matrix_mautrix_facebook_base_path }}/co
matrix_mautrix_facebook_data_path: "{{ matrix_mautrix_facebook_base_path }}/data"
matrix_mautrix_facebook_docker_src_files_path: "{{ matrix_mautrix_facebook_base_path }}/docker-src"
matrix_mautrix_facebook_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mautrix_facebook_homeserver_domain: '{{ matrix_domain }}'
matrix_mautrix_facebook_appservice_address: 'http://matrix-mautrix-facebook:29319'


@@ -18,7 +18,7 @@ matrix_mautrix_hangouts_docker_src_files_path: "{{ matrix_mautrix_hangouts_base_
matrix_mautrix_hangouts_public_endpoint: '/mautrix-hangouts'
matrix_mautrix_hangouts_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mautrix_hangouts_homeserver_domain: '{{ matrix_domain }}'
matrix_mautrix_hangouts_appservice_address: 'http://matrix-mautrix-hangouts:8080'


@@ -25,7 +25,7 @@ matrix_mautrix_telegram_bot_token: disabled
# Example: /741a0483-ba17-4682-9900-30bd7269f1cc
matrix_mautrix_telegram_public_endpoint: ''
matrix_mautrix_telegram_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mautrix_telegram_homeserver_domain: '{{ matrix_domain }}'
matrix_mautrix_telegram_appservice_address: 'http://matrix-mautrix-telegram:8080'
matrix_mautrix_telegram_appservice_public_external: 'https://{{ matrix_server_fqn_matrix }}{{ matrix_mautrix_telegram_public_endpoint }}'


@@ -22,7 +22,7 @@ matrix_mx_puppet_discord_docker_src_files_path: "{{ matrix_mx_puppet_discord_bas
matrix_mx_puppet_discord_appservice_port: "8432"
matrix_mx_puppet_discord_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mx_puppet_discord_homeserver_domain: '{{ matrix_domain }}'
matrix_mx_puppet_discord_appservice_address: 'http://matrix-mx-puppet-discord:{{ matrix_mx_puppet_discord_appservice_port }}'


@@ -0,0 +1,110 @@
# Mx Puppet GroupMe is a Matrix <-> GroupMe bridge
# See: https://gitlab.com/robintown/mx-puppet-groupme
matrix_mx_puppet_groupme_enabled: true
matrix_mx_puppet_groupme_container_image_self_build: false
matrix_mx_puppet_groupme_container_image_self_build_repo: "https://gitlab.com/robintown/mx-puppet-groupme"
# Controls whether the mx-puppet-groupme container exposes its HTTP port (tcp/8437 in the container).
#
# Takes an "<ip>:<port>" or "<port>" value (e.g. "127.0.0.1:8437"), or empty string to not expose.
matrix_mx_puppet_groupme_container_http_host_bind_port: ''
matrix_mx_puppet_groupme_docker_image: "{{ matrix_mx_puppet_groupme_docker_image_name_prefix }}xangelix/mx-puppet-groupme:latest"
matrix_mx_puppet_groupme_docker_image_name_prefix: "{{ 'localhost/' if matrix_mx_puppet_groupme_container_image_self_build else 'docker.io/' }}"
matrix_mx_puppet_groupme_docker_image_force_pull: "{{ matrix_mx_puppet_groupme_docker_image.endswith(':latest') }}"
matrix_mx_puppet_groupme_base_path: "{{ matrix_base_data_path }}/mx-puppet-groupme"
matrix_mx_puppet_groupme_config_path: "{{ matrix_mx_puppet_groupme_base_path }}/config"
matrix_mx_puppet_groupme_data_path: "{{ matrix_mx_puppet_groupme_base_path }}/data"
matrix_mx_puppet_groupme_docker_src_files_path: "{{ matrix_mx_puppet_groupme_base_path }}/docker-src"
matrix_mx_puppet_groupme_appservice_port: "8437"
matrix_mx_puppet_groupme_homeserver_address: 'http://matrix-synapse:8008'
matrix_mx_puppet_groupme_homeserver_domain: '{{ matrix_domain }}'
matrix_mx_puppet_groupme_appservice_address: 'http://matrix-mx-puppet-groupme:{{ matrix_mx_puppet_groupme_appservice_port }}'
matrix_mx_puppet_groupme_client_id: ''
matrix_mx_puppet_groupme_client_secret: ''
# "@user:server.com" to allow specific user
# "@.*:yourserver.com" to allow users on a specific homeserver
# "@.*" to allow anyone
matrix_mx_puppet_groupme_provisioning_whitelist:
- "@.*:{{ matrix_domain|regex_escape }}"
# Leave empty to disable blacklist
# "@user:server.com" disallow a specific user
# "@.*:yourserver.com" disallow users on a specific homeserver
matrix_mx_puppet_groupme_provisioning_blacklist: []
# A list of extra arguments to pass to the container
matrix_mx_puppet_groupme_container_extra_arguments: []
# List of systemd services that matrix-mx-puppet-groupme.service depends on.
matrix_mx_puppet_groupme_systemd_required_services_list: ['docker.service']
# List of systemd services that matrix-mx-puppet-groupme.service wants.
matrix_mx_puppet_groupme_systemd_wanted_services_list: []
matrix_mx_puppet_groupme_appservice_token: ''
matrix_mx_puppet_groupme_homeserver_token: ''
# Can be set to enable automatic double-puppeting via Shared Secret Auth (https://github.com/devture/matrix-synapse-shared-secret-auth).
matrix_mx_puppet_groupme_login_shared_secret: ''
matrix_mx_puppet_groupme_database_engine: sqlite
matrix_mx_puppet_groupme_sqlite_database_path_local: "{{ matrix_mx_puppet_groupme_data_path }}/database.db"
matrix_mx_puppet_groupme_sqlite_database_path_in_container: "/data/database.db"
matrix_mx_puppet_groupme_database_username: matrix_mx_puppet_groupme
matrix_mx_puppet_groupme_database_password: ~
matrix_mx_puppet_groupme_database_hostname: 'matrix-postgres'
matrix_mx_puppet_groupme_database_port: 5432
matrix_mx_puppet_groupme_database_name: matrix_mx_puppet_groupme
matrix_mx_puppet_groupme_database_connection_string: 'postgresql://{{ matrix_mx_puppet_groupme_database_username }}:{{ matrix_mx_puppet_groupme_database_password }}@{{ matrix_mx_puppet_groupme_database_hostname }}:{{ matrix_mx_puppet_groupme_database_port }}/{{ matrix_mx_puppet_groupme_database_name }}?sslmode=disable'
# Default configuration template which covers the generic use case.
# You can customize it by controlling the various variables inside it.
#
# For a more advanced customization, you can extend the default (see `matrix_mx_puppet_groupme_configuration_extension_yaml`)
# or completely replace this variable with your own template.
matrix_mx_puppet_groupme_configuration_yaml: "{{ lookup('template', 'templates/config.yaml.j2') }}"
matrix_mx_puppet_groupme_configuration_extension_yaml: |
# Your custom YAML configuration goes here.
# This configuration extends the default starting configuration (`matrix_mx_puppet_groupme_configuration_yaml`).
#
# You can override individual variables from the default configuration, or introduce new ones.
#
# If you need something more special, you can take full control by
# completely redefining `matrix_mx_puppet_groupme_configuration_yaml`.
matrix_mx_puppet_groupme_configuration_extension: "{{ matrix_mx_puppet_groupme_configuration_extension_yaml|from_yaml if matrix_mx_puppet_groupme_configuration_extension_yaml|from_yaml is mapping else {} }}"
# Holds the final configuration (a combination of the default and its extension).
# You most likely don't need to touch this variable. Instead, see `matrix_mx_puppet_groupme_configuration_yaml`.
matrix_mx_puppet_groupme_configuration: "{{ matrix_mx_puppet_groupme_configuration_yaml|from_yaml|combine(matrix_mx_puppet_groupme_configuration_extension, recursive=True) }}"
matrix_mx_puppet_groupme_registration_yaml: |
as_token: "{{ matrix_mx_puppet_groupme_appservice_token }}"
hs_token: "{{ matrix_mx_puppet_groupme_homeserver_token }}"
id: groupme-puppet
namespaces:
users:
- exclusive: true
regex: '@_groupmepuppet_.*:{{ matrix_mx_puppet_groupme_homeserver_domain|regex_escape }}'
rooms: []
aliases:
- exclusive: true
regex: '#_groupmepuppet_.*:{{ matrix_mx_puppet_groupme_homeserver_domain|regex_escape }}'
protocols: []
rate_limited: false
sender_localpart: _groupmepuppet_bot
url: {{ matrix_mx_puppet_groupme_appservice_address }}
matrix_mx_puppet_groupme_registration: "{{ matrix_mx_puppet_groupme_registration_yaml|from_yaml }}"


@@ -0,0 +1,23 @@
- set_fact:
matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-mx-puppet-groupme.service'] }}"
when: matrix_mx_puppet_groupme_enabled|bool
# If the matrix-synapse role is not used, these variables may not exist.
- set_fact:
matrix_synapse_container_extra_arguments: >
{{ matrix_synapse_container_extra_arguments|default([]) }}
+
["--mount type=bind,src={{ matrix_mx_puppet_groupme_config_path }}/registration.yaml,dst=/matrix-mx-puppet-groupme-registration.yaml,ro"]
matrix_synapse_app_service_config_files: >
{{ matrix_synapse_app_service_config_files|default([]) }}
+
{{ ["/matrix-mx-puppet-groupme-registration.yaml"] }}
when: matrix_mx_puppet_groupme_enabled|bool
# Ansible lower than 2.8 does not support docker_image build parameters.
# Self-building explicitly needs them, so we rather fail here.
- name: Fail if running on Ansible lower than 2.8 and trying self building
fail:
msg: "To self build Puppet Slack image, you should usa ansible 2.8 or higher. E.g. pip contains such packages."
when: "ansible_version.major == 2 and ansible_version.minor < 8 and matrix_mx_puppet_groupme_container_image_self_build"


@@ -0,0 +1,21 @@
- import_tasks: "{{ role_path }}/tasks/init.yml"
tags:
- always
- import_tasks: "{{ role_path }}/tasks/validate_config.yml"
when: "run_setup|bool and matrix_mx_puppet_groupme_enabled|bool"
tags:
- setup-all
- setup-mx-puppet-groupme
- import_tasks: "{{ role_path }}/tasks/setup_install.yml"
when: "run_setup|bool and matrix_mx_puppet_groupme_enabled|bool"
tags:
- setup-all
- setup-mx-puppet-groupme
- import_tasks: "{{ role_path }}/tasks/setup_uninstall.yml"
when: "run_setup|bool and not matrix_mx_puppet_groupme_enabled|bool"
tags:
- setup-all
- setup-mx-puppet-groupme


@@ -0,0 +1,127 @@
---
# If the matrix-synapse role is not used, `matrix_synapse_role_executed` won't exist.
# We don't want to fail in such cases.
- name: Fail if matrix-synapse role already executed
fail:
msg: >-
The matrix-bridge-mx-puppet-groupme role needs to execute before the matrix-synapse role.
when: "matrix_synapse_role_executed|default(False)"
- name: Ensure MX Puppet Groupme paths exist
file:
path: "{{ item.path }}"
state: directory
mode: 0750
owner: "{{ matrix_user_username }}"
group: "{{ matrix_user_groupname }}"
with_items:
- { path: "{{ matrix_mx_puppet_groupme_base_path }}", when: true }
- { path: "{{ matrix_mx_puppet_groupme_config_path }}", when: true }
- { path: "{{ matrix_mx_puppet_groupme_data_path }}", when: true }
- { path: "{{ matrix_mx_puppet_groupme_docker_src_files_path }}", when: "{{ matrix_mx_puppet_groupme_container_image_self_build }}" }
when: matrix_mx_puppet_groupme_enabled|bool and item.when|bool
- name: Check if an old database file already exists
stat:
path: "{{ matrix_mx_puppet_groupme_base_path }}/database.db"
register: matrix_mx_puppet_groupme_stat_database
- name: (Data relocation) Ensure matrix-mx-puppet-groupme.service is stopped
service:
name: matrix-mx-puppet-groupme
state: stopped
daemon_reload: yes
failed_when: false
when: "matrix_mx_puppet_groupme_stat_database.stat.exists"
- name: (Data relocation) Move mx-puppet-groupme database file to ./data directory
command: "mv {{ matrix_mx_puppet_groupme_base_path }}/database.db {{ matrix_mx_puppet_groupme_data_path }}/database.db"
when: "matrix_mx_puppet_groupme_stat_database.stat.exists"
- set_fact:
matrix_mx_puppet_groupme_requires_restart: false
- block:
- name: Check if an SQLite database already exists
stat:
path: "{{ matrix_mx_puppet_groupme_sqlite_database_path_local }}"
register: matrix_mx_puppet_groupme_sqlite_database_path_local_stat_result
- block:
- set_fact:
matrix_postgres_db_migration_request:
src: "{{ matrix_mx_puppet_groupme_sqlite_database_path_local }}"
dst: "{{ matrix_mx_puppet_groupme_database_connection_string }}"
caller: "{{ role_path|basename }}"
engine_variable_name: 'matrix_mx_puppet_groupme_database_engine'
engine_old: 'sqlite'
systemd_services_to_stop: ['matrix-mx-puppet-groupme.service']
- import_tasks: "{{ role_path }}/../matrix-postgres/tasks/util/migrate_db_to_postgres.yml"
- set_fact:
matrix_mx_puppet_groupme_requires_restart: true
when: "matrix_mx_puppet_groupme_sqlite_database_path_local_stat_result.stat.exists|bool"
when: "matrix_mx_puppet_groupme_database_engine == 'postgres'"
- name: Ensure MX Puppet Groupme image is pulled
docker_image:
name: "{{ matrix_mx_puppet_groupme_docker_image }}"
source: "{{ 'pull' if ansible_version.major > 2 or ansible_version.minor > 7 else omit }}"
force_source: "{{ matrix_mx_puppet_groupme_docker_image_force_pull if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_mx_puppet_groupme_docker_image_force_pull }}"
when: matrix_mx_puppet_groupme_enabled|bool and not matrix_mx_puppet_groupme_container_image_self_build
- name: Ensure MX Puppet Groupme repository is present on self build
git:
repo: "{{ matrix_mx_puppet_groupme_container_image_self_build_repo }}"
dest: "{{ matrix_mx_puppet_groupme_docker_src_files_path }}"
force: "yes"
register: matrix_mx_puppet_groupme_git_pull_results
when: "matrix_mx_puppet_groupme_enabled|bool and matrix_mx_puppet_groupme_container_image_self_build"
- name: Ensure MX Puppet Groupme Docker image is built
docker_image:
name: "{{ matrix_mx_puppet_groupme_docker_image }}"
source: build
force_source: "{{ matrix_mx_puppet_groupme_git_pull_results.changed }}"
build:
dockerfile: Dockerfile
path: "{{ matrix_mx_puppet_groupme_docker_src_files_path }}"
pull: yes
when: "matrix_mx_puppet_groupme_enabled|bool and matrix_mx_puppet_groupme_container_image_self_build"
- name: Ensure mx-puppet-groupme config.yaml installed
copy:
content: "{{ matrix_mx_puppet_groupme_configuration|to_nice_yaml }}"
dest: "{{ matrix_mx_puppet_groupme_config_path }}/config.yaml"
mode: 0644
owner: "{{ matrix_user_username }}"
group: "{{ matrix_user_groupname }}"
- name: Ensure mx-puppet-groupme groupme-registration.yaml installed
copy:
content: "{{ matrix_mx_puppet_groupme_registration|to_nice_yaml }}"
dest: "{{ matrix_mx_puppet_groupme_config_path }}/registration.yaml"
mode: 0644
owner: "{{ matrix_user_username }}"
group: "{{ matrix_user_groupname }}"
- name: Ensure matrix-mx-puppet-groupme.service installed
template:
src: "{{ role_path }}/templates/systemd/matrix-mx-puppet-groupme.service.j2"
dest: "/etc/systemd/system/matrix-mx-puppet-groupme.service"
mode: 0644
register: matrix_mx_puppet_groupme_systemd_service_result
- name: Ensure systemd reloaded after matrix-mx-puppet-groupme.service installation
service:
daemon_reload: yes
when: "matrix_mx_puppet_groupme_systemd_service_result.changed"
- name: Ensure matrix-mx-puppet-groupme.service restarted, if necessary
service:
name: "matrix-mx-puppet-groupme.service"
state: restarted
when: "matrix_mx_puppet_groupme_requires_restart|bool"


@@ -0,0 +1,24 @@
---
- name: Check existence of matrix-mx-puppet-groupme service
stat:
path: "/etc/systemd/system/matrix-mx-puppet-groupme.service"
register: matrix_mx_puppet_groupme_service_stat
- name: Ensure matrix-mx-puppet-groupme is stopped
service:
name: matrix-mx-puppet-groupme
state: stopped
daemon_reload: yes
when: "matrix_mx_puppet_groupme_service_stat.stat.exists"
- name: Ensure matrix-mx-puppet-groupme.service doesn't exist
file:
path: "/etc/systemd/system/matrix-mx-puppet-groupme.service"
state: absent
when: "matrix_mx_puppet_groupme_service_stat.stat.exists"
- name: Ensure systemd reloaded after matrix-mx-puppet-groupme.service removal
service:
daemon_reload: yes
when: "matrix_mx_puppet_groupme_service_stat.stat.exists"


@@ -0,0 +1,10 @@
---
- name: Fail if required settings not defined
fail:
msg: >-
You need to define a required configuration setting (`{{ item }}`).
when: "vars[item] == ''"
with_items:
- "matrix_mx_puppet_groupme_appservice_token"
- "matrix_mx_puppet_groupme_homeserver_token"


@@ -0,0 +1,86 @@
#jinja2: lstrip_blocks: "True"
bridge:
# Port to host the bridge on
# Used for communication between the homeserver and the bridge
port: {{ matrix_mx_puppet_groupme_appservice_port }}
# The host connections to the bridge's webserver are allowed from
bindAddress: 0.0.0.0
# Public domain of the homeserver
domain: {{ matrix_mx_puppet_groupme_homeserver_domain }}
# Reachable URL of the Matrix homeserver
homeserverUrl: {{ matrix_mx_puppet_groupme_homeserver_address }}
{% if matrix_mx_puppet_groupme_login_shared_secret != '' %}
loginSharedSecretMap:
{{ matrix_domain }}: {{ matrix_mx_puppet_groupme_login_shared_secret }}
{% endif %}
# Display name of the bridge bot
displayname: GroupMe Puppet Bridge
# Optionally specify a different media URL used for the media store
#
# This is where GroupMe will download user profile pictures and media
# from
#mediaUrl: https://external-url.org
presence:
# Bridge GroupMe online/offline status
enabled: true
# How often to send status to the homeserver in milliseconds
interval: 5000
provisioning:
# Regex of Matrix IDs allowed to use the puppet bridge
whitelist: {{ matrix_mx_puppet_groupme_provisioning_whitelist|to_json }}
# Allow a specific user
#- "@user:server\\.com"
# Allow users on a specific homeserver
#- "@.*:yourserver\\.com"
# Allow anyone
#- ".*"
# Regex of Matrix IDs forbidden from using the puppet bridge
#blacklist:
# Disallow a specific user
#- "@user:server\\.com"
# Disallow users on a specific homeserver
#- "@.*:yourserver\\.com"
blacklist: {{ matrix_mx_puppet_groupme_provisioning_blacklist|to_json }}
relay:
# Regex of Matrix IDs who are allowed to use the bridge in relay mode.
# Relay mode is when a single GroupMe bot account relays messages of
# multiple Matrix users
#
# Same format as in provisioning
whitelist: {{ matrix_mx_puppet_groupme_provisioning_whitelist|to_json }}
blacklist: {{ matrix_mx_puppet_groupme_provisioning_blacklist|to_json }}
selfService:
# Regex of Matrix IDs who are allowed to use bridge self-servicing (plumbed rooms)
#
# Same format as in provisioning
whitelist: {{ matrix_mx_puppet_groupme_provisioning_whitelist|to_json }}
blacklist: {{ matrix_mx_puppet_groupme_provisioning_blacklist|to_json }}
database:
{% if matrix_mx_puppet_groupme_database_engine == 'postgres' %}
# Use Postgres as a database backend
# If set, will be used instead of SQLite3
# Connection string to connect to the Postgres instance
# with username "user", password "pass", host "localhost" and database name "dbname".
# Modify each value as necessary
connString: {{ matrix_mx_puppet_groupme_database_connection_string|to_json }}
{% else %}
# Use SQLite3 as a database backend
# The name of the database file
filename: {{ matrix_mx_puppet_groupme_sqlite_database_path_in_container|to_json }}
{% endif %}
logging:
# Log level of console output
# Allowed values starting with most verbose:
# silly, debug, verbose, info, warn, error
console: info
# Date and time formatting
lineDateFormat: MMM-D HH:mm:ss.SSS
# Logging files
# Log files are rotated daily by default
files: []


@@ -0,0 +1,43 @@
#jinja2: lstrip_blocks: "True"
[Unit]
Description=Matrix Mx Puppet Groupme bridge
{% for service in matrix_mx_puppet_groupme_systemd_required_services_list %}
Requires={{ service }}
After={{ service }}
{% endfor %}
{% for service in matrix_mx_puppet_groupme_systemd_wanted_services_list %}
Wants={{ service }}
{% endfor %}
DefaultDependencies=no
[Service]
Type=simple
Environment="HOME={{ matrix_systemd_unit_home_path }}"
ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-mx-puppet-groupme 2>/dev/null'
ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-mx-puppet-groupme 2>/dev/null'
# Intentional delay, so that the homeserver (we likely depend on) can manage to start.
ExecStartPre={{ matrix_host_command_sleep }} 5
ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-mx-puppet-groupme \
--log-driver=none \
--user={{ matrix_user_uid }}:{{ matrix_user_gid }} \
--cap-drop=ALL \
--network={{ matrix_docker_network }} \
-e CONFIG_PATH=/config/config.yaml \
-e REGISTRATION_PATH=/config/registration.yaml \
-v {{ matrix_mx_puppet_groupme_config_path }}:/config:z \
-v {{ matrix_mx_puppet_groupme_data_path }}:/data:z \
{% for arg in matrix_mx_puppet_groupme_container_extra_arguments %}
{{ arg }} \
{% endfor %}
{{ matrix_mx_puppet_groupme_docker_image }}
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-mx-puppet-groupme 2>/dev/null'
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-mx-puppet-groupme 2>/dev/null'
Restart=always
RestartSec=30
SyslogIdentifier=matrix-mx-puppet-groupme
[Install]
WantedBy=multi-user.target


@@ -16,7 +16,7 @@ matrix_mx_puppet_instagram_data_path: "{{ matrix_mx_puppet_instagram_base_path }
matrix_mx_puppet_instagram_docker_src_files_path: "{{ matrix_mx_puppet_instagram_base_path }}/docker-src"
matrix_mx_puppet_instagram_appservice_port: "8440"
matrix_mx_puppet_instagram_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mx_puppet_instagram_homeserver_domain: '{{ matrix_domain }}'
matrix_mx_puppet_instagram_appservice_address: 'http://matrix-mx-puppet-instagram:{{ matrix_mx_puppet_instagram_appservice_port }}'


@@ -17,7 +17,7 @@ matrix_mx_puppet_skype_docker_src_files_path: "{{ matrix_mx_puppet_skype_base_pa
matrix_mx_puppet_skype_appservice_port: "8438"
matrix_mx_puppet_skype_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mx_puppet_skype_appservice_address: 'http://matrix-mx-puppet-skype:{{ matrix_mx_puppet_skype_appservice_port }}'
# "@user:server.com" to allow specific user


@@ -22,7 +22,7 @@ matrix_mx_puppet_slack_docker_src_files_path: "{{ matrix_mx_puppet_slack_base_pa
matrix_mx_puppet_slack_appservice_port: "8432"
matrix_mx_puppet_slack_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mx_puppet_slack_homeserver_domain: '{{ matrix_domain }}'
matrix_mx_puppet_slack_appservice_address: 'http://matrix-mx-puppet-slack:{{ matrix_mx_puppet_slack_appservice_port }}'


@@ -22,7 +22,7 @@ matrix_mx_puppet_steam_docker_src_files_path: "{{ matrix_mx_puppet_steam_base_pa
matrix_mx_puppet_steam_appservice_port: "8432"
matrix_mx_puppet_steam_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mx_puppet_steam_homeserver_domain: '{{ matrix_domain }}'
matrix_mx_puppet_steam_appservice_address: 'http://matrix-mx-puppet-steam:{{ matrix_mx_puppet_steam_appservice_port }}'


@@ -22,7 +22,7 @@ matrix_mx_puppet_twitter_docker_src_files_path: "{{ matrix_mx_puppet_twitter_bas
matrix_mx_puppet_twitter_appservice_port: "8432"
matrix_mx_puppet_twitter_homeserver_address: "{{ matrix_homeserver_container_url }}"
matrix_mx_puppet_twitter_homeserver_domain: '{{ matrix_domain }}'
matrix_mx_puppet_twitter_appservice_address: 'http://matrix-mx-puppet-twitter:{{ matrix_mx_puppet_twitter_appservice_port }}'


@@ -13,7 +13,7 @@ homeserver:
# The URL that Dimension, go-neb, and other services provisioned by Dimension should
# use to access the homeserver with.
clientServerUrl: "{{ matrix_homeserver_container_url }}"
# The URL that Dimension should use when trying to communicate with federated APIs on
# the homeserver. If not supplied or left empty Dimension will try to resolve the address


@@ -99,6 +99,10 @@ matrix_nginx_proxy_access_log_enabled: true
matrix_nginx_proxy_proxy_riot_compat_redirect_enabled: false
matrix_nginx_proxy_proxy_riot_compat_redirect_hostname: "riot.{{ matrix_domain }}"
# Controls whether proxying the Synapse domain should be done.
matrix_nginx_proxy_proxy_synapse_enabled: false
matrix_nginx_proxy_proxy_synapse_hostname: "matrix-nginx-proxy"
# Controls whether proxying the Element domain should be done.
matrix_nginx_proxy_proxy_element_enabled: false
matrix_nginx_proxy_proxy_element_hostname: "{{ matrix_server_fqn_element }}"
@@ -150,8 +154,13 @@ matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key: ""
# The addresses where the Matrix Client API is.
# Certain extensions (like matrix-corporal) may override this in order to capture all traffic.
matrix_nginx_proxy_proxy_matrix_client_api_addr_with_container: "matrix-nginx-proxy:12080"
matrix_nginx_proxy_proxy_matrix_client_api_addr_sans_container: "127.0.0.1:12080"
# The addresses where the Matrix Client API is, when using Synapse.
matrix_nginx_proxy_proxy_synapse_client_api_addr_with_container: "matrix-synapse:8008"
matrix_nginx_proxy_proxy_synapse_client_api_addr_sans_container: "127.0.0.1:8008"
# This needs to be equal or higher than the maximum upload size accepted by Synapse.
matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb: 50
@@ -189,37 +198,44 @@ matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain: ""
# Controls whether proxying for the Matrix Federation API should be done.
matrix_nginx_proxy_proxy_matrix_federation_api_enabled: false
matrix_nginx_proxy_proxy_matrix_federation_api_addr_with_container: "matrix-nginx-proxy:12088"
matrix_nginx_proxy_proxy_matrix_federation_api_addr_sans_container: "localhost:12088"
matrix_nginx_proxy_proxy_matrix_federation_api_client_max_body_size_mb: "{{ (matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb | int) * 3 }}"
matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate: "{{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/fullchain.pem"
matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate_key: "{{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/privkey.pem"
# The addresses where the Federation API is, when using Synapse.
matrix_nginx_proxy_proxy_synapse_federation_api_addr_with_container: "matrix-synapse:8048"
matrix_nginx_proxy_proxy_synapse_federation_api_addr_sans_container: "localhost:8048"
# The tmpfs at /tmp needs to be large enough to handle multiple concurrent file uploads. # The tmpfs at /tmp needs to be large enough to handle multiple concurrent file uploads.
matrix_nginx_proxy_tmp_directory_size_mb: "{{ (matrix_nginx_proxy_proxy_matrix_federation_api_client_max_body_size_mb | int) * 50 }}" matrix_nginx_proxy_tmp_directory_size_mb: "{{ (matrix_nginx_proxy_proxy_matrix_federation_api_client_max_body_size_mb | int) * 50 }}"
# A list of strings containing additional configuration blocks to add to the nginx http's server configuration. # A list of strings containing additional configuration blocks to add to the nginx http's server configuration (nginx-http.conf).
matrix_nginx_proxy_proxy_http_additional_server_configuration_blocks: [] matrix_nginx_proxy_proxy_http_additional_server_configuration_blocks: []
# A list of strings containing additional configuration blocks to add to the matrix synapse's server configuration. # A list of strings containing additional configuration blocks to add to the base matrix server configuration (matrix-domain.conf).
matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks: [] matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks: []
# A list of strings containing additional configuration blocks to add to Riot's server configuration. # A list of strings containing additional configuration blocks to add to the synapse's server configuration (matrix-synapse.conf).
matrix_nginx_proxy_proxy_synapse_additional_server_configuration_blocks: []
# A list of strings containing additional configuration blocks to add to Riot's server configuration (matrix-riot-web.conf).
matrix_nginx_proxy_proxy_riot_additional_server_configuration_blocks: [] matrix_nginx_proxy_proxy_riot_additional_server_configuration_blocks: []
# A list of strings containing additional configuration blocks to add to Element's server configuration. # A list of strings containing additional configuration blocks to add to Element's server configuration (matrix-client-element.conf).
matrix_nginx_proxy_proxy_element_additional_server_configuration_blocks: [] matrix_nginx_proxy_proxy_element_additional_server_configuration_blocks: []
# A list of strings containing additional configuration blocks to add to Dimension's server configuration. # A list of strings containing additional configuration blocks to add to Dimension's server configuration (matrix-dimension.conf).
matrix_nginx_proxy_proxy_dimension_additional_server_configuration_blocks: [] matrix_nginx_proxy_proxy_dimension_additional_server_configuration_blocks: []
# A list of strings containing additional configuration blocks to add to Jitsi's server configuration. # A list of strings containing additional configuration blocks to add to Jitsi's server configuration (matrix-jitsi.conf).
matrix_nginx_proxy_proxy_jitsi_additional_server_configuration_blocks: [] matrix_nginx_proxy_proxy_jitsi_additional_server_configuration_blocks: []
# A list of strings containing additional configuration blocks to add to Grafana's server configuration. # A list of strings containing additional configuration blocks to add to Grafana's server configuration (matrix-grafana.conf).
matrix_nginx_proxy_proxy_grafana_additional_server_configuration_blocks: [] matrix_nginx_proxy_proxy_grafana_additional_server_configuration_blocks: []
# A list of strings containing additional configuration blocks to add to the base domain server configuration. # A list of strings containing additional configuration blocks to add to the base domain server configuration (matrix-base-domain.conf).
matrix_nginx_proxy_proxy_domain_additional_server_configuration_blocks: [] matrix_nginx_proxy_proxy_domain_additional_server_configuration_blocks: []
# Specifies the SSL configuration that should be used for the SSL protocols and ciphers # Specifies the SSL configuration that should be used for the SSL protocols and ciphers
@ -331,3 +347,13 @@ matrix_ssl_pre_obtaining_required_service_start_wait_time_seconds: 60
# nginx status page configurations. # nginx status page configurations.
matrix_nginx_proxy_proxy_matrix_nginx_status_enabled: false matrix_nginx_proxy_proxy_matrix_nginx_status_enabled: false
matrix_nginx_proxy_proxy_matrix_nginx_status_allowed_addresses: ['{{ ansible_default_ipv4.address }}'] matrix_nginx_proxy_proxy_matrix_nginx_status_allowed_addresses: ['{{ ansible_default_ipv4.address }}']
# synapse worker activation and endpoint mappings
matrix_nginx_proxy_synapse_workers_enabled: false
matrix_nginx_proxy_synapse_workers_list: []
matrix_nginx_proxy_synapse_generic_worker_client_server_locations: []
matrix_nginx_proxy_synapse_generic_worker_federation_locations: []
matrix_nginx_proxy_synapse_media_repository_locations: []
matrix_nginx_proxy_synapse_user_dir_locations: []
matrix_nginx_proxy_synapse_frontend_proxy_locations: []
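# Example of what these can look like when workers are enabled (an illustrative sketch;
# the worker entries mirror the format used by the matrix-synapse role, and the location
# regexes normally come from Synapse's workers.md, so treat the values below as assumptions):
# matrix_nginx_proxy_synapse_workers_enabled: true
# matrix_nginx_proxy_synapse_workers_list:
#   - { type: generic_worker, instanceId: '18111', port: 18111, metrics_port: 19111 }
#   - { type: media_repository, instanceId: '18551', port: 18551, metrics_port: 19551 }
# matrix_nginx_proxy_synapse_media_repository_locations:
#   - ^/_matrix/media/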

View File

@ -45,12 +45,18 @@
mode: 0644 mode: 0644
when: matrix_nginx_proxy_enabled|bool when: matrix_nginx_proxy_enabled|bool
- name: Ensure Matrix nginx-proxy configuration for matrix domain exists - name: Ensure Matrix nginx-proxy configuration for matrix-synapse exists
template: template:
src: "{{ role_path }}/templates/nginx/conf.d/matrix-synapse.conf.j2" src: "{{ role_path }}/templates/nginx/conf.d/matrix-synapse.conf.j2"
dest: "{{ matrix_nginx_proxy_confd_path }}/matrix-synapse.conf" dest: "{{ matrix_nginx_proxy_confd_path }}/matrix-synapse.conf"
mode: 0644 mode: 0644
when: matrix_nginx_proxy_proxy_matrix_enabled|bool when: matrix_nginx_proxy_proxy_synapse_enabled|bool
- name: Ensure Matrix nginx-proxy configuration for matrix-synapse deleted
file:
path: "{{ matrix_nginx_proxy_confd_path }}/matrix-synapse.conf"
state: absent
when: "not matrix_nginx_proxy_proxy_synapse_enabled|bool"
- name: Ensure Matrix nginx-proxy configuration for Element domain exists - name: Ensure Matrix nginx-proxy configuration for Element domain exists
template: template:
@ -87,6 +93,12 @@
mode: 0644 mode: 0644
when: matrix_nginx_proxy_proxy_grafana_enabled|bool when: matrix_nginx_proxy_proxy_grafana_enabled|bool
- name: Ensure Matrix nginx-proxy configuration for Matrix domain exists
template:
src: "{{ role_path }}/templates/nginx/conf.d/matrix-domain.conf.j2"
dest: "{{ matrix_nginx_proxy_confd_path }}/matrix-domain.conf"
mode: 0644
- name: Ensure Matrix nginx-proxy data directory for base domain exists - name: Ensure Matrix nginx-proxy data directory for base domain exists
file: file:
path: "{{ matrix_nginx_proxy_data_path }}/matrix-domain" path: "{{ matrix_nginx_proxy_data_path }}/matrix-domain"
@ -107,8 +119,8 @@
- name: Ensure Matrix nginx-proxy configuration for base domain exists - name: Ensure Matrix nginx-proxy configuration for base domain exists
template: template:
src: "{{ role_path }}/templates/nginx/conf.d/matrix-domain.conf.j2" src: "{{ role_path }}/templates/nginx/conf.d/matrix-base-domain.conf.j2"
dest: "{{ matrix_nginx_proxy_confd_path }}/matrix-domain.conf" dest: "{{ matrix_nginx_proxy_confd_path }}/matrix-base-domain.conf"
mode: 0644 mode: 0644
when: matrix_nginx_proxy_base_domain_serving_enabled|bool when: matrix_nginx_proxy_base_domain_serving_enabled|bool
@ -168,7 +180,7 @@
- name: Ensure Matrix nginx-proxy configuration for matrix domain deleted - name: Ensure Matrix nginx-proxy configuration for matrix domain deleted
file: file:
path: "{{ matrix_nginx_proxy_confd_path }}/matrix-synapse.conf" path: "{{ matrix_nginx_proxy_confd_path }}/matrix-domain.conf"
state: absent state: absent
when: "not matrix_nginx_proxy_proxy_matrix_enabled|bool" when: "not matrix_nginx_proxy_proxy_matrix_enabled|bool"
@ -204,7 +216,7 @@
- name: Ensure Matrix nginx-proxy configuration for base domain deleted - name: Ensure Matrix nginx-proxy configuration for base domain deleted
file: file:
path: "{{ matrix_nginx_proxy_confd_path }}/matrix-domain.conf" path: "{{ matrix_nginx_proxy_confd_path }}/matrix-base-domain.conf"
state: absent state: absent
when: "not matrix_nginx_proxy_base_domain_serving_enabled|bool" when: "not matrix_nginx_proxy_base_domain_serving_enabled|bool"

View File

@ -0,0 +1,70 @@
#jinja2: lstrip_blocks: "True"
{% macro render_vhost_directives() %}
root /nginx-data/matrix-domain;
gzip on;
gzip_types text/plain application/json;
{% for configuration_block in matrix_nginx_proxy_proxy_domain_additional_server_configuration_blocks %}
{{- configuration_block }}
{% endfor %}
location /.well-known/matrix {
root {{ matrix_static_files_base_path }};
{#
A somewhat long expires value is used to prevent outages
in case this is unreachable due to network failure.
#}
expires 4h;
default_type application/json;
add_header Access-Control-Allow-Origin *;
}
{% endmacro %}
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_base_domain_hostname }};
server_tokens off;
{% if matrix_nginx_proxy_https_enabled %}
location /.well-known/acme-challenge {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "matrix-certbot:8080";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://127.0.0.1:{{ matrix_ssl_lets_encrypt_certbot_standalone_http_port }};
{% endif %}
}
location / {
return 301 https://$http_host$request_uri;
}
{% else %}
{{ render_vhost_directives() }}
{% endif %}
}
{% if matrix_nginx_proxy_https_enabled %}
server {
listen {{ 8443 if matrix_nginx_proxy_enabled else 443 }} ssl http2;
listen [::]:{{ 8443 if matrix_nginx_proxy_enabled else 443 }} ssl http2;
server_name {{ matrix_nginx_proxy_base_domain_hostname }};
server_tokens off;
ssl_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_base_domain_hostname }}/fullchain.pem;
ssl_certificate_key {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_base_domain_hostname }}/privkey.pem;
ssl_protocols {{ matrix_nginx_proxy_ssl_protocols }};
{% if matrix_nginx_proxy_ssl_ciphers != '' %}
ssl_ciphers {{ matrix_nginx_proxy_ssl_ciphers }};
{% endif %}
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};
{{ render_vhost_directives() }}
}
{% endif %}

View File

@ -1,31 +1,148 @@
#jinja2: lstrip_blocks: "True" #jinja2: lstrip_blocks: "True"
{% macro render_nginx_status_location_block(addresses) %}
{# Empty first line to make indentation prettier. #}
location /nginx_status {
stub_status on;
access_log off;
{% for address in addresses %}
allow {{ address }};
{% endfor %}
deny all;
}
{% endmacro %}
{% macro render_vhost_directives() %} {% macro render_vhost_directives() %}
root /nginx-data/matrix-domain;
gzip on; gzip on;
gzip_types text/plain application/json; gzip_types text/plain application/json;
{% for configuration_block in matrix_nginx_proxy_proxy_domain_additional_server_configuration_blocks %}
{{- configuration_block }}
{% endfor %}
location /.well-known/matrix { location /.well-known/matrix {
root {{ matrix_static_files_base_path }}; root {{ matrix_static_files_base_path }};
{# {#
A somewhat long expires value is used to prevent outages A somewhat long expires value is used to prevent outages
in case this is unreachable due to network failure. in case this is unreachable due to network failure or
due to the base domain's server completely dying.
#} #}
expires 4h; expires 4h;
default_type application/json; default_type application/json;
add_header Access-Control-Allow-Origin *; add_header Access-Control-Allow-Origin *;
} }
{% if matrix_nginx_proxy_proxy_matrix_nginx_status_enabled %}
{{ render_nginx_status_location_block(matrix_nginx_proxy_proxy_matrix_nginx_status_allowed_addresses) }}
{% endif %}
{% if matrix_nginx_proxy_proxy_matrix_corporal_api_enabled %}
location ^~ /_matrix/corporal {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_corporal_api_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_corporal_api_addr_sans_container }};
{% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endif %}
{% if matrix_nginx_proxy_proxy_matrix_identity_api_enabled %}
location ^~ /_matrix/identity {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_identity_api_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_identity_api_addr_sans_container }};
{% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endif %}
{% if matrix_nginx_proxy_proxy_matrix_user_directory_search_enabled %}
location ^~ /_matrix/client/r0/user_directory/search {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_sans_container }};
{% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endif %}
{% if matrix_nginx_proxy_proxy_matrix_3pid_registration_enabled %}
location ~ ^/_matrix/client/r0/register/(email|msisdn)/requestToken$ {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_3pid_registration_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_3pid_registration_addr_sans_container }};
{% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endif %}
{% for configuration_block in matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks %}
{{- configuration_block }}
{% endfor %}
{#
This handles the Matrix Client API only.
The Matrix Federation API is handled by a separate vhost.
#}
location ~* ^({{ matrix_nginx_proxy_proxy_matrix_client_api_forwarded_location_prefix_regexes|join('|') }}) {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_client_api_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_client_api_addr_sans_container }};
{% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
client_body_buffer_size 25M;
client_max_body_size {{ matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb }}M;
proxy_max_temp_file_size 0;
}
location / {
{% if matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain %}
return 302 $scheme://{{ matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain }}$request_uri;
{% else %}
rewrite ^/$ /_matrix/static/ last;
{% endif %}
}
{% endmacro %} {% endmacro %}
server { server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }}; listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_matrix_hostname }};
server_name {{ matrix_nginx_proxy_base_domain_hostname }};
server_tokens off; server_tokens off;
root /dev/null;
{% if matrix_nginx_proxy_https_enabled %} {% if matrix_nginx_proxy_https_enabled %}
location /.well-known/acme-challenge { location /.well-known/acme-challenge {
@ -40,6 +157,10 @@ server {
{% endif %} {% endif %}
} }
{% if matrix_nginx_proxy_proxy_matrix_nginx_status_enabled %}
{{ render_nginx_status_location_block(matrix_nginx_proxy_proxy_matrix_nginx_status_allowed_addresses) }}
{% endif %}
location / { location / {
return 301 https://$http_host$request_uri; return 301 https://$http_host$request_uri;
} }
@ -53,11 +174,13 @@ server {
listen {{ 8443 if matrix_nginx_proxy_enabled else 443 }} ssl http2; listen {{ 8443 if matrix_nginx_proxy_enabled else 443 }} ssl http2;
listen [::]:{{ 8443 if matrix_nginx_proxy_enabled else 443 }} ssl http2; listen [::]:{{ 8443 if matrix_nginx_proxy_enabled else 443 }} ssl http2;
server_name {{ matrix_nginx_proxy_base_domain_hostname }}; server_name {{ matrix_nginx_proxy_proxy_matrix_hostname }};
server_tokens off;
ssl_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_base_domain_hostname }}/fullchain.pem; server_tokens off;
ssl_certificate_key {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_base_domain_hostname }}/privkey.pem; root /dev/null;
ssl_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/fullchain.pem;
ssl_certificate_key {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/privkey.pem;
ssl_protocols {{ matrix_nginx_proxy_ssl_protocols }}; ssl_protocols {{ matrix_nginx_proxy_ssl_protocols }};
{% if matrix_nginx_proxy_ssl_ciphers != '' %} {% if matrix_nginx_proxy_ssl_ciphers != '' %}
@ -68,3 +191,56 @@ server {
{{ render_vhost_directives() }} {{ render_vhost_directives() }}
} }
{% endif %} {% endif %}
{% if matrix_nginx_proxy_proxy_matrix_federation_api_enabled %}
{#
This federation vhost is a little special.
It serves federation over HTTP or HTTPS, depending on `matrix_nginx_proxy_https_enabled`.
#}
server {
{% if matrix_nginx_proxy_https_enabled %}
listen 8448 ssl http2;
listen [::]:8448 ssl http2;
{% else %}
listen 8448;
{% endif %}
server_name {{ matrix_nginx_proxy_proxy_matrix_hostname }};
server_tokens off;
root /dev/null;
gzip on;
gzip_types text/plain application/json;
{% if matrix_nginx_proxy_https_enabled %}
ssl_certificate {{ matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate }};
ssl_certificate_key {{ matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate_key }};
ssl_protocols {{ matrix_nginx_proxy_ssl_protocols }};
{% if matrix_nginx_proxy_ssl_ciphers != '' %}
ssl_ciphers {{ matrix_nginx_proxy_ssl_ciphers }};
{% endif %}
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};
{% endif %}
location / {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_federation_api_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_federation_api_addr_sans_container }};
{% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
client_body_buffer_size 25M;
client_max_body_size {{ matrix_nginx_proxy_proxy_matrix_federation_api_client_max_body_size_mb }}M;
proxy_max_temp_file_size 0;
}
}
{% endif %}

View File

@ -1,107 +1,139 @@
#jinja2: lstrip_blocks: "True" #jinja2: lstrip_blocks: "True"
{% macro render_nginx_status_location_block(addresses) %}
{# Empty first line to make indentation prettier. #}
location /nginx_status { {% set generic_workers = matrix_nginx_proxy_synapse_workers_list|selectattr('type', 'equalto', 'generic_worker')|list %}
stub_status on; {% set media_repository_workers = matrix_nginx_proxy_synapse_workers_list|selectattr('type', 'equalto', 'media_repository')|list %}
access_log off; {% set user_dir_workers = matrix_nginx_proxy_synapse_workers_list|selectattr('type', 'equalto', 'user_dir')|list %}
{% for address in addresses %} {% set frontend_proxy_workers = matrix_nginx_proxy_synapse_workers_list|selectattr('type', 'equalto', 'frontend_proxy')|list %}
allow {{ address }}; {% if matrix_nginx_proxy_synapse_workers_enabled %}
# Round Robin "upstream" pools for workers
{% if generic_workers %}
upstream generic_worker_upstream {
# ensures that requests from the same client will always be passed
# to the same server (except when this server is unavailable)
ip_hash;
{% for worker in generic_workers %}
{% if matrix_nginx_proxy_enabled %}
server "matrix-synapse-worker-{{ worker.type }}-{{ worker.instanceId }}:{{ worker.port }}";
{% else %}
server "127.0.0.1:{{ worker.port }}";
{% endif %}
{% endfor %} {% endfor %}
deny all;
} }
{% endmacro %} {% endif %}
{% if frontend_proxy_workers %}
upstream frontend_proxy_upstream {
{% for worker in frontend_proxy_workers %}
{% if matrix_nginx_proxy_enabled %}
server "matrix-synapse-worker-{{ worker.type }}-{{ worker.instanceId }}:{{ worker.port }}";
{% else %}
server "127.0.0.1:{{ worker.port }}";
{% endif %}
{% endfor %}
}
{% endif %}
{% if media_repository_workers %}
upstream media_repository_upstream {
{% for worker in media_repository_workers %}
{% if matrix_nginx_proxy_enabled %}
server "matrix-synapse-worker-{{ worker.type }}-{{ worker.instanceId }}:{{ worker.port }}";
{% else %}
server "127.0.0.1:{{ worker.port }}";
{% endif %}
{% endfor %}
}
{% endif %}
{% if user_dir_workers %}
upstream user_dir_upstream {
{% for worker in user_dir_workers %}
{% if matrix_nginx_proxy_enabled %}
server "matrix-synapse-worker-{{ worker.type }}-{{ worker.instanceId }}:{{ worker.port }}";
{% else %}
server "127.0.0.1:{{ worker.port }}";
{% endif %}
{% endfor %}
}
{% endif %}
{% endif %}
server {
listen 12080;
server_name {{ matrix_nginx_proxy_proxy_synapse_hostname }};
server_tokens off;
root /dev/null;
{% macro render_vhost_directives() %}
gzip on; gzip on;
gzip_types text/plain application/json; gzip_types text/plain application/json;
location /.well-known/matrix { {% if matrix_nginx_proxy_synapse_workers_enabled %}
root {{ matrix_static_files_base_path }}; {# Workers redirects BEGIN #}
{#
A somewhat long expires value is used to prevent outages {% if generic_workers %}
in case this is unreachable due to network failure or # https://github.com/matrix-org/synapse/blob/master/docs/workers.md#synapseappgeneric_worker
due to the base domain's server completely dying. {% for location in matrix_nginx_proxy_synapse_generic_worker_client_server_locations %}
#} location ~ {{ location }} {
expires 4h; proxy_pass http://generic_worker_upstream$request_uri;
default_type application/json; proxy_set_header Host $host;
add_header Access-Control-Allow-Origin *; proxy_set_header X-Forwarded-For $remote_addr;
} }
{% endfor %}
{% if matrix_nginx_proxy_proxy_matrix_nginx_status_enabled %}
{{ render_nginx_status_location_block(matrix_nginx_proxy_proxy_matrix_nginx_status_allowed_addresses) }}
{% endif %} {% endif %}
{% if matrix_nginx_proxy_proxy_matrix_corporal_api_enabled %} {% if media_repository_workers %}
location ^~ /_matrix/corporal { # https://github.com/matrix-org/synapse/blob/master/docs/workers.md#synapseappmedia_repository
{% if matrix_nginx_proxy_enabled %} {% for location in matrix_nginx_proxy_synapse_media_repository_locations %}
{# Use the embedded DNS resolver in Docker containers to discover the service #} location ~ {{ location }} {
resolver 127.0.0.11 valid=5s; proxy_pass http://media_repository_upstream$request_uri;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_corporal_api_addr_with_container }}"; proxy_set_header Host $host;
proxy_pass http://$backend; proxy_set_header X-Forwarded-For $remote_addr;
{% else %}
{# Generic configuration for use outside of our container setup #} client_body_buffer_size 25M;
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_corporal_api_addr_sans_container }}; client_max_body_size {{ matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb }}M;
proxy_max_temp_file_size 0;
}
{% endfor %}
{% endif %} {% endif %}
{% if user_dir_workers %}
# FIXME: obsolete if matrix_nginx_proxy_proxy_matrix_user_directory_search_enabled is set
# https://github.com/matrix-org/synapse/blob/master/docs/workers.md#synapseappuser_dir
{% for location in matrix_nginx_proxy_synapse_user_dir_locations %}
location ~ {{ location }} {
proxy_pass http://user_dir_upstream$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endfor %}
{% endif %}
{% if frontend_proxy_workers %}
# https://github.com/matrix-org/synapse/blob/master/docs/workers.md#synapseappfrontend_proxy
{% for location in matrix_nginx_proxy_synapse_frontend_proxy_locations %}
location ~ {{ location }} {
proxy_pass http://frontend_proxy_upstream$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endfor %}
{% if matrix_nginx_proxy_synapse_presence_disabled %}
# FIXME: keep in sync with synapse workers documentation manually
location ~ ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status {
proxy_pass http://frontend_proxy_upstream$request_uri;
proxy_set_header Host $host; proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-For $remote_addr;
} }
{% endif %} {% endif %}
{% endif %}
{% if matrix_nginx_proxy_proxy_matrix_identity_api_enabled %} {# Workers redirects END #}
location ^~ /_matrix/identity {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_identity_api_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_identity_api_addr_sans_container }};
{% endif %} {% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endif %}
{% if matrix_nginx_proxy_proxy_matrix_user_directory_search_enabled %} {% for configuration_block in matrix_nginx_proxy_proxy_synapse_additional_server_configuration_blocks %}
location ^~ /_matrix/client/r0/user_directory/search {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_sans_container }};
{% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endif %}
{% if matrix_nginx_proxy_proxy_matrix_3pid_registration_enabled %}
location ~ ^/_matrix/client/r0/register/(email|msisdn)/requestToken$ {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_3pid_registration_addr_with_container }}";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_3pid_registration_addr_sans_container }};
{% endif %}
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endif %}
{% for configuration_block in matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks %}
{{- configuration_block }} {{- configuration_block }}
{% endfor %} {% endfor %}
@ -127,19 +159,16 @@
} }
{% endif %} {% endif %}
{# {# Everything else just goes to the API server ##}
This handles the Matrix Client API only. location / {
The Matrix Federation API is handled by a separate vhost.
#}
location ~* ^({{ matrix_nginx_proxy_proxy_matrix_client_api_forwarded_location_prefix_regexes|join('|') }}) {
{% if matrix_nginx_proxy_enabled %} {% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #} {# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s; resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_client_api_addr_with_container }}"; set $backend "{{ matrix_nginx_proxy_proxy_synapse_client_api_addr_with_container }}";
proxy_pass http://$backend; proxy_pass http://$backend;
{% else %} {% else %}
{# Generic configuration for use outside of our container setup #} {# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_client_api_addr_sans_container }}; proxy_pass http://{{ matrix_nginx_proxy_proxy_synapse_client_api_addr_sans_container }};
{% endif %} {% endif %}
proxy_set_header Host $host; proxy_set_header Host $host;
@ -149,85 +178,13 @@
client_max_body_size {{ matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb }}M; client_max_body_size {{ matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb }}M;
proxy_max_temp_file_size 0; proxy_max_temp_file_size 0;
} }
location / {
{% if matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain %}
return 302 $scheme://{{ matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain }}$request_uri;
{% else %}
rewrite ^/$ /_matrix/static/ last;
{% endif %}
}
{% endmacro %}
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_matrix_hostname }};
server_tokens off;
root /dev/null;
{% if matrix_nginx_proxy_https_enabled %}
location /.well-known/acme-challenge {
{% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s;
set $backend "matrix-certbot:8080";
proxy_pass http://$backend;
{% else %}
{# Generic configuration for use outside of our container setup #}
proxy_pass http://127.0.0.1:{{ matrix_ssl_lets_encrypt_certbot_standalone_http_port }};
{% endif %}
}
{% if matrix_nginx_proxy_proxy_matrix_nginx_status_enabled %}
{{ render_nginx_status_location_block(matrix_nginx_proxy_proxy_matrix_nginx_status_allowed_addresses) }}
{% endif %}
location / {
return 301 https://$http_host$request_uri;
}
{% else %}
{{ render_vhost_directives() }}
{% endif %}
} }
{% if matrix_nginx_proxy_https_enabled %} {% if matrix_nginx_proxy_proxy_synapse_federation_api_enabled %}
server { server {
listen {{ 8443 if matrix_nginx_proxy_enabled else 443 }} ssl http2; listen 12088;
listen [::]:{{ 8443 if matrix_nginx_proxy_enabled else 443 }} ssl http2;
server_name {{ matrix_nginx_proxy_proxy_matrix_hostname }}; server_name {{ matrix_nginx_proxy_proxy_synapse_hostname }};
server_tokens off;
root /dev/null;
ssl_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/fullchain.pem;
ssl_certificate_key {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/privkey.pem;
ssl_protocols {{ matrix_nginx_proxy_ssl_protocols }};
{% if matrix_nginx_proxy_ssl_ciphers != '' %}
ssl_ciphers {{ matrix_nginx_proxy_ssl_ciphers }};
{% endif %}
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};
{{ render_vhost_directives() }}
}
{% endif %}
{% if matrix_nginx_proxy_proxy_matrix_federation_api_enabled %}
{#
This federation vhost is a little special.
It serves federation over HTTP or HTTPS, depending on `matrix_nginx_proxy_https_enabled`.
#}
server {
{% if matrix_nginx_proxy_https_enabled %}
listen 8448 ssl http2;
listen [::]:8448 ssl http2;
{% else %}
listen 8448;
{% endif %}
server_name {{ matrix_nginx_proxy_proxy_matrix_hostname }};
server_tokens off; server_tokens off;
root /dev/null; root /dev/null;
@ -235,27 +192,42 @@ server {
gzip on; gzip on;
gzip_types text/plain application/json; gzip_types text/plain application/json;
{% if matrix_nginx_proxy_https_enabled %} {% if matrix_nginx_proxy_synapse_workers_enabled %}
ssl_certificate {{ matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate }}; {% if generic_workers %}
ssl_certificate_key {{ matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate_key }}; # https://github.com/matrix-org/synapse/blob/master/docs/workers.md#synapseappgeneric_worker
{% for location in matrix_nginx_proxy_synapse_generic_worker_federation_locations %}
ssl_protocols {{ matrix_nginx_proxy_ssl_protocols }}; location ~ {{ location }} {
{% if matrix_nginx_proxy_ssl_ciphers != '' %} proxy_pass http://generic_worker_upstream$request_uri;
ssl_ciphers {{ matrix_nginx_proxy_ssl_ciphers }}; proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
{% endfor %}
{% endif %} {% endif %}
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }}; {% if media_repository_workers %}
# https://github.com/matrix-org/synapse/blob/master/docs/workers.md#synapseappmedia_repository
{% for location in matrix_nginx_proxy_synapse_media_repository_locations %}
location ~ {{ location }} {
proxy_pass http://media_repository_upstream$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
client_body_buffer_size 25M;
client_max_body_size {{ matrix_nginx_proxy_proxy_matrix_federation_api_client_max_body_size_mb }}M;
proxy_max_temp_file_size 0;
}
{% endfor %}
{% endif %}
{% endif %} {% endif %}
location / { location / {
{% if matrix_nginx_proxy_enabled %} {% if matrix_nginx_proxy_enabled %}
{# Use the embedded DNS resolver in Docker containers to discover the service #} {# Use the embedded DNS resolver in Docker containers to discover the service #}
resolver 127.0.0.11 valid=5s; resolver 127.0.0.11 valid=5s;
set $backend "{{ matrix_nginx_proxy_proxy_matrix_federation_api_addr_with_container }}"; set $backend "{{ matrix_nginx_proxy_proxy_synapse_federation_api_addr_with_container }}";
proxy_pass http://$backend; proxy_pass http://$backend;
{% else %} {% else %}
{# Generic configuration for use outside of our container setup #} {# Generic configuration for use outside of our container setup #}
proxy_pass http://{{ matrix_nginx_proxy_proxy_matrix_federation_api_addr_sans_container }}; proxy_pass http://{{ matrix_nginx_proxy_proxy_synapse_federation_api_addr_sans_container }};
{% endif %} {% endif %}
proxy_set_header Host $host; proxy_set_header Host $host;

View File

@ -32,6 +32,10 @@ matrix_postgres_docker_image_force_pull: "{{ matrix_postgres_docker_image_to_use
# A list of extra arguments to pass to the container # A list of extra arguments to pass to the container
matrix_postgres_container_extra_arguments: [] matrix_postgres_container_extra_arguments: []
# A list of extra arguments to pass to the postgres process
# e.g. "-c 'max_connections=200'"
matrix_postgres_process_extra_arguments: []
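# Example (an illustrative override for your vars.yml; the values are assumptions, not tuning advice):
# matrix_postgres_process_extra_arguments:
#   - "-c 'max_connections=200'"
#   - "-c 'shared_buffers=256MB'"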
# Controls whether the matrix-postgres container exposes a port (tcp/5432 in the # Controls whether the matrix-postgres container exposes a port (tcp/5432 in the
# container) that can be used to access the database from outside the container (e.g. with psql) # container) that can be used to access the database from outside the container (e.g. with psql)
# #

View File

@ -8,7 +8,7 @@ DefaultDependencies=no
[Service] [Service]
Type=simple Type=simple
Environment="HOME={{ matrix_systemd_unit_home_path }}" Environment="HOME={{ matrix_systemd_unit_home_path }}"
ExecStartPre=-{{ matrix_host_command_docker }} stop matrix-postgres ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-postgres 2>/dev/null'
ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-postgres 2>/dev/null' ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-postgres 2>/dev/null'
ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-postgres \ ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-postgres \
@ -28,9 +28,10 @@ ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-postgres \
{% for arg in matrix_postgres_container_extra_arguments %} {% for arg in matrix_postgres_container_extra_arguments %}
{{ arg }} \ {{ arg }} \
{% endfor %} {% endfor %}
{{ matrix_postgres_docker_image_to_use }} {{ matrix_postgres_docker_image_to_use }} \
postgres {{ matrix_postgres_process_extra_arguments|join(' ') }}
ExecStop=-{{ matrix_host_command_docker }} stop matrix-postgres ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-postgres 2>/dev/null'
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-postgres 2>/dev/null' ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-postgres 2>/dev/null'
Restart=always Restart=always
RestartSec=30 RestartSec=30

View File

@ -0,0 +1,21 @@
matrix_redis_enabled: true
matrix_redis_connection_password: ""
matrix_redis_base_path: "{{ matrix_base_data_path }}/redis"
matrix_redis_data_path: "{{ matrix_redis_base_path }}/data"
matrix_redis_docker_image_v6: "docker.io/redis:6.0.10-alpine"
matrix_redis_docker_image_latest: "{{ matrix_redis_docker_image_v6 }}"
matrix_redis_docker_image_to_use: '{{ matrix_redis_docker_image_latest }}'
matrix_redis_docker_image_force_pull: "{{ matrix_redis_docker_image_to_use.endswith(':latest') }}"
# A list of extra arguments to pass to the container
matrix_redis_container_extra_arguments: []
# Controls whether the matrix-redis container exposes a port (tcp/6379 in the container)
# that can be used to access redis from outside the container
#
# Takes an "<ip>:<port>" or "<port>" value (e.g. "127.0.0.1:6379"), or empty string to not expose.
matrix_redis_container_redis_bind_port: ""
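# Example: expose redis on the host's loopback interface only (an illustrative override;
# the default above keeps the port unexposed):
# matrix_redis_container_redis_bind_port: "127.0.0.1:6379"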

View File

@ -0,0 +1,3 @@
- set_fact:
matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-redis'] }}"
when: matrix_redis_enabled|bool

View File

@ -0,0 +1,9 @@
- import_tasks: "{{ role_path }}/tasks/init.yml"
tags:
- always
- import_tasks: "{{ role_path }}/tasks/setup_redis.yml"
when: run_setup|bool
tags:
- setup-all
- setup-redis

View File

@ -0,0 +1,99 @@
---
#
# Tasks related to setting up an internal redis server
#
- name: Ensure redis Docker image is pulled
docker_image:
name: "{{ matrix_redis_docker_image_to_use }}"
source: "{{ 'pull' if ansible_version.major > 2 or ansible_version.minor > 7 else omit }}"
force_source: "{{ matrix_redis_docker_image_force_pull if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_redis_docker_image_force_pull }}"
when: matrix_redis_enabled|bool
- name: Ensure redis paths exist
file:
path: "{{ item }}"
state: directory
mode: 0700
owner: "{{ matrix_user_username }}"
group: "{{ matrix_user_username }}"
with_items:
- "{{ matrix_redis_base_path }}"
- "{{ matrix_redis_data_path }}"
when: matrix_redis_enabled|bool
# We do this as a separate task, because:
# - we'd like to do it for the data path only, not for the base path (which contains root-owned configuration files we'd like to leave as-is)
# - we need to do it without `mode`, or we risk flipping the executable bit on certain `.conf` and other files
- name: Ensure redis data path ownership is correct
file:
path: "{{ matrix_redis_data_path }}"
state: directory
owner: "{{ matrix_user_username }}"
group: "{{ matrix_user_username }}"
recurse: yes
when: matrix_redis_enabled|bool
- name: Ensure redis configuration file created
template:
src: "{{ role_path }}/templates/{{ item }}.j2"
dest: "{{ matrix_redis_base_path }}/{{ item }}"
mode: 0644
with_items:
- "redis.conf"
when: matrix_redis_enabled|bool
- name: Ensure matrix-redis.service installed
template:
src: "{{ role_path }}/templates/systemd/matrix-redis.service.j2"
dest: "{{ matrix_systemd_path }}/matrix-redis.service"
mode: 0644
register: matrix_redis_systemd_service_result
when: matrix_redis_enabled|bool
- name: Ensure systemd reloaded after matrix-redis.service installation
service:
daemon_reload: yes
when: "matrix_redis_enabled|bool and matrix_redis_systemd_service_result.changed"
#
# Tasks related to getting rid of the internal redis server (if it was previously enabled)
#
- name: Check existence of matrix-redis service
stat:
path: "{{ matrix_systemd_path }}/matrix-redis.service"
register: matrix_redis_service_stat
when: "not matrix_redis_enabled|bool"
- name: Ensure matrix-redis is stopped
service:
name: matrix-redis
state: stopped
daemon_reload: yes
when: "not matrix_redis_enabled|bool and matrix_redis_service_stat.stat.exists"
- name: Ensure matrix-redis.service doesn't exist
file:
path: "{{ matrix_systemd_path }}/matrix-redis.service"
state: absent
when: "not matrix_redis_enabled|bool and matrix_redis_service_stat.stat.exists"
- name: Ensure systemd reloaded after matrix-redis.service removal
service:
daemon_reload: yes
when: "not matrix_redis_enabled|bool and matrix_redis_service_stat.stat.exists"
- name: Check existence of matrix-redis local data path
stat:
path: "{{ matrix_redis_data_path }}"
register: matrix_redis_data_path_stat
when: "not matrix_redis_enabled|bool"
# We just want to notify the user. Deleting data is too destructive.
- name: Notify if matrix-redis local data remains
debug:
msg: "Note: You are not using a local redis instance, but some old data remains from before in `{{ matrix_redis_data_path }}`. Feel free to delete it."
when: "not matrix_redis_enabled|bool and matrix_redis_data_path_stat.stat.exists"

View File

@ -0,0 +1,4 @@
#jinja2: lstrip_blocks: "True"
{% if matrix_redis_connection_password %}
requirepass {{ matrix_redis_connection_password }}
{% endif %}

View File

@ -0,0 +1,36 @@
#jinja2: lstrip_blocks: "True"
[Unit]
Description=Matrix Redis server
After=docker.service
Requires=docker.service
[Service]
Type=simple
ExecStartPre=-/usr/bin/docker stop matrix-redis
ExecStartPre=-/usr/bin/docker rm matrix-redis
ExecStart=/usr/bin/docker run --rm --name matrix-redis \
--log-driver=none \
--user={{ matrix_user_uid }}:{{ matrix_user_gid }} \
--cap-drop=ALL \
--read-only \
--tmpfs=/tmp:rw,noexec,nosuid,size=100m \
--network={{ matrix_docker_network }} \
{% if matrix_redis_container_redis_bind_port %}
-p {{ matrix_redis_container_redis_bind_port }}:6379 \
{% endif %}
-v {{ matrix_redis_base_path }}/redis.conf:/usr/local/etc/redis/redis.conf \
{% for arg in matrix_redis_container_extra_arguments %}
{{ arg }} \
{% endfor %}
{{ matrix_redis_docker_image_to_use }} \
redis-server /usr/local/etc/redis/redis.conf
ExecStop=-/usr/bin/docker stop matrix-redis
ExecStop=-/usr/bin/docker rm matrix-redis
Restart=always
RestartSec=30
SyslogIdentifier=matrix-redis
[Install]
WantedBy=multi-user.target

View File

@ -121,6 +121,18 @@ matrix_synapse_rc_login:
per_second: 0.17 per_second: 0.17
burst_count: 3 burst_count: 3
matrix_synapse_rc_admin_redaction:
per_second: 1
burst_count: 50
matrix_synapse_rc_joins:
local:
per_second: 0.1
burst_count: 3
remote:
per_second: 0.01
burst_count: 3
matrix_synapse_rc_federation: matrix_synapse_rc_federation:
window_size: 1000 window_size: 1000
sleep_limit: 10 sleep_limit: 10
@ -290,6 +302,127 @@ matrix_synapse_metrics_port: 9100
# See https://github.com/matrix-org/synapse/blob/master/docs/manhole.md # See https://github.com/matrix-org/synapse/blob/master/docs/manhole.md
matrix_synapse_manhole_enabled: false matrix_synapse_manhole_enabled: false
# Enable support for Synapse workers
matrix_synapse_workers_enabled: false
# Specifies worker configuration that should be used when workers are enabled.
#
# The possible values (as seen in `matrix_synapse_workers_presets`) are:
# - "little-federation-helper" - a very minimal worker configuration to improve federation performance
# - "one-of-each" - one worker of each supported type
#
# You can override `matrix_synapse_workers_presets` to define your own presets, which is ill-advised, because it's fragile.
# To use a more custom configuration, start with one of these presets as a base and configure `matrix_synapse_workers_*_count` variables manually, to suit your liking.
matrix_synapse_workers_preset: one-of-each
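# Example of enabling workers with a preset and then adjusting a single count
# (an illustrative vars.yml sketch, not a recommendation):
# matrix_synapse_workers_enabled: true
# matrix_synapse_workers_preset: one-of-each
# matrix_synapse_workers_generic_workers_count: 2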
matrix_synapse_workers_presets:
little-federation-helper:
generic_workers_count: 0
pusher_workers_count: 0
appservice_workers_count: 0
federation_sender_workers_count: 1
media_repository_workers_count: 0
user_dir_workers_count: 0
frontend_proxy_workers_count: 0
one-of-each:
generic_workers_count: 1
pusher_workers_count: 1
appservice_workers_count: 1
federation_sender_workers_count: 1
media_repository_workers_count: 1
# Disabled until https://github.com/matrix-org/synapse/issues/8787 is resolved.
user_dir_workers_count: 0
frontend_proxy_workers_count: 1
# Controls whether the matrix-synapse container exposes the various worker ports
# (see `port` and `metrics_port` in `matrix_synapse_workers_enabled_list`) outside of the container.
#
# Takes an "<ip>" value (e.g. "127.0.0.1", "0.0.0.0", etc), or empty string to not expose.
# It takes "*" to signify "bind on all interfaces" ("0.0.0.0" is IPv4-only).
matrix_synapse_workers_container_host_bind_address: ''
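# Example (illustrative): only expose the worker ports on the host's loopback interface
# matrix_synapse_workers_container_host_bind_address: '127.0.0.1'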
matrix_synapse_workers_generic_workers_count: "{{ matrix_synapse_workers_presets[matrix_synapse_workers_preset]['generic_workers_count'] }}"
matrix_synapse_workers_generic_workers_port_range_start: 18111
matrix_synapse_workers_generic_workers_metrics_range_start: 19111
# matrix_synapse_workers_pusher_workers_count can only be 0 or 1 for now.
# More instances are not supported due to a playbook limitation having to do with keeping `pusher_instances` in `homeserver.yaml` updated.
# See https://github.com/matrix-org/synapse/commit/ddfdf945064925eba761ae3748e38f3a1c73c328
matrix_synapse_workers_pusher_workers_count: "{{ matrix_synapse_workers_presets[matrix_synapse_workers_preset]['pusher_workers_count'] }}"
matrix_synapse_workers_pusher_workers_metrics_range_start: 19200
# matrix_synapse_workers_appservice_workers_count can only be 0 or 1. More instances are not supported.
matrix_synapse_workers_appservice_workers_count: "{{ matrix_synapse_workers_presets[matrix_synapse_workers_preset]['appservice_workers_count'] }}"
matrix_synapse_workers_appservice_workers_metrics_range_start: 19300
# matrix_synapse_workers_federation_sender_workers_count can only be 0 or 1 for now.
# More instances are not supported due to a playbook limitation having to do with keeping `federation_sender_instances` in `homeserver.yaml` updated.
# See https://github.com/matrix-org/synapse/blob/master/docs/workers.md#synapseappfederation_sender
matrix_synapse_workers_federation_sender_workers_count: "{{ matrix_synapse_workers_presets[matrix_synapse_workers_preset]['federation_sender_workers_count'] }}"
matrix_synapse_workers_federation_sender_workers_metrics_range_start: 19400
matrix_synapse_workers_media_repository_workers_count: "{{ matrix_synapse_workers_presets[matrix_synapse_workers_preset]['media_repository_workers_count'] }}"
matrix_synapse_workers_media_repository_workers_port_range_start: 18551
matrix_synapse_workers_media_repository_workers_metrics_range_start: 19551
# Disabled until https://github.com/matrix-org/synapse/issues/8787 is resolved.
matrix_synapse_workers_user_dir_workers_count: "{{ matrix_synapse_workers_presets[matrix_synapse_workers_preset]['user_dir_workers_count'] }}"
matrix_synapse_workers_user_dir_workers_port_range_start: 18661
matrix_synapse_workers_user_dir_workers_metrics_range_start: 19661
matrix_synapse_workers_frontend_proxy_workers_count: "{{ matrix_synapse_workers_presets[matrix_synapse_workers_preset]['frontend_proxy_workers_count'] }}"
matrix_synapse_workers_frontend_proxy_workers_port_range_start: 18771
matrix_synapse_workers_frontend_proxy_workers_metrics_range_start: 19771
# Default list of workers to spawn.
#
# Unless you populate this manually, this list is dynamically generated
# based on other variables above:
# - `matrix_synapse_workers_*_workers_count`
# - `matrix_synapse_workers_*_workers_port_range_start`
# - `matrix_synapse_workers_*_workers_metrics_range_start`
#
# We advise that you use those variables and let this list be populated dynamically.
# Doing that is simpler and also protects you from shooting yourself in the foot,
# as certain workers can only be spawned once.
#
# Each worker instance in the list defines the following fields:
# - `type` - the type of worker (`generic_worker`, etc.)
# - `instanceId` - a string that identifies the worker. The combination of (`type` + `instanceId`) represents the name of the worker and must be unique.
# - `port` - an HTTP port where the worker listens for requests (can be `0` for workers that don't do HTTP request processing)
# - `metrics_port` - an HTTP port where the worker exports Prometheus metrics
#
# Example of what this needs to look like, if you're defining it manually:
# matrix_synapse_workers_enabled_list:
# - { type: generic_worker, instanceId: '18111', port: 18111, metrics_port: 19111 }
# - { type: generic_worker, instanceId: '18112', port: 18112, metrics_port: 19112 }
# - { type: generic_worker, instanceId: '18113', port: 18113, metrics_port: 19113 }
# - { type: generic_worker, instanceId: '18114', port: 18114, metrics_port: 19114 }
# - { type: generic_worker, instanceId: '18115', port: 18115, metrics_port: 19115 }
# - { type: generic_worker, instanceId: '18116', port: 18116, metrics_port: 19116 }
# - { type: pusher, instanceId: '0', port: 0, metrics_port: 19200 }
# - { type: appservice, instanceId: '0', port: 0, metrics_port: 19300 }
# - { type: federation_sender, instanceId: '0', port: 0, metrics_port: 19400 }
# - { type: media_repository, instanceId: '18551', port: 18551, metrics_port: 19551 }
matrix_synapse_workers_enabled_list: []
# Redis information
matrix_synapse_redis_enabled: false
matrix_synapse_redis_host: ""
matrix_synapse_redis_port: 6379
matrix_synapse_redis_password: ""
# Controls whether Synapse starts a replication listener necessary for workers.
#
# If Redis is available, we prefer to use that, instead of talking over Synapse's custom replication protocol.
#
# matrix_synapse_replication_listener_enabled: "{{ matrix_synapse_workers_enabled and not matrix_redis_enabled }}"
# We force-enable this listener for now until we debug why communication via Redis fails.
matrix_synapse_replication_listener_enabled: true
# Port used for communication between main synapse process and workers.
# Only gets used if `matrix_synapse_replication_listener_enabled: true`
matrix_synapse_replication_http_port: 9093
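# Example of pointing Synapse at the playbook-managed redis instance (an illustrative sketch;
# `matrix-redis` is the container name used by the matrix-redis role, and reusing
# `matrix_redis_connection_password` here is an assumption about how you'd wire it up):
# matrix_synapse_redis_enabled: true
# matrix_synapse_redis_host: "matrix-redis"
# matrix_synapse_redis_password: "{{ matrix_redis_connection_password }}"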
# Send ERROR logs to sentry.io for easier tracking # Send ERROR logs to sentry.io for easier tracking
# To set this up: go to sentry.io, create a python project, and set # To set this up: go to sentry.io, create a python project, and set

View File

@ -0,0 +1,146 @@
#!/usr/bin/awk
# Hackish approach to get a machine-readable list of current Matrix
# Synapse REST API endpoints from the official documentation at
# https://github.com/matrix-org/synapse/raw/master/docs/workers.md
#
# invoke in shell with:
# URL=https://github.com/matrix-org/synapse/raw/master/docs/workers.md
# curl -L ${URL} | awk -f workers-doc-to-yaml.awk -
function worker_stanza_append(string) {
worker_stanza = worker_stanza string
}
function line_is_endpoint_url(line) {
# probably API endpoint if it starts with white-space and ^ or /
return (line ~ /^ +[\^\/].*\//)
}
# Put YAML marker at beginning of file.
BEGIN {
print "---"
endpoint_conditional_comment = " # FIXME: ADDITIONAL CONDITIONS REQUIRED: to be enabled manually\n"
}
# Enable further processing after the introductory text.
# Read each synapse worker section as record and its lines as fields.
/Available worker applications/ {
enable_parsing = 1
# set record separator to markdown section header
RS = "\n### "
# set field separator to newline
FS = "\n"
}
# Once parsing is active, this will process each section as record.
enable_parsing {
# Each worker section starts with a synapse.app.X headline
if ($1 ~ /synapse\.app\./) {
# get rid of the backticks and extract worker type from headline
gsub("`", "", $1)
gsub("synapse.app.", "", $1)
worker_type = $1
# initialize empty worker stanza
worker_stanza = ""
# track if any endpoints are mentioned in a specific section
worker_has_urls = 0
# some endpoint descriptions contain flag terms
endpoints_seem_conditional = 0
# also, collect a list of available workers
workers = (workers ? workers "\n" : "") " - " worker_type
# loop through the lines (2 - number of fields in record)
for (i = 2; i < NF + 1; i++) {
# copy line for gsub replacements
line = $i
# end all lines but the last with a linefeed
linefeed = (i < NF - 1) ? "\n" : ""
# line starts with white-space and a hash: endpoint block headline
if (line ~ /^ +#/) {
# copy to output verbatim, normalizing white-space
gsub(/^ +/, "", line)
worker_stanza_append(" " line linefeed)
} else if (line_is_endpoint_url(line)) {
# mark section for special output formatting
worker_has_urls = 1
# remove leading white-space
gsub(/^ +/, "", line)
api_endpoint_regex = line
# FIXME: https://github.com/matrix-org/synapse/issues/new
# munge inconsistent media_repository endpoint notation
if (api_endpoint_regex == "/_matrix/media/") {
api_endpoint_regex = "^" line
}
# FIXME: https://github.com/matrix-org/synapse/issues/7530
# https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/456#issuecomment-719015911
if (api_endpoint_regex == "^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$") {
worker_stanza_append(" # FIXME: possible bug with SSO and multiple generic workers\n")
worker_stanza_append(" # see https://github.com/matrix-org/synapse/issues/7530\n")
worker_stanza_append(" # " api_endpoint_regex linefeed)
continue
}
# disable endpoints which specify complications
if (endpoints_seem_conditional) {
# only add notice if previous line didn't match
if (!line_is_endpoint_url($(i - 1))) {
worker_stanza_append(endpoint_conditional_comment)
}
worker_stanza_append(" # " api_endpoint_regex linefeed)
} else {
# output endpoint regex
worker_stanza_append(" - " api_endpoint_regex linefeed)
}
# white-space only line?
} else if (line ~ /^ *$/) {
if (i > 3 && i < NF) {
# print white-space lines unless 1st or last line in section
worker_stanza_append(line linefeed)
}
# nothing of the above: the line is regular documentation text
} else {
# include this text line as comment
worker_stanza_append(" # " line linefeed)
# and take note of words hinting at additional conditions to be met
if (line ~ /(^| )[Ii]f |(^| )[Ff]or /) {
endpoints_seem_conditional = 1
}
}
}
if (worker_has_urls) {
print "\nmatrix_synapse_workers_" worker_type "_endpoints:"
print worker_stanza
} else {
# include workers without endpoints as well for reference
print "\n# " worker_type " worker (no API endpoints) ["
print worker_stanza
print "# ]"
}
}
}
END {
print "\nmatrix_synapse_workers_avail_list:"
print workers | "sort"
}
# vim: tabstop=4 shiftwidth=4 expandtab autoindent

View File

@ -0,0 +1,6 @@
#!/bin/sh
# Fetch the synapse worker documentation and extract endpoint URLs
# matrix-org/synapse master branch points to current stable release
URL=https://github.com/matrix-org/synapse/raw/master/docs/workers.md
curl -L ${URL} | awk -f workers-doc-to-yaml.awk > ../vars/workers.yml
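# For reference, the generated ../vars/workers.yml is expected to have roughly this shape
# (a sketch only; the actual endpoint regexes are whatever workers.md lists at generation time):
#
#   ---
#   matrix_synapse_workers_generic_worker_endpoints:
#     - <endpoint regex copied from workers.md>
#
#   matrix_synapse_workers_media_repository_endpoints:
#     - ^/_matrix/media/
#
#   matrix_synapse_workers_avail_list:
#    - appservice
#    - federation_sender
#    - frontend_proxy
#    - generic_worker
#    - media_repository
#    - pusher
#    - user_dir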

View File

@ -1,7 +1,19 @@
# Unless `matrix_synapse_workers_enabled_list` is explicitly defined,
# we'll generate it dynamically.
- import_tasks: "{{ role_path }}/tasks/synapse/workers/init.yml"
when: "matrix_synapse_enabled and matrix_synapse_workers_enabled and matrix_synapse_workers_enabled_list|length == 0"
- set_fact: - set_fact:
matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-synapse.service'] }}" matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-synapse.service'] }}"
when: matrix_synapse_enabled|bool when: matrix_synapse_enabled|bool
- name: Ensure systemd services for workers are injected
include_tasks: "{{ role_path }}/tasks/synapse/workers/util/inject_systemd_services_for_worker.yml"
with_items: "{{ matrix_synapse_workers_enabled_list }}"
loop_control:
loop_var: matrix_synapse_worker_details
when: matrix_synapse_enabled|bool and matrix_synapse_workers_enabled|bool
- set_fact:
matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-goofys.service'] }}"
when: matrix_s3_media_store_enabled|bool

View File

@ -18,6 +18,8 @@
- import_tasks: "{{ role_path }}/tasks/ext/setup.yml"
- import_tasks: "{{ role_path }}/tasks/synapse/workers/setup.yml"
- import_tasks: "{{ role_path }}/tasks/synapse/setup.yml"
- import_tasks: "{{ role_path }}/tasks/goofys/setup.yml"

View File

@ -0,0 +1,86 @@
# Below is a huge hack for dynamically building a list of workers and finally assigning it to `matrix_synapse_workers_enabled_list`.
#
# set_fact within a loop does not work reliably in Ansible (it only executes on the first iteration for some reason),
# so we're forced to do something much uglier.
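# Instead, each task below registers its per-iteration `worker` fact (via `register:`),
# and the registered results are merged into a single list at the end of this file.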
- name: Build generic workers
set_fact:
worker:
type: 'generic_worker'
instanceId: "{{ matrix_synapse_workers_generic_workers_port_range_start + item }}"
port: "{{ matrix_synapse_workers_generic_workers_port_range_start + item }}"
metrics_port: "{{ matrix_synapse_workers_generic_workers_metrics_range_start + item }}"
register: "matrix_synapse_workers_list_results_generic_workers"
loop: "{{ range(0, matrix_synapse_workers_generic_workers_count|int)|list }}"
- name: Build federation sender workers
set_fact:
worker:
type: 'federation_sender'
instanceId: "{{ item }}"
port: 0
metrics_port: "{{ matrix_synapse_workers_federation_sender_workers_metrics_range_start + item }}"
register: "matrix_synapse_workers_list_results_federation_sender_workers"
loop: "{{ range(0, matrix_synapse_workers_federation_sender_workers_count|int)|list }}"
# This type of worker can only have a count of 1, at most
- name: Build pusher workers
set_fact:
worker:
type: 'pusher'
instanceId: "{{ item }}"
port: 0
metrics_port: "{{ matrix_synapse_workers_pusher_workers_metrics_range_start + item }}"
register: "matrix_synapse_workers_list_results_pusher_workers"
loop: "{{ range(0, matrix_synapse_workers_pusher_workers_count|int)|list }}"
# This type of worker can only have a count of 1, at most
- name: Build appservice workers
set_fact:
worker:
type: 'appservice'
instanceId: "{{ item }}"
port: 0
metrics_port: "{{ matrix_synapse_workers_appservice_workers_metrics_range_start + item }}"
register: "matrix_synapse_workers_list_results_appservice_workers"
loop: "{{ range(0, matrix_synapse_workers_appservice_workers_count|int)|list }}"
- name: Build media_repository workers
set_fact:
worker:
type: 'media_repository'
instanceId: "{{ matrix_synapse_workers_media_repository_workers_port_range_start + item }}"
port: "{{ matrix_synapse_workers_media_repository_workers_port_range_start + item }}"
metrics_port: "{{ matrix_synapse_workers_media_repository_workers_metrics_range_start + item }}"
register: "matrix_synapse_workers_list_results_media_repository_workers"
loop: "{{ range(0, matrix_synapse_workers_media_repository_workers_count|int)|list }}"
- name: Build frontend_proxy workers
set_fact:
worker:
type: 'frontend_proxy'
instanceId: "{{ matrix_synapse_workers_frontend_proxy_workers_port_range_start + item }}"
port: "{{ matrix_synapse_workers_frontend_proxy_workers_port_range_start + item }}"
metrics_port: "{{ matrix_synapse_workers_frontend_proxy_workers_metrics_range_start + item }}"
register: "matrix_synapse_workers_list_results_frontend_proxy_workers"
loop: "{{ range(0, matrix_synapse_workers_frontend_proxy_workers_count|int)|list }}"
- set_fact:
matrix_synapse_dynamic_workers_list: "{{ matrix_synapse_dynamic_workers_list|default([]) + [item.ansible_facts.worker] }}"
with_items: |
{{
matrix_synapse_workers_list_results_generic_workers.results
+
matrix_synapse_workers_list_results_federation_sender_workers.results
+
matrix_synapse_workers_list_results_pusher_workers.results
+
matrix_synapse_workers_list_results_appservice_workers.results
+
matrix_synapse_workers_list_results_media_repository_workers.results
+
matrix_synapse_workers_list_results_frontend_proxy_workers.results
}}
- set_fact:
matrix_synapse_workers_enabled_list: "{{ matrix_synapse_dynamic_workers_list }}"
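# For orientation, a rough sketch of what the dynamically-built list looks like
# (ports below are hypothetical and depend on the configured range starts and counts):
#
# matrix_synapse_workers_enabled_list:
#   - type: generic_worker
#     instanceId: 18111
#     port: 18111
#     metrics_port: 19111
#   - type: federation_sender
#     instanceId: 0
#     port: 0
#     metrics_port: 19700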

View File

@ -0,0 +1,21 @@
---
# A previous version of the worker setup used this.
# This is a temporary cleanup for people who ran that version.
- name: Ensure old matrix-synapse.service.wants directory is gone
file:
path: "{{ matrix_systemd_path }}/matrix-synapse.service.wants"
state: absent
# Same. This was part of a previous version of the worker setup.
# No longer necessary.
- name: Ensure matrix-synapse-worker-write-pid script is removed
file:
path: "{{ matrix_local_bin_path }}/matrix-synapse-worker-write-pid"
state: absent
- include_tasks: "{{ role_path }}/tasks/synapse/workers/setup_install.yml"
when: "matrix_synapse_enabled|bool and matrix_synapse_workers_enabled|bool"
- include_tasks: "{{ role_path }}/tasks/synapse/workers/setup_uninstall.yml"
when: "not matrix_synapse_workers_enabled|bool"

View File

@ -0,0 +1,42 @@
---
- name: Determine current worker configs
find:
path: "{{ matrix_synapse_config_dir_path }}"
patterns: "worker.*.yaml"
use_regex: true
register: matrix_synapse_workers_current_config_files
# This also deletes some things which we need. They will be recreated below.
- name: Ensure previous worker configs are cleaned
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ matrix_synapse_workers_current_config_files.files }}"
- name: Determine current worker systemd services
find:
path: "{{ matrix_systemd_path }}"
patterns: "matrix-synapse-worker.*.service"
use_regex: true
register: matrix_synapse_workers_current_systemd_services
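# Worker services are only added to `matrix_systemd_services_list` while their worker is
# enabled (see inject_systemd_services_for_worker.yml), so any worker service found on disk
# but missing from that list is a leftover from a previous configuration and gets stopped
# and removed below.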
- name: Ensure unnecessary worker systemd services are stopped and disabled
service:
name: "{{ item.path|basename }}"
state: stopped
enabled: false
with_items: "{{ matrix_synapse_workers_current_systemd_services.files }}"
when: "not ansible_check_mode and item.path|basename not in matrix_systemd_services_list"
- name: Ensure unnecessary worker systemd services are cleaned
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ matrix_synapse_workers_current_systemd_services.files }}"
- name: Ensure creation of worker systemd service files and configuration files
include_tasks: "{{ role_path }}/tasks/synapse/workers/util/setup_files_for_worker.yml"
with_items: "{{ matrix_synapse_workers_enabled_list }}"
loop_control:
loop_var: matrix_synapse_worker_details

View File

@ -0,0 +1,36 @@
---
- name: Populate service facts
service_facts:
- name: Ensure any worker services are stopped
service:
name: "{{ item.key }}"
state: stopped
with_dict: "{{ ansible_facts.services|default({})|dict2items|selectattr('key', 'match', 'matrix-synapse-worker-.+\\.service')|list|items2dict }}"
- name: Find worker configs to be cleaned
find:
path: "{{ matrix_synapse_config_dir_path }}"
patterns: "worker.*.yaml"
use_regex: true
register: matrix_synapse_workers_current_config_files
- name: Ensure previous worker configs are cleaned
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ matrix_synapse_workers_current_config_files.files }}"
- name: Find worker systemd services to be cleaned
find:
path: "{{ matrix_systemd_path }}"
patterns: "matrix-synapse-worker.*.service"
use_regex: true
register: matrix_synapse_workers_current_systemd_services
- name: Ensure previous worker systemd services are cleaned
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ matrix_synapse_workers_current_systemd_services.files }}"

View File

@ -0,0 +1,18 @@
# The tasks below run before `validate_config.yml`.
# To avoid failing with a cryptic error message, we'll do validation here.
#
# This check is mostly relevant to people who explicitly define `matrix_synapse_workers_enabled_list`
# (Synapse Workers users from the earlier days of this PR - https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/456).
#
# In the future, it should be possible to remove this check.
# Our own code which dynamically builds `matrix_synapse_workers_enabled_list` does things right.
- name: Fail if instanceId not defined for worker
fail:
msg: "Synapse workers (like {{ matrix_synapse_worker_details|to_json }}) need to define an instanceId property (type + instanceId must be unique)"
when: "'instanceId' not in matrix_synapse_worker_details"
- set_fact:
matrix_synapse_worker_systemd_service_name: "matrix-synapse-worker-{{ matrix_synapse_worker_details.type }}-{{ matrix_synapse_worker_details.instanceId }}.service"
- set_fact:
matrix_systemd_services_list: "{{ matrix_systemd_services_list + [matrix_synapse_worker_systemd_service_name] }}"

View File

@ -0,0 +1,19 @@
- set_fact:
matrix_synapse_worker_systemd_service_name: "matrix-synapse-worker-{{ matrix_synapse_worker_details.type }}-{{ matrix_synapse_worker_details.instanceId }}"
- set_fact:
matrix_synapse_worker_container_name: "{{ matrix_synapse_worker_systemd_service_name }}"
- set_fact:
matrix_synapse_worker_config_file_name: "worker.{{ matrix_synapse_worker_details.type }}_{{ matrix_synapse_worker_details.instanceId }}.yaml"
- name: Ensure configuration exists for {{ matrix_synapse_worker_systemd_service_name }}
template:
src: "{{ role_path }}/templates/synapse/worker.yaml.j2"
dest: "{{ matrix_synapse_config_dir_path }}/{{ matrix_synapse_worker_config_file_name }}"
- name: Ensure systemd service exists for {{ matrix_synapse_worker_systemd_service_name }}
template:
src: "{{ role_path }}/templates/synapse/systemd/matrix-synapse-worker.service.j2"
dest: "{{ matrix_systemd_path }}/{{ matrix_synapse_worker_systemd_service_name }}.service"
mode: 0644
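# As an illustration (hypothetical instanceId): a generic_worker with instanceId 18111
# results in worker.generic_worker_18111.yaml (in the Synapse config directory) and a
# matrix-synapse-worker-generic_worker-18111.service systemd unit.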

View File

@ -12,6 +12,16 @@
- "matrix_synapse_database_password" - "matrix_synapse_database_password"
- "matrix_synapse_database_database" - "matrix_synapse_database_database"
- name: Fail if asking for more than 1 instance of single-instance workers
fail:
msg: >-
`{{ item }}` cannot be more than 1. This is a single-instance worker.
when: "vars[item]|int > 1"
with_items:
- "matrix_synapse_workers_appservice_workers_count"
- "matrix_synapse_workers_pusher_workers_count"
- "matrix_synapse_workers_federation_sender_workers_count"
- name: (Deprecation) Catch and report renamed settings
fail:
msg: >-

View File

@ -277,6 +277,41 @@ listeners:
type: manhole
{% endif %} {% endif %}
{% if matrix_synapse_workers_enabled %}
{% if matrix_synapse_replication_listener_enabled %}
# c.f. https://github.com/matrix-org/synapse/tree/master/docs/workers.md
# HTTP replication: for the workers to send data to the main synapse process
- port: {{ matrix_synapse_replication_http_port }}
bind_addresses: ['0.0.0.0']
type: http
resources:
- names: [replication]
{% endif %}
# c.f. https://github.com/matrix-org/synapse/tree/master/contrib/systemd-with-workers/README.md
worker_app: synapse.app.homeserver
# thanks: https://oznetnerd.com/2017/04/18/jinja2-selectattr-filter/
# reduce the main Synapse process's duties to core homeserver business
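# (each `selectattr` check below turns the corresponding duty off on the main process
# only when at least one worker of that type is actually enabled)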
{% if matrix_synapse_workers_enabled_list|selectattr('type', 'equalto', 'appservice')|list %}
notify_appservices: false
{% endif %}
{% if matrix_synapse_workers_enabled_list|selectattr('type', 'equalto', 'federation_sender')|list %}
send_federation: false
{% endif %}
{% if matrix_synapse_workers_enabled_list|selectattr('type', 'equalto', 'media_repository')|list %}
enable_media_repo: false
{% endif %}
{% if matrix_synapse_workers_enabled_list|selectattr('type', 'equalto', 'pusher')|list %}
start_pushers: false
{% endif %}
{% if matrix_synapse_workers_enabled_list|selectattr('type', 'equalto', 'user_dir')|list %}
update_user_directory: false
{% endif %}
daemonize: false
{% endif %}
# Forward extremities can build up in a room due to networking delays between
# homeservers. Once this happens in a large room, calculation of the state of
@ -812,6 +847,7 @@ rc_login: {{ matrix_synapse_rc_login|to_json }}
#rc_admin_redaction:
# per_second: 1
# burst_count: 50
rc_admin_redaction: {{ matrix_synapse_rc_admin_redaction|to_json }}
#
#rc_joins:
# local:
@ -820,6 +856,7 @@ rc_login: {{ matrix_synapse_rc_login|to_json }}
# remote:
# per_second: 0.01
# burst_count: 3
rc_joins: {{ matrix_synapse_rc_joins|to_json }}
#
#rc_3pid_validation:
# per_second: 0.003
@ -2810,16 +2847,16 @@ opentracing:
redis:
# Uncomment the below to enable Redis support.
#
enabled: {{ matrix_synapse_redis_enabled }}
# Optional host and port to use to connect to redis. Defaults to
# localhost and 6379
#
host: {{ matrix_synapse_redis_host }}
port: {{ matrix_synapse_redis_port }}
# Optional password if configured on the Redis instance
#
password: {{ matrix_synapse_redis_password }}
# vim:ft=yaml

View File

@ -0,0 +1,58 @@
#jinja2: lstrip_blocks: "True"
[Unit]
Description=Synapse worker ({{ matrix_synapse_worker_container_name }})
AssertPathExists={{ matrix_synapse_config_dir_path }}/{{ matrix_synapse_worker_config_file_name }}
After=matrix-synapse.service
[Service]
Type=simple
Environment="HOME={{ matrix_systemd_unit_home_path }}"
ExecStartPre=-{{ matrix_host_command_docker }} kill {{ matrix_synapse_worker_container_name }}
ExecStartPre=-{{ matrix_host_command_docker }} rm {{ matrix_synapse_worker_container_name }}
# Intentional delay, so that the main homeserver process can start before the worker.
ExecStartPre={{ matrix_host_command_sleep }} 5
ExecStart={{ matrix_host_command_docker }} run --rm --name {{ matrix_synapse_worker_container_name }} \
--log-driver=none \
--user={{ matrix_user_uid }}:{{ matrix_user_gid }} \
--cap-drop=ALL \
--entrypoint=python \
--read-only \
--tmpfs=/tmp:rw,noexec,nosuid,size={{ matrix_synapse_tmp_directory_size_mb }}m \
--network={{ matrix_docker_network }} \
{% if matrix_synapse_workers_enabled and matrix_synapse_workers_container_host_bind_address %}
{% if matrix_synapse_worker_details.port != 0 %}
-p {{ '' if matrix_synapse_workers_container_host_bind_address == '*' else (matrix_synapse_workers_container_host_bind_address + ':') }}{{ matrix_synapse_worker_details.port }}:{{ matrix_synapse_worker_details.port }} \
{% endif %}
{% if matrix_synapse_worker_details.metrics_port != 0 %}
-p {{ '' if matrix_synapse_workers_container_host_bind_address == '*' else (matrix_synapse_workers_container_host_bind_address + ':') }}{{ matrix_synapse_worker_details.metrics_port }}:{{ matrix_synapse_worker_details.metrics_port }} \
{% endif %}
{% endif %}
--mount type=bind,src={{ matrix_synapse_config_dir_path }},dst=/data,ro \
--mount type=bind,src={{ matrix_synapse_storage_path }},dst=/matrix-media-store-parent,bind-propagation=slave \
{% for volume in matrix_synapse_container_additional_volumes %}
-v {{ volume.src }}:{{ volume.dst }}:{{ volume.options }} \
{% endfor %}
{% for arg in matrix_synapse_container_extra_arguments %}
{{ arg }} \
{% endfor %}
{{ matrix_synapse_docker_image }} \
-m synapse.app.{{ matrix_synapse_worker_details.type }} -c /data/homeserver.yaml -c /data/{{ matrix_synapse_worker_config_file_name }}
ExecStop=-{{ matrix_host_command_docker }} kill {{ matrix_synapse_worker_container_name }}
ExecStop=-{{ matrix_host_command_docker }} rm {{ matrix_synapse_worker_container_name }}
ExecReload={{ matrix_host_command_docker }} exec {{ matrix_synapse_worker_container_name }} /bin/sh -c 'kill -HUP 1'
Restart=always
RestartSec=30
SyslogIdentifier={{ matrix_synapse_worker_container_name }}
# Intentionally not making this WantedBy=matrix-synapse.service,
# as matrix-synapse.service already has `Wants=` lines.
# Also, WantedBy will trigger the creation of some `matrix-synapse.service.wants/` directory,
# which we'd have to clean, etc. Better not.
[Install]
WantedBy=multi-user.target

View File

@ -4,10 +4,18 @@ Description=Synapse server
{% for service in matrix_synapse_systemd_required_services_list %}
Requires={{ service }}
After={{ service }}
{% endfor %}
{% for service in matrix_synapse_systemd_wanted_services_list %}
Wants={{ service }}
{% endfor %}
{% if matrix_synapse_workers_enabled %}
{% for matrix_synapse_worker_details in matrix_synapse_workers_enabled_list %}
Wants=matrix-synapse-worker-{{ matrix_synapse_worker_details.type }}-{{ matrix_synapse_worker_details.instanceId }}.service
{% endfor %}
{% endif %}
DefaultDependencies=no
[Service]
@ -58,7 +66,7 @@ ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-synapse \
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-synapse 2>/dev/null'
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-synapse 2>/dev/null'
ExecReload={{ matrix_host_command_docker }} exec matrix-synapse /bin/sh -c 'kill -HUP 1'
Restart=always
RestartSec=30
SyslogIdentifier=matrix-synapse

View File

@ -0,0 +1,45 @@
#jinja2: lstrip_blocks: "True"
worker_app: synapse.app.{{ matrix_synapse_worker_details.type }}
worker_name: {{ matrix_synapse_worker_details.type ~ ':' ~ matrix_synapse_worker_details.port }}
{% if matrix_synapse_replication_listener_enabled %}
worker_replication_host: matrix-synapse
worker_replication_http_port: {{ matrix_synapse_replication_http_port }}
{% endif %}
{% set has_listeners = (matrix_synapse_worker_details.type not in [ 'appservice', 'federation_sender', 'pusher' ] or matrix_synapse_metrics_enabled) %}
{% set http_resources = [] %}
{% if matrix_synapse_worker_details.type in ['generic_worker', 'frontend_proxy', 'user_dir'] %}
{% set http_resources = http_resources + ['client'] %}
{% endif %}
{% if matrix_synapse_worker_details.type in ['generic_worker'] %}
{% set http_resources = http_resources + ['federation'] %}
{% endif %}
{% if matrix_synapse_worker_details.type in ['media_repository'] %}
{% set http_resources = http_resources + ['media'] %}
{% endif %}
{% if http_resources|length > 0 or matrix_synapse_metrics_enabled %}
worker_listeners:
{% if http_resources|length > 0 %}
- type: http
bind_addresses: ['::']
port: {{ matrix_synapse_worker_details.port }}
resources:
- names: {{ http_resources|to_json }}
{% endif %}
{% if matrix_synapse_metrics_enabled %}
- type: metrics
bind_addresses: ['0.0.0.0']
port: {{ matrix_synapse_worker_details.metrics_port }}
{% endif %}
{% endif %}
{% if matrix_synapse_worker_details.type == 'frontend_proxy' %}
worker_main_http_uri: http://matrix-synapse:8008
{% endif %}
worker_daemonize: false
worker_log_config: /data/{{ matrix_server_fqn_matrix }}.log.config
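{# Illustrative only (hypothetical ports and server name): for a generic_worker on port
   18111 with metrics enabled, this template renders roughly to:

   worker_app: synapse.app.generic_worker
   worker_name: generic_worker:18111
   worker_replication_host: matrix-synapse
   worker_replication_http_port: 9093
   worker_listeners:
     - type: http
       bind_addresses: ['::']
       port: 18111
       resources:
         - names: ["client", "federation"]
     - type: metrics
       bind_addresses: ['0.0.0.0']
       port: 19111
   worker_daemonize: false
   worker_log_config: /data/matrix.example.com.log.config
#}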

View File

@ -8,3 +8,28 @@ matrix_synapse_role_executed: false
matrix_synapse_media_store_parent_path: "{{ matrix_synapse_media_store_path|dirname }}"
matrix_synapse_media_store_directory_name: "{{ matrix_synapse_media_store_path|basename }}"
# A Synapse generic worker can handle both federation and client-server API endpoints.
# We wish to split these, as we normally serve federation separately and don't want them mixed up.
#
# This is some ugly Ansible/Jinja2 hack (seen here: https://stackoverflow.com/a/47831492),
# which takes a list of various strings and removes the ones NOT containing `/_matrix/client` anywhere in them.
#
# We intentionally don't do a diff between everything possible (`matrix_synapse_workers_generic_worker_endpoints`) and `matrix_synapse_workers_generic_worker_federation_endpoints`,
# because `matrix_synapse_workers_generic_worker_endpoints` also contains things like `/_synapse/client/`, etc.
# While /_synapse/client/ endpoints are somewhat client-server API-related, they're:
# - neither part of the client-server API spec (and are thus, different)
# - nor always OK to forward to a worker (we're supposed to obey `matrix_nginx_proxy_proxy_matrix_client_api_forwarded_location_synapse_client_api_enabled`)
#
# There also aren't many such APIs (only `^/_synapse/client/password_reset/email/submit_token$` at the time of this writing, 2021-01-24),
# so whether we forward them or not doesn't matter much.
#
# Basically, we aim to cover most things. Skipping `/_synapse/client` or a few other minor things doesn't matter too much.
matrix_synapse_workers_generic_worker_client_server_endpoints: "{{ matrix_synapse_workers_generic_worker_endpoints|default([]) | map('regex_search', '.*/_matrix/client.*') | list | difference([none]) }}"
# A Synapse generic worker can handle both federation and client-server API endpoints.
# We wish to split these, as we normally serve federation separately and don't want them mixed up.
#
# This is some ugly Ansible/Jinja2 hack (seen here: https://stackoverflow.com/a/47831492),
# which takes a list of various strings and removes the ones NOT containing `/_matrix/federation` or `/_matrix/key` anywhere in them.
matrix_synapse_workers_generic_worker_federation_endpoints: "{{ matrix_synapse_workers_generic_worker_endpoints|default([]) | map('regex_search', '.*(/_matrix/federation|/_matrix/key).*') | list | difference([none]) }}"
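# Purely illustrative sketch of how the two filters above behave (hypothetical input):
# given ['^/_matrix/client/r0/sync$', '^/_matrix/federation/v1/send/'],
# map('regex_search', '.*/_matrix/client.*') yields ['^/_matrix/client/r0/sync$', none],
# and the trailing `difference([none])` drops the non-matching entries, leaving only
# the client-server endpoint regexes (the federation filter works the same way).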

View File

@ -0,0 +1,313 @@
---
matrix_synapse_workers_generic_worker_endpoints:
# This worker can handle API requests matching the following regular
# expressions:
# Sync requests
- ^/_matrix/client/(v2_alpha|r0)/sync$
- ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
- ^/_matrix/client/(api/v1|r0)/initialSync$
- ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
# Federation requests
- ^/_matrix/federation/v1/event/
- ^/_matrix/federation/v1/state/
- ^/_matrix/federation/v1/state_ids/
- ^/_matrix/federation/v1/backfill/
- ^/_matrix/federation/v1/get_missing_events/
- ^/_matrix/federation/v1/publicRooms
- ^/_matrix/federation/v1/query/
- ^/_matrix/federation/v1/make_join/
- ^/_matrix/federation/v1/make_leave/
- ^/_matrix/federation/v1/send_join/
- ^/_matrix/federation/v2/send_join/
- ^/_matrix/federation/v1/send_leave/
- ^/_matrix/federation/v2/send_leave/
- ^/_matrix/federation/v1/invite/
- ^/_matrix/federation/v2/invite/
- ^/_matrix/federation/v1/query_auth/
- ^/_matrix/federation/v1/event_auth/
- ^/_matrix/federation/v1/exchange_third_party_invite/
- ^/_matrix/federation/v1/user/devices/
- ^/_matrix/federation/v1/get_groups_publicised$
- ^/_matrix/key/v2/query
# Inbound federation transaction request
- ^/_matrix/federation/v1/send/
# Client API requests
- ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
- ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
- ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
- ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
- ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
- ^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
- ^/_matrix/client/(api/v1|r0|unstable)/devices$
- ^/_matrix/client/(api/v1|r0|unstable)/keys/query$
- ^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
- ^/_matrix/client/versions$
- ^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
- ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
- ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
- ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
# Registration/login requests
- ^/_matrix/client/(api/v1|r0|unstable)/login$
- ^/_matrix/client/(r0|unstable)/register$
# FIXME: possible bug with SSO and multiple generic workers
# see https://github.com/matrix-org/synapse/issues/7530
# ^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
# Event sending requests
- ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
- ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
- ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
- ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
- ^/_matrix/client/(api/v1|r0|unstable)/join/
- ^/_matrix/client/(api/v1|r0|unstable)/profile/
# Additionally, the following REST endpoints can be handled for GET requests:
# FIXME: ADDITIONAL CONDITIONS REQUIRED: to be enabled manually
# ^/_matrix/federation/v1/groups/
# Pagination requests can also be handled, but all requests for a given
# room must be routed to the same instance. Additionally, care must be taken to
# ensure that the purge history admin API is not used while pagination requests
# for the room are in flight:
# FIXME: ADDITIONAL CONDITIONS REQUIRED: to be enabled manually
# ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
# Additionally, the following endpoints should be included if Synapse is configured
# to use SSO (you only need to include the ones for whichever SSO provider you're
# using):
# for all SSO providers
# FIXME: ADDITIONAL CONDITIONS REQUIRED: to be enabled manually
# ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
# ^/_synapse/client/pick_idp$
# ^/_synapse/client/pick_username
# ^/_synapse/client/new_user_consent$
# ^/_synapse/client/sso_register$
# OpenID Connect requests.
# FIXME: ADDITIONAL CONDITIONS REQUIRED: to be enabled manually
# ^/_synapse/client/oidc/callback$
# SAML requests.
# FIXME: ADDITIONAL CONDITIONS REQUIRED: to be enabled manually
# ^/_synapse/client/saml2/authn_response$
# CAS requests.
# FIXME: ADDITIONAL CONDITIONS REQUIRED: to be enabled manually
# ^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
# Ensure that all SSO logins go to a single process.
# For multiple workers not handling the SSO endpoints properly, see
# [#7530](https://github.com/matrix-org/synapse/issues/7530).
# Note that an HTTP listener with `client` and `federation` resources must be
# configured in the `worker_listeners` option in the worker config.
# #### Load balancing
# It is possible to run multiple instances of this worker app, with incoming requests
# being load-balanced between them by the reverse-proxy. However, different endpoints
# have different characteristics and so admins
# may wish to run multiple groups of workers handling different endpoints so that
# load balancing can be done in different ways.
# For `/sync` and `/initialSync` requests it will be more efficient if all
# requests from a particular user are routed to a single instance. Extracting a
# user ID from the access token or `Authorization` header is currently left as an
# exercise for the reader. Admins may additionally wish to separate out `/sync`
# requests that have a `since` query parameter from those that don't (and
# `/initialSync`): requests without a `since` parameter are "initial syncs", which happen
# when a user logs in on a new device and can be *very* resource intensive, so
# isolating these requests will stop them from interfering with other users' ongoing
# syncs.
# Federation and client requests can be balanced via simple round robin.
# The inbound federation transaction request `^/_matrix/federation/v1/send/`
# should be balanced by source IP so that transactions from the same remote server
# go to the same process.
# Registration/login requests can be handled separately purely to help ensure that
# unexpected load doesn't affect new logins and sign ups.
# Finally, event sending requests can be balanced by the room ID in the URI (or
# the full URI, or even just round robin), the room ID is the path component after
# `/rooms/`. If there is a large bridge connected that is sending or may send lots
# of events, then a dedicated set of workers can be provisioned to limit the
# effects of bursts of events from that bridge on events sent by normal users.
# #### Stream writers
# Additionally, there is *experimental* support for moving writing of specific
# streams (such as events) off of the main process to a particular worker. (This
# is only supported with Redis-based replication.)
# Currently supported streams are `events` and `typing`.
# To enable this, the worker must have a HTTP replication listener configured,
# have a `worker_name` and be listed in the `instance_map` config. For example to
# move event persistence off to a dedicated worker, the shared configuration would
# include:
# ```yaml
# instance_map:
# event_persister1:
# host: localhost
# port: 8034
# stream_writers:
# events: event_persister1
# ```
# The `events` stream also experimentally supports having multiple writers, where
# work is sharded between them by room ID. Note that you *must* restart all worker
# instances when adding or removing event persisters. An example `stream_writers`
# configuration with multiple writers:
# ```yaml
# stream_writers:
# events:
# - event_persister1
# - event_persister2
# ```
# #### Background tasks
# There is also *experimental* support for moving background tasks to a separate
# worker. Background tasks are run periodically or started via replication. Exactly
# which tasks are configured to run depends on your Synapse configuration (e.g. if
# stats is enabled).
# To enable this, the worker must have a `worker_name` and can be configured to run
# background tasks. For example, to move background tasks to a dedicated worker,
# the shared configuration would include:
# ```yaml
# run_background_tasks_on: background_worker
# ```
# You might also wish to investigate the `update_user_directory` and
# `media_instance_running_background_jobs` settings.
# pusher worker (no API endpoints) [
# Handles sending push notifications to sygnal and email. Doesn't handle any
# REST endpoints itself, but you should set `start_pushers: False` in the
# shared configuration file to stop the main synapse sending push notifications.
# Note this worker cannot be load-balanced: only one instance should be active.
# ]
# appservice worker (no API endpoints) [
# Handles sending output traffic to Application Services. Doesn't handle any
# REST endpoints itself, but you should set `notify_appservices: False` in the
# shared configuration file to stop the main synapse sending appservice notifications.
# Note this worker cannot be load-balanced: only one instance should be active.
# ]
# federation_sender worker (no API endpoints) [
# Handles sending federation traffic to other servers. Doesn't handle any
# REST endpoints itself, but you should set `send_federation: False` in the
# shared configuration file to stop the main synapse sending this traffic.
# If running multiple federation senders then you must list each
# instance in the `federation_sender_instances` option by their `worker_name`.
# All instances must be stopped and started when adding or removing instances.
# For example:
# ```yaml
# federation_sender_instances:
# - federation_sender1
# - federation_sender2
# ```
# ]
matrix_synapse_workers_media_repository_endpoints:
# Handles the media repository. It can handle all endpoints starting with:
- ^/_matrix/media/
# ... and the following regular expressions matching media-specific administration APIs:
- ^/_synapse/admin/v1/purge_media_cache$
- ^/_synapse/admin/v1/room/.*/media.*$
- ^/_synapse/admin/v1/user/.*/media.*$
- ^/_synapse/admin/v1/media/.*$
- ^/_synapse/admin/v1/quarantine_media/.*$
# You should also set `enable_media_repo: False` in the shared configuration
# file to stop the main synapse running background jobs related to managing the
# media repository.
# In the `media_repository` worker configuration file, configure the http listener to
# expose the `media` resource. For example:
# ```yaml
# worker_listeners:
# - type: http
# port: 8085
# resources:
# - names:
# - media
# ```
# Note that if running multiple media repositories they must be on the same server
# and you must configure a single instance to run the background tasks, e.g.:
# ```yaml
# media_instance_running_background_jobs: "media-repository-1"
# ```
# Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for both inbound client and federation requests (if they are handled separately).
matrix_synapse_workers_user_dir_endpoints:
# Handles searches in the user directory. It can handle REST endpoints matching
# the following regular expressions:
- ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
# When using this worker you must also set `update_user_directory: False` in the
# shared configuration file to stop the main synapse running background
# jobs related to updating the user directory.
matrix_synapse_workers_frontend_proxy_endpoints:
# Proxies some frequently-requested client endpoints to add caching and remove
# load from the main synapse. It can handle REST endpoints matching the following
# regular expressions:
- ^/_matrix/client/(api/v1|r0|unstable)/keys/upload
# If `use_presence` is False in the homeserver config, it can also handle REST
# endpoints matching the following regular expressions:
# FIXME: ADDITIONAL CONDITIONS REQUIRED: to be enabled manually
# ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
# This "stub" presence handler will pass through `GET` request but make the
# `PUT` effectively a no-op.
# It will proxy any requests it cannot handle to the main synapse instance. It
# must therefore be configured with the location of the main instance, via
# the `worker_main_http_uri` setting in the `frontend_proxy` worker configuration
# file. For example:
# worker_main_http_uri: http://127.0.0.1:8008
matrix_synapse_workers_avail_list:
- appservice
- federation_sender
- frontend_proxy
- generic_worker
- media_repository
- pusher
- user_dir

View File

@ -3,11 +3,15 @@
hosts: "{{ target if target is defined else 'matrix_servers' }}"
become: true
vars_files:
- roles/matrix-synapse/vars/workers.yml
roles:
- matrix-base
- matrix-dynamic-dns
- matrix-mailer
- matrix-postgres
- matrix-redis
- matrix-corporal
- matrix-bridge-appservice-discord
- matrix-bridge-appservice-slack
@ -20,6 +24,7 @@
- matrix-bridge-mautrix-telegram
- matrix-bridge-mautrix-whatsapp
- matrix-bridge-mx-puppet-discord
- matrix-bridge-mx-puppet-groupme
- matrix-bridge-mx-puppet-steam
- matrix-bridge-mx-puppet-skype
- matrix-bridge-mx-puppet-slack