Compare commits
51 Commits
f7f0a8b647
...
3a07140a39
Commits (SHA1): 3a07140a39, 190d44f55d, f893337a43, 2606b41b16, e6afa05f7b, 57a6a98a50, b9c4e8ce16, d31b55b2a7, 8bf4c52838, 9c0fb98c0d, 400371f6dd, 578754e60e, d156c8caa2, e4dd933cf0, 2c3da6599b, 0dd4459799, c05021640d, df99c338d3, 4bd7d8b5e4, d5cd3d443d, 226d6a6f03, f481b1a84b, 71e271893b, 8cace72d95, 8e6f1876f5, 9121ef2604, 840ff5e19b, 8fc55b30c5, 2d4b039c55, 2b4bada72a, 0adcef65e6, f70102e40c, f03adc83f1, f4657b2cdb, f827a3cc46, 4e6f6e179b, 3dcc006932, 33f0074862, c19508087a, a198b87455, 62112789d6, ac8a9989aa, 65035c62c1, cdaf4695c0, 867ebb52ab, 9174448e5e, 0d5fe2d9f7, b10655ebb1, 116bcaa13b, 37de7fc96a, 303de935d5
README.md (15 lines changed)
@@ -29,7 +29,7 @@ Using this playbook, you can get the following services configured on your serve

 - (optional, default) an [Element](https://app.element.io/) ([formerly Riot](https://element.io/previously-riot)) web UI, which is configured to connect to your own Synapse server by default
-- (optional, default) an [ma1sd](https://github.com/ma1uta/ma1sd) Matrix Identity server
+- (optional, default) a [ma1sd](https://github.com/ma1uta/ma1sd) Matrix Identity server
 - (optional, default) an [Exim](https://www.exim.org/) mail server, through which all Matrix services send outgoing email (can be configured to relay through another SMTP server)

@@ -47,7 +47,7 @@ Using this playbook, you can get the following services configured on your serve

 - (optional) the [mautrix-telegram](https://github.com/tulir/mautrix-telegram) bridge for bridging your Matrix server to [Telegram](https://telegram.org/)
-- (optional) the [mautrix-whatsapp](https://github.com/tulir/mautrix-whatsapp) bridge for bridging your Matrix server to [Whatsapp](https://www.whatsapp.com/)
+- (optional) the [mautrix-whatsapp](https://github.com/tulir/mautrix-whatsapp) bridge for bridging your Matrix server to [WhatsApp](https://www.whatsapp.com/)
 - (optional) the [mautrix-facebook](https://github.com/tulir/mautrix-facebook) bridge for bridging your Matrix server to [Facebook](https://facebook.com/)

@@ -103,7 +103,7 @@ Using this playbook, you can get the following services configured on your serve

 - (optional) the [Sygnal](https://github.com/matrix-org/sygnal) push gateway - see [Setting up the Sygnal push gateway](docs/configuring-playbook-sygnal.md) for setup documentation

-Basically, this playbook aims to get you up-and-running with all the basic necessities around Matrix, without you having to do anything else.
+Basically, this playbook aims to get you up-and-running with all the necessities around Matrix, without you having to do anything else.

 **Note**: the list above is exhaustive. It includes optional or even some advanced components that you will most likely not need.
 Sticking with the defaults (which install a subset of the above components) is the best choice, especially for a new installation.

@@ -128,4 +128,11 @@ When updating the playbook, refer to [the changelog](CHANGELOG.md) to catch up w

 - IRC channel: `#matrix-docker-ansible-deploy` on the [Freenode](https://freenode.net/) IRC network (irc.freenode.net)
-- Github issues: [spantaleev/matrix-docker-ansible-deploy/issues](https://github.com/spantaleev/matrix-docker-ansible-deploy/issues)
+- GitHub issues: [spantaleev/matrix-docker-ansible-deploy/issues](https://github.com/spantaleev/matrix-docker-ansible-deploy/issues)
+
+## Services by the community
+
+- [etke.cc](https://etke.cc) - matrix-docker-ansible-deploy and system stuff "as a service". That service will create your matrix homeserver on your domain and server (doesn't matter if it's cloud provider or on an old laptop in the corner of your room), (optional) maintains it (server's system updates, cleanup, security adjustments, tuning, etc.; matrix homeserver updates & maintenance) and (optional) provide full-featured email service for your domain
+- [GoMatrixHosting](https://gomatrixhosting.com) - matrix-docker-ansible-deploy "as a service" with [Ansible AWX](https://github.com/ansible/awx). Members can be assigned a server from DigitalOcean, or they can connect their on-premises server. This AWX system can manage the updates, configuration, import and export, backups, and monitoring on its own. For more information [see our GitLab group](https://gitlab.com/GoMatrixHosting) or come [visit us on Matrix](https://matrix.to/#/#general:gomatrixhosting.com).
@@ -32,9 +32,9 @@ Updates to this section are trailed here:

 ## Does I need an AWX setup to use this? How do I configure it?

-Yes, you'll need to configure an AWX instance, the [Create AWX System](https://gitlab.com/GoMatrixHosting/create-awx-system) repository makes it easy to do. Just follow the steps listed in '/docs/Installation.md' of that repository.
+Yes, you'll need to configure an AWX instance, the [Create AWX System](https://gitlab.com/GoMatrixHosting/create-awx-system) repository makes it easy to do. Just follow the steps listed in ['/docs/Installation.md' of that repository](https://gitlab.com/GoMatrixHosting/create-awx-system/-/blob/master/docs/Installation.md).

-For simpler installation steps you can use to get started with this system, check out our minimal installation guide at '/doc/Installation_Minimal.md'.
+For simpler installation steps you can use to get started with this system, check out our minimal installation guide at ['/doc/Installation_Minimal.md of that repository'](https://gitlab.com/GoMatrixHosting/create-awx-system/-/blob/master/docs/Installation_Minimal.md).

 ## Does I need a front-end WordPress site? And a DigitalOcean account?
@@ -55,6 +55,8 @@ Note that if your nginx version is old, it might not like our default choice of

 matrix_nginx_proxy_ssl_protocols: "TLSv1.2"
 ```

+If you are experiencing issues, try updating to a newer version of Nginx. As a data point in May 2021 a user reported that Nginx 1.14.2 was not working for them. They were getting errors about socket leaks. Updating to Nginx 1.19 fixed their issue.
+
 ### Using your own external Apache webserver
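To restate the documented workaround as configuration, here is a minimal `vars.yml` sketch (the variable is the one shown in the snippet above; pick whichever protocol list your nginx version actually supports):

```yaml
# vars.yml (sketch); only needed if an older external nginx rejects the playbook's default TLS protocol list
matrix_nginx_proxy_ssl_protocols: "TLSv1.2"
```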
@@ -14,11 +14,7 @@ Table of contents:

 ## Purging old data with the Purge History API

-You can use the **Purge History API** to delete in-use (but old) data.
-
-**This is destructive** (especially for non-federated rooms), because it means **people will no longer have access to history past a certain point**.
-
-Synapse's [Purge History API](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst) can be used to purge on a per-room basis.
+You can use the **[Purge History API](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst)** to delete old messages on a per-room basis. **This is destructive** (especially for non-federated rooms), because it means **people will no longer have access to history past a certain point**.

 To make use of this API, **you'll need an admin access token** first. You can find your access token in the setting of some clients (like Element).

 Alternatively, you can log in and obtain a new access token like this:

@@ -29,6 +25,8 @@ curl \

 https://matrix.DOMAIN/_matrix/client/r0/login
 ```

+Synapse's Admin API is not exposed to the internet by default. To expose it you will need to add `matrix_nginx_proxy_proxy_matrix_client_api_forwarded_location_synapse_admin_api_enabled: true` to your `vars.yml` file.
+
 Follow the [Purge History API](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst) documentation page for the actual purging instructions.

 After deleting data, you may wish to run a [`FULL` Postgres `VACUUM`](./maintenance-postgres.md#vacuuming-postgresql).
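As a concrete `vars.yml` sketch of the note added above (the variable name is taken verbatim from that note; enable it only while you actually need the Admin API reachable through the proxy):

```yaml
# vars.yml (sketch); expose Synapse's Admin API (e.g. for the Purge History API) via matrix-nginx-proxy
matrix_nginx_proxy_proxy_matrix_client_api_forwarded_location_synapse_admin_api_enabled: true
```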
@@ -36,7 +34,7 @@ After deleting data, you may wish to run a [`FULL` Postgres `VACUUM`](./maintena

 ## Compressing state with rust-synapse-compress-state

-[rust-synapse-compress-state](https://github.com/matrix-org/rust-synapse-compress-state) can be used to optimize some `_state` tables used by Synapse.
+[rust-synapse-compress-state](https://github.com/matrix-org/rust-synapse-compress-state) can be used to optimize some `_state` tables used by Synapse. If your server participates in large rooms this is the most effective way to reduce the size of your database.

 This tool should be safe to use (even when Synapse is running), but it's always a good idea to [make Postgres backups](./maintenance-postgres.md#backing-up-postgresql) first.

@@ -54,7 +52,10 @@ After state compression, you may wish to run a [`FULL` Postgres `VACUUM`](./main

 ## Browse and manipulate the database

-When the [matrix admin API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api) and the other tools do not provide a more convenient way, having a look at synapse's postgresql database can satisfy a lot of admins' needs.
+When the [Synapse Admin API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api) and the other tools do not provide a more convenient way, having a look at synapse's postgresql database can satisfy a lot of admins' needs.
+
+Editing the database manually is not recommended or supported by the Synapse developers. If you are going to do so you should [make a database backup](./maintenance-postgres.md#backing-up-postgresql).

 First, set up an SSH tunnel to your matrix server (skip if it is your local machine):

 ```
@@ -1113,7 +1113,9 @@ matrix_ma1sd_synapsesql_connection: //{{ matrix_synapse_database_host }}/{{ matr

 matrix_ma1sd_dns_overwrite_enabled: true
 matrix_ma1sd_dns_overwrite_homeserver_client_name: "{{ matrix_server_fqn_matrix }}"
-matrix_ma1sd_dns_overwrite_homeserver_client_value: "http://{{ matrix_nginx_proxy_proxy_matrix_client_api_addr_with_container }}"
+# The `matrix_ma1sd_dns_overwrite_homeserver_client_value` value when matrix_nginx_proxy_enabled is false covers the general case,
+# but may be inaccurate if matrix-corporal is enabled.
+matrix_ma1sd_dns_overwrite_homeserver_client_value: "{{ ('http://' + matrix_nginx_proxy_proxy_matrix_client_api_addr_with_container) if matrix_nginx_proxy_enabled else matrix_homeserver_container_url }}"

 # By default, we send mail through the `matrix-mailer` service.
 matrix_ma1sd_threepid_medium_email_identity_from: "{{ matrix_mailer_sender_address }}"
@@ -44,6 +44,15 @@

   tags:
     - purge-media

+# Purge Synapse database if called
+- include_tasks:
+    file: "purge_database_main.yml"
+    apply:
+      tags: purge-database
+  when: run_setup|bool and matrix_awx_enabled|bool
+  tags:
+    - purge-database
+
 # Import configs, media repo from /chroot/backup import
 - include_tasks:
     file: "import_awx.yml"
roles/matrix-awx/tasks/purge_database_build_list.yml (new file, 10 lines)

@@ -0,0 +1,10 @@

- name: Collect entire room list into stdout
  shell: |
    curl -X GET --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/rooms?from={{ item }}'
  register: rooms_output

- name: Print stdout to file
  delegate_to: 127.0.0.1
  shell: |
    echo '{{ rooms_output.stdout }}' >> /tmp/{{ subscription_id }}_room_list_complete.json
roles/matrix-awx/tasks/purge_database_events.yml (new file, 13 lines)

@@ -0,0 +1,13 @@

- name: Purge all rooms with more then N events
  shell: |
    curl --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "delete_local_events": false, "purge_up_to_ts": {{ purge_epoche_time.stdout }}000 }' "{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_history/{{ item[1:-1] }}"
  register: purge_command

- name: Print output of purge command
  debug:
    msg: "{{ purge_command.stdout }}"

- name: Pause for 5 seconds to let Synapse breathe
  pause:
    seconds: 5
roles/matrix-awx/tasks/purge_database_main.yml (new file, 234 lines)

@@ -0,0 +1,234 @@

- name: Ensure dateutils and curl is installed in AWX
  delegate_to: 127.0.0.1
  yum:
    name: dateutils
    state: latest

- name: Ensure dateutils, curl and jq intalled on target machine
  apt:
    pkg:
      - curl
      - jq
    state: present

- name: Include vars in matrix_vars.yml
  include_vars:
    file: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
  no_log: True

- name: Collect size of Synapse database
  shell: du -sh /matrix/postgres/data
  register: db_size_before_stat
  no_log: True

- name: Print before size of Synapse database
  debug:
    msg: "{{ db_size_before_stat.stdout.split('\n') }}"
  when: db_size_before_stat is defined

- name: Collect the internal IP of the matrix-synapse container
  shell: "/usr/bin/docker inspect --format '{''{range.NetworkSettings.Networks}''}{''{.IPAddress}''}{''{end}''}' matrix-synapse"
  register: synapse_container_ip

- name: Collect access token for janitor user
  shell: |
    curl -X POST -d '{"type":"m.login.password", "user":"janitor", "password":"{{ matrix_awx_janitor_user_password }}"}' "{{ synapse_container_ip.stdout }}:8008/_matrix/client/r0/login" | jq '.access_token'
  register: janitors_token
  no_log: True

- name: Collect total number of rooms
  shell: |
    curl -X GET --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/rooms' | jq '.total_rooms'
  when: purge_rooms|bool
  register: rooms_total

- name: Print total number of rooms
  debug:
    msg: '{{ rooms_total.stdout }}'
  when: purge_rooms|bool

- name: Calculate every 100 values for total number of rooms
  delegate_to: 127.0.0.1
  shell: |
    seq 0 100 {{ rooms_total.stdout }}
  when: purge_rooms|bool
  register: every_100_rooms

- name: Ensure room_list_complete.json file exists
  delegate_to: 127.0.0.1
  file:
    path: /tmp/{{ subscription_id }}_room_list_complete.json
    state: touch
  when: purge_rooms|bool

- name: Build file with total room list
  include_tasks: purge_database_build_list.yml
  loop: "{{ every_100_rooms.stdout_lines | flatten(levels=1) }}"
  when: purge_rooms|bool

- name: Generate list of rooms with no local users
  delegate_to: 127.0.0.1
  shell: |
    jq 'try .rooms[] | select(.joined_local_members == 0) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_no_local_users.txt
  when: purge_rooms|bool

- name: Count number of rooms with no local users
  delegate_to: 127.0.0.1
  shell: |
    wc -l /tmp/{{ subscription_id }}_room_list_no_local_users.txt | awk '{ print $1 }'
  register: rooms_no_local_total
  when: purge_rooms|bool

- name: Setting host fact room_list_no_local_users
  set_fact:
    room_list_no_local_users: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_no_local_users.txt') }}"
  no_log: True
  when: purge_rooms|bool

- name: Purge all rooms with no local users
  include_tasks: purge_database_no_local.yml
  loop: "{{ room_list_no_local_users.splitlines() | flatten(levels=1) }}"
  when: purge_rooms|bool

- name: Collect epoche time from date
  delegate_to: 127.0.0.1
  shell: |
    date -d '{{ purge_date }}' +"%s"
  when: purge_rooms|bool
  register: purge_epoche_time

- name: Generate list of rooms with more then N users
  delegate_to: 127.0.0.1
  shell: |
    jq 'try .rooms[] | select(.joined_members > {{ purge_metric_value }}) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_joined_members.txt
  when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)

- name: Count number of rooms with more then N users
  delegate_to: 127.0.0.1
  shell: |
    wc -l /tmp/{{ subscription_id }}_room_list_joined_members.txt | awk '{ print $1 }'
  register: rooms_join_members_total
  when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)

- name: Setting host fact room_list_joined_members
  delegate_to: 127.0.0.1
  set_fact:
    room_list_joined_members: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_joined_members.txt') }}"
  when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)
  no_log: True

- name: Purge all rooms with more then N users
  include_tasks: purge_database_users.yml
  loop: "{{ room_list_joined_members.splitlines() | flatten(levels=1) }}"
  when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)

- name: Generate list of rooms with more then N events
  delegate_to: 127.0.0.1
  shell: |
    jq 'try .rooms[] | select(.state_events > {{ purge_metric_value }}) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_state_events.txt
  when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)

- name: Count number of rooms with more then N users
  delegate_to: 127.0.0.1
  shell: |
    wc -l /tmp/{{ subscription_id }}_room_list_state_events.txt | awk '{ print $1 }'
  register: rooms_state_events_total
  when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)

- name: Setting host fact room_list_state_events
  delegate_to: 127.0.0.1
  set_fact:
    room_list_state_events: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_state_events.txt') }}"
  when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)
  no_log: True

- name: Purge all rooms with more then N events
  include_tasks: purge_database_events.yml
  loop: "{{ room_list_state_events.splitlines() | flatten(levels=1) }}"
  when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)

- name: Collect AWX admin token the hard way!
  delegate_to: 127.0.0.1
  shell: |
    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
  register: tower_token
  no_log: True

- name: Execute rust-synapse-compress-state job template
  delegate_to: 127.0.0.1
  awx.awx.tower_job_launch:
    job_template: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
    tags: "rust-synapse-compress-state"
    wait: yes
    tower_host: "https://{{ tower_host }}"
    tower_oauthtoken: "{{ tower_token.stdout }}"
    validate_certs: yes
  register: job

- name: Stop Synapse service
  shell: systemctl stop matrix-synapse.service

- name: Re-index Synapse database
  shell: docker exec -i matrix-postgres psql "host=127.0.0.1 port=5432 dbname=synapse user=synapse password={{ matrix_synapse_connection_password }}" -c 'REINDEX (VERBOSE) DATABASE synapse'

- name: Execute run-postgres-vacuum job template
  delegate_to: 127.0.0.1
  awx.awx.tower_job_launch:
    job_template: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
    tags: "run-postgres-vacuum,start"
    wait: yes
    tower_host: "https://{{ tower_host }}"
    tower_oauthtoken: "{{ tower_token.stdout }}"
    validate_certs: yes
  register: job

- name: Cleanup room_list files
  delegate_to: 127.0.0.1
  shell: |
    rm /tmp/{{ subscription_id }}_room_list*
  when: purge_rooms|bool
  ignore_errors: yes

- name: Collect size of Synapse database
  shell: du -sh /matrix/postgres/data
  register: db_size_after_stat
  no_log: True

- name: Print total number of rooms processed
  debug:
    msg: '{{ rooms_total.stdout }}'
  when: purge_rooms|bool

- name: Print the number of rooms purged with no local users
  debug:
    msg: '{{ rooms_no_local_total.stdout }}'
  when: purge_rooms|bool

- name: Print the number of rooms purged with more then N users
  debug:
    msg: '{{ rooms_join_members_total.stdout }}'
  when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)

- name: Print the number of rooms purged with more then N events
  debug:
    msg: '{{ rooms_state_events_total.stdout }}'
  when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)

- name: Print before purge size of Synapse database
  debug:
    msg: "{{ db_size_before_stat.stdout.split('\n') }}"
  when: db_size_before_stat is defined

- name: Print after purge size of Synapse database
  debug:
    msg: "{{ db_size_after_stat.stdout.split('\n') }}"
  when: db_size_after_stat is defined

- name: Set boolean value to exit playbook
  set_fact:
    end_playbook: true

- name: End playbook early if this task is called.
  meta: end_play
  when: end_playbook is defined and end_playbook|bool
roles/matrix-awx/tasks/purge_database_no_local.yml (new file, 13 lines)

@@ -0,0 +1,13 @@

- name: Purge all rooms with no local users
  shell: |
    curl --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "room_id": {{ item }} }' '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_room'
  register: purge_command

- name: Print output of purge command
  debug:
    msg: "{{ purge_command.stdout }}"

- name: Pause for 5 seconds to let Synapse breathe
  pause:
    seconds: 5
roles/matrix-awx/tasks/purge_database_users.yml (new file, 13 lines)

@@ -0,0 +1,13 @@

- name: Purge all rooms with more then N users
  shell: |
    curl --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "delete_local_events": false, "purge_up_to_ts": {{ purge_epoche_time.stdout }}000 }' "{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_history/{{ item[1:-1] }}"
  register: purge_command

- name: Print output of purge command
  debug:
    msg: "{{ purge_command.stdout }}"

- name: Pause for 5 seconds to let Synapse breathe
  pause:
    seconds: 5
@@ -4,7 +4,7 @@

     date -d '{{ item }}' +"%s"
   register: epoche_time

-- name: Purge local media to specific date
+- name: Purge remote media to specific date
   shell: |
     curl -X POST --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_media_cache?before_ts={{ epoche_time.stdout }}'
   register: purge_command
@@ -3,7 +3,7 @@ matrix_client_element_enabled: true

 matrix_client_element_container_image_self_build: false
 matrix_client_element_container_image_self_build_repo: "https://github.com/vector-im/riot-web.git"

-matrix_client_element_version: v1.7.26
+matrix_client_element_version: v1.7.28
 matrix_client_element_docker_image: "{{ matrix_client_element_docker_image_name_prefix }}vectorim/element-web:{{ matrix_client_element_version }}"
 matrix_client_element_docker_image_name_prefix: "{{ 'localhost/' if matrix_client_element_container_image_self_build else matrix_container_global_registry_prefix }}"
 matrix_client_element_docker_image_force_pull: "{{ matrix_client_element_docker_image.endswith(':latest') }}"
@@ -35,7 +35,25 @@

   with_dict:
     'matrix_awx_dimension_user_created': 'true'
   when: not matrix_awx_dimension_user_created|bool

+- name: Create user account @mjolnir
+  command: |
+    /usr/local/bin/matrix-synapse-register-user mjolnir {{ matrix_awx_mjolnir_user_password | quote }} 0
+  register: cmd
+  when: not matrix_awx_mjolnir_user_created|bool
+  no_log: True
+
+- name: Update AWX dimension user created variable
+  delegate_to: 127.0.0.1
+  lineinfile:
+    path: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
+    regexp: "^#? *{{ item.key | regex_escape() }}:"
+    line: "{{ item.key }}: {{ item.value }}"
+    insertafter: 'AWX Settings'
+  with_dict:
+    'matrix_awx_mjolnir_user_created': 'true'
+  when: not matrix_awx_mjolnir_user_created|bool
+
 - name: Ensure /chroot/website location has correct permissions
   file:
     path: /chroot/website
@@ -2,7 +2,7 @@ matrix_coturn_enabled: true

 matrix_coturn_container_image_self_build: false
 matrix_coturn_container_image_self_build_repo: "https://github.com/coturn/coturn"
-matrix_coturn_container_image_self_build_repo_version: "upstream/{{ matrix_coturn_version }}"
+matrix_coturn_container_image_self_build_repo_version: "docker/{{ matrix_coturn_version }}-r0"
 matrix_coturn_container_image_self_build_repo_dockerfile_path: "docker/coturn/alpine/Dockerfile"

 matrix_coturn_version: 4.5.2
@@ -3,7 +3,7 @@

 matrix_grafana_enabled: false

-matrix_grafana_version: 7.5.5
+matrix_grafana_version: 7.5.6
 matrix_grafana_docker_image: "{{ matrix_container_global_registry_prefix }}grafana/grafana:{{ matrix_grafana_version }}"
 matrix_grafana_docker_image_force_pull: "{{ matrix_grafana_docker_image.endswith(':latest') }}"
@@ -37,6 +37,13 @@ matrix_grafana_default_admin_password: admin

 # [Content Security Policy](https://grafana.com/docs/grafana/latest/administration/configuration/#content_security_policy)
 matrix_grafana_content_security_policy: true

+# specify content security policy template to customized template
+# added 'unsafe-inline' (ignored by browsers supporting nonces/hashes) to be backward compatible with older browsers.
+# added https: and http: url schemes (ignored by browsers supporting 'strict-dynamic') to be backward compatible with older browsers.
+# [Content Security Policy Browser Test] (https://content-security-policy.com/browser-test/)
+# [Content Security Policy Reference](https://content-security-policy.com/script-src/)
+matrix_grafana_content_security_policy_customized: true
+
 # A list of extra arguments to pass to the container
 matrix_grafana_container_extra_arguments: []
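A short `vars.yml` sketch combining the settings introduced above (all three variable names come from these defaults; whether you want the customized CSP template depends on the browsers you need to support):

```yaml
# vars.yml (sketch); serve Grafana with a Content-Security-Policy header,
# using the playbook's backward-compatible customized template
matrix_grafana_enabled: true
matrix_grafana_content_security_policy: true
matrix_grafana_content_security_policy_customized: true
```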
@@ -8,6 +8,11 @@ admin_password = """{{ matrix_grafana_default_admin_password }}"""

 # specify content_security_policy to add the Content-Security-Policy header to your requests
 content_security_policy = "{{ matrix_grafana_content_security_policy }}"

+# specify content security policy template to customized template
+{% if matrix_grafana_content_security_policy_customized %}
+content_security_policy_template = """script-src http: https: 'unsafe-inline' 'unsafe-eval' 'strict-dynamic' $NONCE;object-src 'none';font-src 'self';style-src 'self' 'unsafe-inline';img-src 'self' data:;base-uri 'self';connect-src 'self' grafana.com;manifest-src 'self';media-src 'none';form-action 'self';"""
+{% endif %}
+
 [auth.anonymous]
 # enable anonymous access
 enabled = {{ matrix_grafana_anonymous_access }}
@@ -52,7 +52,7 @@ matrix_jitsi_jibri_recorder_password: ''

 matrix_jitsi_enable_lobby: false

-matrix_jitsi_version: stable-5142
+matrix_jitsi_version: stable-5765-1
 matrix_jitsi_container_image_tag: "{{ matrix_jitsi_version }}" # for backward-compatibility

 matrix_jitsi_web_docker_image: "{{ matrix_container_global_registry_prefix }}jitsi/web:{{ matrix_jitsi_container_image_tag }}"
@@ -3,6 +3,8 @@ AUTH_TYPE={{ matrix_jitsi_auth_type }}

 ENABLE_AUTH={{ 1 if matrix_jitsi_enable_auth else 0 }}
 ENABLE_GUESTS={{ 1 if matrix_jitsi_enable_guests else 0 }}

+PUBLIC_URL={{ matrix_jitsi_web_public_url }}
+
 LDAP_URL={{ matrix_jitsi_ldap_url }}
 LDAP_BASE={{ matrix_jitsi_ldap_base }}
 LDAP_BINDDN={{ matrix_jitsi_ldap_binddn }}
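The new `PUBLIC_URL` entry is driven by the `matrix_jitsi_web_public_url` variable referenced in the template above. A hedged `vars.yml` sketch (the hostname is purely an illustrative placeholder, and the playbook normally computes a sensible value for you):

```yaml
# vars.yml (sketch); "jitsi.example.com" is a placeholder hostname, not a default
matrix_jitsi_web_public_url: "https://jitsi.example.com"
```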
@@ -7,7 +7,7 @@ matrix_mailer_container_image_self_build_repository_url: "https://github.com/dev

 matrix_mailer_container_image_self_build_src_files_path: "{{ matrix_mailer_base_path }}/docker-src"
 matrix_mailer_container_image_self_build_version: "{{ matrix_mailer_docker_image.split(':')[1] }}"

-matrix_mailer_version: 4.94-r0
+matrix_mailer_version: 4.94.2-r0-1
 matrix_mailer_docker_image: "{{ matrix_mailer_docker_image_name_prefix }}devture/exim-relay:{{ matrix_mailer_version }}"
 matrix_mailer_docker_image_name_prefix: "{{ 'localhost/' if matrix_mailer_container_image_self_build else matrix_container_global_registry_prefix }}"
 matrix_mailer_docker_image_force_pull: "{{ matrix_mailer_docker_image.endswith(':latest') }}"
@@ -18,7 +18,6 @@ ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-mailer \

     --user={{ matrix_mailer_container_user_uid }}:{{ matrix_mailer_container_user_gid }} \
     --cap-drop=ALL \
     --read-only \
-    --init \
     --tmpfs=/var/spool/exim:rw,noexec,nosuid,size=100m \
     --network={{ matrix_docker_network }} \
     --env-file={{ matrix_mailer_base_path }}/env-mailer \
@@ -223,6 +223,7 @@ matrix_nginx_proxy_proxy_matrix_federation_api_addr_sans_container: "localhost:1

 matrix_nginx_proxy_proxy_matrix_federation_api_client_max_body_size_mb: "{{ (matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb | int) * 3 }}"
 matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate: "{{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/fullchain.pem"
 matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate_key: "{{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/privkey.pem"
+matrix_nginx_proxy_proxy_matrix_federation_api_ssl_trusted_certificate: "{{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/chain.pem"

 # The tmpfs at /tmp needs to be large enough to handle multiple concurrent file uploads.
 matrix_nginx_proxy_tmp_directory_size_mb: "{{ (matrix_nginx_proxy_proxy_matrix_federation_api_client_max_body_size_mb | int) * 50 }}"
@@ -385,6 +386,19 @@ matrix_ssl_log_dir_path: "{{ matrix_ssl_base_path }}/log"

 matrix_ssl_pre_obtaining_required_service_name: ~
 matrix_ssl_pre_obtaining_required_service_start_wait_time_seconds: 60

+# OCSP Stapling eliminating the need for clients to contact the CA, with the aim of improving both security and performance.
+# OCSP stapling can provide a performance boost of up to 30%
+# nginx web server supports OCSP stapling since version 1.3.7.
+#
+# *warning* Nginx is lazy loading OCSP responses, which means that for the first few web requests it is unable to add the OCSP response.
+# set matrix_nginx_proxy_ocsp_stapling_enabled false to disable OCSP Stapling
+#
+# Learn more about what it is here:
+# - https://en.wikipedia.org/wiki/OCSP_stapling
+# - https://blog.cloudflare.com/high-reliability-ocsp-stapling/
+# - https://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox/
+matrix_nginx_proxy_ocsp_stapling_enabled: true
+
 # nginx status page configurations.
 matrix_nginx_proxy_proxy_matrix_nginx_status_enabled: false
 matrix_nginx_proxy_proxy_matrix_nginx_status_allowed_addresses: ['{{ ansible_default_ipv4.address }}']
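The comments above already explain the trade-off; if stapling causes problems with your certificate chain, the new behaviour can be switched off with the same variable. A minimal `vars.yml` sketch:

```yaml
# vars.yml (sketch); opt out of the OCSP stapling default introduced above
matrix_nginx_proxy_ocsp_stapling_enabled: false
```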
@@ -69,6 +69,12 @@ server {

     ssl_ciphers {{ matrix_nginx_proxy_ssl_ciphers }};
     {% endif %}
     ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};

+    {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
+    ssl_stapling on;
+    ssl_stapling_verify on;
+    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_base_domain_hostname }}/chain.pem;
+    {% endif %}
+
     {{ render_vhost_directives() }}
 }
@@ -74,6 +74,12 @@ server {

     {% endif %}
     ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};

+    {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
+    ssl_stapling on;
+    ssl_stapling_verify on;
+    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_bot_go_neb_hostname }}/chain.pem;
+    {% endif %}
+
     {{ render_vhost_directives() }}
 }
 {% endif %}
@@ -79,6 +79,12 @@ server {

     {% endif %}
     ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};

+    {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
+    ssl_stapling on;
+    ssl_stapling_verify on;
+    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_element_hostname }}/chain.pem;
+    {% endif %}
+
     {{ render_vhost_directives() }}
 }
 {% endif %}
@@ -77,6 +77,12 @@ server {

     {% endif %}
     ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};

+    {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
+    ssl_stapling on;
+    ssl_stapling_verify on;
+    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_dimension_hostname }}/chain.pem;
+    {% endif %}
+
     {{ render_vhost_directives() }}
 }
 {% endif %}
@@ -136,7 +136,13 @@

     proxy_max_temp_file_size 0;
 }

-location / {
+{#
+    We only handle the root URI for this redirect or homepage serving.
+    Unhandled URIs (mostly by `matrix_nginx_proxy_proxy_matrix_client_api_forwarded_location_prefix_regexes` above) should result in a 404,
+    instead of causing a redirect.
+    See: https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/1058
+#}
+location ~* ^/$ {
 {% if matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain %}
     return 302 $scheme://{{ matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain }}$request_uri;
 {% else %}
@@ -196,6 +202,12 @@ server {

     {% endif %}
     ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};

+    {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
+    ssl_stapling on;
+    ssl_stapling_verify on;
+    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/chain.pem;
+    {% endif %}
+
     {{ render_vhost_directives() }}
 }
 {% endif %}
@@ -230,6 +242,12 @@ server {

     ssl_ciphers {{ matrix_nginx_proxy_ssl_ciphers }};
     {% endif %}
     ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};

+    {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
+    ssl_stapling on;
+    ssl_stapling_verify on;
+    ssl_trusted_certificate {{ matrix_nginx_proxy_proxy_matrix_federation_api_ssl_trusted_certificate }};
+    {% endif %}
     {% endif %}

     location / {
@@ -10,6 +10,7 @@

     # add_header X-Content-Type-Options nosniff;
     # add_header X-Frame-Options SAMEORIGIN;
     add_header Referrer-Policy "strict-origin-when-cross-origin";

     {% if matrix_nginx_proxy_floc_optout_enabled %}
     add_header Permissions-Policy interest-cohort=() always;
     {% endif %}
@@ -84,6 +85,12 @@ server {

     {% endif %}
     ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};

+    {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
+    ssl_stapling on;
+    ssl_stapling_verify on;
+    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_grafana_hostname }}/chain.pem;
+    {% endif %}
+
     {{ render_vhost_directives() }}
 }
 {% endif %}
@@ -49,6 +49,27 @@

     tcp_nodelay on;
 }

+# XMPP websocket
+location = /xmpp-websocket {
+    {% if matrix_nginx_proxy_enabled %}
+    resolver 127.0.0.11 valid=5s;
+    set $backend {{ matrix_jitsi_xmpp_bosh_url_base }};
+    proxy_pass $backend/xmpp-websocket;
+    {% else %}
+    {# Generic configuration for use outside of our container setup #}
+    proxy_pass http://127.0.0.1:5280;
+    {% endif %}
+    proxy_set_header Host $host;
+
+    proxy_http_version 1.1;
+    proxy_read_timeout 900s;
+    proxy_set_header Connection "upgrade";
+    proxy_set_header Upgrade $http_upgrade;
+    proxy_set_header X-Forwarded-For $remote_addr;
+    proxy_set_header X-Forwarded-Proto $scheme;
+    tcp_nodelay on;
+}
 {% endmacro %}

 server {
@@ -98,6 +119,12 @@ server {

     {% endif %}
     ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};

+    {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
+    ssl_stapling on;
+    ssl_stapling_verify on;
+    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_jitsi_hostname }}/chain.pem;
+    {% endif %}
+
     {{ render_vhost_directives() }}
 }
 {% endif %}
|
@ -62,6 +62,12 @@ server {
|
|||||||
{% endif %}
|
{% endif %}
|
||||||
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};
|
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};
|
||||||
|
|
||||||
|
{% if matrix_nginx_proxy_ocsp_stapling_enabled %}
|
||||||
|
ssl_stapling on;
|
||||||
|
ssl_stapling_verify on;
|
||||||
|
ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_riot_compat_redirect_hostname }}/chain.pem;
|
||||||
|
{% endif %}
|
||||||
|
|
||||||
{{ render_vhost_directives() }}
|
{{ render_vhost_directives() }}
|
||||||
}
|
}
|
||||||
{% endif %}
|
{% endif %}
|
||||||
|
@ -76,6 +76,12 @@ server {
|
|||||||
{% endif %}
|
{% endif %}
|
||||||
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};
|
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};
|
||||||
|
|
||||||
|
{% if matrix_nginx_proxy_ocsp_stapling_enabled %}
|
||||||
|
ssl_stapling on;
|
||||||
|
ssl_stapling_verify on;
|
||||||
|
ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_sygnal_hostname }}/chain.pem;
|
||||||
|
{% endif %}
|
||||||
|
|
||||||
{{ render_vhost_directives() }}
|
{{ render_vhost_directives() }}
|
||||||
}
|
}
|
||||||
{% endif %}
|
{% endif %}
|
||||||
|
@ -3,7 +3,7 @@
|
|||||||
|
|
||||||
matrix_prometheus_enabled: false
|
matrix_prometheus_enabled: false
|
||||||
|
|
||||||
matrix_prometheus_version: v2.26.0
|
matrix_prometheus_version: v2.27.0
|
||||||
matrix_prometheus_docker_image: "{{ matrix_container_global_registry_prefix }}prom/prometheus:{{ matrix_prometheus_version }}"
|
matrix_prometheus_docker_image: "{{ matrix_container_global_registry_prefix }}prom/prometheus:{{ matrix_prometheus_version }}"
|
||||||
matrix_prometheus_docker_image_force_pull: "{{ matrix_prometheus_docker_image.endswith(':latest') }}"
|
matrix_prometheus_docker_image_force_pull: "{{ matrix_prometheus_docker_image.endswith(':latest') }}"
|
||||||
|
|
||||||
|
@ -8,7 +8,7 @@ matrix_synapse_admin_container_self_build_repo: "https://github.com/Awesome-Tech
|
|||||||
|
|
||||||
matrix_synapse_admin_docker_src_files_path: "{{ matrix_base_data_path }}/synapse-admin/docker-src"
|
matrix_synapse_admin_docker_src_files_path: "{{ matrix_base_data_path }}/synapse-admin/docker-src"
|
||||||
|
|
||||||
matrix_synapse_admin_version: 0.7.2
|
matrix_synapse_admin_version: latest
|
||||||
matrix_synapse_admin_docker_image: "{{ matrix_synapse_admin_docker_image_name_prefix }}awesometechnologies/synapse-admin:{{ matrix_synapse_admin_version }}"
|
matrix_synapse_admin_docker_image: "{{ matrix_synapse_admin_docker_image_name_prefix }}awesometechnologies/synapse-admin:{{ matrix_synapse_admin_version }}"
|
||||||
matrix_synapse_admin_docker_image_name_prefix: "{{ 'localhost/' if matrix_synapse_admin_container_self_build else matrix_container_global_registry_prefix }}"
|
matrix_synapse_admin_docker_image_name_prefix: "{{ 'localhost/' if matrix_synapse_admin_container_self_build else matrix_container_global_registry_prefix }}"
|
||||||
matrix_synapse_admin_docker_image_force_pull: "{{ matrix_synapse_admin_docker_image.endswith(':latest') }}"
|
matrix_synapse_admin_docker_image_force_pull: "{{ matrix_synapse_admin_docker_image.endswith(':latest') }}"
|
||||||
|
@@ -15,8 +15,8 @@ matrix_synapse_docker_image_name_prefix: "{{ 'localhost/' if matrix_synapse_cont

 # amd64 gets released first.
 # arm32 relies on self-building, so the same version can be built immediately.
 # arm64 users need to wait for a prebuilt image to become available.
-matrix_synapse_version: v1.33.1
-matrix_synapse_version_arm64: v1.33.1
+matrix_synapse_version: v1.34.0
+matrix_synapse_version_arm64: v1.34.0
 matrix_synapse_docker_image_tag: "{{ matrix_synapse_version if matrix_architecture in ['arm32', 'amd64'] else matrix_synapse_version_arm64 }}"
 matrix_synapse_docker_image_force_pull: "{{ matrix_synapse_docker_image.endswith(':latest') }}"
@@ -454,6 +454,7 @@ matrix_synapse_sentry_dsn: ""

 # Postgres database information
 matrix_synapse_database_host: "matrix-postgres"
+matrix_synapse_database_port: 5432
 matrix_synapse_database_user: "synapse"
 matrix_synapse_database_password: ""
 matrix_synapse_database_database: "synapse"
@@ -10,7 +10,7 @@

 - name: Set matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time, if not provided
   set_fact:
-    matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time: 15
+    matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time: 180
   when: "matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time|default('') == ''"

 - name: Set matrix_synapse_rust_synapse_compress_state_compress_room_time, if not provided
@@ -128,6 +128,16 @@ default_room_version: {{ matrix_synapse_default_room_version|to_json }}

 #
 #gc_thresholds: [700, 10, 10]

+# The minimum time in seconds between each GC for a generation, regardless of
+# the GC thresholds. This ensures that we don't do GC too frequently.
+#
+# A value of `[1s, 10s, 30s]` indicates that a second must pass between consecutive
+# generation 0 GCs, etc.
+#
+# Defaults to `[1s, 10s, 30s]`.
+#
+#gc_min_interval: [0.5s, 30s, 1m]
+
 # Set the limit on the returned events in the timeline in the get
 # and sync operations. The default value is 100. -1 means no upper limit.
 #
@@ -757,6 +767,12 @@ federation_domain_whitelist: {{ matrix_synapse_federation_domain_whitelist|to_js

 #
 #allow_profile_lookup_over_federation: false

+# Uncomment to disable device display name lookup over federation. By default, the
+# Federation API allows other homeservers to obtain device display names of any user
+# on this homeserver. Defaults to 'true'.
+#
+#allow_device_name_lookup_over_federation: false
+

 ## Caching ##
@@ -813,6 +829,7 @@ database:

     password: {{ matrix_synapse_database_password|string|to_json }}
     database: "{{ matrix_synapse_database_database }}"
     host: "{{ matrix_synapse_database_host }}"
+    port: {{ matrix_synapse_database_port }}
    cp_min: 5
    cp_max: 10
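Taken together with the role defaults earlier in this comparison, the rendered `database:` block above is driven by a handful of variables. A `vars.yml` sketch of how they might look (the password value is a placeholder; the other values shown are the defaults visible in this diff):

```yaml
# vars.yml (sketch); these feed the database block of homeserver.yaml rendered above
matrix_synapse_database_host: "matrix-postgres"
matrix_synapse_database_port: 5432
matrix_synapse_database_user: "synapse"
matrix_synapse_database_password: "REPLACE_WITH_A_STRONG_PASSWORD"
matrix_synapse_database_database: "synapse"
```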
@@ -1519,6 +1536,7 @@ room_prejoin_state:

   # - m.room.avatar
   # - m.room.encryption
   # - m.room.name
+  # - m.room.create
   #
   # Uncomment the following to disable these defaults (so that only the event
   # types listed in 'additional_event_types' are shared). Defaults to 'false'.