Merge remote-tracking branch 'origin/master' into synapse-workers

Marcel Partap
2020-12-01 21:24:26 +01:00
101 changed files with 941 additions and 1127 deletions


@ -1,11 +1,10 @@
# Setting up matrix-sms-bridge (optional)
The playbook can install and configure [matrix-sms-bridge](https://github.com/benkuly/matrix-sms-bridge) for you.
See the project page to learn what it does and why it might be useful to you.
First you need to ensure that the bridge has Unix read and write access to your modem. On Debian-based distributions there is nothing to do. On other distributions you either add a `dialout` group to your host and assign your modem device to it, or you give the matrix user or group access to your modem.
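For example, on a distribution where such a group does not exist yet, something along these lines could do it. This is only a rough sketch: the device path is just an example, and permissions set this way may not survive a reboot or re-plug (a udev rule would be the persistent alternative):
```sh
# rough sketch, not part of the playbook; adjust the device path to your modem
groupadd dialout                              # create the group if it does not exist yet
chgrp dialout /dev/serial/by-id/myDeviceId    # hand the modem device over to that group
chmod g+rw /dev/serial/by-id/myDeviceId       # give the group read and write access
```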
**The bridge uses [android-sms-gateway-server](https://github.com/RebekkaMa/android-sms-gateway-server). You need to configure it first.**
To enable the bridge just use the following
playbook configuration:
@ -13,16 +12,23 @@ playbook configuration:
```yaml
matrix_sms_bridge_enabled: true
matrix_sms_bridge_gammu_modem: "/dev/serial/by-id/myDeviceId"
# generate a secret password, e.g. with pwgen -s 64 1
matrix_sms_bridge_database_password: ""
# (optional but recommended) a room id to a default room
matrix_sms_bridge_default_room: ""
# (optional) gammu reset frequencies (see https://wammu.eu/docs/manual/smsd/config.html#option-ResetFrequency)
matrix_sms_bridge_gammu_reset_frequency: 3600
matrix_sms_bridge_gammu_hard_reset_frequency: 0
# (optional) group with unix read and write rights to modem
matrix_sms_bridge_modem_group: 'dialout'
# (optional but recommended) configure your server location
matrix_sms_bridge_default_region: DE
matrix_sms_bridge_default_timezone: Europe/Berlin
# Settings to connect to android-sms-gateway-server
matrix_sms_bridge_provider_android_baseurl: https://192.168.24.24:9090
matrix_sms_bridge_provider_android_username: admin
matrix_sms_bridge_provider_android_password: superSecretPassword
# (optional) if your android-sms-gateway-server uses a self-signed certificate, the bridge needs a "truststore". This can be the certificate itself.
matrix_sms_bridge_provider_android_truststore_local_path: android-sms-gateway-server.p12
matrix_sms_bridge_provider_android_truststore_password: 123
```


@ -91,44 +91,33 @@ matrix_jitsi_jvb_container_extra_arguments:
## (Optional) Fine tune Jitsi
You may want to **suspend unused video layers** until they are requested again, to save up resources on both server and clients.
Read more on this feature [here](https://jitsi.org/blog/new-off-stage-layer-suppression-feature/).
For this add this line to your `inventory/host_vars/matrix.DOMAIN/vars.yml` configuration:
```yaml
matrix_jitsi_web_config_enableLayerSuspension: true
```
You may wish to **disable audio levels** to avoid excessive refresh of the client-side page and decrease the CPU consumption involved.
For this add this line to your `inventory/host_vars/matrix.DOMAIN/vars.yml` configuration:
```yaml
matrix_jitsi_web_config_disableAudioLevels: true
```
You may want to **limit the number of video feeds forwarded to each client**, to save up resources on both server and clients. As clients' bandwidth and CPU may not bear the load, use this setting to avoid lag and crashes.
This feature is found by default in other webconference applications such as Office 365 Teams (limit is set to 4).
Read how it works [here](https://github.com/jitsi/jitsi-videobridge/blob/master/doc/last-n.md) and performance evaluation on this [study](https://jitsi.org/wp-content/uploads/2016/12/nossdav2015lastn.pdf).
For this add this line to your `inventory/host_vars/matrix.DOMAIN/vars.yml` configuration:
```yaml
matrix_jitsi_web_config_channelLastN: 4
```
You may want to **limit the maximum video resolution**, to save up resources on both server and clients.
To enable the variables that allow you to manage the video configuration, add the following line to your `inventory/host_vars/matrix.DOMAIN/vars.yml` configuration:
```yaml
matrix_jitsi_web_config_constraints_enabled: true
```
For example, to set the resolution to 480, add these two lines to your `inventory/host_vars/matrix.DOMAIN/vars.yml` configuration:
```yaml
matrix_jitsi_web_config_constraints_video_height_ideal: 480
matrix_jitsi_web_config_constraints_video_height_max: 480
```
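Putting it all together, a single `inventory/host_vars/matrix.DOMAIN/vars.yml` fragment enabling all of the tweaks above might look like this (a sketch reusing the example values from the individual snippets; adjust them to your needs):
```yaml
matrix_jitsi_web_config_enableLayerSuspension: true
matrix_jitsi_web_config_disableAudioLevels: true
# Limit the number of video feeds forwarded to each client
matrix_jitsi_web_config_channelLastN: 4
matrix_jitsi_web_config_constraints_enabled: true
matrix_jitsi_web_config_constraints_video_height_ideal: 480
matrix_jitsi_web_config_constraints_video_height_max: 480
```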
## Apply changes


@ -113,7 +113,7 @@ With this, nginx would still be in use, but it would not bother with anything SS
All services would be served locally on `127.0.0.1:81` and `127.0.0.1:8449` (as per the example configuration above).
You can then set up another reverse-proxy server on ports 80/443/8448 for all of the expected domains and make traffic go to these local ports.
The expected domains vary depending on the services you have enabled (`matrix.DOMAIN` for sure; `element.DOMAIN`, `dimension.DOMAIN` and `jitsi.DOMAIN` are optional).
### Sample configuration for running behind Traefik 2.0
@ -144,7 +144,7 @@ matrix_nginx_proxy_container_extra_arguments:
- '--label "traefik.enable=true"'
# The Nginx proxy container will receive traffic from these subdomains
- '--label "traefik.http.routers.matrix-nginx-proxy.rule=Host(`{{ matrix_server_fqn_matrix }}`,`{{ matrix_server_fqn_element }}`,`{{ matrix_server_fqn_dimension }}`)"'
- '--label "traefik.http.routers.matrix-nginx-proxy.rule=Host(`{{ matrix_server_fqn_matrix }}`,`{{ matrix_server_fqn_element }}`,`{{ matrix_server_fqn_dimension }},`{{ matrix_server_fqn_jitsi }}`)"'
# (The 'web-secure' entrypoint must bind to port 443 in Traefik config)
- '--label "traefik.http.routers.matrix-nginx-proxy.entrypoints=web-secure"'
@ -172,7 +172,7 @@ matrix_synapse_container_extra_arguments:
- '--label "traefik.http.services.matrix-synapse.loadbalancer.server.port=8048"'
```
This method uses labels attached to the Nginx and Synapse containers to provide the Traefik Docker provider with the information it needs to proxy `matrix.DOMAIN`, `element.DOMAIN`, `dimension.DOMAIN` and `jitsi.DOMAIN`. Some [static configuration](https://docs.traefik.io/v2.0/reference/static-configuration/file/) is required in Traefik; namely, having endpoints on ports 443 and 8448 and having a certificate resolver.
Note that this configuration on its own does **not** redirect traffic on port 80 (plain HTTP) to port 443 for HTTPS, which may cause some issues, since the built-in Nginx proxy usually does this. If you are not already doing this in Traefik, it can be added to Traefik in a [file provider](https://docs.traefik.io/v2.0/providers/file/) as follows:
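A minimal sketch of such a file-provider dynamic configuration (the file path, router and middleware names, and the `web` entrypoint name are assumptions to adapt to your own Traefik setup):
```yaml
# e.g. /etc/traefik/dynamic/redirect-to-https.yml (hypothetical path), watched by Traefik's file provider
http:
  routers:
    redirect-to-https:
      entryPoints:
        - "web"                        # assumed name of the port-80 entrypoint
      rule: "HostRegexp(`{host:.+}`)"
      middlewares:
        - "redirect-to-https"
      service: "noop@internal"
  middlewares:
    redirect-to-https:
      redirectScheme:
        scheme: "https"
        permanent: true
```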
@ -205,7 +205,7 @@ services:
image: "traefik:v2.3"
restart: always
container_name: "traefik"
networks:
- traefik
command:
- "--api.insecure=true"


@ -1,24 +1,40 @@
# Uninstalling
**Warnings**:
- If your server federates with others, make sure to **leave any federated rooms before nuking your Matrix server's data**. Otherwise, the next time you set up a Matrix server for this domain (regardless of the installation method you use), you'll encounter trouble federating.
- If you have some trouble with your installation, you can just [re-run the playbook](installing.md) and it will try to set things up again. **Uninstalling and then installing anew rarely solves anything**.
-----------------
## Uninstalling using a script
Installing places a `/usr/local/bin/matrix-remove-all` script on the server.
You can run it to have it uninstall things for you automatically (see below). **Use with caution!**
## Uninstalling manually
If you prefer to uninstall manually, run these commands (most are meant to be executed on the Matrix server itself):
- ensure all Matrix services are stopped: `ansible-playbook -i inventory/hosts setup.yml --tags=stop` (if you can't get Ansible working to run this command, you can run `systemctl stop 'matrix*'` manually on the server)
- delete the Matrix-related systemd `.service` files (`rm -f /etc/systemd/system/matrix*.service`) and reload systemd (`systemctl daemon-reload`)
- delete all Matrix-related cronjobs (`rm -f /etc/cron.d/matrix*`)
- delete some helper scripts (`rm -f /usr/local/bin/matrix*`)
- delete some cached Docker images (`docker system prune -a`) or just delete them all (`docker rmi $(docker images -aq)`)
- delete the Docker network: `docker network rm matrix` (might have been deleted already if you ran the `docker system prune` command)
- uninstall Docker itself, if necessary
- delete the `/matrix` directory (`rm -rf /matrix`)
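If you would rather run the manual steps above in one go, a consolidated sketch (the same commands as in the list, run on the Matrix server itself; **use with caution!**) could look like this:
```sh
# rough consolidation of the manual uninstallation steps above; use with caution!
systemctl stop 'matrix*'                     # stop all Matrix services
rm -f /etc/systemd/system/matrix*.service    # remove the Matrix-related systemd service files
systemctl daemon-reload                      # make systemd forget about them
rm -f /etc/cron.d/matrix*                    # remove Matrix-related cronjobs
rm -f /usr/local/bin/matrix*                 # remove helper scripts
docker system prune -a                       # clean up cached Docker images (prompts for confirmation)
docker network rm matrix                     # remove the Docker network (may already be gone after the prune)
rm -rf /matrix                               # remove the /matrix directory
```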