Compare commits
16 commits: eed3f8e6c5 ... f77e682b80

Commits:
f77e682b80
38b916094e
cfb8af623a
d51a9118d3
9ab7b9fa58
20bc3eb24b
42352b491c
971a751a5e
60c745a862
229b93d7c8
3f0e8122ec
27e1451cbc
d584b44f10
89094d0126
c2c68f814b
1472958e25
@@ -12,7 +12,8 @@ If your database name differs, be sure to change `matrix_synapse_database_databa

 The playbook supports importing Postgres dump files in **text** (e.g. `pg_dump > dump.sql`) or **gzipped** formats (e.g. `pg_dump | gzip -c > dump.sql.gz`).

 Importing multiple databases (as dumped by `pg_dumpall`) is also supported.
+But the migration might be a good moment to "reset" a bridge that is not working properly. Be aware that this might affect all users (new link to the bridge, new rooms, ...).

 Before doing the actual import, **you need to upload your Postgres dump file to the server** (any path is okay).

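For reference, a dump of the kind mentioned above might be produced on the old server roughly as follows (an illustrative sketch; it assumes shell access as the `postgres` user and an old database named `synapse`, and the `/migration/` path is just an example):

```Shell
# Single database, plain text
$ pg_dump synapse > /migration/synapse_dump.sql

# Single database, gzipped
$ pg_dump synapse | gzip -c > /migration/synapse_dump.sql.gz

# All databases and roles at once (pg_dumpall), also supported by the import task
$ pg_dumpall > /migration/all_databases.sql
```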
@@ -32,6 +33,7 @@ ansible-playbook -i inventory/hosts setup.yml \

 ## Troubleshooting

+### Table Ownership
 A table ownership issue can occur if you are importing from a Synapse installation which was both:

 - migrated from SQLite to Postgres, and
@@ -48,7 +50,7 @@ where `synapse_user` is the database username from the previous Synapse installa

 This can be verified by examining the dump for ALTER TABLE statements which set OWNER TO that username:

 ```Shell
-$ grep "ALTER TABLE" homeserver.sql"
+$ grep "ALTER TABLE" homeserver.sql
 ALTER TABLE public.access_tokens OWNER TO synapse_user;
 ALTER TABLE public.account_data OWNER TO synapse_user;
 ALTER TABLE public.account_data_max_stream_id OWNER TO synapse_user;
@@ -60,10 +62,10 @@ ALTER TABLE public.application_services_state OWNER TO synapse_user;

 It can be worked around by changing the username to `synapse`, for example by using `sed`:

 ```Shell
-$ sed -i "s/synapse_user/synapse/g" homeserver.sql
+$ sed -i "s/OWNER TO synapse_user;/OWNER TO synapse;/g" homeserver.sql
 ```

-This uses sed to perform an 'in-place' (`-i`) replacement globally (`/g`), searching for `synapse_user` and replacing with `synapse` (`s/synapse_user/synapse`). If your database username was different, change `synapse_user` to that username instead.
+This uses sed to perform an 'in-place' (`-i`) replacement globally (`/g`), searching for `synapse_user` and replacing it with `synapse` (`s/synapse_user/synapse`). If your database username was different, change `synapse_user` to that username instead. Expand the search/replace expression as shown in the example above: with an old username like `matrix`, replacing the bare word `matrix` would also rewrite unrelated occurrences of it elsewhere in the dump; see the example below.

 Note that if the previous import failed with an error it may have made changes which are incompatible with re-running the import task right away; if you do so it may fail with an error such as:

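For example, with a hypothetical old username of `matrix`, the expanded replacement anchored on the full `OWNER TO ...;` statement would look roughly like this:

```Shell
# Hypothetical old username "matrix"; anchoring on the full statement avoids
# touching other occurrences of the word "matrix" in the dump
$ sed -i "s/OWNER TO matrix;/OWNER TO synapse;/g" homeserver.sql
```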
@@ -71,6 +73,8 @@ Note that if the previous import failed with an error it may have made changes w

 ERROR: relation \"access_tokens\" already exists
 ```

+### Repeat import
+
 In this case you can use the command suggested in the import task to clear the database before retrying the import:

 ```Shell
@@ -79,4 +83,20 @@ In this case you can use the command suggested in the import task to clear the d

 # systemctl start matrix-postgres
 ```

-Once the database is clear and the ownership of the tables has been fixed in the SQL file, the import task should succeed.
+Now, on your local machine, run `ansible-playbook -i inventory/hosts setup.yml --tags=setup-postgres` to prepare the database roles etc.
+
+If you skip this step, you will probably get the error below: `synapse` is the correct table owner, but the role is missing from the database.
+```
+"ERROR: role synapse does not exist"
+```
+
+Once the database is clear and the ownership of the tables has been fixed in the SQL file, the import task should succeed.
+Check that `--dbname` is set to `synapse` (not `matrix`) and adjust the paths (or better yet, copy this line from your terminal):
+
+```
+/usr/bin/env docker run --rm --name matrix-postgres-import --log-driver=none --user=998:1001 --cap-drop=ALL --network=matrix --env-file=/matrix/postgres/env-postgres-psql --mount type=bind,src=/migration/synapse_dump.sql,dst=/synapse_dump.sql,ro --entrypoint=/bin/sh docker.io/postgres:14.1-alpine -c "cat /synapse_dump.sql | grep -vE '^(CREATE|ALTER) ROLE (matrix)(;| WITH)' | grep -vE '^CREATE DATABASE (matrix)\s' | psql -v ON_ERROR_STOP=1 -h matrix-postgres --dbname=synapse"
+```
+
+### Hints
+
+To open a psql terminal, run `/usr/local/bin/matrix-postgres-cli`.
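For reference, that psql terminal can be used to double-check the troubleshooting points above, roughly like this (illustrative only; it assumes the dump was imported into the `synapse` database):

```Shell
# Open the psql terminal, then use standard psql meta-commands:
#   \du         lists roles (a "synapse" role should be present)
#   \c synapse  connects to the synapse database
#   \dt         lists its tables together with their owners
$ /usr/local/bin/matrix-postgres-cli
```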
@@ -15,6 +15,8 @@ homeserver:

 # If set, the bridge will make POST requests to this URL whenever a user's Signal connection state changes.
 # The bridge will use the appservice as_token to authorize requests.
 status_endpoint: null
+# Endpoint for reporting per-message status.
+message_send_checkpoint_endpoint: null

 # Application service host/registration related details
 # Changing these values requires regeneration of the registration.
@@ -32,25 +34,19 @@ appservice:

 # Usually 1 is enough, but on high-traffic bridges you might need to increase this to avoid 413s
 max_body_size: 1

-# The full URI to the database. Only Postgres is currently supported.
+# The full URI to the database. SQLite and Postgres are supported.
+# Format examples:
+# SQLite: sqlite:///filename.db
+# Postgres: postgres://username:password@hostname/dbname
 database: {{ matrix_mautrix_signal_database_connection_string }}
-# Additional arguments for asyncpg.create_pool()
+# Additional arguments for asyncpg.create_pool() or sqlite3.connect()
 # https://magicstack.github.io/asyncpg/current/api/index.html#asyncpg.pool.create_pool
+# https://docs.python.org/3/library/sqlite3.html#sqlite3.connect
+# For sqlite, min_size is used as the connection thread pool size and max_size is ignored.
 database_opts:
 min_size: 5
 max_size: 10

-# Provisioning API part of the web server for automated portal creation and fetching information.
-# Used by things like mautrix-manager (https://github.com/tulir/mautrix-manager).
-provisioning:
-# Whether or not the provisioning API should be enabled.
-enabled: true
-# The prefix to use in the provisioning API endpoints.
-prefix: /_matrix/provision/v1
-# The shared secret to authorize users of the API.
-# Set to "generate" to generate and save a new token.
-shared_secret: generate

 # The unique ID of this appservice.
 id: signal
 # Username of the appservice bot.
@@ -66,7 +62,12 @@ appservice:

 # Example: "+signal:example.com". Set to false to disable.
 community_id: false

-# Authentication tokens for AS <-> HS communication.
+# Whether or not to receive ephemeral events via appservice transactions.
+# Requires MSC2409 support (i.e. Synapse 1.22+).
+# You should disable bridge -> sync_with_custom_puppets when this is enabled.
+ephemeral_events: false
+
+# Authentication tokens for AS <-> HS communication. Autogenerated; do not modify.
 as_token: "{{ matrix_mautrix_signal_appservice_token }}"
 hs_token: "{{ matrix_mautrix_signal_homeserver_token }}"

@@ -75,6 +76,17 @@ metrics:

 enabled: false
 listen_port: 8000

+# Manhole config.
+manhole:
+# Whether or not opening the manhole is allowed.
+enabled: false
+# The path for the unix socket.
+path: /var/tmp/mautrix-signal.manhole
+# The list of UIDs who can be added to the whitelist.
+# If empty, any UIDs can be specified in the open-manhole command.
+whitelist:
+- 0
+
 signal:
 # Path to signald unix socket
 socket_path: /signald/signald.sock
@@ -91,6 +103,8 @@ signal:

 delete_unknown_accounts_on_start: false
 # Whether or not message attachments should be removed from disk after they're bridged.
 remove_file_after_handling: true
+# Whether or not users can register a primary device
+registration_enabled: true

 # Bridge config
 bridge:
@@ -102,6 +116,7 @@ bridge:

 # available variable in displayname_preference. The variables in displayname_preference
 # can also be used here directly.
 displayname_template: "{displayname} (Signal)"
+# Whether or not contact list displaynames should be used.
 # Possible values: disallow, allow, prefer
 #
 # Multi-user instances are recommended to disallow contact list names, as otherwise there can
@@ -140,7 +155,7 @@ bridge:

 # If false, created portal rooms will never be federated.
 federate_rooms: true
 # End-to-bridge encryption support options. You must install the e2be optional dependency for
-# this to work. See https://docs.mau.fi/bridges/general/end-to-bridge-encryption.html
+# this to work. See https://github.com/tulir/mautrix-telegram/wiki/End‐to‐bridge-encryption
 encryption:
 # Allow encryption, work in group chat rooms with e2ee enabled
 allow: false
@@ -173,12 +188,38 @@ bridge:

 # This field will automatically be changed back to false after it,
 # except if the config file is not writable.
 resend_bridge_info: false
-# Interval at which to resync contacts.
+# Interval at which to resync contacts (in seconds).
 periodic_sync: 0

+# Provisioning API part of the web server for automated portal creation and fetching information.
+# Used by things like mautrix-manager (https://github.com/tulir/mautrix-manager).
+provisioning:
+# Whether or not the provisioning API should be enabled.
+enabled: true
+# The prefix to use in the provisioning API endpoints.
+prefix: /_matrix/provision/v1
+# The shared secret to authorize users of the API.
+# Set to "generate" to generate and save a new token.
+shared_secret: generate
+
 # The prefix for commands. Only required in non-management rooms.
 command_prefix: "!signal"

+# Messages sent upon joining a management room.
+# Markdown is supported. The defaults are listed below.
+management_room_text:
+# Sent when joining a room.
+welcome: "Hello, I'm a Signal bridge bot."
+# Sent when joining a management room and the user is already logged in.
+welcome_connected: "Use `help` for help."
+# Sent when joining a management room and the user is not logged in.
+welcome_unconnected: "Use `help` for help or `register` to log in."
+# Optional extra text sent when joining a management room.
+additional_help: ""
+
+# Send each message separately (for readability in some clients)
+management_room_multiple_messages: false
+
 # Permissions for using the bridge.
 # Permitted values:
 # relay - Allowed to be relayed through the bridge, no access to commands.
@@ -22,7 +22,7 @@ matrix_corporal_container_extra_arguments: []

 # List of systemd services that matrix-corporal.service depends on
 matrix_corporal_systemd_required_services_list: ['docker.service']

-matrix_corporal_version: 2.2.1
+matrix_corporal_version: 2.2.2
 matrix_corporal_docker_image: "{{ matrix_corporal_docker_image_name_prefix }}devture/matrix-corporal:{{ matrix_corporal_docker_image_tag }}"
 matrix_corporal_docker_image_name_prefix: "{{ 'localhost/' if matrix_corporal_container_image_self_build else matrix_container_global_registry_prefix }}"
 matrix_corporal_docker_image_tag: "{{ matrix_corporal_version }}" # for backward-compatibility
@@ -3,7 +3,7 @@

 matrix_grafana_enabled: false

-matrix_grafana_version: 8.2.2
+matrix_grafana_version: 8.3.0
 matrix_grafana_docker_image: "{{ matrix_container_global_registry_prefix }}grafana/grafana:{{ matrix_grafana_version }}"
 matrix_grafana_docker_image_force_pull: "{{ matrix_grafana_docker_image.endswith(':latest') }}"

@@ -3,7 +3,7 @@

 matrix_prometheus_enabled: false

-matrix_prometheus_version: v2.30.3
+matrix_prometheus_version: v2.31.1
 matrix_prometheus_docker_image: "{{ matrix_container_global_registry_prefix }}prom/prometheus:{{ matrix_prometheus_version }}"
 matrix_prometheus_docker_image_force_pull: "{{ matrix_prometheus_docker_image.endswith(':latest') }}"

@@ -5,7 +5,7 @@ matrix_redis_connection_password: ""

 matrix_redis_base_path: "{{ matrix_base_data_path }}/redis"
 matrix_redis_data_path: "{{ matrix_redis_base_path }}/data"

-matrix_redis_version: 6.2.4-alpine
+matrix_redis_version: 6.2.6-alpine
 matrix_redis_docker_image_v6: "{{ matrix_container_global_registry_prefix }}redis:{{ matrix_redis_version }}"
 matrix_redis_docker_image_latest: "{{ matrix_redis_docker_image_v6 }}"
 matrix_redis_docker_image_to_use: '{{ matrix_redis_docker_image_latest }}'
@@ -15,8 +15,8 @@ matrix_synapse_docker_image_name_prefix: "{{ 'localhost/' if matrix_synapse_cont

 # amd64 gets released first.
 # arm32 relies on self-building, so the same version can be built immediately.
 # arm64 users need to wait for a prebuilt image to become available.
-matrix_synapse_version: v1.47.1
-matrix_synapse_version_arm64: v1.47.1
+matrix_synapse_version: v1.48.0
+matrix_synapse_version_arm64: v1.48.0
 matrix_synapse_docker_image_tag: "{{ matrix_synapse_version if matrix_architecture in ['arm32', 'amd64'] else matrix_synapse_version_arm64 }}"
 matrix_synapse_docker_image_force_pull: "{{ matrix_synapse_docker_image.endswith(':latest') }}"

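The hunks above are routine version bumps (matrix-corporal 2.2.2, Grafana 8.3.0, Prometheus v2.31.1, Redis 6.2.6, Synapse v1.48.0). To pick them up on an existing server, re-running the playbook is normally enough; a sketch of the usual invocation (assuming the same inventory layout used in the commands earlier in this page):

```Shell
# Re-run the playbook so the new image versions are pulled and the services restarted
$ ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start
```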
@@ -667,8 +667,8 @@ tls_private_key_path: {{ matrix_synapse_tls_private_key_path|to_json }}

 #
 #federation_certificate_verification_whitelist:
 # - lon.example.com
-# - *.domain.com
-# - *.onion
+# - "*.domain.com"
+# - "*.onion"

 # List of custom certificate authorities for federation traffic.
 #
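Quoting the wildcard entries matters once they are uncommented: a value beginning with an unquoted `*` is read as a YAML alias rather than a string, so the file fails to parse. A quick way to see this (illustrative only; assumes Python 3 with PyYAML installed):

```Shell
# Fails with a YAML parse error: "*" starts an alias, not a plain string
$ python3 -c 'import yaml; yaml.safe_load("whitelist:\n  - *.domain.com")'

# Parses fine once the entry is quoted
$ python3 -c 'import yaml; print(yaml.safe_load("whitelist:\n  - \"*.domain.com\""))'
{'whitelist': ['*.domain.com']}
```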
@@ -2229,6 +2229,12 @@ sso:

 #
 #algorithm: "provided-by-your-issuer"

+# Name of the claim containing a unique identifier for the user.
+#
+# Optional, defaults to `sub`.
+#
+#subject_claim: "sub"
+
 # The issuer to validate the "iss" claim against.
 #
 # Optional, if provided the "iss" claim will be required and
@@ -2637,8 +2643,8 @@ user_directory:

 # indexes were (re)built was before Synapse 1.44, you'll have to
 # rebuild the indexes in order to search through all known users.
 # These indexes are built the first time Synapse starts; admins can
-# manually trigger a rebuild following the instructions at
-# https://matrix-org.github.io/synapse/latest/user_directory.html
+# manually trigger a rebuild via API following the instructions at
+# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
 #
 # Uncomment to return search results containing all known users, even if that
 # user does not share a room with the requester.
@@ -5,10 +5,10 @@ matrix_synapse_workers_generic_worker_endpoints:

 # expressions:

 # Sync requests
-- ^/_matrix/client/(v2_alpha|r0|v3)/sync$
-- ^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$
-- ^/_matrix/client/(api/v1|r0|v3)/initialSync$
-- ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
+- ^/_matrix/client/(v2_alpha|r0)/sync$
+- ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
+- ^/_matrix/client/(api/v1|r0)/initialSync$
+- ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$

 # Federation requests
 - ^/_matrix/federation/v1/event/
@@ -63,7 +63,7 @@ matrix_synapse_workers_generic_worker_endpoints:

 # Registration/login requests
 - ^/_matrix/client/(api/v1|r0|v3|unstable)/login$
-- ^/_matrix/client/(r0|v3|unstable)/register$
+- ^/_matrix/client/(r0|unstable)/register$
 - ^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$

 # Event sending requests
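These entries are regular expressions matched against client API request paths to decide what gets routed to a generic worker. As a rough sanity check (purely illustrative; the paths shown are just examples), `grep -E` can be used to see whether a given path would match one of them:

```Shell
# "r0" sync requests still match the updated expression...
$ echo "/_matrix/client/r0/sync" | grep -E '^/_matrix/client/(v2_alpha|r0)/sync$'
/_matrix/client/r0/sync

# ...while "v3" sync requests no longer do (grep prints nothing and exits non-zero)
$ echo "/_matrix/client/v3/sync" | grep -E '^/_matrix/client/(v2_alpha|r0)/sync$'
```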