**HSTS Preloading:**
In its strongest and recommended form, the [HSTS policy](https://www.chromium.org/hsts) includes all subdomains, and indicates a willingness to be “preloaded” into browsers:
`Strict-Transport-Security: max-age=31536000; includeSubDomains; preload`
**X-Xss-Protection:**
`1; mode=block`, which tells the browser to block the response entirely if it detects an attack, rather than sanitising the script.
This variable was previously undefined in the role and was only getting
defined via `group_vars/matrix_servers`.
We now properly initialize it (and give it a good default value) in the
role itself.
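A minimal sketch of what such role defaults could look like (the variable names below are illustrative assumptions, not necessarily the ones the role actually uses):

```yaml
# defaults/main.yml (sketch); variable names are illustrative assumptions.

# X-XSS-Protection: block the response instead of sanitising the script.
matrix_nginx_proxy_xss_protection: "1; mode=block"

# HSTS in its strongest form: one-year max-age, all subdomains, preload.
matrix_nginx_proxy_hsts_header_value: "max-age=31536000; includeSubDomains; preload"
```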
Self-checks against the .well-known URIs look for the HTTP header
"Access-Control-Allow-Origin" indicating that the remode endpoint
supports CORS. But the remote server is not required to include
said header in the response if the HTTP request does not include
the "Origin" header. This is in accordance with the specification
[1] stating: 'A CORS request is an HTTP request that includes an
"Origin" header.'
This is in fact the case for GitLab Pages hosting, which is how the
issue was identified.
Let's specify the "Origin" header in the respective `uri` tasks performing
the HTTP request, to ensure they count as CORS requests.
[1] https://fetch.spec.whatwg.org/#http-requests
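A minimal sketch of such a self-check task with the `Origin` header set (the URLs, origin value and task names are placeholders, not the playbook's actual tasks):

```yaml
# Sending an "Origin" header makes this a CORS request, so a compliant
# remote server will include "Access-Control-Allow-Origin" in its response.
- name: Check the /.well-known/matrix/server file
  ansible.builtin.uri:
    url: "https://example.com/.well-known/matrix/server"
    headers:
      Origin: "https://matrix.example.com"
    return_content: true
  register: wellknown_result

# The uri module exposes response headers as lower-cased keys on the result.
- name: Fail if the endpoint does not support CORS
  ansible.builtin.fail:
    msg: "No Access-Control-Allow-Origin header in the .well-known response"
  when: "'access_control_allow_origin' not in wellknown_result"
```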
We have a flow like this:
1. matrix.DOMAIN vhost (matrix-domain.conf)
2. matrix-synapse vhost (matrix-synapse.conf); or matrix-corporal container, if enabled
3. (optional) matrix-synapse vhost (matrix-synapse.conf), if matrix-corporal enabled
4. matrix-synapse container
We are setting `X-Forwarded-For` correctly in step #1, but were
overwriting it in step #2 with something inaccurate.
Not doing anything in step #2 is better than doing the wrong thing.
It would probably be best to append another reverse-proxy address there,
but what we're doing now (with this patch) seems to yield the correct
result (when matrix-corporal is not enabled).
When matrix-corporal is enabled, we still seem to do the wrong thing for
some reason. It's something to be fixed later on.
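A simplified sketch of the idea (ports, file names and upstream addresses are placeholders, not the role's actual templates): only the first hop sets `X-Forwarded-For`; the second hop no longer touches it.

```yaml
- name: Illustrate X-Forwarded-For handling across the two nginx hops
  ansible.builtin.copy:
    dest: /matrix/nginx-proxy/conf.d/x-forwarded-for-example.conf
    content: |
      # Step 1: matrix.DOMAIN vhost (matrix-domain.conf)
      server {
          listen 8080;

          location /_matrix {
              # First reverse-proxy hop: record the real client address.
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_pass http://127.0.0.1:12080;
          }
      }

      # Step 2: matrix-synapse vhost (matrix-synapse.conf)
      server {
          listen 12080;

          location /_matrix {
              # X-Forwarded-For is intentionally not set again here;
              # re-setting it at this hop was overwriting the value from
              # step 1 with an inaccurate one.
              proxy_pass http://matrix-synapse:8008;
          }
      }
```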
People who were disabling matrix-nginx-proxy (in favor of their own
nginx webserver) and also overriding `matrix_federation_public_port`
found that the generated nginx configuration still hardcoded `8448`,
which forced their nginx server to use that port, regardless of the fact
that `matrix_federation_public_port` was pointing elsewhere.
We now allow for the in-container federation port to be configurable,
and also automatically wire things properly.
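As a rough sketch, the two knobs now look something like this (only `matrix_federation_public_port` is taken from the text above; the in-container variable name is a made-up illustration):

```yaml
# The federation port as seen from the outside (existing variable).
matrix_federation_public_port: 8448

# Hypothetical name for the new in-container federation port knob, which the
# generated nginx configuration now uses instead of a hardcoded 8448.
matrix_nginx_proxy_container_federation_port: 8448
```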
We're talking about a webserver running on the same machine, which
imports the configuration files generated by the `matrix-nginx-proxy`
in the `/matrix/nginx-proxy/conf.d` directory.
Users who run an nginx webserver on some other machine will need to do
something different.
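A minimal sketch of wiring this up on the same machine, assuming a stock nginx installation whose default configuration already includes `/etc/nginx/conf.d/*.conf` inside its `http` block:

```yaml
# Make the locally-installed nginx load the vhost files generated by the
# matrix-nginx-proxy role.
- name: Include the generated Matrix vhosts in the local nginx
  ansible.builtin.copy:
    dest: /etc/nginx/conf.d/matrix.conf
    content: |
      include /matrix/nginx-proxy/conf.d/*.conf;
```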
This gives us the possibility to run multiple instances of
workers that don't expose a port.
Right now, we don't support that, but in the future we could
run multiple `federation_sender` or `pusher` workers, without
them fighting over naming (previously, they'd all be named
something like `matrix-synapse-worker-pusher-0`, because
they'd all define `port` as `0`).
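A hypothetical sketch of the naming idea (the data structure and names below are illustrative only, not the role's actual ones): identify each worker by an instance index instead of by its port, so port-less workers no longer collide.

```yaml
# Illustrative worker list; the role's real definitions look different.
matrix_synapse_workers_example:
  - { app: federation_sender, instance: 0, port: 0 }
  - { app: federation_sender, instance: 1, port: 0 }
  - { app: pusher, instance: 0, port: 0 }

# Deriving names from the instance index keeps them unique even when
# `port` is 0 (e.g. ...-federation_sender-0 and ...-federation_sender-1),
# whereas naming by port would make every port-less worker end in "-0".
```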
I felt that adding another variable was probably going to be the easiest way to do this. I may end up adding another variable to enable this feature, for consistency with how some of the other features are toggled.
These are just defensive cleanup tasks that we run.
In the good case, there's nothing to kill or remove, so they trigger an
error like this:
> Error response from daemon: Cannot kill container: something: No such container: something
and:
> Error: No such container: something
People often ask us if this is a problem, so instead of always having to
answer with "no, this is to be expected", we'd rather eliminate it now
and make logs cleaner.
In the event that:
- a container is really stuck and needs cleanup using kill/rm
- and cleanup fails, and we fail to report it because of error
suppression (`2>/dev/null`)
.. we'd still get an error when launching ("container name already in use .."),
so it shouldn't be too hard to investigate.
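For context, the cleanup in question is roughly of this shape ("something" stands in for a real container name, and the illustrative task below may not match where the cleanup actually lives in the playbook):

```yaml
# Defensive cleanup: in the good case there is nothing to kill or remove,
# so docker's "No such container" errors are expected and now silenced.
- name: Kill and remove a potentially stuck container
  ansible.builtin.shell: |
    docker kill something 2>/dev/null
    docker rm something 2>/dev/null
  failed_when: false
  changed_when: false
```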
This replaces the `docker exec` method of spawning
Synapse workers inside the `matrix-synapse` container with
dedicated containers for each worker.
We also have dedicated systemd services for each worker,
so things are now:
- more consistent with everything else (we don't use systemd-instantiated
services anywhere)
- we don't need the "parse systemd instance name into worker name +
port" part
- we don't need to keep track of PIDs manually
- we don't need jq (fewer dependencies)
- workers that die get restarted by systemd correctly, like any other
service
- `docker ps` shows each worker separately and we can observe resource
usage
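A trimmed-down sketch of what one such per-worker unit might look like (the unit name, image tag, mount paths, network name and worker app below are illustrative, not the role's exact template):

```yaml
- name: Install an example per-worker systemd service
  ansible.builtin.copy:
    dest: /etc/systemd/system/matrix-synapse-worker-federation-sender-0.service
    content: |
      [Unit]
      After=matrix-synapse.service

      [Service]
      # Each worker runs in its own container, so systemd supervises and
      # restarts it like any other service, and `docker ps` lists it separately.
      ExecStartPre=-/usr/bin/docker rm matrix-synapse-worker-federation-sender-0
      ExecStart=/usr/bin/docker run --rm --name matrix-synapse-worker-federation-sender-0 \
                --network=matrix \
                --entrypoint=python \
                -v /matrix/synapse/config:/data:ro \
                matrixdotorg/synapse:latest \
                -m synapse.app.generic_worker \
                -c /data/homeserver.yaml \
                -c /data/worker.federation_sender.yaml
      ExecStop=-/usr/bin/docker stop matrix-synapse-worker-federation-sender-0
      Restart=always

      [Install]
      WantedBy=multi-user.target
```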
We do this by creating one more layer of indirection.
First we reach some generic vhost handling matrix.DOMAIN.
A bunch of override rules are added there (capturing traffic to send to
ma1sd, etc). nginx-status and similar generic things also live there.
We then proxy to the homeserver on some other vhost (only Synapse being
available right now, but repointing this to Dendrite or another
homeserver will be possible in the future).
Then that homeserver-specific vhost does its thing to proxy to the
homeserver. It may or may not use workers, etc.
Without matrix-corporal, the flow is now:
1. matrix.DOMAIN (matrix-nginx-proxy/matrix-domain.conf)
2. matrix-nginx-proxy/matrix-synapse.conf
3. matrix-synapse
With matrix-corporal enabled, it becomes:
1. matrix.DOMAIN (matrix-nginx-proxy/matrix-domain.conf)
2. matrix-corporal
3. matrix-nginx-proxy/matrix-synapse.conf
4. matrix-synapse
(matrix-corporal gets injected at step 2).
I guess it didn't hurt to do it until now, but it's not great to serve
federation APIs on the client-server API port, etc.
matrix-corporal doesn't work yet (still something to be solved in the
future), but its firewalling operations would also be sabotaged by
Client-Server APIs being served on the federation port (that's a way to
get around its firewalling).
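To make the separation concrete, here's a heavily simplified sketch of the homeserver-specific vhost (ports, names and upstream addresses are placeholders): the Client-Server API and the federation API each get their own `server` block, so one can no longer be reached through the other's port.

```yaml
- name: Sketch of the homeserver vhost with separate client and federation listeners
  ansible.builtin.copy:
    dest: /matrix/nginx-proxy/conf.d/matrix-synapse-example.conf
    content: |
      # Client-Server API only; reached from the matrix.DOMAIN vhost
      # (or from matrix-corporal, when that is enabled).
      server {
          listen 12080;

          location / {
              proxy_pass http://matrix-synapse:8008;
          }
      }

      # Federation API only; reached via the federation port.
      server {
          listen 12088;

          location / {
              proxy_pass http://matrix-synapse:8048;
          }
      }
```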