People often report and ask about these "failures".
This happened more often previously, when the `docker kill/rm` output
was collected, but it still happens now when people run `systemctl
status matrix-something` and notice that it says "FAILURE".
We're suppressing this output to avoid wasting further time on saying
"this is expected".
The `to_nice_yaml` helper will, by default, wrap any string YAML values
on the first space after column 80. In the worst case, this can yield
invalid YAML syntax. More details in Ansible's documentation here:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#formatting-data-yaml-and-json
In short, you need to explicitly pass a `width` argument with a
sufficiently large value to avoid the line wrapping.
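A minimal sketch of what this looks like in a template (the variable
name is illustrative; any `width` value comfortably larger than the
longest expected line works):

```
# hypothetical template snippet; `some_configuration` is an
# illustrative variable name
{{ some_configuration | to_nice_yaml(indent=2, width=999999) }}
```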
By not hardcoding 'CentOS' and using the OS family ('RedHat') instead,
we now behave better on Rocky Linux, AlmaLinux, etc.
With that said, we may not fully support CentOS/Rocky Linux/AlmaLinux v8 yet.
Certain things were improved in
https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/300,
which is also where v8 support is discussed.
Certain things (firewalld?) may still be problematic. This patch does
not try to address those. If the remaining issues are confirmed to be
fixed in the future, we can mark v8 as supported.
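For illustration, a single condition on the OS family matches all of
these distributions at once (`ansible_os_family` is a standard Ansible
fact; the task file name is hypothetical):

```
# matches CentOS, Rocky Linux, AlmaLinux, ..
- name: Run setup for the RedHat OS family
  ansible.builtin.include_tasks: "setup_redhat.yml"
  when: ansible_os_family == 'RedHat'
```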
Reverts b1b4ba501f, 90c9801c56, a3c84f78ca, ..
I haven't fully traced it yet, but on some servers, I'm observing
`ansible-playbook ... --tags=start` completing very slowly, waiting
for services to stop. I can't reproduce this on all Matrix servers
I manage. I suspect that either the systemd version is to blame or
that some specific service is not responding well to some
`docker kill/rm` command.
`ExecStop` seems to work great in all cases and it's what we had been
using for a very long time, so I'm reverting to that.
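For reference, a minimal sketch of the `ExecStop` approach in a unit
file (the container name is illustrative; the `-` prefix makes systemd
ignore a non-zero exit code):

```
[Service]
ExecStop=-/usr/bin/docker kill matrix-something
ExecStop=-/usr/bin/docker rm matrix-something
```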
Until now, we were leaving services "enabled"
(symlinks in /etc/systemd/system/multi-user.target.wants/).
We clean these up now. Broken symlinks may still exist in older
installations that enabled/disabled services. We're not taking care
to fix those up, as it's just a cosmetic defect anyway.
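A sketch of the kind of cleanup task involved, assuming we disable the
service via the `ansible.builtin.systemd` module so that the symlink
in `multi-user.target.wants/` gets removed (the service name is
illustrative):

```
- name: Ensure matrix-something.service is disabled
  ansible.builtin.systemd:
    name: matrix-something.service
    enabled: false
    daemon_reload: true
```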
These are just defensive cleanup tasks that we run.
In the good case, there's nothing to kill or remove, so they trigger an
error like this:
> Error response from daemon: Cannot kill container: something: No such container: something
and:
> Error: No such container: something
People often ask us if this is a problem, so instead of always having to
answer with "no, this is to be expected", we'd rather eliminate it now
and make logs cleaner.
In the event that:
- a container is really stuck and needs cleanup using kill/rm
- and cleanup fails, and we fail to report it because of error
suppression (`2>/dev/null`)
.. we'd still get an error when launching ("container name already in use .."),
so it shouldn't be too hard to investigate.
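A rough sketch of what such suppressed defensive cleanup might look
like in a unit file (names are illustrative; the `/bin/sh -c` wrapper
is needed because systemd itself does not perform stream redirection):

```
[Service]
ExecStartPre=-/bin/sh -c '/usr/bin/docker kill matrix-something 2>/dev/null'
ExecStartPre=-/bin/sh -c '/usr/bin/docker rm matrix-something 2>/dev/null'
```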
This reverts commit 2a25b63bb6.
Looking at other roles, we see that they trigger building regardless
of whether the git repository has changed.
It's better to always trigger the build, because it's less fragile.
If the build fails and we only trigger it on "git changes",
then we won't re-trigger it for a while. That's not good.
Triggering it each and every time may seem like a waste,
but it supposedly runs quickly thanks to Docker layer caching.
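A sketch of such an unconditional build task, assuming the
`community.docker.docker_image` module (image name and path are
illustrative); the point is the absence of a `when: <git changed>`
guard:

```
- name: Ensure something image is built
  community.docker.docker_image:
    name: "something:latest"
    source: build
    force_source: true
    build:
      path: "/path/to/the/git/checkout"
```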
The Docker 19.03 -> 20.10 upgrade contains the following change
in `/usr/lib/systemd/system/docker.service`:
```
-BindsTo=containerd.service
-After=network-online.target firewalld.service containerd.service
+After=network-online.target firewalld.service containerd.service multi-user.target
-Requires=docker.socket
+Requires=docker.socket containerd.service
Wants=network-online.target
```
The `multi-user.target` requirement in `After` seems to be in conflict
with our `WantedBy=multi-user.target` and `After=docker.service` /
`Requires=docker.service` definitions, causing the following error on
startup for all of our systemd services:
> Job matrix-synapse.service/start deleted to break ordering cycle starting with multi-user.target/start
A workaround that appears to work is to add `DefaultDependencies=no`
to all of our services.
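A sketch of what this looks like in one of our unit files (the other
`[Unit]` entries are the definitions mentioned above):

```
[Unit]
DefaultDependencies=no
Requires=docker.service
After=docker.service

[Install]
WantedBy=multi-user.target
```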
We've had a report of the `connection` value getting cut off,
supposedly because it contains a character that breaks the string.
Passing it through `|to_json` takes care of quoting and escaping it
properly.
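A minimal sketch of the template change (the variable name is
illustrative):

```
# before: a raw value can break the surrounding YAML string
connection: {{ matrix_something_database_connection }}

# after: `to_json` quotes and escapes the value
connection: {{ matrix_something_database_connection | to_json }}
```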